Targeting Microglial Activation States as a Therapeutic Avenue in Parkinson's Disease

Parkinson's disease (PD) is a chronic and progressive disorder characterized neuropathologically by loss of dopamine neurons in the substantia nigra, intracellular proteinaceous inclusions, reduction of dopaminergic terminals in the striatum, and increased neuroinflammatory cells. The consequent reduction of dopamine in the basal ganglia results in the classical parkinsonian motor phenotype. A growing body of evidence suggests that neuroinflammation mediated by microglia, the resident macrophage-like immune cells in the brain, plays a contributory role in PD pathogenesis. Microglia participate in both physiological and pathological conditions. In the former, microglia restore the integrity of the central nervous system and, in the latter, they promote disease progression. Microglia acquire different activation states to modulate these cellular functions. Upon activation to the M1 phenotype, microglia elaborate pro-inflammatory cytokines and neurotoxic molecules, promoting inflammation and cytotoxic responses. In contrast, when adopting the M2 phenotype, microglia secrete anti-inflammatory gene products and trophic factors that promote repair and regeneration and restore homeostasis. Relatively little is known about the different microglial activation states in PD, and a better understanding is essential for developing putative neuroprotective agents. Targeting microglial activation states by suppressing their deleterious pro-inflammatory neurotoxicity and/or simultaneously enhancing their beneficial anti-inflammatory protective functions appears to be a valid therapeutic approach for PD treatment. In this review, we summarize microglial functions and their dual neurotoxic and neuroprotective role in PD. We also review molecules that modulate microglial activation states as a therapeutic option for PD treatment.

INTRODUCTION

Parkinson's disease (PD) is a common movement disorder and the second most prevalent neurodegenerative disorder worldwide, affecting nearly 2% of the elderly population. PD is characterized by loss of dopaminergic neurons in the substantia nigra pars compacta (SNpc) and consequently reduced dopamine (DA) levels in the basal ganglia, causing motor dysfunction (Figure 1). Lewy bodies, intracellular proteinaceous inclusions, are the pathological hallmark within PD brains. Lewy bodies contain fibrillar α-synuclein among other proteins (Spillantini et al., 1997). The involvement of the immune system in PD risk is evident from genome-wide association studies (GWAS) implicating the human leukocyte antigen (HLA) locus in sporadic PD (Hamza et al., 2010), and from the neuropathology of PD brains, which demonstrates highly activated microglia and T-cells (McGeer et al., 1988; Imamura et al., 2003). Several inflammatory markers have been identified in the SNpc of PD brains, including cytokines and neurotrophins (Nagatsu et al., 2000; Hunot and Hirsch, 2003). Also, widespread microglial activation is a concomitant of PD neuropathology (Gerhard et al., 2006). A meta-analysis of anti-inflammatory drug trials revealed an association between non-steroidal anti-inflammatory drug (NSAID) use and reduced risk of developing PD, possibly implicating neuroinflammatory processes in the disease (Gagne and Power, 2010).
Evidence supports the conclusion that microglia, the brain resident macrophage-like immune cells, participate in the inflammatory response of the disease (Qian and Flood, 2008; Long-Smith et al., 2009; Moehle and West, 2015). In addition, other observations implicate peripheral immune cells in PD (Saunders et al., 2012; Funk et al., 2013; Chen et al., 2015). Together these data indicate that inflammation and microglial activation contribute to the pathogenesis of PD. Hence, immunomodulation might be a possible therapeutic avenue for PD. The distribution of microglial M1/M2 phenotypes depends on the stage and severity of the disease. Understanding stage-specific switching of microglial phenotypes, and the capacity to manipulate these transitions within appropriate time windows, might be beneficial for PD therapy. In this review, we will outline different microglial activation states and provide evidence of M1/M2 activation states in PD. We will also discuss how manipulation of M1/M2 activation may be of potential therapeutic value.

MICROGLIA FUNCTIONS

In the central nervous system (CNS), the innate immune response is predominantly mediated by microglia and astrocytes. Microglia play a vital role in both physiological and pathological conditions. Tissue-specific macrophages can be found in most tissues of the body, whereas microglia are present distinctly in the brain. Microglia are derived from primitive yolk sac myeloid progenitors that seed the developing brain parenchyma (Alliot et al., 1999; Ginhoux and Jung, 2014). Microglia represent 10-15% of the total population of cells within the brain and manifest different morphologies across anatomic regions (Lawson et al., 1990; Mittelbronn et al., 2001). Microglia appear to be involved in several regulatory processes in the brain that are crucial for tissue development, maintenance of the neural environment, and response to injury and promotion of repair. Similar to peripheral macrophages, microglia directly respond to pathogens and maintain cellular homeostasis by purging said pathogens, as well as dead cells and pathological gene products (Gehrmann et al., 1995; Bruce-Keller, 1999; Stevens et al., 2007; Tremblay et al., 2010; Olah et al., 2011; Paolicelli et al., 2011). In addition, microglial function can be altered by interactions with neurons, astrocytes, migrating T-cells, and the blood-brain barrier itself. Under physiological conditions, microglia acquire a neural-specific, relatively inactive phenotype (Schmid et al., 2009) in which they sample and inspect the local environment and other brain cell types (Davalos et al., 2005; Nimmerjahn et al., 2005). In a healthy brain, resting quiescent microglia exhibit a ramified morphology, with relatively long cytoplasmic protrusions, a stable cell body, and little or no movement (Figure 2). Quiescent microglia extend processes into their surrounding environment (Nimmerjahn et al., 2005). This resting stage is partly maintained by signals conveyed by neuronal and astrocyte-derived factors (Neumann et al., 2009; Ransohoff and Cardona, 2010). The maintenance of this inactive state is regulated by several intrinsic factors like Runx1 (Runt-related transcription factor 1) and Irf8 (interferon regulatory factor 8), and extrinsic factors such as TREM2 (triggering receptor expressed on myeloid cells 2), the chemokine receptor CX3CR1, and CD200R (Kierdorf and Prinz, 2013).
In the normal CNS environment, healthy neurons provide signals to microglia via secreted and membrane-bound factors, such as CX3CL1, neurotransmitters, neurotrophins, and CD22 (Sunnemark et al., 2005; Frank et al., 2006; Lyons et al., 2007; Pocock and Kettenmann, 2007). In addition, microglia express elevated levels of microRNA-124, which in turn reduces CD46, MHC-II (major histocompatibility complex II) and CD11b to maintain the quiescent state (Conrad and Dittel, 2011).

MICROGLIAL ACTIVATION: THE DUAL ROLES OF MICROGLIA

As peripheral macrophages respond to endogenous stimuli promoting both pathogenic and protective functions, so do microglia. Upon exposure to endogenous stimuli, microglia become activated. Among the gene products released by microglia are pro-inflammatory cytokines, neurotoxic proteins, chemokines, anti-inflammatory cytokines, and neurotrophic factors (Mahad and Ransohoff, 2003; Block et al., 2007; Benarroch, 2013; Nakagawa and Chiba, 2015). Microglia also display signaling immunoreceptors such as Toll-like receptors (TLRs), scavenger receptors (SRs), nucleotide-binding oligomerization domains (NODs) and NOD-like receptors (Ransohoff and Brown, 2012). Fundamentally, the two polar states of microglia, the M1 and M2 phenotypes, are associated phenomenologically with injury and homeostasis, respectively (as described below). Differential states of microglial activation within an injured tissue evolve during an inflammatory epoch (Graeber, 2010). LPS induces M1 activation via TLRs, which recognize specific patterns of microbial macromolecules. LPS binds to TLR4 on the cell surface, which is coupled to MD2 (myeloid differentiation protein 2) (TLR/MD2) with participation of the co-receptors CD14 and LBP (LPS-binding protein). LPS binding to TLR4 results in activation through MyD88 (myeloid differentiation primary response protein 88) and TRIF (TIR domain-containing adaptor inducing IFN-β), and transcription factors such as NF-κB (nuclear factor kappa B), STAT5, and IRFs (Takeda and Akira, 2004). This causes transcriptional upregulation of M1-associated cytokines, chemokines and other genes. Alternative M1 activation through granulocyte-macrophage colony-stimulating factor (GM-CSF) has been demonstrated recently (Lacey et al., 2012). However, unlike LPS and IFN-γ, GM-CSF is reported to instigate pleomorphic activation states that show characteristics of both M1 and M2 phenotypes (Weisser et al., 2013). Figure 3 (left side) provides further details on M1 activation.

M2 Polarization State

The alternative M2 microglial activation state is involved in various events including immunoregulation, inflammation dampening, and repair and injury resolution. M2 microglia are morphologically characterized by enlarged cell bodies (Figure 2). M2 microglial activation produces an array of mediators such as anti-inflammatory cytokines, extracellular matrix proteins, glucocorticoids, and other substances. Presently, the mechanism of M2 activation in microglia is poorly understood compared to that in macrophages. It is believed that microglia can develop diverse M2 phenotypes similar to macrophages (Morgan et al., 2005; Herber et al., 2006; Schwartz et al., 2006).
The characteristics of M2 polarization of microglia parallel those of macrophages (Chhor et al., 2013; Freilich et al., 2013), with IL-4 and IL-10 stimulation inducing Arg1 (arginase 1), Ym1 (chitinase-like protein), Fizz1 (found in inflammatory zone), and PPAR (peroxisome proliferator-activated receptor) (Michelucci et al., 2009). M2 macrophage activation is sub-classified into M2a, M2b, and M2c states (see Figure 3).

FIGURE 3 | Overview of microglial M1 and M2 signaling in neurodegeneration, and potential targets for neuroprotection. The left compartment of the figure shows the M1 microglial phenotype and its major signaling pathways. LPS binds to TLR4 on the cell surface, which is coupled to MD2 (TLR/MD2) with participation of the co-receptors CD14 and LBP (LPS-binding protein), and activates interleukin-1 receptor-associated kinases (IRAKs) through MyD88 and TRIF, causing translocation of transcription factors such as NF-κB, STAT5, activator protein-1 (AP1), and interferon regulatory factors (IRFs) to the nucleus. M1 activation by IFN-γ occurs through IFN-γ receptors 1 and 2 (IFN-γR1/2), leading to the recruitment of Janus kinases 1 and 2 (JAK1/2), which phosphorylate STAT1 and IRFs and translocate them to the nucleus. M1 activation through granulocyte-macrophage colony-stimulating factor (GM-CSF) occurs when GM-CSF binds to its receptor GM-CSF-R, which activates rat sarcoma oncoproteins (RAS), JAK2, and SFK, and causes translocation of STAT5 to the nucleus. The translocation of NF-κB, STAT1, STAT5, AP1, and IRFs to the nucleus causes upregulation of intracellular iNOS and cell surface markers (CD86, CD16/32, MHC-II). M1 stimulation also causes transcriptional upregulation of M1-associated pro-inflammatory cytokines (IL-1β, IL-6, IL-12, TNF-α) and chemokines (CCL2, CXCL10). The right compartment of the figure shows the various M2 microglial phenotypes and the major signaling pathways involved. M2 activation can be classified into M2a, M2b, and M2c. The M2a state is induced mainly by IL-4. IL-4 binds to IL-4R, which stimulates JAK1 or JAK3 and causes translocation of STAT6 to the nucleus, leading to transcription of M2a-associated genes including IL-10, cell surface markers (CD206, scavenger receptors, SRs), and intracellular components such as suppressor of cytokine signaling 3 (SOCS3), Ym1 (chitinase-like protein) and Fizz1 (found in inflammatory zone). The M2b activation state, which has some M1 response characteristics, is stimulated when TLR agonists act together with Fcγ receptors that bind IgG (from B cells) to derive the M2b phenotype. M2b activation results in secretion of IL-10 and expression of cell surface markers (CD86, MHC-II). M2c activation is induced by IL-10, which stimulates the IL-10 receptor 1 and 2 subunits, activating JAK1 and leading to the translocation of STAT3 to the nucleus. STAT3 translocation inhibits M1-associated pro-inflammatory cytokines and upregulates IL-10, TGF-β and the cell surface marker CD206. The M2c state plays an important role in immunoregulation, matrix deposition and tissue remodeling. The bottom half of the figure shows potential therapeutic microglial targets for neuroprotection. (1) JAK/STAT inhibition: the M1 phenotype is induced via the JAK/STAT signaling pathway, and inhibition of this pathway may suppress the downstream M1-associated pro-inflammatory genes. (2) Histone deacetylase (HDAC) inhibitors: histone acetylation is increased in the M1 state, which may lead to the expression and release of pro-inflammatory cytokines.
HDAC inhibitors prevent neurodegeneration by shifting microglia toward the protective M2 phenotype and anti-inflammatory mechanisms.

Polarization Transitions

The transition from the M1 pro-inflammatory state to the regulatory or anti-inflammatory M2 phenotype is thought to assist improved functional outcomes and restore homeostasis (Orihuela et al., 2016). Recently, the histone H3K27me3 demethylase Jumonji domain-containing 3 (Jmjd3) was shown to be essential for M2 polarization and downregulation of the M1 phenotype (Tang et al., 2014). The induction of the M1 phenotype is a relatively standard response during injury. For peripheral immune cells, it is thought that M1 polarization is terminal and the cells die during the inflammatory response (Orihuela et al., 2016). Although a shift from the M1 to the M2 phenotype is considered rare for peripheral immune cells, microglia can shift from the M1 to the M2 phenotype when exposed to IL-10, glatiramer acetate, beta interferons, PPARγ agonists and other molecules discussed in a later section. Although the M1 and M2 microglial phenotypes vastly differ in their function, different subpopulations in an injury environment may express specific phenotypes, resulting in concurrent expression of M1- and M2-related factors or mixed M1/M2 phenotypes (Ziegler-Heitbrock et al., 2010; Pettersen et al., 2011; Vogel et al., 2013). The potential to pharmacologically promote a microglial M1 to M2 shift may have therapeutic implications in the setting of neurodegenerative diseases associated with neuroinflammation.

MICROGLIA-MEDIATED INFLAMMATION IN PD

The involvement of innate immunity in PD was first proposed by McGeer et al. (1988) when brains from PD patients showed high levels of reactive microglia positive for human leukocyte antigen-D related (HLA-DR) in the substantia nigra and putamen. GWAS indicate that variants in the HLA region are linked to sporadic PD (Hamza et al., 2010; Hill-Burns et al., 2011). Activated microglia in the PD brain appear responsible for exacerbating neurodegeneration (McGeer and McGeer, 2004), and exposure to human neuromelanin discharged from dead DA neurons causes chemotaxis and increases pro-inflammatory substances in microglial cultures (Wilms et al., 2007). M1 activation-associated inflammatory markers such as MHC-II (Imamura et al., 2003), TNF-α and IL-6 (Boka et al., 1994; Imamura et al., 2003) have been reported in patients with PD. Recent positron emission tomography (PET) studies show that PD patients have cortical microglial activation and lower brain glucose metabolism early in the disease, implying that microglial activation may be a contributing factor in disease progression (Edison et al., 2013). PET with inflammatory ligands shows elevation in several areas of the basal ganglia involved in PD pathology (Gerhard et al., 2006; Edison et al., 2013; Iannaccone et al., 2013). TLR2 is increased in postmortem PD brain tissue, which correlates with pathological α-synuclein deposition. Neuronal, rather than glial, expression of TLR2 is significantly elevated in the PD brain and correlates with disease progression. In addition, TLR2 is strongly localized in α-synuclein-positive Lewy bodies (Dzamko et al., 2017). These observations highlight the crucial role of neuroinflammation in PD pathogenesis.

PERIPHERAL INFLAMMATION IN PD

"The dual hit theory" of PD development states that a neurotropic pathogen enters the brain by nasal and/or gastric routes, the latter by axonal transport via the vagus nerve (Braak et al., 2003; Hawkes et al., 2009).
There is evidence that some forms of α-synuclein can be transmitted from the gut to the brain (Pan-Montojo et al., 2010; Ulusoy et al., 2013; Holmqvist et al., 2014). Instillation of rotenone into the rodent stomach induces progressive pathological α-synuclein inclusions in the enteric nervous system, the vagus nerve, and subsequently the brain stem (Pan-Montojo et al., 2010). Vagotomy prevents transport of pathological proteins from the gut to the CNS (Phillips et al., 2008; Pan-Montojo et al., 2012). A recent study in Danish patients reveals that those who underwent full truncal vagotomy had a lower risk for PD, suggesting that the vagus nerve might be critically involved in PD pathogenesis (Svensson et al., 2015). Another clinical study reports that serum levels of the pro-inflammatory cytokine IL-1β discriminated asymptomatic LRRK2-G2019S carriers from controls, suggesting that peripheral inflammation is greater in a percentage of subjects carrying the LRRK2-G2019S mutation (Dzamko et al., 2016). The major peripheral immune cells, T-lymphocytes and B-lymphocytes, are not found in the CNS under normal biological conditions. However, with peripheral inflammation such as infection or injury, blood monocytes and tissue-resident immune cells are activated and secrete a variety of pro-inflammatory mediators including TNF-α, IL-6, and IL-1β. These pro-inflammatory mediators cross the blood-brain barrier, leading to the activation of brain resident microglia, which then triggers a neuroinflammatory cascade. The blood-brain barrier is considered to be impermeable to external pathogens and circulating macrophages, hence serving as an additional line of defense for the CNS. Nevertheless, damage to the integrity of the blood-brain barrier renders the brain vulnerable. PET studies on patients with PD reveal a dysfunctional blood-brain barrier (Kortekaas et al., 2005). Damage to the blood-brain barrier in rats induces degeneration of dopaminergic neurons in the substantia nigra and activates glial cells (Rite et al., 2007). CD8+ and CD4+ T cells are observed in the postmortem human PD brain and in the MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine) mouse model of PD during its neurodegenerative phase, suggesting T cell-mediated dopaminergic toxicity as a putative mechanism (Brochard et al., 2009). Moreover, rats with ulcerative colitis are more susceptible to LPS-induced dopaminergic neuron loss, blood-brain barrier permeability, microglial activation, and generation of pro-inflammatory mediators, suggesting that peripheral inflammation may increase the risk of PD (Villaran et al., 2010). A recent study in the acute MPTP mouse model, in which nigrostriatal pathologies are not robust, shows that administration of chemokines [regulated on activation, normal T cell expressed and secreted (RANTES) and eotaxin] that facilitate T cell trafficking can lead to marked nigral α-synuclein pathology, loss of dopaminergic neurons, striatal neurotransmitter depletion, glial-associated inflammation, and motor impairments. However, systemic administration of the pro-inflammatory cytokines TNF-α and IL-1β did not induce such disease pathologies in this acute MPTP model (Chandra et al., 2017). Taken together, these studies suggest a more direct link between peripheral inflammation and the potential to elicit and affect the timing of PD onset.

GENETIC FACTORS LINKED TO INFLAMMATION IN PD

Mutations in leucine-rich repeat kinase 2 (LRRK2, PARK8) are linked with autosomal dominantly inherited PD, and LRRK2 is the greatest known genetic contributor to PD.
The LRRK2-G2019S mutation in the kinase domain is the most common mutation in both familial and sporadic forms of the disease (Goldwurm et al., 2005). GWAS show that the genetic locus containing the LRRK2 gene presents a risk factor for sporadic PD (Singleton and Hardy, 2011). Interestingly, GWAS implicate LRRK2 as a major susceptibility gene in inflammatory bowel diseases that involve chronic inflammation (Barrett et al., 2008; Liu and Lenardo, 2012). LRRK2 is reported to be a target gene for IFN-γ, an M1-activation-associated pro-inflammatory cytokine. LRRK2 expression is elevated in inflamed intestinal tissue of patients with Crohn's disease (Gardet et al., 2010). LRRK2 shows high expression in immune cells including microglia, and inhibition of LRRK2 function reduces M1-associated inflammation and PD neurodegeneration (Moehle et al., 2012; Daher et al., 2014, 2015; Russo et al., 2015). Collectively, these studies point toward the importance of LRRK2 function in M1-activation responses in PD animal models (Moehle and West, 2015). Three different missense mutations (A53T, A30P, and E46K) within the open reading frame, as well as duplication or triplication, of the wild type α-synuclein gene (SNCA, PARK1) are associated with autosomal dominant PD (Polymeropoulos et al., 1997; Kruger et al., 1998; Singleton et al., 2003; Zarranz et al., 2004). Fibrillar forms of α-synuclein are a major component of the Lewy bodies in both sporadic and familial PD. α-Synuclein, a cytoplasmic protein, can be expressed in microglia and may be involved in modulation and pre-sensitization of microglial activation (Barkholt et al., 2012; Roodveldt et al., 2013; Zhang et al., 2017). Most activated microglia in PD patient brains are associated with α-synuclein-positive Lewy neurites (Imamura et al., 2003), and there is a significant correlation between the expression of the M1 activation-associated marker MHC-II and α-synuclein deposition in the substantia nigra of PD patients (Croisier et al., 2005). Previous studies by our group and others using in vitro and animal models show that both wild type and mutant α-synuclein can modulate microglial activation, leading to neuroinflammation. Our previous work shows that α-synuclein activates microglia in a dose-dependent manner in cultured cells and that an early microglial activation event occurs in mice overexpressing wild type α-synuclein (Su et al., 2008). Another study reveals that mutant α-synuclein can directly interact with cultured microglia, releasing pro-inflammatory substances, and that mice overexpressing mutant α-synuclein exhibit microglial activation at a very early age (Su et al., 2009). In addition, we show that misfolded α-synuclein induces microglial activation and the release of pro-inflammatory cytokines in BV2 microglial cells (Beraud et al., 2013). Overexpression of α-synuclein in BV2 microglial cells increases pro-inflammatory mediators (TNF-α, IL-6, nitric oxide, COX-2) and induces a reactive microglial phenotype (Rojanathammanee et al., 2011). Interestingly, TLR4, which is activated by LPS to induce the microglial M1 phenotype, is reported to mediate α-synuclein-induced microglial phagocytosis, upregulation of pro-inflammatory cytokine expression and ROS generation in primary microglial cultures (Fellner et al., 2013). In addition, α-synuclein is reported to play a crucial role in modulating microglial activation states in postnatal brain-derived microglial cultures (Austin et al., 2006).
Moreover, neuroinflammation with activated microglia and increased pro-inflammatory cytokines (TNF-α, IL-1β, IFN-γ) precedes α-synuclein-mediated neuronal cell death in rats administered mutant A53T human α-synuclein (Chung et al., 2009). There is a rich literature on the role of α-synuclein in the progression of PD by inducing microglial activation and neuroinflammation, which is reviewed elsewhere.

INFLAMMATION IN PD ANIMAL MODELS

LPS, a bacterial endotoxin from the cell wall of Gram-negative bacteria, induces M1 polarization of microglia through activation of the pattern recognition receptor TLR4. LPS administration into rodent brains recapitulates certain characteristics of sporadic PD, including progressive degeneration of the nigrostriatal dopaminergic pathway and motor anomalies. A single injection or 2-week infusion of LPS in the supranigral region of the rat brain causes rapid microglial activation followed by dose- and time-dependent degeneration of nigrostriatal dopaminergic circuitry (Castano et al., 1998; Gao et al., 2002; Liu, 2006; Dutta et al., 2008). Direct injection of LPS shows dopaminergic neuron loss specifically in the SNpc but not in the ventral tegmental area, which also houses dopaminergic neurons (Kim et al., 2000). This specific neurotoxicity in the SNpc may be attributed to the high proportion of microglia in the SNpc compared with other brain regions (Lawson et al., 1990), which may trigger inflammatory events leading to degeneration of the nigrostriatal pathway. Moreover, injection of a TLR3 agonist into the substantia nigra of adult rats induces a sustained inflammatory reaction in the substantia nigra (SN) and dorsolateral striatum, and also increases the vulnerability of midbrain dopaminergic neurons to a subsequent neurotoxic trigger (Deleidi et al., 2010). A transgenic mouse PD model that overexpresses human wild type α-synuclein, Thy1-aSyn (line 61) (Chesselet et al., 2012), shows microglial activation as early as 1 month of age that persists until 14 months of age. Increased levels of TNF-α, TLRs (TLR1, TLR2, TLR4, and TLR8), MHC-II, CD4, and CD8 are observed in Thy1-aSyn mice at different ages. This study also reveals that despite global expression of α-synuclein in the brain, only the regions containing the cell bodies and axon terminals of the nigrostriatal pathway show an early inflammatory response. Another transgenic rodent model overexpressing doubly mutated (A53T and A30P) human α-synuclein shows glial mitochondrial alterations (Schmidt et al., 2011). In a PD mouse model that overexpresses human α-synuclein by recombinant adeno-associated virus vector, serotype 2 (rAAV2)-mediated transduction in the SNpc, inflammatory responses such as microglial activation and greater infiltration of B and T lymphocytes are observed in addition to dopaminergic neurodegeneration (Theodore et al., 2008). These studies show that animal models overexpressing human wild type or mutant α-synuclein exhibit microglial activation and neuroinflammation. MPTP, a meperidine analog byproduct, is a neurotoxin that causes acute and irreversible human parkinsonism (Davis et al., 1979; Langston et al., 1983). MPTP is a lipophilic compound that actively crosses the blood-brain barrier and is oxidized by monoamine oxidase to the toxic cation MPP+ (1-methyl-4-phenylpyridinium) in glial cells (Langston et al., 1984; Markey et al., 1984). MPP+ utilizes the DA transporter (DAT) to enter dopaminergic neurons.
MPP+ accumulates in the mitochondria and inhibits mitochondrial complex I of the electron transport chain (ETC) (Nicklas et al., 1985; Ramsay et al., 1986), resulting in reduced ATP levels and production of ROS (Hasegawa et al., 1990; Chan et al., 1991; Hantraye et al., 1996; Przedborski et al., 1996; Fabre et al., 1999; Pennathur et al., 1999). In animal models, MPTP induces inflammatory responses that lead to neurodegeneration. MPTP causes microglial activation and an increase in M1-associated pro-inflammatory cytokines such as IL-6, IFN-γ and TNF-α. The glial response to MPTP is reported to peak before dopaminergic neuron loss (Czlonkowska et al., 1996; Smeyne and Jackson-Lewis, 2005). In support of these findings, it is reported that mice lacking IFN-γ or TNF-α signaling show resistance to MPTP-induced neurodegeneration (Sriram et al., 2002; Mount et al., 2007). Mice treated with MPTP show T-cell (CD4+) infiltration into the substantia nigra, and MPTP-induced dopaminergic neuron loss is attenuated in T-cell-deficient mice, suggesting a pro-inflammatory role for CD4+ T-cells in MPTP neurotoxicity (Brochard et al., 2009). In addition, treatment with anti-inflammatory agents such as minocycline (Du et al., 2001), ibuprofen (Swiatkiewicz et al., 2013), the flavonoid pycnogenol (Khan et al., 2013) and the peptide carnosine (Tsai et al., 2010), and inhibition of pro-inflammatory mediators (Watanabe et al., 2004; Zhao et al., 2007), reduce inflammation and prevent neurodegeneration in MPTP animal models.

REGULATORS OF MICROGLIAL ACTIVATION STATES

Microglial activation, astrogliosis and invasion of activated peripheral immune cells trigger deleterious events in the brain that lead to neuronal loss and progression of PD (Hirsch and Hunot, 2009). These observations led to several animal studies and clinical trials testing a variety of established anti-inflammatory molecules (see Table 2). Acetylsalicylic acid, a COX-1/COX-2 inhibitor, exhibits neuroprotective effects in vitro and in MPTP animal models of PD (Teismann and Ferger, 2001). A prospective clinical study shows that consumption of non-aspirin NSAIDs may delay or prevent the onset of PD (Chen et al., 2003). A Cochrane collaboration study that analyzed several prevention trials and observational anti-inflammatory studies reveals that, while ibuprofen use might reduce the risk of developing PD, there is no existing evidence that supports NSAID use in PD prevention (Rees et al., 2011). A clinical study in PD cases shows an association between use of ibuprofen and lower PD risk; however, this association is not shared by the other NSAIDs studied (Gao et al., 2011). Similarly, minocycline, a tetracycline antibiotic that showed promising anti-inflammatory properties in PD animal models (Du et al., 2001; He et al., 2001; Wu et al., 2002), did not provide any symptomatic improvement in PD patients (NINDS NET-PD Investigators, 2008). See Table 2 for the list of anti-inflammatory agents studied in PD. Hence, NSAID use appears to provide benefits in PD susceptibility in some cases (Wahner et al., 2007; Samii et al., 2009; Gagne and Power, 2010; Steurer, 2011); however, this beneficial effect of NSAIDs was not replicated in several other studies (Shaunak et al., 1995; Bornebroek et al., 2007; Samii et al., 2009; Becker et al., 2011).
One study even suggests that anti-inflammatory drug treatment may be detrimental if given at a later stage of the disease (Keller et al., 2011).

TABLE 2 (fragment) | Anti-inflammatory agents studied in PD:
- NSAIDs; clinical trials; anti-inflammatory; delay or prevent onset of PD (Chen et al., 2003; Wahner et al., 2007; Samii et al., 2009; Gagne and Power, 2010; Steurer, 2011)
- NSAIDs; clinical trials; anti-inflammatory; exacerbate PD symptoms / do NOT improve PD risk (Shaunak et al., 1995; Bornebroek et al., 2007; Samii et al., 2009; Becker et al., 2011)
- Minocycline; mouse MPTP and mouse 6-OHDA models

This therapeutic approach aiming to counteract general neuroinflammation has failed in several disease therapies, as reviewed elsewhere (Pena-Altamira et al., 2016). Collectively, these studies indicate that non-specific inflammatory blockade is unlikely to be beneficial for disease treatment. Concurrently, the data on the dual role of microglial activation have led to the emergence of a novel therapeutic strategy in other inflammatory diseases such as rheumatoid arthritis, ankylosing spondylitis, and multiple sclerosis (MS) (Rau, 2002; Braun et al., 2007; Wilms et al., 2010; Noda et al., 2013). This approach of shifting the M1 to the M2 phenotype might be more beneficial for neuroprotection than completely blocking microglial activation with anti-inflammatory drugs. Hence, a more specific treatment approach tied to the M1/M2 activation stage, inhibiting M1 responses and/or promoting the shift from M1 to M2 phenotypic responses, is needed in PD.

Targeting M1 Polarization State: Inhibition of Pro-inflammatory M1 Phenotype

M1 activation of microglia results in a pro-inflammatory and pro-killing output. To inhibit the pro-inflammatory damage caused by M1 activation of microglia, its downstream signaling pathways could be targeted. The M1 phenotype is induced by IFN-γ via the JAK/STAT signaling pathway, and targeting this pathway may arrest M1 activation. In fact, studies show that inhibition of the JAK/STAT pathway leads to suppression of the downstream M1-associated genes in several disease models, including experimental allergic encephalomyelitis models and myeloproliferative neoplasms (Mascarenhas et al., 2014). Another approach to suppress M1 activation would be to target pro-inflammatory cytokines such as TNF-α, IL-1β and IFN-γ and decrease their ability to interact with their receptors on other cell types. TNF has been targeted through different approaches in PD animal models to suppress M1-associated toxicity. A single injection of lentivirus expressing dominant negative TNF (DN-TNF) into the rat substantia nigra, concomitantly with 6-hydroxydopamine (6-OHDA) lesioning of the striatum, attenuates dopaminergic neuron loss and behavioral anomalies in rats (McCoy et al., 2008). In another study, intended to examine the role of TNF in a delayed and progressive neurodegeneration model, rats administered DN-TNF in the substantia nigra 2 weeks after the 6-OHDA lesion showed no further dopaminergic neuron loss even 5 weeks after 6-OHDA, suggesting that TNF is an essential mediator of inflammation and hence a promising therapeutic target in PD (Harms et al., 2011). In addition, adeno-associated virus (AAV)-mediated transduction of dopaminergic neurons with human ras homolog enriched in brain with the S16H mutation [hRheb(S16H)] attenuates nigrostriatal toxicity in the 6-OHDA rat model of PD (Kim et al., 2011, 2012).
This protective effect is mediated by the induction of phosphorylated cAMP response element-binding protein (p-CREB), glial cell line-derived neurotrophic factor (GDNF), and brain-derived neurotrophic factor (BDNF) in the unilateral MPP+ neurotoxin PD model (Nam et al., 2015). PPARs are actively involved in microglial activation and inflammatory pathways. PPAR agonists are postulated to be beneficial for PD and other neurodegenerative diseases (Agarwal et al., 2017). Administration of the PPARγ agonist rosiglitazone arrests degeneration in both the striatum and SNpc by decreasing TNF-α production and modulating microglial polarization in the MPTPp (MPTP + probenecid) progressive mouse model of PD (Carta et al., 2011; Pisanu et al., 2014). Pioglitazone, a PPARγ agonist, prevents tyrosine hydroxylase (TH)-positive neuron loss in the substantia nigra and partially averts striatal DA loss in the MPTP mouse model of PD. Pioglitazone treatment decreases microglial activation, iNOS production and nitric oxide-mediated toxicity in both the striatum and substantia nigra (Dehmer et al., 2004). However, a recent clinical trial concluded that pioglitazone did not modify progression in early PD (NINDS Exploratory Trials in Parkinson Disease (NET-PD) FS-ZONE Investigators, 2015). On the other hand, pioglitazone and rosiglitazone are currently being evaluated in clinical trials for their potential to reduce the progression of AD. In addition, treatment of LPS/IFN-γ-activated neuronal and glial cultures with a PPARγ endogenous ligand, 15-deoxy-Δ12,14-prostaglandin J2, inhibits the pro-inflammatory response through a CD200-CD200R1-dependent mechanism (Dentesano et al., 2014). Unpublished data from our group show that administration of a PPAR agonist protects dopaminergic neurons in the SNpc and neurites in the striatum in the MPTP mouse model of PD. This PPAR agonist reduces the LPS-induced M1-associated pro-inflammatory cytokine IL-1β in BV2 cells and primary astrocytes in a PPARα-independent manner. Thus, PPAR agonists are potential therapeutic molecules for PD through their ability to inhibit M1 microglia-induced neuroinflammation. Alterations in the expression of cannabinoid receptors and in endocannabinoid concentrations have been described in PD pathogenesis (Garcia et al., 2015). The endocannabinoid system includes the cannabinoid receptors CB1 and CB2, the endogenous ligands (anandamide and 2-arachidonoylglycerol, 2-AG), and their regulatory enzymes. CB1 receptors are abundant in neurons, whereas CB2 receptors are most specifically expressed in glia (Onaivi, 2006; Lanciego et al., 2011; Fernandez-Suarez et al., 2014; Sierra et al., 2015). The expression of CB2 receptors significantly increases during microglial activation (Maresz et al., 2005; Yiangou et al., 2006), and CB2 receptors are reported to be localized in the substantia nigra and significantly downregulated in PD patients (Garcia et al., 2015). A naturally occurring CB2 receptor agonist, β-caryophyllene (BCP), prevented nigral DA neuron and striatal TH loss, reduced glial activation and inhibited pro-inflammatory cytokines in the rat rotenone model of PD (Javed et al., 2016). Another study shows that a non-selective cannabinoid agonist protects against loss of DA neurons in the substantia nigra and of DA in the striatum of MPTP-intoxicated mice. In addition, this cannabinoid agonist also reduces MPTP-induced motor deficits and microglial activation (Price et al., 2009). Modification of the endocannabinoid system to reduce pro-inflammatory toxicity may provide a novel therapeutic avenue for PD treatment.
In the MPTP mouse model of PD, Tanshinone-I prevents dopaminergic neurotoxicity, improves motor deficits and preserves striatal neurotransmitters (Wang et al., 2015). Ghrelin, a stomach-derived endogenous ligand for growth hormone secretagogue receptor 1a (GHS-R1a), prevents loss of nigral dopaminergic neurons and striatal neurites, and improves motor performance in the MPTP mouse model of PD. Ghrelin reduces toxicant-induced microglial activation, production of pro-inflammatory cytokines (IL-1β, TNF-α) and iNOS levels in MPTP mice (Jiang et al., 2008; Moon et al., 2009). Piperine, a naturally occurring bioactive molecule, attenuates the loss of TH-positive neurons in the substantia nigra and MPTP-induced motor anomalies. In addition, piperine decreases MPTP-induced microglial activation, pro-inflammatory IL-1β expression and apoptosis in these mice. See Table 3 for a summary of other potential molecules that act by inhibiting M1 activation in PD models.

Targeting M2 Polarization State

Molecules with the capability to activate the anti-inflammatory M2 phenotype, or to promote the transition of the pro-inflammatory M1 phenotype to the anti-inflammatory M2 phenotype, could be useful in the treatment of PD. Anti-inflammatory molecules such as IL-10 and beta interferons produce neuroprotection by altering the M1/M2 balance. Cerebral infusion of AAV expressing human IL-10 in an MPTP mouse model of PD decreases the expression of pro-inflammatory iNOS and, importantly, enhances the levels of anti-inflammatory mediators including IFN-γ and transforming growth factor-β (TGF-β). Infusion of AAV expressing human IL-10 prevents the loss of striatal DA and reduces TH transcript levels, suggesting neuroprotection in MPTP-intoxicated mice (Schwenkgrub et al., 2013; Joniec-Maciejak et al., 2014). Treatment with pioglitazone, a PPARγ agonist, causes a phenotypic conversion of microglia from the pro-inflammatory M1 state to the anti-inflammatory M2 state. This conversion is strongly linked to an increase in phagocytosis of misfolded protein deposits, resulting in reduced amyloid levels and an associated reversal of contextual memory deficits in AD mice (Mandrekar-Colucci et al., 2012). As mentioned before, pioglitazone treatment decreases microglial activation, iNOS production and NO-mediated nigrostriatal toxicity in the MPTP mouse model (Dehmer et al., 2004). The endocannabinoid system is implicated in PD pathogenesis (Garcia et al., 2015). In a chronic MPTPp model of PD, administration of JZL184, an inhibitor that prevents degradation of the endocannabinoid ligand 2-AG, prevents MPTPp-induced motor impairment and protects the nigrostriatal pathway (Fernandez-Suarez et al., 2014). MPTPp mice treated with JZL184 exhibit microglial phenotypic changes and restorative microglial activation, along with increased TGF-β and GDNF levels. Glatiramer acetate is a Food and Drug Administration (FDA)-approved drug for MS treatment (English and Aloi, 2015), and its neuroprotective effect is mediated by activation of the microglial M2 phenotype (Giunti et al., 2014). Glatiramer acetate protects dopaminergic neurons in an MPTP mouse model by aiding the recruitment of T lymphocytes to the SN, while inhibiting microglial activation and upregulating GDNF expression (Benner et al., 2004; Burger et al., 2009). Glatiramer acetate also exhibits neuroprotection in Alzheimer's disease animal models, where its treatment induces microglial co-localization with amyloid fibrils and a switch in microglial phenotype that produces insulin-like growth factor 1.
Dimethyl fumarate (DMF), an approved drug for MS treatment, protects against the depletion of striatal DA and its transporters and reduces the MPTP-induced increase in IL-1β and COX-2 activity in the MPTP mouse model of PD. DMF also modulates microglial activation states and restores nerve growth factor levels to provide neuroprotection in MPTP-intoxicated mice (Campolo et al., 2017). Other molecules that are reported to possess neuroprotective properties by inducing M2 microglial activation are listed in Table 3.

Abbreviations (Table 3): AAV, adeno-associated virus; DA, dopamine; IL, interleukin; MPTP, 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine; 6-OHDA, 6-hydroxydopamine; TH, tyrosine hydroxylase; TNF, tumor necrosis factor.

CONCLUDING REMARKS AND FUTURE DIRECTIONS

The critical role of microglia in most neurodegenerative pathologies, including PD, is increasingly documented through many studies. Until recently, microglial activation in pathological conditions was considered to be detrimental to neuronal survival in the substantia nigra of PD brains. Recent findings highlight the crucial physiological and neuroprotective role of microglia and other glial cells in neuropathological conditions. Anti-inflammatory treatments targeting neuroinflammation in PD and other diseases by delaying or blocking microglial activation have failed in many trials, owing to the lack of a specific treatment approach, possibly the stage of disease at treatment, and an incomplete understanding of the mechanisms underlying microglial activation. With the updated knowledge of different microglial activation states, drugs that can shift microglia from the pro-inflammatory M1 state to the anti-inflammatory M2 state could be beneficial for PD. The M1 and M2 microglial phenotypes probably need further characterization, particularly under PD pathological conditions, for better therapeutic targeting. We support targeting of microglial cells by modulating their activation states as a novel therapeutic approach for PD.

AUTHOR CONTRIBUTIONS

HF conceived the project; SS and HF wrote the manuscript.
Autoimmune disease mouse model exhibits pulmonary arterial hypertension

Background

Pulmonary arterial hypertension is often associated with connective tissue disease. Although there are some animal models of pulmonary hypertension, an autoimmune disease-based model has not yet been reported. MRL/lpr mice, which have hypergammaglobulinemia, produce various autoantibodies and develop vasculitis and nephritis spontaneously. However, little is known about pulmonary circulation in these mice. In the present study, we examined the pulmonary arterial pressure in MRL/lpr mice.

Methods and results

We used female MRL/lpr mice aged between 12 and 14 weeks. Fluorescent immunostaining showed that there was no deposition of immunoglobulin or C3 in the lung tissue of the MRL/lpr mice. Elevation of interferon-γ and interleukin-6 was recognized in the lung tissue of the MRL/lpr mice. Right ventricular systolic pressure, the Fulton index and the ratio of right ventricular weight to body weight in the MRL/lpr mice were significantly higher than those in wild type mice with the same background (C57BL/6). The medial smooth muscle area and the proportion of muscularized vessels in the lung tissue of the MRL/lpr mice were larger than those of the C57BL/6 mice. Western blot analysis demonstrated markedly elevated levels of prepro-endothelin-1 and survivin, as well as decreased endothelial nitric oxide synthase phosphorylation, in the lung tissue of the MRL/lpr mice. A terminal deoxynucleotidyl-transferase-mediated dUTP nick end-labeling assay showed resistance against apoptosis of pulmonary arterial smooth muscle cells in the MRL/lpr mice.

Conclusion

We showed that MRL/lpr mice develop pulmonary hypertension. MRL/lpr mice appear to be a useful model for studying the mechanism of pulmonary hypertension associated with connective tissue diseases.

Introduction

Pulmonary hypertension often complicates connective tissue disease (CTD) and determines its prognosis. Recently, the survival of patients with CTD-associated pulmonary hypertension (CTD-PH) has been improved by using targeted pulmonary vasodilators or active immunosuppressive therapy [1]. However, the outcome is still insufficient and the mechanism of CTD-PH remains unclear [2]. The characteristics of the pulmonary arteries in CTD-PH are supposed to be similar to those of idiopathic pulmonary arterial hypertension (IPAH), and they consist of vasoconstriction and organic lumen narrowing due to abnormal proliferation of endothelial or smooth muscle cells. Immunologically, T lymphocytes differentiate into T helper (Th) 1, Th2, Th17, and regulatory T cells, and imbalance of Th1/Th2/Th17 and regulatory T cells contributes to the pathogenesis of CTD [3,4]. In addition, interleukin (IL)-6 is known to be a key molecule in pulmonary arterial remodeling in pulmonary hypertension [5]. However, the detailed mechanisms of CTD-PH remain unclarified.
According to the Nice classification, CTD-PH is classified into Group 1 (pulmonary arterial hypertension), like IPAH, because the treatment methods are similar to those for IPAH [6]. However, CTD-PH also has characteristics of Group 1' (pulmonary vein occlusion), Group 2 (pulmonary hypertension due to left-sided heart disease), and Group 3 (pulmonary hypertension due to lung diseases), because it is sometimes accompanied by pulmonary vein occlusion, fibrosis of the left ventricular myocardium, and interstitial pneumonia. Further, CTD-PH, except in the case of scleroderma, can be expected to improve with immunosuppressive therapy [1,7], which is another way in which CTD-PH differs from IPAH. Thus, to approach clinical CTD-PH, an experimental model of CTD that spontaneously develops pulmonary hypertension is necessary, in addition to monocrotaline-administered mice and vascular endothelial growth factor (VEGF) inhibition with hypoxic exposure mice, which are popular animal models of pulmonary arterial hypertension [8,9]. MRL/lpr mice spontaneously develop vasculitis and glomerulonephritis due to hypergammaglobulinemia and expression of various autoantibodies. They are widely used as models for lupus nephritis and Sjögren's syndrome [10]. However, little is known about the onset of pulmonary hypertension in these mice. In the current study, we examined the hemodynamics and histopathological features of pulmonary vessels, the expression of molecules associated with pulmonary vasoconstriction and vasodilatation, as well as medial smooth muscle cell apoptosis in MRL/lpr mice.

Animals and ethics statement

MRL/lpr mice (#000485) were purchased from the Jackson Laboratory (Bar Harbor, ME, USA). We used female MRL/lpr and C57BL/6 mice aged between 12 and 14 weeks (body weight range: 19.7-32.5 g). As positive controls for fluorescent immunostaining of C3 and immunoglobulin, kidneys of 23-week-old MRL/lpr mice were used. Mice were housed with food and water ad libitum at room temperature under a 12 h:12 h light-dark cycle. The investigations conform to the Guidelines for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication, 8th Edition, 2011). Our research protocol was approved by the Fukushima Medical University Animal Research Committee. All efforts were made to minimize the suffering of the animals. All of the mice were sacrificed by cervical dislocation after the experiments.

Measurements of right ventricular pressure and ventricular weight

Anesthesia was performed by intraperitoneal injection of tribromoethanol (0.25 mg/g of body weight). A 1.2F micromanometer catheter (Transonic Scisense Inc., London, ON, Canada) was inserted from the right jugular vein, and right ventricular pressure was measured and analyzed with LabScribe3 software (iWorx, Dover, NH, USA). In order to evaluate right ventricular hypertrophy, the right ventricle (RV) was dissected from the left ventricle (LV) including the septum (S), and the RV/(LV+S) weight ratio (Fulton index) and the RV/body weight ratio were calculated [11,12].
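To make the two hypertrophy indices concrete, the following is a minimal sketch in Python; the weights are invented example values, and the function and variable names are ours for illustration, not from the paper:

```python
def fulton_index(rv_mg: float, lv_plus_s_mg: float) -> float:
    """Fulton index: RV weight divided by the combined weight of LV plus septum."""
    return rv_mg / lv_plus_s_mg

def rv_body_weight_ratio(rv_mg: float, body_weight_g: float) -> float:
    """RV weight (mg) normalized to body weight (g)."""
    return rv_mg / body_weight_g

# Hypothetical example values for a single mouse (illustration only)
rv_mg, lv_plus_s_mg, body_weight_g = 24.0, 92.0, 25.3
print(f"Fulton index RV/(LV+S): {fulton_index(rv_mg, lv_plus_s_mg):.3f}")
print(f"RV/BW ratio (mg/g):     {rv_body_weight_ratio(rv_mg, body_weight_g):.3f}")
```

Both ratios normalize RV mass to a within-animal reference, so a higher value indicates right ventricular hypertrophy independently of overall body size.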
Histological analysis

After measurement of RV pressure, the lungs were fixed with 4% paraformaldehyde, embedded in paraffin, and sectioned at 3 μm. After Elastica-Masson (EM) staining or immunostaining of α-smooth muscle actin (α-SMA) (Santa Cruz Biotechnology Inc., Santa Cruz, CA, USA), pulmonary arteries (external diameter of 20-50 μm) were randomly selected (60-90 vessels per individual mouse). The medial wall area (the area between the internal and external lamina) was measured with ImageJ 1.48 (National Institutes of Health, Bethesda, MD, USA) and divided by the vessel area (the area surrounded by the external lamina) [12]. Each vessel (external diameter < 25 μm) was classified as non-muscular, partially muscular or fully muscular. The percentage of muscularized pulmonary vessels was determined by dividing the sum of partially and fully muscular vessels by the total number of vessels [8,12]. Measurements were performed blinded to mouse information.

Deposition of immunoglobulin (IgG) and C3 in the lung tissue was detected using fluorescein isothiocyanate (FITC)-labeled primary antibodies [10]. Briefly, frozen sections (8 μm) fixed with acetone were washed three times with phosphate-buffered saline (PBS) for 5 minutes each and blocked with PBS containing 3% bovine serum albumin (BSA) at room temperature for 1 hour. The sections were stained with FITC-conjugated goat anti-mouse C3 antibody (MP Biomedicals, Solon, OH, USA) or FITC-conjugated rat anti-mouse IgG antibody (BioLegend, San Diego, CA, USA) diluted 1:100 with PBS containing 1% BSA for 1 hour at room temperature. After washing three times with PBS and deionized water, fluorescent images were captured with a fluorescence microscope (BZ-X700, KEYENCE Co., Osaka, Japan) at fixed exposure times. We used kidney tissue from 23-week-old MRL/lpr mice as a positive control.

Assessment of cytokines in lung tissue of MRL/lpr mice

Levels of cytokines in the lung tissue of the MRL/lpr mice were measured using a mouse Th1/Th2/Th17 cytokine kit (BD Biosciences, San Jose, CA, USA). Cytokines detectable by this kit were Th1-related cytokines (IL-2, interferon (IFN)-γ, tumor necrosis factor (TNF)), Th2-related cytokines (IL-4, IL-6, IL-10) and a Th17-related cytokine (IL-17A). The lung tissue samples were solubilized with lysis buffer (10 mM Tris, 2 mM EDTA, 20 μg/ml antipain, 20 μg/ml leupeptin, 1 μM DTT and 1 μM PMSF). The protein concentrations in the lysates were then measured using the Bradford method and adjusted to 3 mg/ml. Capture beads conjugated with antibodies specific for each cytokine were added to the lysate of the lung tissue. These samples were incubated with phycoerythrin-conjugated antibody for 2 hours at room temperature in the dark. After a sandwich complex was formed, fluorescent intensity was measured by flow cytometry (BD FACSCanto II, BD Biosciences) and analyzed with FlowJo v10.3 software.

Apoptosis of pulmonary smooth muscle cells

To estimate the apoptosis of pulmonary smooth muscle cells, we performed a terminal deoxynucleotidyl-transferase-mediated dUTP nick end-labeling (TUNEL) assay (Promega, Madison, WI, USA) according to the manufacturer's instructions. We randomly selected at least 10 fields in each specimen and counted nuclei in the medial smooth muscle layer. The result was expressed as the percentage of TUNEL-positive nuclei out of the total number of nuclei.

Statistical analysis

Data are expressed as mean ± SD, and statistical analyses were performed using the Mann-Whitney U test. A value of P < 0.05 was considered statistically significant.
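As an illustration of the vessel quantification and the statistics described above, here is a minimal sketch assuming hypothetical per-mouse vessel counts (all group values are invented for illustration; SciPy's mannwhitneyu implements the Mann-Whitney U test used in the paper):

```python
from scipy.stats import mannwhitneyu

def percent_muscularized(non: int, partial: int, full: int) -> float:
    """Percentage of muscularized vessels = (partial + full) / total vessels x 100."""
    total = non + partial + full
    return 100.0 * (partial + full) / total

# Hypothetical per-mouse counts of (non-muscular, partially, fully) muscular vessels
mrl_lpr = [percent_muscularized(*c) for c in [(20, 18, 32), (25, 15, 30), (22, 20, 28), (18, 17, 35)]]
c57bl6 = [percent_muscularized(*c) for c in [(41, 17, 12), (45, 14, 11), (38, 16, 10), (44, 15, 9)]]

# Two-sided Mann-Whitney U test; P < 0.05 considered significant, as in the paper
u_stat, p_value = mannwhitneyu(mrl_lpr, c57bl6, alternative="two-sided")
print(f"MRL/lpr mean muscularized: {sum(mrl_lpr) / len(mrl_lpr):.1f}%")
print(f"C57BL/6 mean muscularized: {sum(c57bl6) / len(c57bl6):.1f}%")
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_value:.4f}")
```

A rank-based test such as Mann-Whitney is a natural choice here because group sizes are small and normality of the percentages cannot be assumed.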
Deposition of C3 and immunoglobulin

It has previously been reported that deposition of C3 and IgG is recognized on renal glomeruli in MRL/lpr mice [10]; therefore, in the present study, we investigated whether similar findings were present in lung tissue. Fluorescent immunostaining revealed that deposition of C3 and IgG was not detected in the lung tissue (Fig 1b, 1c, 1e and 1f), whereas both were clearly visualized in renal glomeruli as positive controls (Fig 1a and 1d).

Cytokines in lung tissue of MRL/lpr mice

Next, we assessed the cytokine profiles of the C57BL/6 mice and the MRL/lpr mice. In the lung tissue of the MRL/lpr mice, the levels of IFN-γ and IL-6 were significantly higher compared with those of the C57BL/6 mice (2.0 ± 1.1 vs. 0.58 ± 0.44 pg/mg protein, P < 0.05, and 2.9 ± 3.4 vs. 0.6 ± 0.51 pg/mg protein, P < 0.05, respectively). Although there was no statistical significance, IL-17A and TNF were higher, whereas IL-10 was lower, in the MRL/lpr mice. IL-4 in the lung tissue of the MRL/lpr mice was almost undetectable in this assay. IL-2 levels were nearly equivalent in the two groups (Fig 2).

Pulmonary medial wall thickening and vessel muscularization in MRL/lpr mice

Since medial wall thickening and muscularization of the peripheral pulmonary arteries are major features of the pathogenesis of pulmonary arterial hypertension [12,15], we next examined the lung sections with EM staining and immunostaining of α-SMA. The medial smooth muscle layer of the MRL/lpr mice was significantly greater than that of the C57BL/6 mice (Fig 4A). In addition, the proportion of fully muscular vessels in the MRL/lpr mice was significantly higher than that in the C57BL/6 mice, whereas the proportion of non-muscular vessels in the MRL/lpr mice was lower (Fig 4B). The percentages of partially muscular vessels were nearly equivalent between the two groups.

Vasodilation- and vasoconstriction-related molecules in lung tissue of MRL/lpr mice

Vasoconstriction is caused by an imbalance between vasoconstrictive factors and vasodilators; the former include ET-1 as well as oxidative stress, and an example of the latter is nitric oxide (NO). The expression and activation of eNOS are important in the production of NO [16,17]. Therefore, we investigated the expression and activation of eNOS in the lung tissues of the MRL/lpr mice by western blotting. The level of eNOS expression did not differ significantly between the two groups; however, eNOS phosphorylation was significantly decreased in the MRL/lpr mice (Fig 5A). Western blot using a primary antibody against an ET-1 epitope demonstrated bands at around 28 kDa. Since the mature form of ET-1 has a small molecular weight (2.5 kDa), these 28-kDa bands were considered to represent prepro-ET-1, as reported previously [18]. The expression of prepro-ET-1 in the lung tissue of the MRL/lpr mice was significantly elevated compared to that of the C57BL/6 mice (Fig 5B). These results suggest that vasoconstriction due to NO impairment and increased ET-1 production is one of the pathogenic mechanisms of pulmonary hypertension in MRL/lpr mice.
Expression of survivin and apoptosis of pulmonary arterial smooth muscle cells in lung tissue of MRL/lpr mice

It has been reported that survivin plays an important role in pulmonary arterial smooth muscle cell proliferation and resistance to apoptosis [19,20]. Therefore, we investigated survivin expression in the lung tissue of the MRL/lpr mice by western blotting. As shown in Fig 6A, survivin was upregulated in the lung tissue of the MRL/lpr mice. TUNEL staining revealed fewer TUNEL-positive nuclei in the medial smooth muscle layer of the MRL/lpr mice (22.6 ± 5.1 vs. 28.8 ± 2.9%, P < 0.05) (Fig 6B). These results suggest that increased survivin expression is one of the mechanisms underlying pulmonary arterial medial wall thickening in the MRL/lpr mice.

Discussion

In the present study, we report for the first time elevated RVSP, RV hypertrophy, medial wall thickening, and muscularization of pulmonary arteries in MRL/lpr mice. Impaired eNOS phosphorylation, elevation of prepro-ET-1 and upregulation of survivin were considered possible molecular mechanisms.

It has been reported that serum levels of anti-dsDNA antibodies in MRL/lpr mice are already high at the age of 14 weeks, and that further elevation of anti-dsDNA antibodies, as well as a decline in C3 levels, is observed after 14 weeks [10]. Thus, the MRL/lpr mice used in the current study were at a relatively early stage of lupus; nevertheless, pulmonary vascular lesions were already advanced. In addition, we demonstrated that deposition of IgG and C3 in pulmonary vessels was not directly related to the onset or progression of pulmonary vascular remodeling. The cytokine profile of the lung tissue of the MRL/lpr mice exhibited a shift of the Th1/Th2 balance toward Th1 polarization, as well as IL-17A elevation. However, it is still unclear how cytokine imbalance contributes to the onset of pulmonary hypertension in CTD, and the underlying mechanisms need to be elucidated further.

The pulmonary vessel morphology of the MRL/lpr mice in the current study showed essentially the same findings as pulmonary arterial hypertension, characterized by medial wall thickening due to smooth muscle proliferation and the transition from non-muscular to muscular arteries. However, the plexiform lesion, histopathologically the most advanced change, was not observed. This may be related to the fact that the degree of RVSP elevation in the MRL/lpr mice was relatively low compared with other established models, such as mice treated with monocrotaline or with a VEGF inhibitor plus hypoxia [8,9]. In addition, left ventricular fibrosis, interstitial pneumonia, and pulmonary vein lesions, which are often found in CTD, especially scleroderma, were not observed in the MRL/lpr mice of the current study. These findings suggest that the MRL/lpr mouse is a disease model closer to SLE or Sjogren's syndrome than to scleroderma, not only in nephritis but also in pulmonary hypertension.

Fagan et al. reported that more than 50% of eNOS is required to maintain normal pulmonary vascular tone [16]. In the current study, although eNOS expression in the MRL/lpr mice did not differ significantly from that in the C57BL/6 mice, eNOS phosphorylation was significantly lower. Since prepro-ET-1 is excessively produced in the lung tissue of the MRL/lpr mice and ET-1 has been reported to inhibit eNOS activation [21], the decreased eNOS activation in MRL/lpr mice might be caused by ET-1 overexpression.
Although survivin is important for pulmonary medial wall thickening in pulmonary hypertension [19,22,23], there have been no studies on the association between CTD-PH and survivin. We showed that survivin expression was upregulated in the lung tissue of the MRL/lpr mice, similar to other pulmonary hypertension models. In addition, since the expression of survivin has been reported to be promoted by ET-1 [20], a similar mechanism may be involved in the lung tissue of MRL/lpr mice.

Study limitations

There were some limitations in this study. First, the effect of pulmonary hypertension on mouse survival was not assessed, since identifying the cause of death would require a long observation period and a large number of mice. Second, NO production in the pulmonary artery could not be measured because of the technical difficulty of isolating endothelial cells or the pulmonary artery.

Conclusion

In the present study, we demonstrated for the first time that MRL/lpr mice spontaneously develop pulmonary arterial hypertension caused by an imbalance between vasodilation and vasoconstriction, as well as organic vessel stenosis. In addition, eNOS, ET-1 and survivin were found to play pivotal roles in the mechanism of pulmonary hypertension in MRL/lpr mice (Fig 7). Although further studies are required to elucidate the mechanisms of CTD-PH, MRL/lpr mice may be a useful model for investigating its pathophysiology.
Observing, spanning and shifting boundaries: working with data in non-clinical practice

Purpose – Effective use of data across public health organisations (PHOs) is essential for the provision of health services. While health technology and data use in clinical practice have been investigated, interactions with data in non-clinical practice have been largely neglected. The purpose of this paper is to consider what constitutes data, and how people in non-clinical roles in a PHO interact with data in their practice.

Design/methodology/approach – This mixed methods study involved a qualitative exploration of how employees of a large PHO interact with data in their non-clinical work roles. A quantitative survey was administered to complement insights gained through qualitative investigation.

Findings – Organisational boundaries emerged as a defining issue in interactions with data. The results explain how data work happens through observing, spanning and shifting of boundaries. The paper identifies five key issues that shape data work in relation to boundaries. Boundary objects and processes are considered, as well as the roles of boundary spanners and shifters.

Research limitations/implications – The study was conducted in a large Australian PHO, which is not completely representative of the unique contexts of similar organisations. The study has implications for research in information and organisational studies, opening fields of inquiry for further investigation.

Practical implications – Effective systems-wide data use can improve health service efficiencies and outcomes. There are also implications for the provision of services by other health and public sectors.

Originality/value – The study contributes to closing a significant research gap in understanding interactions with data in the workplace, particularly in non-clinical roles in health. Research analysis connects concepts of knowledge boundaries, boundary spanning and boundary objects with insights into information behaviours in the health workplace. Boundary processes emerge as an important concept for understanding interactions with data. The result is a novel typology of interactions with data in relation to organisational boundaries.

Introduction

Big data developments, underpinned by societal, organisational and technological change, have been the focus of considerable attention in most industries and research disciplines. In health, technological and clinical aspects of big data have been viewed as particularly important. Health services, however, are supported by many non-clinical departments, employing a considerable workforce that is essential for the functioning of health systems. The COVID-19 pandemic has brought to the fore the critical importance and connectedness of health-related data and work processes, from basic data gathered locally to multinational data sets. As a significant part of data work is performed by non-clinical staff, it is important to understand how non-clinicians interact with data.

With this background, the study into data use in non-clinical practice was conducted in a public state-based health system in Australia. Health services are provided by 120,000 full-time equivalent staff positions for a population of 7.9 million. Like most health systems, the organisation recognises that clinical and non-clinical staff need to work together to maximise data use and enhance patient care in the current information environment (Lavelle et al., 2019).
Complex health systems depend on the effective use and flow of information through all organisational parts. We must understand data use not only at the patient's bedside but also in the offices of many different professional groups, including accountants, educators, data analysts and linen suppliers, to name a few. Understanding these broad interactions with data is critical to inform organisational decisions, including education and training.

The main focus of this paper is on data use in the workplace. Relevant literature and methodological details are followed by a description of the types of data used in non-clinical practice, with a particular focus on organisational boundaries. Finally, the implications of the research results are discussed.

Literature review

We consider the current state of research into data use in the workplace, particularly in health, and review the literature concerning the dynamics of evolving knowledge boundaries in organisational contexts, which emerged as a key issue in our research. The concept of community of practice is also considered.

Data use in the workplace

Data use in organisations in general, and in the non-health public sector in particular, has been investigated. The concepts of using data for accountability, organisational learning and achieving equity are applicable to the public health sector but have not been interrogated to the same extent as in education and business. Existing literature on data use in the public health sector tends to focus on clinical health outcomes, cost-effectiveness or efficient health services delivery (Carvalho et al., 2019). While the public health sector is under constant pressure to reduce costs and improve health-care services (Wu et al., 2016), the potential benefits of data use in health are numerous. Optimised health and clinical data analysis allows the identification of health patterns that can contribute to disease prevention and cure, improving patient safety and quality of life. Increased digitisation enables the capacity and capability for data analysis (Carvalho et al., 2019). In parallel, reliable, efficient and agile data analytics systems are being developed to cope with the exponential growth in the volume, complexity and sources of health data (Wu et al., 2016).

Data collection and storage in health care organisations are evolving with advancements in automation, artificial intelligence, deep learning and robotics. A considerable strategic investment in data management and analysis is warranted to extract and acquire intelligence. Despite all these significant changes, the potential for public health services to capture and use data is largely untapped (Raguseo, 2018). The emphasis on data-related competencies, coupled with skills development and demands for organisational support, is emerging rapidly. Auffray et al. (2016) purport that data plays a more important role in health than in most industries and that the workforce is the most important success factor in optimising data use. While it seems intuitive that clinicians would use data from patient and client interactions for quality improvement, the skills, data literacy and organisational structures needed to collect, analyse and operationalise changes to policy or practice are hard to achieve within the scope of clinicians' practice. The complexity of data work requires support from different specialisations.
Therefore, it is imperative that health services employ or engage data experts, giving them access and opportunities to liaise with clinicians to gather and interpret data to suit the clinical context and needs of health services (Wills, 2014). A report from the American Medical Informatics Association stressed the importance of a co-ordinated approach to data stewardship principles and effective approaches, including investment in workforce training (Hripcsak et al., 2014). Raghupathi and Raghupathi (2014) reported that the managerial issues of ownership, governance and standards are pivotal data-related considerations.

To benefit from the opportunities afforded by technological advancements, data needs to be embedded in everyday work practices. Understanding information landscapes in health organisations is crucial to ensure the most effective interventions supporting data-related practices. The few studies in this area rarely include the use of information by non-clinical staff. Bossen et al. (2014) investigated the role of medical secretaries in the care of records in health care infrastructures. The authors pointed out the relative invisibility of these work roles despite their importance in the new workflows that arise from the reduced boundaries of digital work environments. While insights from existing literature are relevant, there is a significant gap in our understanding of how current research findings apply to non-clinical work practices. Furthermore, studies conducted in other contexts are insufficient to understand data use in multi-professional, highly complex PHOs. It is particularly problematic that a limited understanding of the data-related practices of a significant proportion of the health workforce impedes evidence-based improvements in practice.

Knowledge boundaries

The concept of knowledge boundaries in organisations is particularly relevant in the context of this study and warrants special attention. Current data developments are part of a continuum of technological, cultural and organisational transformations where knowledge boundaries constantly evolve. In the previous section, we noted that the literature about data use in health identified the importance of a co-ordinated organisational approach. Complex organisations, however, depend on established and new knowledge developed and practised by many organisational units and specialised groups. Carlile (2002) commented that specialised knowledge within organisations is problematic, as the knowledge that drives innovation in one function can hamper innovation across functions. Carlile (2002, p. 442) noted:

It is at these "knowledge boundaries" that we find the deep problems that specialized knowledge poses to organizations. The irony is that these knowledge boundaries are not only a critical challenge, but also a perpetual necessity because much of what organizations produce has a foundation in the specialization of different kinds of knowledge.

Although knowledge boundaries are a necessity and a perpetual challenge, they may be hard to detect and understand as they are determined by invisible processes. All organisations, particularly large health systems, rely on accepted practices. Established procedures and processes, however, are based on years of accumulated tacit knowledge (Gerson and Star, 1986) that may not be obvious to newcomers and outsiders.
Even common data-sharing systems, such as information technology (IT) or data management systems, are often invisible infrastructures (Steger et al., 2018). As Steger et al. explain in an overview of previous studies, common infrastructures are often taken for granted and embedded in use to the extent that they become invisible. According to Steger et al., they may also support the functions of one group while impeding the work of another, or enable sharing at one time while blocking it in the future. These contradictions are highly applicable in the public sector, which is typically complex and based on practices established over many years. In large PHOs, in particular, there are many groups with various specialised knowledges and ways of working. Furthermore, health thrives on both innovation and strict regulation. Work around boundaries is significant in dealing with differences and negotiating new ways of working (Oldenhof, Stoopendaal and Putters, 2016). Work across boundaries, however, tends to be resisted. Carlile (2002, p. 442) reflected on work across functional divisions:

[. . .] I describe knowledge as localized, embedded, and invested in practice (Bourdieu, 1977; Lave, 1988). This specialization of "knowledge in practice" (Carlile, 2002) makes working across functional boundaries and accommodating the knowledge developed in another practice especially difficult.

People typically find it costly to change their knowledge and skills, considering the time and effort invested in learning. People from different functions need to be willing to adjust their knowledge and be capable of influencing the change of knowledge in another function (Carlile, 2002). Carlile (2004, p. 557) noted that "[. . .] as novelty increases, the amount of effort required to adequately share and assess knowledge also increases." Expertise plays a part in the change. Broniatowski and Magee (2017, p. 14) found that "knowledge boundaries are not present when experts are able to recognize that their specialized knowledge does not apply". Broniatowski and Magee found that in most novel situations, experts are able to avoid the "competency trap" when they realise the limitations of their knowledge. Common knowledge, shared objects, methods and trade-off methodologies all play a part in working across boundaries (Carlile, 2004).

The role of boundary spanners in crossing knowledge boundaries is well recognised in the literature. Haas (2015, p. 1034) overviewed the literature on boundary spanners, gatekeepers and knowledge brokers and defined "boundary spanners as links between a unit and its environment". Haas described gatekeepers as a subcategory of boundary spanners, and knowledge brokers as people who do not belong to any of the groups to which they provide information. Long et al. (2013, p. 2) described brokers as people who "reach across a structural hole". Boundary spanning, according to Long et al., is a form of brokerage and includes the crossing of organisational, departmental and disciplinary boundaries to aid knowledge exchanges. Professional tribes and silos are common in health care (Long et al., 2013), so it is important to consider health-specific exchanges at boundaries. Fox (2011) noted that medical technologies can act as inhibitors or facilitators of innovation depending on how they connect with existing knowledge, roles and understanding of the scope of practice.
Several studies have considered boundary spanning in health practice in relation to particular roles (Fick-Cooper et al., 2019; Meyer et al., 2014; Swaithes et al., 2021). Learning in practice and by doing plays an important part in crossing boundaries in health. Johannessen's (2018) investigation of professional boundary-blurring work, where nurses learn and apply medical knowledge, found that learning by doing and participation in a community of practice facilitated professional crossover. This type of learning gave agency to nurses and helped in solving practical problems, but Johannessen acknowledged that there were reservations and issues of legitimacy inherent in crossing professional boundaries. The concept of logic bootstrapping is used by Burton-Jones et al. (2020) to explain how institutional entrepreneurs achieve their goals in a health setting. Entrepreneurs' goals are, by their nature, outside current practices. Burton-Jones et al. explained that proponents of competing logics start a process of act-learn-adjust, similar to bootstrapping, to negotiate a change.

The concept of boundary objects is used to explain how some of these contradictions are resolved in practice:

Boundary objects are objects which are both plastic enough to adapt to local needs and the constraints of the several parties employing them, yet robust enough to maintain a common identity across sites. They are weakly structured in common use, and become strongly structured in individual-site use. These objects may be abstract or concrete. They have different meanings in different social worlds but their structure is common enough to more than one world to make them recognizable, a means of translation. The creation and management of boundary objects is a key process in developing and maintaining coherence across intersecting social worlds. (Star and Griesemer, 1989, p. 393)

Star and Griesemer (1989) identify types of boundary objects, including repositories, such as libraries and museums; diagrams or atlases that enable symbolic communication; and standardised forms, which help to overcome local uncertainties. Star (2010) explained that boundary objects arise from information needs, and information and work requirements. They "are a sort of arrangement that allow different groups to work together without consensus" (Star, 2010, p. 602). Rehm and Goel (2014) consider the use of a boundary cluster where artefacts may not be boundary objects in their own right.

The selection of boundary objects is highly political and is important as it may signify a professional identity. Fox (2011, p. 81) described the cultural meaning of boundary objects at the time when the practice of sterilisation was introduced: "Sterile clothes, masks, heat-sterilized instruments were boundary objects because they had the secondary function of assigning surgeons the role of healer, both within their own community and perhaps in a wider lay community also". Kimble et al. (2010) note that the political nature of boundary objects requires that brokers are involved in object selection. Over time, standardisation of boundary objects begins, usually initiated by administrators and regulators. However, "all standardized systems throw off or generate residual categories. These are categories that include 'not elsewhere categorized', 'none of the above', or 'not otherwise specified'. As these categories become inhabited by outsiders or others, those within may begin to start other boundary objects [. . .] and a cycle is born" (Star, 2010, p. 614).
The literature on organisational knowledge boundaries highlights mechanisms and processes that inhibit and aid boundary crossing. It also highlights the significance of practice and the local contexts in which exchanges happen in complex dynamics between human and inanimate players.

Communities of practice

In the thirty years since Jean Lave and Etienne Wenger formulated the concept of communities of practice (CoP), this idea has been a cornerstone of thinking about knowledge processes and learning in organisations and professional settings. People who gather freely to develop their knowledge in a particular domain and improve their practice form a CoP. Wenger (2004) recognised three elements of a CoP: domain, community and practice, each having significance. Brown and Duguid (2001) stressed the importance of practice, which had often been neglected in favour of community. CoPs often exist in organisations, but they are fluid formations (Brown and Duguid, 1991) and need to be distinguished from formal work groups, teams and informal networks (Wenger and Snyder, 2000). CoPs have a critical role in continuing the history of the practice and ensuring growth beyond its boundaries (Wenger, 2010). Considering CoPs as complex learning systems, Wenger (2010, p. 182) saw a "profound paradox" in the "coexistence of depth within the practices and active boundaries across practices". These contradictory processes are mutually dependent and enhance each other, according to Wenger. Organisational structures and power relations also determine the existence of CoPs, their nature and boundaries (Fuller et al., 2005). While CoPs are important for professional learning, it is important to remember that they are not always the main form of support for organisational learning, and many important practices exist and develop without CoPs.

Rationale

Significant attention in research and practice has been given to big data and associated technological and organisational changes. In health, the focus has been predominantly on clinical work over everyday interactions in different settings. Data use in workplaces is ubiquitous, underpinning prominent organisational changes and work results. However, there remains a substantial gap in our understanding of everyday interactions with data. This limited understanding has particular significance in health, where a large proportion of employees work in non-clinical roles, supporting and enabling clinical work. As information, and the consequent data flow, rarely happens in silos, it is critical for the health care sector to develop a deeper understanding of the dynamics of data use in non-clinical practice. While studies of other work groups exist and may be applicable, it is impossible to recognise similarities and differences in information behaviours without a research-based understanding of non-clinical practice in health as a reference point. In the context of connected data with far-reaching impacts on health care, the current research gap is significant. Research into interactions with data in non-clinical roles will contribute to much-needed evidence for decision-making in health organisations. Because non-clinical work has many similarities with other organisational settings, especially in the public sector, results are likely to be relevant and applicable to other contexts.
This novel research needs to connect with the existing understanding of knowledge development and organisational change to support theoretical and practical advancements.

Methodology

The study was designed to explore issues underpinning employees' interactions with data in the workplace in non-clinical health settings. This paper focuses on the following questions:

Q1. What constitutes data in non-clinical practice within a public health organisation (PHO)?

Q2. How do people in non-clinical roles in a PHO interact with data?

The individual, project and organisational issues underpinning interactions with data were explored in semi-structured interviews.

Study design

The exploratory study used a predominantly qualitative methodology. A survey was used as a quantitative research method to complement insights gained through qualitative explorations and enable data triangulation. "Data", the central concept in the study, has several possible variations in meaning. The following definition of data clarifies the intended meaning in this project: "Information is any pattern of organization and data is information selected for further processing" (Sukovic, 2008, p. 81). The scope of this definition includes additional details:

Information means a pattern of organization, which can be contained in any physical manifestation, and it is given meaning by a human being under certain contextual conditions. The concept of information includes the physical manifestation, the process of making sense of that information or 'being informed,' and contextual considerations. Data means information produced, selected, and/or assembled for further processing. (Sukovic, 2008, p. 73)

Because interactions with data assume personal meanings in a context, the exploratory part of the study was designed to elicit participants' reflections on a variety of situations. The phenomenological approach and hermeneutics as the "art and science of interpretation" (Robinson, 2002, p. 196) provided a philosophical background and broader framework for the study. Grounded theory, with close connections to phenomenology, provided suitable analytical methods and techniques. The literature review was also guided, to an extent, by grounded theory. In the initial literature review, the project team searched for papers related to the research topic. An in-depth review of the literature related to knowledge boundaries was performed after analysis of the research data and was informed by study findings.

Data-gathering

The study was conducted at four sites, each providing different types of support to the system. Sites were selected to provide a variety of participant roles and experiences. An important consideration was the elimination of any ambiguity between clinical and non-clinical work. Consequently, selected research sites were non-clinical and all provided services across the whole system.

Data were gathered in two stages using mixed methods. In the first, exploratory stage, qualitative methods were used to investigate issues of data use and identify main themes. In the second stage, a survey was developed on the basis of the qualitative study to investigate the extent to which identified themes were applicable to a broader cohort of participants. Table 1 shows a summary of data-gathering methods.

4.2.1 Data-gathering methods. Semi-structured interviews and discussions in one workshop were used to obtain data for the exploratory stage of the study. Qualitative data include 22 h of recorded discussions and field notes.
4.2.1.1 Interviews. The critical incident technique, well established in both health and information studies (Urquhart et al., 2003; Hughes et al., 2007), was used to structure conversations about participants' experiences of data use. Introductory questions were asked about a participant's job, and further discussion evolved around two critical incidents. First, participants were asked to discuss a project or an instance of a regularly performed task where it was easy to find information and work with data. Examples may have referred to complex projects or tasks, but the process of working with data was satisfactory and suitable for the task. The second critical incident concerned an opposite example: participants were asked to discuss a case where it was difficult to work with information and data. All interviews (25 in total) were conducted by members of the research team, including the Chief Investigator. Interviews lasted about 45 min on average; the longest lasted about 1 h and 10 min, and the shortest 30 min. All interviews were fully transcribed.

4.2.1.2 Workshop. The workforce skills and training working group, as part of the state health analytics initiative, conducted a workshop at a partner university to discuss a draft data capability framework. Members of the research team observed the workshop and recorded ethnographic notes. The event was recorded and fully transcribed. The workshop recording lasted 2 h and 10 min.

4.2.1.3 Survey. The survey was developed to complement qualitative data by eliciting a larger number of responses to specific questions arising from qualitative explorations. The survey was administered via Qualtrics. The questionnaire returned 177 responses from employees at the four data-gathering sites. Responses were not forced, so the questions yielded different numbers of responses.

4.2.2 Participants. Participants interviewed during Stage 1 were volunteers who had responded to a call for participation or were recommended by someone in their workplace. A snowballing technique was used to identify participants. Purposeful sampling was used in the final stages of qualitative data-gathering to obtain the inputs needed to understand the emerging themes and ensure data saturation. The workshop participants were selected by the organiser. Maximum deviation sampling was used to identify a variety of job types and levels, including senior managers, in each organisation. Participants' work roles encompassed a wide range of job types and levels, including jobs in administration, health education, finance, design, data analytics, reporting, human resources and clinical support. Most study participants were from the state capital city, but some lived and worked in regional centres and rural areas. Overall, the number of male and female participants was balanced (Table 2). Some survey respondents did not provide information about their gender, although "other" was an option.

Analysis

Audio recordings, transcriptions and notes constituted the qualitative data from the workshop and interviews. Software used for analysis included NVivo for qualitative analysis and SPSS Statistics for survey analysis. Analysis of interviews and the workshop started at the time of data-gathering. Listening to recorded interviews and reading transcriptions allowed emerging themes to inform further data-gathering. Layers of coding were developed at different levels of abstraction.
Grounded theory techniques were used for analysis (Corbin and Strauss, 2015; Strauss and Corbin, 1998), especially open and axial coding. Process coding was used to understand the project development described in the interviews. Flip-flop techniques (Strauss and Corbin, 1998) were also used to interrogate the data from the opposite direction. Descriptive analysis techniques, including multiple response analysis, were used to analyse the survey data. Independent samples t-tests, analysis of variance (ANOVA) and multiple regression were used to analyse the data and test hypotheses.

Ethical conduct of research

Approval to conduct the study was obtained from the Hunter New England Human Ethics Committee (approval number LNR/17/HNE/296). All participants who provided qualitative data granted their consent for data-gathering. All transcripts were de-identified. Detailed data-management procedures were developed prior to the onset of the study to ensure appropriate management and confidentiality. Audio files and transcriptions are archived at a secure location. Reporting of research findings aims to reveal as much information as required to support results while preserving privacy and confidentiality. A numerical system was developed to identify participants. Excerpts from interviews are cited in numerical form, meaning Site#/Participant#. For example, Participant 1/2 means the second participant at Site 1. Site-specific details were removed from transcripts to protect individual and organisational confidentiality.

Limitations

The study was conducted within one health system. It is a large system, likely to be similar to public sector organisations around the world, especially in developed countries, but every organisation is also unique. Data were not gathered from a representative sample of participants. For these reasons, study results cannot be generalised. The reliability of the study was ensured by careful study design, including the selection of data-gathering and analytical methods appropriate for the research questions. Participants, employed in a wide variety of roles in urban and rural areas, were able to comment on a wide range of experiences. Triangulation enabled a deeper understanding of the research phenomena. The qualitative methodology allowed the exploration of meanings and experiences from participants' personal perspectives, whereas the survey enabled data-gathering from a larger pool of participants to further investigate findings arising from qualitative research. The reliability of the study was further ensured by researchers' individual and group data analysis.

Research results

The research results are presented to answer the two research questions related to data use in organisational contexts. First, the types of data used in non-clinical practice are considered to answer the question of what constitutes data in non-clinical practice. Some relevant survey results related to information sharing are included in this section. Second, findings arising from reflections about participants' interactions with data in their workplace respond to the second research question and are central to this section. As discussions of organisational issues have a prominent place in participants' accounts, this paper focuses on organisational aspects, particularly on the theme of working around boundaries.
Three types of data work in relation to organisational boundaries emerged from the analysis of the discussions about issues experienced during interactions with data: observation, spanning and shifting of boundaries.

Types of data used in non-clinical practice

Interviewees explained how they derived data from clinical information, including information about patients, medication and equipment; corporate information, such as financial and workforce-related information; instructions, evaluation results and other types of organisational information; and specific subject matter information. Commonly used information sources ranged from organisational documents, various databases and grey literature to publicly available information from the internet. People, including colleagues, subject matter experts and professional networks, were recognised as a significant source of information. Participants were heavy users of corporate data, which was used most frequently and perceived as a priority over clinical and patient data.

Survey results confirmed and further clarified the qualitative findings. Respondents were asked to select all the types of data they use for their work (Table 3) and to prioritise the data types used (Table 4). The most used and most important types of data were corporate data: organisational (i.e. policies, emails, administrative), workforce and financial data. The least used type of data was clinical observation (i.e. patient) data. This is not surprising because the respondents worked in non-clinical units. It is also worth noting that the survey respondents used the whole spectrum of data.

Three survey questions elicited responses about the clarity of the reason for data collection, data availability and data accessibility. A five-point Likert scale was used for responses (almost never, rarely, sometimes, often and almost always). These questions yielded 110 responses. The following percentages of respondents chose "often" or "almost always" as their answers to the three questions:

(1) Reasons for information collection in their teams were clear to 84% of respondents.
(2) Information needed for work is available at their workplace, as reported by 53% of respondents.
(3) Access to the information they need was reported by 46%.

The results from an independent samples t-test show that there was no significant difference between males and females regarding easy access to information (t(107) = -1.409, p = 0.162) or perception of the information-sharing culture at their workplaces (t(99) = -0.510, p = 0.611). There was no significant difference between those with a supervisory role and those without regarding understanding reasons for collecting data (t(108) = 1.095, p = 0.276) or ease of access to data (t(107) = -0.172, p = 0.864). A one-way ANOVA test indicated there was no significant relationship between years of working experience and the perception of easy access to information (F(3, 105) = 1.225, p = 0.304).
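As a rough illustration of the analyses reported above, the following minimal sketch uses Python's scipy.stats in place of SPSS; all responses are randomly generated placeholders on the five-point Likert scale, not the survey data:

# Minimal sketch of the survey analyses reported above, using Python's
# scipy.stats in place of SPSS; all responses here are randomly generated
# placeholder Likert ratings (coded 1-5), not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
item = rng.integers(1, 6, size=110)  # one Likert item, coded 1-5

# Percentage answering "often" or "almost always" (codes 4 and 5)
pct = 100.0 * np.mean(item >= 4)
print(f"{pct:.0f}% chose 'often' or 'almost always'")

# Independent samples t-test (e.g. gender vs. ease of access), df = n1 + n2 - 2
male, female = item[:50], item[50:]
t, p = stats.ttest_ind(male, female)
print(f"t({len(male) + len(female) - 2}) = {t:.3f}, p = {p:.3f}")

# One-way ANOVA (e.g. years of experience vs. perceived access)
groups = [rng.integers(1, 6, size=n) for n in (20, 30, 35, 24)]
f, p = stats.f_oneway(*groups)
print(f"F({len(groups) - 1}, {sum(map(len, groups)) - len(groups)}) = {f:.3f}, p = {p:.3f}")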
Data use and organisational boundaries

Participants' reflections on fitting the novel demands, challenges and opportunities arising from data use into the existing structures of a large public service sector featured prominently in the study data. Intra-organisational boundaries were a consistent thread arising from the data, at all levels, across influencing factors, between organisation types and across the spectrum of participants. Three types of interactions with data were described in relation to organisational boundaries:

(1) Observing boundaries, when data use happens within organisational boundaries or is limited by boundary enforcement.
(2) Spanning boundaries, when people work across boundaries, often creating boundary objects and practices to respond to new challenges and opportunities.
(3) Shifting boundaries, when intentions and practices are focused on opening boundaries and creating new spaces to enable effective interactions with data.

Table 5 summarises aspects of data work in relation to organisational boundaries. The matrix categorises issues experienced during data work into Professions and disciplines; Work roles; Work practices, including policies and procedures; Access to information; and Complex organisation. The matrix highlights how boundary issues emerge and play out across boundary work.

5.2.1 Boundary issues. In this section, we look more closely at how participants described the boundary-related issues that underpin their work. Boundaries sometimes defined or influenced data work, and in many cases, they had to be challenged and negotiated.

5.2.1.1 Professions and disciplines. Study participants described a strong sense of professional identity, often defined and differentiated in relation to other professions and knowledge disciplines. Data use is determined, to an extent, by profession-defined boundaries, which were also seen as an impediment to workflows.

5.2.1.1.1 Observing professional boundaries. Rigid professional boundaries were generally described as an impediment to effective work with data. This experience arose from an inability to understand the language of another professional group, from different meanings being assigned to terminology and from the differing perspectives of multiple professional groups. Other participants discussed major issues when data analysts or the IT team lacked an understanding of business needs. A major finance project, for example, was managed around blocks imposed by a key IT unit that, according to the manager of the finance unit, did not support the finance team or consider business needs. Similarly, participants who had worked or wanted to pursue working across professional boundaries found it difficult to convince their team members from different professional backgrounds to collaborate. Firm professional boundaries manifested for many participants in strict adherence to rules and mistrust around what constitutes appropriate data use.

5.2.1.1.2 Spanning professional boundaries. Participants who understood different professional fields described ways in which they spanned boundaries. An ability to understand the language of another professional group and understand their needs was critical in obtaining and providing data. Expressions like "I've been taught the same language as them" and "I can talk that language" are used by people who successfully work across professional boundaries. Creating boundary-spanning opportunities often depends on people with different skill sets finding ways to apply them. Participants interested in the broader organisation, outside their team and profession, often created or led successful collaborations. Novel projects and individual boundary spanners benefit from connections through informal networks. People invited others to participate in broader initiatives, often because of previous contact and awareness of their work, using personal connections to enrich and change local practices.

5.2.1.1.3 Shifting professional boundaries.
The Workshop participants discussed what data capabilities would look like in practice and how to create new professional boundaries. Understanding how different groups work with information is critical for making connections. Workshop participants saw benefits in the environmental approach, so capabilities could be seen and applied in different settings. In an example cited above, different views of information were seen as an impediment to cross-professional communication. Similarly, a deep understanding of how different groups interact with data was seen as a primary enabler in tailoring information for a team and individuals within the team. Participants who worked in different professional roles or on a range of tasks described how a broad range of capabilities enables someone to deal with new problems. Some noted a gradual blurring of professional boundaries, and education and training provided in modular chunks, as a future trend. For these participants, the opening of professional boundaries is not about if but how it is going to happen.

5.2.1.2 Work roles. Work roles are about job descriptions and established work delineation. Work roles are related to professional and disciplinary boundaries, but they are not necessarily the same.

5.2.1.2.1 Observing role boundaries. The issue of role hierarchy concerns people's ability to access information or manage projects. Participants who managed data-oriented projects often had to manage vertically and bring other sections of the unit or organisation on board. In the absence of power leverage, participants had to find other ways to achieve their goals. Divisions around work roles were discussed as unhelpful practices. Similar to observing professional boundaries, enforced divisions around work roles were perceived as the inability or unwillingness of staff to see possible opportunities for contribution or collaboration. Participant 2/6 described how his technical team of people with different professional backgrounds was dismissed in the planning stages of project development despite some of them having good knowledge of the subject matter. IT was not embedded in all stages of project development, as these roles are seen as separate. Participant 2/6's recommendation was, "rather than come to us with a solution, have us be part of the solution".

5.2.1.2.2 Spanning role boundaries. Barriers to data-oriented work were discussed as coming from people in higher positions in the hierarchy. However, hierarchies were also noted as useful by some, providing pathways for the resolution of obstacles around role divisions. Participant 3/3 would seek the support of his supervisor, who understood the value of doing work outside strict role descriptions. Similarly, this may happen across divisions when a senior manager talks to a manager in another unit to overcome obstacles in their area of responsibility. In these instances, people in positions of authority support and enable boundary spanning. Open communication and team members who are empowered to apply their judgement have a significant impact on spanning role boundaries and improving outcomes. Participants discussed successful interventions outside their regular role as a solution to address broader or unresolved issues that improved results for the whole team.

5.2.1.2.3 Shifting role boundaries. Boundary shifters are key in creating boundary objects and processes and then creating spaces for new roles.
They have typically worked as boundary spanners for some time or are in a position to plan boundary shifting. Participant 1/7 explained how "super users" regularly facilitate the implementation of new technological systems: "Once you're a super user, generally super users are super users for lots of things. They become conduits for information, and they direct all sorts of processes that they have to look after." Participant 4/1 wanted to implement a major change that challenged existing boundaries. This participant purposefully employed a business analyst with knowledge of IT and business needs to enhance communication between the relevant units. In this case, the business analyst worked as a boundary spanner to enable the planned boundary shift. The role of boundary shifters and their characteristics will be discussed in more detail later in the article.

5.2.1.3 Work practices. Work practices encompass official procedures as well as work processes and "ways of doing things". Practices assume repetition and are often fostered by the unit's and team's culture.

5.2.1.3.1 Observing boundaries around work practices. Public service rules and regulations provide a structure for work practices, but they are also perceived as burdens: "red tape" and ineffective "ticking boxes" for compliance purposes. Determining and enforcing structures early in a project can have value, but can also stifle exploration:

We wanted to do additional, kind of, more forward looking programs or more advanced projects focused on more of how do we use our clinical data, for example, in a more meaningful way, while they [sic! Ministry] were more focused on establishing the foundations, like, the governance, the definitions, the so-called architecture, et cetera. They wanted to do that first instead of playing in that space. (Participant 2/1)

On the positive side, structure and rules are viewed as a transparent way of doing business. Procedures, in this case, allow people to focus on the content of their work and reduce possible risks.

5.2.1.3.2 Spanning boundaries around work practices. Participants described the reasoning behind their boundary-spanning behaviours as primarily being concerned with connecting to the purpose of what they were doing or trying to achieve. Participant 1/6 described significant data work done in his own time as necessary to rescue a project and avoid challenges resulting from approval delays. This behaviour demonstrates an understanding of consequences and organisational pressures outside of the participant's own role. Participant 4/2 initiated a sizeable piece of work to revise the way data was managed and shared in the unit by acting on her professional insight. Similar to spanning role boundaries, personal initiative and professional judgement play a key role in spanning boundaries around established practices and developing new ways of working with data.

5.2.1.3.3 Shifting boundaries around work practices. Boundaries around work practices usually start shifting after a period of boundary spanning. Boundary spanners become shifters as they develop boundary objects and then processes to frame discussions about new practices. With persistence and alliances with the right people, they modernise processes.

5.2.1.4 Access to data. Issues around access predominantly concern big data. It is an area where high-level organisational decisions, as well as practices and culture in smaller units, play a part in restricting data use or developing sharing solutions.
Similar issues arise with other data when organisational restrictions create unnecessary data boundaries.

5.2.1.4.1 Observing data boundaries. Restricted data access was raised by a number of participants, many noting it as a great source of frustration. Some participants attributed blockages caused by data restrictions to bureaucratic processes, a lack of collaboration and customer service, and even controlling behaviour. A senior manager grappled with the issue of access to big data:

Right now, I feel the data is being held hostage. We have a business analytics group in [one unit], we have a business analytics group within [another unit]. There are walls being built around it saying that nobody can just simply access data. And that's fair enough off the cuff, but these groups are filled with technicians who can grab data out of a database, but [. . .] they don't actually understand the business problems that are trying to be resolved and they don't understand the businesses that they're working with. (Participant 4/1)

The same participant attributed blockages to the exercising of control and described the problem in the following way:

There are certainly some entrenched positions around "our information, your information" and I don't think really, we've got clarity around who owns the data rather than who is the custodian of - and who is the custodian of managing the data.

Another senior manager, however, explained restrictions as necessary to protect confidential information and to avoid system crashes and inefficient use of the team's time. Participant 2/6 summarised this line of argument:

A lot of people when they talk about data, they'll say [. . .] I need data. 'What do you need?' 'Oh, just give me everything and then I'll work it out'. That's hard to do because obviously can't give everybody everything. So, it's really having a firm idea of the types of data that you want and the type of business questions you're trying to answer.

5.2.1.4.2 Spanning data boundaries. When participants discussed ways of dealing with restrictions, they frequently mentioned issues with problem identification followed by the development of technological solutions. Participants talked about finding alternative routes after experiencing blocks. For some, this meant avoiding people who blocked access or, when restrictions were perceived as reasonable or unavoidable, concerted collaboration was needed to get access. Participant 3/1, for example, described how managing restricted data access required contacting another organisational unit to obtain and summarise data on their behalf. Participant 3/1 discussed the issues and solutions with his team, developed a program based on the obtained data, and added comments to his coding to help others understand the rationale behind decisions. The final program was passed to the original organisation as it concerned their work, and they wanted it. Flexibility and openness to sharing are prerequisites for this type of interaction.

5.2.1.4.3 Shifting data boundaries. The importance of long-term solutions to restricted data access was discussed by participants with a deep knowledge of technological solutions and business needs. One participant was employed in a new role to do this type of work. Participant 2/3 explained his plans to develop a data pool with agile access as a method "that can quickly empower the end users of the data to make decisions and doesn't require a six-month translation project for each hospital site".
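The commented, hand-off-ready summary program Participant 3/1 describes is not reproduced in the study; as an entirely hypothetical sketch of that practice (the file name and field names are invented for illustration), it might look like this:

# Entirely hypothetical sketch of the kind of commented summary script
# Participant 3/1 describes: data obtained from another unit is summarised,
# with the rationale recorded in comments so the receiving team can reuse it.
import csv
from collections import Counter

def summarise(path):
    # We only received an extract (direct database access was restricted),
    # so we summarise by unit rather than by individual record.
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # "unit" and "status" are illustrative field names.
            counts[(row["unit"], row["status"])] += 1
    return counts

if __name__ == "__main__":
    for (unit, status), n in sorted(summarise("extract.csv").items()):
        print(f"{unit}\t{status}\t{n}")

The point of the practice is less the computation than the documentation: the comments carry the rationale across the organisational boundary along with the program itself.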
5.2.1.5 Complex organisation. In a very large public service, the strategic direction and goals of the whole organisation need to be part of local decisions. However, organisational complexity and competing interests influence work on the ground.

5.2.1.5.1 Observing complex organisational boundaries. Many restrictions on work practices in a complex organisation arise from the intricacy of communication. In system-wide projects, data are gathered from different work units. Measures and codes used in local hospitals and other units are based on local practices and may not be clear to other parts of the system. This results in major issues in working with data across the system, leading to the loss of "faith in data" and a sense of working in isolation. Participants explained, however, that complete standardisation or centralisation may not be appropriate either. On the other side of the data cycle, communicating the results of data analysis can have a major impact on recipients and decision-making. A fine-tuned, individualised approach is necessary. "The politics of messaging", as Participant 3/2 described it, involves careful consideration and decisions around communication. This participant explained how their unit analyses data and reports, including "the good, bad and ugly". The reports are reviewed by senior management before decisions are made on further communication. Participant 3/2 noted that it is about "balancing that independent voice while still recognising the impact". Finally, broad high-level decisions may be made in one part of an organisation and apply to the work of others without a good knowledge of their context. These decisions may have significant unintended consequences and further reinforce a sense that units work in silos.

5.2.1.5.2 Spanning complex organisational boundaries. Since communication and connection between different parts of the organisation can be a major issue, the creation of avenues for information sharing and discussion is important for work across organisational boundaries. Participant 1/8 described a complex initiative with many different stakeholders who were invited to work together simultaneously and communicate formally and informally:

They can see the data together and actually say what this means together. To me the key thing I'd like to see is that opportunity to continue because it's been very powerful in terms of helping to drive service change [. . .] It overcomes some of the issues around power and balance without directly confronting them.

Similarly, participants reported that solutions for the communication of sensitive results across units were found after a series of conversations about the meaning of the results before they became public.

5.2.1.5.3 Shifting complex organisational boundaries. Specialised units functioning as connectors may have an important role to play in spanning and shifting organisational boundaries. Participant 4/2 worked in a team described as "the middle of the funnel, we get the information from one side about the other side and work out what we can do with it and how we can help". Engaging stakeholders in the development of new services and practices is a well-known strategy to create something new, but could be used more often in data-related non-clinical work. Participant 2/6 suggested an innovation week or a hackathon where people from different sectors could work together. Participant 4/1 discussed the importance of developing tools to demonstrate new ways of using data to give them meaning.
For this participant, it is about improving business by understanding the value and use of data.

5.3 Boundary work: summary and role of boundary shifters

Observing, spanning and shifting are three types of work with data in relation to boundaries. In the previous section, we considered the five issues emerging in interactions with data. We showed how the issues are experienced and addressed by observing, spanning or shifting boundaries. In this section, we will provide a summary of the three types of boundary work and focus on the role of boundary shifters.

5.3.1 Observing, spanning and shifting boundaries. As previously noted, structure and boundaries are necessary for the organisation to function effectively. While rules and procedures are noted as helpful in providing structure, observing boundaries in data-related work is experienced predominantly as restrictive divisions. Data work inside organisational boundaries is characterised by experiences of difficulty in establishing shared meanings, unhelpful divisions between professions and roles, stifled innovation and efficacy due to rigid procedures and practices, and a lack of communication and transparency. Work across organisational boundaries, on the other hand, is described as more effective and positive. It is enabled by employees' ability to participate in inter-professional collaboration, by support for data work and cross-division assistance from people in positions of authority and, particularly, by open and effective communication. Boundary spanners have an important role to play in initiating and driving work across boundaries. The creation of boundary objects and processes is an important part of working across boundaries. In situations when working across boundaries is not the best response to the needs and opportunities, and under suitable conditions, the process of boundary shifting begins. It requires a vision to see new possibilities and actively create spaces for new roles and types of work outside existing divisions. On the boundary-shifting level, a broader circle of people is engaged in creating boundary objects and processes to model new ways of working and to open conversation and negotiation. In some instances, data itself became a boundary object as people from different sectors discussed how to reconcile data, which led to conversations about sources and the potential of shared standards. For example, Participant 4/2 discussed the significant work involved in "data cleansing" and the consideration of data sources and practices involved in all stages of data use in her area of work. People who work as connectors, formally or informally, notice gaps and are often in a position to initiate or lead changes, resulting in small and large boundary shifts. In some instances, new work roles or even teams are created to enable and support boundary shifting.

5.3.2 Boundary shifters. Boundary spanning and shifting concern local work in relatively small organisational units as well as large and ambitious initiatives with far-reaching impact. Regardless of organisational size, boundary spanning and shifting change work dynamics and may have long-term influences. Prominent boundary spanners in the study described their consistent work in an area they understood well. They also saw shortcomings in existing practices and opportunities for improvement. Through their professional vision and consistent work over a period of time, they became boundary shifters.
In most cases, participants viewed the change as a work in progress and did not describe a completed organisational shift. In the rare cases when they described a successfully completed change, it was discussed in the context of continuous improvement and a development mindset. An important part of boundary shifters' work is the creation of boundary objects and processes that serve as a material point of discussion. Boundary objects provide an example of what a change could look like. While other participants also referred to boundary objects (collaborative documents often served this purpose), boundary shifters referred to a suite of objects, interactions and processes. For boundary shifters, these objects served as points of discussion and negotiation, as communication across different units and professional groups was a constant part of their work. They were also used in attempts to establish collaboration and ensure improvements. Boundary shifters often progressed, or hoped to progress, their work to the point of official acknowledgement and standardisation, such as an adjusted job description or title. Boundary shifters typically had educational and/or professional backgrounds in more than one disciplinary area. They often had two degrees, one in a clinical area and another in IT. Alternatively, they combined work experience in their discipline with a strong technical background developed through some combination of personal interest and skills acquired on the job. In their current roles, they continued to tap into a range of their skills and experiences and saw it as a normal way of working. One of the workshop participants explained it in this way: "I suppose I look at things from the perspective of I started working life as a clinician, then program manager, and now I'm managing an analytics performance team. So really, for me, it should be seamless". A need for roles that enable new types of work and seamless transitions was described by a number of experienced employees. Participant 2/7, for example, managed connections between the health system and a large data bank. By the nature of her work and background, the participant observed cross-system trends. The main gap in her current area of work was the absence of a "translator" role, establishing a service link between users and the IT system. Participant 2/7's requests to create this type of role illustrated a need to standardise some of the boundary-shifting work. In terms of personal characteristics, boundary shifters often mentioned a keen interest in understanding problems. They did not compartmentalise their work, and they used a range of experiences to initiate change. Three vignettes showcasing boundary shifters are used to provide examples and illustrate the points raised in this section.

Vignette 1

Participant 2/5 has a degree in a clinical field and a master's in digital data. She works in a unit with separate teams of clinicians and technical staff. Her role is clinical, but she observes work processes holistically. Participant 2/5 works as a link between the two teams, hoping to make this part of her job description. During the interview, she referred to an intervention that illustrated the creation of a boundary object. As described by Participant 2/5, "the top hat" and the new button created for clinicians symbolised an aspiration to create seamless connections between the two teams:

We have a way to inactivate a drug [. . .]
I've noticed yesterday that they were doing it in the clinical team without mentioning anything to the technical team. Obviously, this has an impact on the technical team [. . .] Yesterday, I asked everyone what was the exact requirement to inactivate a drug and translate this requirement into data - kind of, the top hat, the record - it's just a character we add on the top of the description. I sent the email to the technical team to explain what this top hat means within the system and that they should inactivate another field that will be now out of scope. The technical team understood, and they are going to make the change today. They are going to create a button within an application for the clinical [team] to click this button to realign the technical content to their requirement.

Vignette 2

Participant 3/3 is a graphic designer, gamer and coder who studied programming at college. His role is to design reports from data provided by data analysts. When he started, graphic designers produced visualisations separately from data sources. Every change in data was replicated manually in reports. Participant 3/3 understood data work well enough to interpret what the analysts were doing, but not well enough to do it himself. When he began asking for data to automate data visualisation and reduce double-handling, he described being pushed into the "designer corner". The participant saw opportunities for collaboration and was supported by his supervisor, but convincing others was difficult. Accessing data was not within the scope of his role. He had to demonstrate his ability to "data custodians" to overcome trust issues around the perceived risk. Over time, by connecting with analysts, he produced examples of automated visualisations, which were successful. As a result, he introduced a new consultation process to model the type of practice he wanted to achieve. New reports and closer collaboration between analysts and designers served as a boundary object and boundary process that ignited discussions and further negotiation. When asked what he would tell a new person about the most important aspects of the role, he said, "It's really understanding the data and how to build something from that data that's visual". It is a different area of work from traditional graphic design.

Vignette 3

Participant 4/1 worked in a senior finance role and wanted to combine another unit's data with data available in his unit for a deeper understanding of issues. The goal was to create a better tool for trend prediction. There were many obstacles and challenges in accessing data from the other unit. Participant 4/1 worked top-down to shift formal organisational boundaries and processes while trying to progress the project. He saw an organisational issue in the limited understanding of business problems and broader thinking about the use of data. The following example from his experience served to illustrate the thinking that he wanted to foster, involving shifting existing boundaries ingrained in practices and regulations at his workplace:

I've worked with individuals back in the health sector in New Zealand; one was a former engineer; he was the chief executive as well and also entered the finance and he brought in (IT application). . .
So we could understand or relate back to the care we were providing: why do we treat some patients in the same diagnostic group in an entirely different way with this clinician, and this one is using something that seems to work a lot better in terms of efficacy of care, but also in terms of the efficiency of how we provide that care?

Facilitator: So, you think it is about understanding and interpreting data?

Interviewee: It's more than - it's also applying it to the business and applying it through to a culture and thinking, what does this mean for us?

Discussion

Data use in the workplace is a complex phenomenon; it is embedded in practice and contextual. One lesson the research team learnt during the interviews is that there is no such thing as a simple data-related task in the workplace. We confirmed throughout the project the observation that "[n]o piece of information is simple [. . .] even something so seemingly simple as what is a disease and how much does a treatment cost is enormously uncertain and complex" (Gerson and Star, 1986, p. 267). Study participants discussed highly sophisticated projects based on enormous data sets and high-end analytical capabilities as straightforward, yet spent large parts of their interviews explaining seemingly simple tasks - fraught with difficulties imposed by territorial behaviours, inaccessible information and a lack of collaboration on one side, or eased by the successful use of organisational structures to promote work and achieve goals on another. Boundary objects, processes and people in the roles of boundary spanners are key to our understanding of how interactions with data happen in practice. Teams perceived as open and supportive, and those with rigid rules and limited communication, both reportedly advanced their practices by working around and across boundaries using boundary objects and processes. Findings about the use of boundary objects support Carlile's reflection that

[. . .] the capacity of a boundary object is two-fold: both practical and political. Practical because it must establish a shared syntax or a shared means for representing and specifying differences and dependencies at the boundary. Political because it must facilitate a process of transforming current knowledge (knowledge that is localized, embedded, and invested in practice) so that new knowledge can be created to resolve the negative consequences identified (Carlile, 2002, p. 453).

Boundary process emerged as an important concept in the study, combining work with boundary objects and boundary clusters, in which, as described by Rehm and Goel (2014), individual artefacts may not be boundary objects. An important aspect of the boundary process is that work with artefacts is combined with attempts to negotiate new communication channels and collaboration opportunities. The thrust of transformational work and the realisation of Carlile's "political capacity of a boundary object" lie in boundary processes. Boundary processes include different boundary objects, clusters and communication channels aiming to achieve immediate and long-term goals. Boundary shifting is a result of continuous work across boundaries. Boundary-spanning mechanisms depend on the organisational context (Evans and Scarbrough, 2014).
These authors identified two approaches used to support knowledge translation in health: a "bridging" approach involving designated roles and activities to span boundaries between communities, typically to work across strong boundaries; and a "blurring" approach to de-emphasise boundaries and enable knowledge translation in daily practice. In the study, participants discussed both mechanisms. One participant, in particular, who worked in a "bridging" team, described instances of connecting units and communities, but also of "blurring" boundaries in daily practice. An insight from the study is that boundary spanners and shifters tend to work in whatever way is available to them and appropriate for the task at hand. The role of individuals who work on reshaping boundaries is well recognised in the literature. Haas (2015, p. 1033) identified six functions performed by boundary spanners, and all except "access to markets and commercialization of outputs" were identified in the study. Boundary spanners frequently aid information exchange. They support access to resources, sometimes by being able to manage blockages, but often by aiding intellectual access using their ability to interpret information from other disciplines. In some cases, they are group representatives. During later stages of change, boundary shifters trigger organisational change and take on the roles of coordinators and facilitators. As Haas noted, shifters' competencies are hard to develop, can be slow to emerge, and they may not achieve their goal. Nevertheless, their work challenges the status quo. While it is reasonable to expect that a community of practice (CoP) plays a part in boundary processes, we have not found any evidence that this was the case. All study participants referred to groups which could be classified as work teams and, in a few cases, networks rather than CoPs, according to the distinction made by Wenger and Snyder (2000). Brown and Duguid (2001) rightly emphasise the importance of practice, which creates an epistemic difference. Even with this distinction, a sense of community of practice was not evident in this study. While we cannot draw any conclusions from this finding, the presence of CoPs to support data work is worth further investigation. Snyder et al. (2003) discussed the potential of CoPs to bridge formal boundaries and aid beneficial boundary crossing in public organisations. A better understanding of how they function in everyday work may provide useful evidence for practice. In the qualitative stage, we did not find evidence that experiences of boundaries as blockages depended on an employee's place in the organisational hierarchy, and we explored this question further in the survey. Fifty-three per cent of respondents said information was available often or almost always in their workplace, but 46% said it was available only sometimes or rarely. The hypothesis that this depended on position in the organisational hierarchy was rejected, as there was no difference between people in supervisory and non-supervisory positions. These data indicate that the organisational complexities in data-related interactions described in interviews are not isolated and are experienced at all levels. The study opened a number of new questions which could not be adequately answered within its scope. An area for further investigation with important practical implications concerns the nature and source of boundaries.
Particularly relevant are practices involving paper-based and digital technologies. Boundary spanners and shifters try to introduce methods and practices suitable for digital environments. The nature of boundaries in the public sector, however, is often defined by rules, procedures and artefacts with a long paper-based tradition. Even in the highly technological health sector, data are kept on local spreadsheets and key pieces of information are discovered "in drawers" - residual practices that contradict the possibilities of new ways of working. Forms, procedures and authorisation requests, all common in public services (and often directly translated from paper-based versions), create boundaries around interactions with digital information. From this perspective, organisations need to consider the technological and cultural origins of work practices to remove unhelpful boundaries and enable convergence. Advanced data use has significant potential to improve health care. It requires the coordinated effort of the clinical and non-clinical parts of a health organisation. Study findings indicate that any organisational initiative aiming to advance data use in health needs to incorporate non-clinical work and workers. Digital capabilities are important, but they are only part of the picture. Education and training are critically important, but they need to be combined with targeted interventions to address work practices and organisational issues. While more work needs to be done to fully understand health organisation boundaries, practical initiatives to advance boundary spanning and shifting are advisable. It is particularly important to understand the role of local boundary spanners and shifters, and to consider possibilities for providing formal and informal organisational support for boundary processes. The question of the optimal speed of change is important and is answered on every organisational level - from decisions at the executive level, through implementation by middle managers, to everyday practice.

Conclusion

The study contributes novel research findings relating to boundaries and data work, with implications for the workplace and the improvement of health care. Investigations of data-related boundaries, the nature of boundary processes, and work around observing, spanning and shifting boundaries contribute new insights to the literature in organisational and information studies. The study findings are relevant to the broader public sector and applicable to similar work settings. The study raises some interesting questions for future studies. Organisational structures and practices which inhibit or promote data work require further investigation. There is also a need to enhance understanding of the workforce involved in different aspects of data work, particularly to deepen insights into the characteristics and behaviours of people who assume the roles of boundary spanners and shifters in data-related work. This is particularly relevant in areas such as health, in which innovation and standardisation are both of critical importance. Further research into the transition from a paper-based public sector to a digitally enabled system, in which clinical and non-clinical work is considered holistically, would provide insights of practical and theoretical significance.
Portraits of Brazil

The period of Modernism (from 1922 onwards) was also one which exhaustively attempted to understand and explain Brazil through literary works of various genres: fiction, poetry, essay. This paper attributes to these works the general category of "Portraits of Brazil". It is important to highlight its central issue: the miscegenation of three races (indigenous peoples, white Europeans and African slaves) has formed the Brazilian population. We examine here the sources of different diagnoses and proposals. The Modernists will rebut this argument. Mário de Andrade's Macunaíma, in spite of being dedicated to Paulo Prado, surely overflows and goes beyond such a conception. And Oswald de Andrade's poetry, favoring yet other perspectives, opens a horizon of unsuspected possibilities.

[. . .] and parodic ludicrousness of the novel overflow from a conception which, at bottom, is really narrow. As to the "Manifesto Antropófago", it bears neither the thesis of sadness nor that of the three sad races; on the contrary, there we find a reiterated assertion: "Happiness is the litmus test". Therefore, it repels sadness. And under different verbal refinements of form, it also repels covetousness, whether in the form of the "capitalist modus operandi" or by chastising Father Antonio Vieira (1608-1697) for the infamous pecuniary commission he received for mediating a loan for Brazil. There is, however, one trait taken up with great joy: lust, which matches the Freudian notion of "de-repressing" proclaimed by the Dionysian and libertarian posture of the Modernists. The fact is that Anthropophagism - of which Macunaíma, despite Mário de Andrade, would be the most hailed masterpiece of Modernism - gives the concern with miscegenation, which was seen by all as degrading, a different turn. And it finds an identification with an emblematic indigenous man, as far from reality as the Indian/knight-errant of Romantic Indianism had been, but one that is now a metaphor devised to propose a new relationship with the colonizer. A relationship which would not stem from the shame of being colonized: the Anthropophagus being proposed is, as it is known, the one that ritually devours the colonizer to absorb the values of his culture which he deems interesting. Insistently, the "Manifesto", by modulating the tone, dwells on one watchword: "The transfiguration of Taboos in Totems. Anthropophagy", which, as it is known, can only be consubstantiated if the taboo is cannibalized in the anthropophagic banquet. Naturally, the Modernists, like all vanguardists, aimed their batteries of derision against everything that had preceded them. Besides the Parnassians and Academics in general, the Romantics were ridiculed; among the latter, their special target was literary Indianism, in the figure of the Romantic prose writer José de Alencar and the poet Gonçalves Dias. The "Manifesto" reads "Against the torch holder Indian, Maria's son, Catherine de Médicis' godson and Antonio de Mariz' son-in-law", referring to Peri, the hero in O Guarani, a novel by the Indianist writer José de Alencar. And since Gonçalves Dias' name was mentioned... With respect to Gonçalves Dias, it can be said that he, recklessly - given what happened in posterity - wrote "Canção do Exílio" [The Song of Exile], which would become the most popularized Brazilian poem. It is a fine piece (Merquior 1990), unfortunately difficult to appreciate nowadays, after a century and a half of superposed layers of ostentatious flag-waving kitsch.
Written in Coimbra (Portugal), in 1843, "Canção do Exílio" opposes two adverbs of place expressing the two spaces of the poem: here and there. Here is the space of exile, about which barely anything is said; and there is the space of the homeland, about which the comparative terms used are so absolute that they become superlative: everything there is more. "Canção do Exílio" ended up being unrivalled as far as parodies in our literature are concerned, stretching pseudopods even to our National Anthem ("... thy smiling, lovely fields have more flowers / than the most cheerful land away / our woods have more life / our lives in thy bosom more about love to say..." 2) and the World War II Brazilian Expeditionary Force anthem ("However many lands I visit / May God forbid I cease to exist / before I come back to the place ... /.../ where the thrush gives me solace." 3). And much later, in 1973, in a minimalist poem by José Paulo Paes (1986), the song ended up being stripped to its basic terms - here and there - showing aversion to the former and favour with regard to the latter. It should be noted that the two adverbs are given merely two monosyllabic interjections; and one stanza made of five short nouns, all of which are oxytones in Portuguese, reinforces the rhyme in the adverbs and binds all the lines together. Thus, the poem, programmatically unlyrical, reveals the model's obsolete side resulting from trivialization, while it demystifies the privilege embedded in demagogy and reaches its apex in the desacralization of the model.

The interventions Oswald de Andrade resorts to, in general terms, are described below. "The Returning Home Song" opens with great impact, by boldly using an impropriety that shares an affinity with the "pungent metaphor" proclaimed by Modernist poets. Now it is the sea - no longer the bird - that chirps: in one single move a metaphor is born and a cliché is undone. It is worth noticing the comic demotion - for example, the diminutive little preceding birds, instead of just birds; from here, instead of just here - used in the transposition into a more colloquial style. Confirming the parodic inversion, the journey is made backwards, and "exile" becomes "return". Add to this an abasement of the thrush and palm trees, emblems of the homeland, which were very materialistically replaced with "Gold land love and roses". Indeed, the "love and roses" romantic markers are still there; however, they are preceded in the stanza by more concrete and self-seeking terms such as "gold land", forming a partnership that tips the scales in favour of the pocket rather than edifying feelings. Moreover, the anaphoric lines, narcissistically, are no longer first person plural - they become singular. On the one hand, Gonçalves Dias had closed his poem by praying to God that he still might at least catch sight of the palm tree where the thrush warbles (notice the subtle paronomasia that resorts neither to etymology nor to semantics: in Portuguese, aves [birds] and aviste [catch sight of]). And however iconic the palm tree may be, it is not there for nothing: in its emblematic canonicity, it would mean constancy, a virtue that binds the poet to his homeland. On the other hand, when Oswald de Andrade closes his parody, he replaces the two natural beings that incarnate the homeland - in the case of sabiá [thrush], even its very name has an Indian origin - with a cynical and hilarious triad which forms one single emblem: São Paulo/rua 15/progresso [São Paulo/15th Street/progress].
Nature is out, the three components belonging to the realm of culture: the most prosperous city in the country; the street where the banks can be found; and an evolutionist notion connected to industrial modernization, of which that city is considered a depository. Sabiá and palmeira [thrush and palm tree], which had been generalized to cover the whole country, are thus particularized, reinforcing the singularization of the possessive pronoun, in an ambiguous move of what is referred to as localism: a cheerful nostos - without any nostalgia - of a well-to-do São Paulo citizen. But that is not all. The disqualification of the national emblems undergoes two further formal operations. The first is a process of synthetization whereby 24 lines are reduced to 16 - or precisely to two thirds; the second is more complex and interferes on several albeit converging levels. We could refer to it as pseudoconservatism, in the sense that in a concealed way it reproduces norms that Modernism claimed to blast: such verses are neither blank nor free. In this way, the metrification repeats the most traditional and most common verse in Portuguese: the seven-syllable line - similar to the English ballad metre. As to syntax, it is respected in that the limits at the end of a line coincide with the elements of the sentence; rhymes are preserved, but this is barely noticeable thanks to the expedient of either repeating the same words in the rhyming position or combining consonant rhymes with assonant ones, one-vowel words with diphthongs, or paroxytones with oxytones. In this respect, they also emulate the original, though at this specific point in an irregular way, as also occurs in other parts. And there is more: 1) as happens in Gonçalves Dias' poem, the predominant rhyming pattern comes in a (palmares, mar, lá, lá, lá, Paulo, Paulo); 2) the other more frequent rhyme, coming in ô/ó (rosas, amores, ouro, rosas, morra, morra), comes equally from "Canção do Exílio"; 3) only two lines are left out, the third one (daqui) and the last but one (quinze), which stress a spatial opposition while, disguisedly, rhyming with each other, and thus end up completing the rhyming alliance of the entire poem. Even this very rhyme comes from the only non-rhyming line of Gonçalves Dias' poem: "Nossos bosques têm mais vida" [Our woods have more life], marked only by an internal rhyme in the subsequent line: "Nossa vida mais amores" [Our life more love]. And all this is masked when punctuation, in a typically Modernist trend, is rejected. 7

As we have seen, in the inaugural book, Pau Brasil, the first part is titled "História do Brasil" [Brazilian History], which is another aspect that Oswald de Andrade shares with both Paulo Prado's Retrato do Brasil and Mário de Andrade's Macunaíma: here, it is the case of a rereading of the chroniclers and travellers, the first ones to write about Brazil. That time, as even the most cursory glance at the then existing bibliography may show, was dominated by an almost obsessive attempt to recover these colonial foreign authors; after all, they were the ones who recorded the beginnings of our history. What is less obvious is to find them in Brazilian social thought, in Oswald de Andrade's poetry, and in the fictional prose of Macunaíma. The entire first part of Paulo Prado's Retrato do Brasil comprises a learned and comprehensive analysis of the chroniclers and travellers.
And it is by relying on them that the author infers the features of what being Brazilian means, that is, by basing himself on the attribution of psychological traits to the "three races" and the result of this ethnic amalgamation - back to what we have already discussed: lust, covetousness, sadness, laziness, and romanticism as the outcome. In Mário de Andrade's case, more specifically in Macunaíma, such authors are submitted to parodic inversion in different passages. The pieces of information they offer - which are the most varied possible and have widely different levels of absurdity, ranging from eyewitness accounts of the existence of monsters to the ravished look at the abundance of naked and acquiescent women - are glossed and disparaged through parody as phantasmagorias of the European explorers. And the best - and stylistically most coherent - example is the celebrated "Carta pras Icamiabas" [Letter to the Icamiabas], which is written not only by one "chronicler and traveller", but by one of those who take to paroxysm the mythology of "Edenic motives", much later studied by Sérgio Buarque de Holanda in Visão do Paraíso (1969). At the core of these motives, all that has to do with the abundance of women and wealth stands out. One of Mário de Andrade's great findings is the description of twentieth-century São Paulo city in archaic and incongruous language, which, in this case, is given by a visitor from colonial times. With this kind of language, carnivalization here becomes debauchery at its highest level in an attempt to handle the urban civilization of both the machine and money. Such is its impact that Macunaíma is compelled to exhort the Amazon tribe of which he is the king, the Icamiabas, to abandon their rigorous chastity in order to emulate another tribe of women, that of the prostitutes of São Paulo city. Such a suggestion - a marvellous effrontery - aims at restoring his shaky financial situation, caused precisely by the expenses he incurred in connection with the latter ladies. In Oswald de Andrade's poetry, in turn, the same writers appear in Pau Brasil in a direct, but peculiar and original way. Oswald de Andrade returns to these texts, prunes them drastically and appropriates barely retouched snips in free, unpunctuated verse - fragments metamorphosed into ready-made poems, as Haroldo de Campos defined them in the edition mentioned above. As examples I take two of them, where the attribution of a title which is both modern and extraneous to the text decontextualizes the poem and transforms it into an allegory. The first instance is a re-elaboration of a well-known passage of Pero Vaz de Caminha's Discovery Letter, where the beauty of the native women's bodies is exalted:

The gare girls

They were three or four very nice young ladies
With their shoulder-length black hair
And their private parts so high and fit
That despite much staring at them
No shame did we feel 8

The second example comes from Gandavo's book (1980), accounting for the incredible sloth, an animal that seems to have been invented in the very fertile imagination of the author; but it does exist and left other fellow writers deeply impressed. This animal, with a human-looking face, appears in the etchings of a memoir written by Jean de Léry (1990) in the very first century of the New World. It would, though, be necessary to wait many more centuries for Lévi-Strauss to claim its relevance also for the native peoples, not only for the appalled foreigners.
We only have to see the primacy this anthropologist assigns to it in one of his last books, The Jealous Potter [La potière jalouse, in French]. Incidentally, according to Lévi-Strauss, in the native peoples' myths the sloth, by embodying the anal-retentive principle, is opposed to the principle of voracity - therefore, oral - represented by the wind-swallowing bird; the latter, thus, would be more compatible with the anthropophagic devouring instinct. The poem follows:

Race party

Also in these parts a certain animal is found
Which is given the name Sloth
It has a thick mass of hair in the nape
And moves at such a slow pace
That even though it perseveres for fifteen days
It will not cover the distance of a stone's throw. (74) 9

In this way, the ready-made, by allegorizing the excerpt of the chronicle through its title, results in a poem of the finest imagetic origin, a genuine graphic illustration of Mário de Andrade and even Paulo Prado. The sloth, a sophisticated index of primitivism - especially for Mário de Andrade, to whom it was an Amazonian and anti-European sign of creative idleness, retrieved from the Christian fate of original sin (Mello e Souza op. cit.) - here stands for the Brazilians' heraldic animal.

About Oswald de Andrade

Oswald de Andrade is so interesting a character that he deserves our lingering a bit longer over him. Amongst the Modernists he was, as we all know, a rough, insolent, and sharp-tongued polemicist. Besides everything he wrote and published, this paradoxical protagonist left to posterity various rather unorthodox diaries, a habit he had cherished since childhood, keeping scrapbooks where he used to make notes, draw and stick reminders. Among these, the most spectacular is O perfeito cozinheiro das almas deste mundo [The perfect cook of the souls of this world], the facsimile edition of which is absolutely perfect. Now that so many unpublished writings have come to light, we can see Andrade in full, in all his exuberance: his passions and his love life; his brawls, his tantrums, and his quarrels; his outbursts; the polemics he entered into; his forked tongue; his verbal dexterity assisted by a temperament which would rather lose a friend than a jest - which, by the way, he frequently did. At the same time, we see his great generosity, his ineptitude at bearing a grudge against anyone, as well as his irrepressible talent and loyalty to writing, which, one way or another, he practiced every day of his life. Journalism befitted Oswald de Andrade's bellicose nature; he had an early start, and only death would silence him. He began as a reporter and editor at Diário Popular, covering events of arts and shows; two years later he would leave to open his own weekly publication, O Pirralho [The Brat], with satirical overtones. He gathered a fine team, which included the caricaturist Voltolino and Juó Bananere, the author of celebrated daily accounts using the typical parlance of the Italian immigrants. Andrade would found, direct or simply be a member of the most relevant periodicals of the Modernist movement, among which stood out Klaxon and Revista de Antropofagia [Anthropophagy Journal]. Later, with Patricia Galvão, he would publish O homem do povo [Everyman], a communist weekly newspaper, which was eventually shut down by the right wing. Moreover, he would work as a columnist for the main Brazilian newspapers; as time went by, he would change the media outlets he worked for and the goals he had in mind.
The family's finances, which sustained O Pirralho, allowed him to set sail for Paris in 1912, at the age of 22. The first of many voyages, it would define his route and be decisive for Brazilian Modernism, as he established a bridge with the French vanguardist movements, the most brilliant of all at that time. Constancy and coherence were not among Andrade's sins. In his kaleidoscopic points of view, his penchant for the multiple is to be highlighted. In his writing, rhetoric and even grandiloquence collide with the colloquial and with coruscating formulas of his own concoction. Everything is tinted with his optimism - impervious to any denial suggested by reality, firmly anchored in his faith in utopias he never lost sight of. Nor could we frame Oswald de Andrade's works in the path of a rectilinear evolutionary process. His brilliant poetry sprang forth in outbursts. His seven novels comprise one trilogy, two stand-alone books, and a second - unfinished - trilogy, the trilogies being rather more conventional than the stand-alone novels. The first trilogy, however, was written concurrently with the two stand-alone books, the "unique pair", to quote Antonio Candido, the most important Brazilian literary critic. As is well known, Serafim Ponte Grande and Memórias Sentimentais de João Miramar [João Miramar's Sentimental Memoirs], along with Macunaíma, stand on the pinnacle of the experimental literature that Brazilian Modernist prose reached by then. Later, he would pen two further novels of the second trilogy, but it did not go beyond the second volume; planned but unfinished, these books are anything but vanguardist and many layers below the experimental level mentioned above. Still, in the midst of all this, he also showed his interest in drama, producing plays so transgressive that not until half a century later did they reach the stage, and this happened only thanks to another transgressor, José Celso Martinez Corrêa, who directed the first staging of O rei da vela [The Candle King]. The great theatre actor Paulo Autran, who came from another and more austere school and was far from being transgressive himself, declared more than once that this staging had been the most important one in the entire history of Brazilian theatre. Presumably, if we take as a parameter the audacities he performed, Oswald de Andrade displayed a tendency to operate in different registers by advancing and retreating. Soon after having written the "unique pair", he opted for delivering speeches to the working class addressing workers as vós - in Portuguese, a more literary and archaic use of you - since, in all sincerity, he might as well use retrograde language despite his progressive goals. Contemporary with the lack of boldness of the second trilogy, one of the most subversive of his works, the poem O Santeiro do Mangue, would be left unpublished. The mangue [mangrove] of the title is the region in Rio de Janeiro where brothels are located; the characters are prostitutes and their pimps, and the language is full of obscenities. There is no fun here; in verse form, it is an accusation of the chauvinist exploitation of women. Otherwise, there is plenty of material for those who want to indulge in the findings of this writer who was the spearhead and enfant terrible of Modernism, shooting verbal darts everywhere - besides being a great writer, he was its most colourful figure. When comparing poems, it is worth noticing Andrade's versatility.
Two poems

As we have seen so far, amongst the accomplishments of the Modernist generation, the rediscovery of Brazil is one that stands out - and, as Oswald de Andrade himself concedes, this could happen in Place Clichy, in Paris. This was the generation that, besides revolutionizing the fields of letters and arts, attempted to map Brazil and its heritage. Among the many tasks the group carried out, there was a journey to Minas Gerais state, as we have already seen, escorting the Swiss vanguardist poet Blaise Cendrars, who wanted to make acquaintance with the regional Baroque. There were also Mário de Andrade's tours to the Northeast and the Amazon region, reported in O turista aprendiz [The apprentice tourist]. Oswald de Andrade would also be the creator and theorizer of the anthropophagic movement, which proposed a very special relationship with the colonizer: devouring him. The movement's manifesto is impudently signed and dated "year 374 of the gobbling of Bishop Sardinha", thus selecting a cannibalistic event - when the Caetés Indians captured and devoured the Portuguese prelate, an episode studied in schools - as the beginning of the anticolonialist endeavours. As we have seen, this rediscovery implied a return to the pages of the chroniclers and travellers, our first historians, readings whose evidence is found in many Modernist writings. Apart from Oswald de Andrade's texts mentioned above, Retrato do Brasil, by Paulo Prado, and Macunaíma, by Mário de Andrade, are also to be considered here; still later, Murilo Mendes would tread a similar path with his set of poems História do Brasil (1932); the same title had been given by Oswald de Andrade to a cycle of short poems in his book Pau Brasil. By picking fragments from those pages, he makes the language of the originals worth enjoying, along with the candid perception of the prodigies of the New World - from the nudity of the native women to the improbable arboreal mammal, the sloth. Below is a poem taken from the Primeiro caderno de poesia do aluno Oswald de Andrade (op. cit.) [Student Oswald de Andrade's first notebook of poetry]:

A Portuguese mistake

When the Portuguese arrived
under a raging storm
he dressed the Indian
what a pity!
had it been a sunny morning
the Indian would have undressed
the Portuguese 10

This is a perfect example of an innovative proposal of the Modernist aesthetics, the "jest-poem": utmost concision, an outrageous statement, the impact caused in the reader by its originality - all this in prosaic diction, emulating speech in one single utterance. In Andrade's poem, the apparently colloquial spontaneity hardly conceals the sophistication of the making process, exposing the reader, with a remarkable economy of means, to the clash between two cultures. The opposing verbs dress/undress resonate in further opposites such as Portuguese/Indian, rain/sun, arrived/had been - all arranged along two axes: historical fact/utopia. In this way, sardonically, the poet attributes only to the climate the power the colonizer has to oppress the colonized; and this, incidentally, was the subject of the heated racial debates which marked those times. Would inferior or mixed races, or even the tropical climate, be blamed for our backwardness? Was it a coincidence that all the wealthy countries with a white population were located in the northern hemisphere, or was it that cold weather boosted industriousness?
It is also worth noticing the felicitous double entendre mobilized in the poem: first, the concrete and abstract dimensions of the Portuguese word "pena" - which, translated into English, means both "pity" and "feather" - are skilfully explored; secondly, the cliché in the common meaning of the title - which, in Portuguese, points to a language issue - by being dislocated to refer to the people who came from Portugal as conquerors, is transformed into a broad and ominous historical commentary. Another poem, from Pau Brasil, illustrates Andrade's precise opposite:

Twilight 11

In the mountainous amphitheatre
Aleijadinho's prophets
monumentalize the landscape
the white domes of the Passion events
and the upturned headdresses of the palm trees
stairs to the art in my country
no one else has ever stepped on them
soap stone Bible
bathed in the gold of the mines

As is clear, from an ascending perspective, the view is that of someone who stands before and beneath the Bom Jesus de Matosinhos Church, in the town of Congonhas do Campo, one of the most famous baroque towns in Minas Gerais State. Drawing inspiration from, and very similar to, the homonymic church in the city of Braga, in Portugal, it is not to be confused with the latter, especially in view of the soap stone statues of the prophets spread across the church atrium, an artwork resulting from Aleijadinho's chisel. Aleijadinho was the greatest sculptor in Brazilian history. The poem surely is the product of the journey the Modernists took to the baroque towns of Minas Gerais - towns which were set up by virtue of the prosperity of the gold mines but which fell into the stagnation caused by the mines' decline - as part of their "discovering Brazil" project. Now to the making of the poem: in a longer and more regular metre than the previous example, the main stanza ends with a couplet in the most typical Luso-Brazilian verse, the seven-syllable line, both lines relying on the alliteration of the same phoneme, which echoes in their interior. The beauty of the description, in its sharp selection, leaves out the church and elects the sculptures as agents of art over nature. A subjective evaluation closes the stanza by dislocating the apparently objective remark into an ascending movement which borders on the sublime. The radical synthesis of the couplet manages to bring everything together: the soap stone as raw material transfigured by art, the perception of what is sacred, the underlying historical element. Nevertheless, the most peculiar feature of the poem lies in its respectful nature. While the first of the two poems above is playful, irreverent, vanguardist, irregular in form, anticolonialist - ultimately a jest-poem - the second one is solemn, purposefully slow, with a more protracted and regular pace, reverent towards the colonial heritage, virtually dumbstruck by the beauty of Congonhas. It expresses and conveys an epiphany that takes possession of the iconoclast, roused by the power of the aesthetic experience. As to the title, it can be read in two ways: as alluding to the time of day and, more importantly, to the level of the artistic accomplishment, since then unachievable.

A profusion of Portraits of Brazil

This is how the poet Oswald de Andrade, two of whose most distinctive poems are exemplified here, succeeds in reconciling very different things, as he does in the rest of his work. As can be noticed, we have dealt here with two further outstanding "portraits of Brazil", according to Oswald de Andrade.
This gem of Modernism, Oswald de Andrade, is undoubtedly a milestone. With his poetry and also the vanguardist prose of Serafim Ponte Grande and Memórias Sentimentais de João Miramar, he contributed to purging Brazilian literature - such was the task of the Modernist generation - of all the dregs of a backward-looking rhetoric, be it Baroque, Romantic, Parnassian, Symbolist and even Realist-Naturalist, let alone our high-sounding tradition. In the preface to Pau Brasil, Paulo Prado compares Andrade's short poems to the Japanese haiku and adds: "Having, in the form of pills, minutes of poetry". Haroldo de Campos, in the first major study of Andrade's poetry 12, takes up Paulo Prado's statement, calling these short lyrical pieces "pill-poems" and "minute-poems", in that they are minimalist, para-epigrammatic texts. Even if dated, Paulo Prado's book remains a good example of deep reflection upon the issue of miscegenation in Brazil. Only in 1933, with the publication of Casa Grande & Senzala (The Masters and the Slaves, in the English translation), would Gilberto Freyre shift the discussion from race to culture. He had absorbed from the lessons of the anthropologist Franz Boas the relativization of cultures, which had nothing to do with race; and thus the twilight of ethnopessimism was heralded. Such ideas have irremediably perished - but the poetry and the prose of the Modernists have not; they are still absolutely splendid.
Management of essential tremor deep brain stimulation-induced side effects

Deep brain stimulation (DBS) is an effective surgical therapy for carefully selected patients with medication-refractory essential tremor (ET). The most popular anatomical targets for ET DBS are the ventral intermedius nucleus (VIM) of the thalamus, the caudal zona incerta (cZI) and the posterior subthalamic area (PSA). Despite extensive knowledge in DBS programming for tremor suppression, it is not uncommon to experience stimulation-induced side effects related to DBS therapy. Dysarthria, dysphagia, ataxia, and gait impairment are common stimulation-induced side effects from modulation of the brain tissue that surrounds the target of interest. In this review, we explore current evidence about the etiology of stimulation-induced side effects in ET DBS and provide several evidence-based strategies to troubleshoot, reprogram and retain tremor suppression.

Introduction

Essential tremor (ET) is among the most prevalent hyperkinetic movement disorders, with a pooled prevalence estimate of approximately 1% across all ages. Its prevalence increases with advancing age, affecting up to 20% of people over 95 years old (Louis and McCreary, 2021). ET is defined as a chronic, insidiously progressive, isolated tremor syndrome characterized by an action tremor of both upper extremities, lasting for a minimum of 3 years in the absence of any other neurological signs such as parkinsonism, ataxia, or dystonia, and may or may not be accompanied by tremor in the head, voice, or lower limbs (Bhatia et al., 2018).

Pharmacological therapy has long been the mainstay of treatment for ET (Deuschl et al., 2011). First-line medications can provide approximately 55-60% mean reduction in tremor amplitude when used as monotherapy (Deuschl et al., 2011). Combined pharmacotherapy can sometimes yield better clinical outcomes (Wagle Shukla, 2022). Up to 55% of patients, however, manifest medication-refractory tremor (Louis, 2005), and thus surgical intervention may be considered in cases with refractory and disabling symptoms (Wagle Shukla, 2022).

Since its FDA approval in 1997, deep brain stimulation (DBS) has been considered a safe and effective therapy for medication-refractory ET when applied to carefully selected patients. The location of the implanted lead is a critical determinant in achieving tremor suppression while limiting the manifestation of stimulation-induced side effects. The ventral intermediate nucleus of the thalamus (VIM) has been referred to as a "relay station" in the tremor network, connecting the cerebellum and motor cortex (Schnitzler et al., 2009), and it is the primary target for ET DBS (Benabid et al., 1987, 1991; Koller et al., 1997). The posterior subthalamic area (PSA) has been consistently reported as a target which may also provide optimal tremor suppression (Blomstedt et al., 2009, 2010; Barbe et al., 2011; Fytagoridis et al., 2016), particularly for tremors that are difficult to control with conventional VIM DBS (Kim et al., 2021). The clinical effect of ET DBS has been attributed to the direct modulation of the dentato-rubro-thalamic tract (DRTT), inclusive of the prelemniscal radiation and the caudal zona incerta (cZI) (Holslag et al., 2018). In this review we will use the term "thalamic DBS" to include all three of these anatomical regions.
Multiple side effects occur secondary to unintended stimulation of neighboring fiber tracts, which may modulate local and distal regions and neural networks. The most commonly encountered side effects in clinical practice are dysarthria, stimulation-induced ataxia, gait abnormalities, and loss of tremor benefit (habituation). Dysphagia is less common but has been reported. In this review, we focus on these potential complications and discuss the currently available options to reduce both acute and chronic stimulation-induced side effects, covering programming strategies that have been reported and trialed in the literature. While there is no consensus, this serves as a repository for evidence-based programming in challenging real-world scenarios.

An important concept to keep in mind is that some of the symptoms we see as stimulation-induced side effects are often part of ET itself. The 2018 consensus classification of tremor (Bhatia et al., 2018) adds a category of "soft signs" or "ET plus" to account for the dysarthria (Biary and Koller, 1987; Barkmeier-Kraemer and Clark, 2017), ataxia (Bhatia et al., 2018), and gait impairment (Fasano et al., 2010) that can be seen in patients with ET as part of the disease. Dysphagia and dysarthria are also possible complications of botulinum toxin injections used to treat vocal tremor (Newland et al., 2022). Therefore, careful preoperative clinical evaluation is important to establish each patient's baseline symptoms and avoid later embarking on a long-winded odyssey attempting to troubleshoot symptoms that are part of the underlying pathology.

Dysarthria

Dysarthria stands out as the most common stimulation-induced side effect of thalamic region DBS (Chiu et al., 2020; Lu et al., 2020), with a prevalence reported in the literature ranging from 9% up to 75% (Pahwa et al., 2006; Flora et al., 2010). Despite tremor improvement, thalamic DBS can lead to reduced vocalization and imprecise oral articulation (Mucke et al., 2018).

Speech production is mediated by a network that is centralized around the left laryngeal and orofacial regions of the primary motor cortex. These areas receive inputs from the surrounding premotor, somatosensory, and parietal cortices (Guenther and Vladusich, 2012; Fuertinger et al., 2015). Dysarthria may occur through the spread of current to the corticospinal/corticobulbar tracts and to the DRTT, reflecting either an aggravation of pre-existing cerebellar deficits and/or the involvement of the upper motor neuron (UMN) fibers of the internal capsule (Mucke et al., 2018). Those UMN fibers overlap with the networks associated with tremor benefit following stimulation (Petry-Schmelzer et al., 2021), rendering it challenging to increase stimulation parameters without negatively affecting speech.

Stimulation-induced dysarthria occurs more frequently in those undergoing bilateral DBS (Picillo et al., 2016; Kim et al., 2021). It is also more commonly associated with stimulation through the more dorsally located electrode contacts, usually above the intercommissural line (Barbe et al., 2014; Kim et al., 2021), and with electrodes located relatively laterally (Becker et al., 2017). Spread of current to the medial aspect of the VIM region and to the centromedian and parafascicular thalamic nuclei region may also account for some speech dysfunction following DBS (Crosson, 2013).
Strategies to address stimulation-induced dysarthria include proactive preoperative patient screening for dysarthria along with conscientious lead placement. Staging the DBS procedure one lead at a time allows for re-evaluation of speech and helps in weighing the risk vs. benefit of a second lead on speech function in a shared decision-making process. During surgery, microelectrode recordings for target mapping can be used to refine the lead trajectory by ensuring lead placement away from the leg somatotopic representation of the VIM, corresponding to the lateral part of the nucleus, which lies closer to the corticobulbar tract (Garonzik et al., 2002). Macrostimulation from the electrode can further facilitate optimization of the target location by estimating the relative distance from the internal capsule through the stimulation threshold, assessment of clinical benefit, and determination of the presence of stimulation-induced side effects. Nonetheless, in about one-third of patients dysarthria will only appear with chronic stimulation (Bot et al., 2018). Therefore, it is important to note that even well-placed electrodes might elicit stimulation-induced adverse events with chronic stimulation. Equally important is that in the operating room setting the number of test parameters is limited, and this may not translate to the outpatient setting.

When programming a patient with stimulation-induced dysarthria, the initial strategy is usually to decrease the stimulation amplitude (or current density). Although helpful in many cases, this may result in sub-optimal tremor control and should be balanced in a shared decision-making process with the patient (Kim et al., 2021), as most patients prefer the side effects over sub-optimal tremor control (Baizabal-Carvallo et al., 2014; Barbe et al., 2014). Another strategy is to decrease the amplitude one side at a time, starting with the lead that controls the least bothersome hemi-body.

The ability to provide the patient with different programming settings using a handheld patient interface facilitates adjustment of stimulation parameters to fit the context of the situation (e.g., eating vs. speaking). For example, for a public speaking event, the patient can choose a stimulation setting with suboptimal tremor control but minimal dysarthria. When eating a meal, they can choose a setting with complete tremor suppression but mild dysarthria.

Changing the stimulation site to more ventrally located contacts can reduce dysarthria, considering it is more common when stimulating through dorsal contacts. A bipolar contact configuration is also an option when trying to avoid spread of current into adjacent structures and undesirable side effects (Kim et al., 2021). When using this strategy, the contact that provides the best tremor control during the monopolar review is chosen as the cathode and the adjacent contact (either dorsal or ventral) is set as the anode. The amplitude should be decreased (down to 1 mA is our practice) and then increased gradually until the side effect returns, to assess the stimulation threshold in this new configuration. Switching the polarity of the selected electrodes might improve effectiveness and provide better tremor control with fewer side effects (Marks, 2011). Another multi-contact technique is a double monopolar configuration, previously described by Kim et al.
(2021) in a study that simultaneously targeted the VIM and PSA regions. The double monopolar configuration is a flexible alternative to bipolar configurations through the use of current fractionation in some commercially available devices.

Newer generations of DBS hardware may offer options to avoid stimulation-induced dysarthria while maintaining clinical benefit. This can be attempted by applying directional current steering and current fractionation. Essentially, these technologies modify the shape of the volume of tissue activated (VTA) around the activated DBS electrode. By shifting the electric field axially along the DBS lead, one can reduce unwanted current spread to adjacent fiber tracts and decrease stimulation-induced side effects (Rebelo et al., 2018). Rebelo et al. (2018) previously reported, in an experimental study, significant gains in therapeutic window (91%) and reductions in therapeutic current strength (31%) with stimulation in the "best direction" compared to omnidirectional stimulation, without any loss of tremor suppression. Omnidirectional stimulation is either a full ring, or all three segments of a directional lead activated to simulate a complete ring mode. Blume et al. (2017) conducted a prospective, randomized, double-blind study of ten ET patients and observed that directional DBS provided a larger therapeutic window, mainly due to lower therapeutic thresholds rather than a higher threshold for side effects; the therapeutic window is typically defined as the range of stimulation parameters that provides improvement of tremor without causing stimulation-induced side effects. Directional DBS was as effective as standard omnidirectional DBS for tremor suppression, and this was not associated with higher energy consumption (Bruno et al., 2021). Though it may be tempting to assume directional DBS is superior to ring mode or omnidirectional DBS, there are few comparison studies and drawing firm conclusions can be tricky.

Interleaving stimulation (ILS) is another useful technique for troubleshooting stimulation-induced dysarthria. This programming method implements two spatially distinct stimulation configurations on the same DBS lead and applies the settings in a temporally alternating sequence (Wagle Shukla et al., 2017). Barbe et al. (2014) tested an individualized ILS setting, shifting current from the most effective contact to the immediately dorsal contact, making this another option to reduce stimulation-induced dysarthria while maintaining tremor control when other strategies have not been successful. Though effective, ILS can also reduce implantable pulse generator (IPG) life span over time.

Lastly, turning off stimulation at night, or decreasing the amplitude on only one side (Patel et al., 2014; Contarino et al., 2017), may in some select cases reduce dysarthria that arises as a result of chronic stimulation. A summary of these strategies is depicted in Figure 1. Given that these programming strategies have not been compared in head-to-head studies, there is no evidence to demonstrate that one strategy is more effective than another. Therefore, we recommend implementing them in order from the simplest strategy to the most complex. We present Figure 2 as a protocol in this order for the programmer to have a sequence to follow when cases become complex and a systematic approach is warranted.
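To make the therapeutic window concept concrete, the following is a minimal illustrative sketch (in Python) of how one might tabulate a monopolar review and compare windows across contacts. All contact labels and threshold values here are hypothetical examples for illustration only, not clinical recommendations or any published tool; thresholds are always determined clinically at the bedside.

```python
# Illustrative sketch: summarizing a hypothetical monopolar review as
# per-contact therapeutic windows. All labels and values are made up.

# For each contact: (efficacy threshold in mA at which tremor improves,
# side-effect threshold in mA at which dysarthria/ataxia first appears)
monopolar_review = {
    "contact_0 (ventral)": (1.0, 3.5),
    "contact_1":           (1.5, 3.0),
    "contact_2":           (2.0, 2.5),
    "contact_3 (dorsal)":  (2.5, 2.6),
}

def therapeutic_window(efficacy_mA: float, side_effect_mA: float) -> float:
    """Width of the usable amplitude range for one contact."""
    return max(0.0, side_effect_mA - efficacy_mA)

for contact, (eff, se) in monopolar_review.items():
    print(f"{contact}: window = {therapeutic_window(eff, se):.1f} mA")

# A contact with a wide window is a reasonable starting cathode; a narrow
# or absent window suggests trying bipolar or directional configurations.
best = max(monopolar_review, key=lambda c: therapeutic_window(*monopolar_review[c]))
print("Widest therapeutic window:", best)
```

The sketch only formalizes the bookkeeping behind the monopolar review; the clinical judgment about which side effects count, and at what severity, remains with the programmer.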
Dysphagia

The prevalence of stimulation-induced dysphagia is not well documented in the literature. A study that used fiberoptic endoscopy demonstrated some degree of dysphagia in 12/12 thalamic DBS-treated ET patients (Lapa et al., 2020). Dysphagia significantly improved in all patients after stimulation was turned off, with a reported mean improvement of 80% in the dysphagia score. The study was a small case series and, given that dysphagia is not a common complaint post-thalamic DBS, we would advise caution in overinterpretation. As to the potential mechanism that leads to residual dysphagia, there are many possibilities, including the implantation effect, spread of stimulation to corticobulbar fibers, suboptimal lead placement, or a combination of a surgical effect plus disease progression. Authors have also postulated a "lingering neural network change" in dysphagia and stimulation-induced ataxia; however, this seems less likely. Finally, it is always important to investigate other potential underlying swallowing pathologies (Lapa et al., 2020).

Both cerebellar and corticobulbar fibers have been posited to play an important role in the process of swallowing. Corticobulbar fibers connect the motor cortex to the cranial nerve nuclei, which innervate the swallowing musculature (Lapa et al., 2020). There is a substantial neuroanatomical overlap of structures involved in the control and execution of speech and swallowing. It is this overlap that has led to speculation that either spread of current into the internal capsule or, alternatively, interference with the cerebellar network might impact swallowing physiology in a similar manner as in stimulation-induced dysarthria (Hamdy et al., 1996, 1997; Jayasekeran et al., 2011).

The options for troubleshooting stimulation-induced dysphagia are similar to those employed for stimulation-induced dysarthria. Brain imaging is critical to assess the anatomical relation of each contact with the surrounding structures, and "on and off DBS" testing during a barium swallow study can help define the extent to which stimulation is responsible for the acute issue. Reductions in pulse width and/or amplitude may help swallowing; however, they may also worsen tremor control. Bipolar stimulation settings, interleaved stimulation (Barbe et al., 2014), and/or current steering (Barbe et al., 2014; Bruno et al., 2021) may all be tried. Finally, brain imaging, on/off barium swallow testing, and programming may in some cases not provide a path for conservative management. In these cases, a revision of the DBS lead should be considered, and when retargeting the team should consider the trajectory as well as the lead location. For a summary of these strategies see Figure 1.

FIGURE 1 Regions associated with stimulation-induced side effects in thalamic DBS, and troubleshooting options. Since there is no side-to-side comparison of these strategies, we list them in ascending order of complexity.

FIGURE 2 Reprogramming strategies sorted in increasing order of complexity. Since there is no head-to-head comparison of how effective these interventions are, we recommend implementing them in order of simplicity.

Ataxia

Stimulation-induced ataxia has been estimated to occur in 35% of patients with thalamic DBS (Chiu et al., 2020), and it has been shown to be acutely "inducible" in almost all patients
in the operating room if enough current density is delivered to the thalamic target region (Groppa et al., 2014). Acute ataxia can similarly be reproduced in the clinic setting. Stimulation-induced ataxia may impact the limbs, the trunk, or features of gait.

Ataxia may also develop insidiously over many years following implantation. The average time has been reported to be approximately 5 years postoperatively, and in small series it is more common in older patients and in patients with a shorter disease duration at the time of DBS implantation (Chiu et al., 2020). The most common types of chronic ataxia presentations are appendicular ataxia (92%) and gait ataxia (44%) (Chiu et al., 2020).

Acute stimulation-induced ataxia most commonly arises following stimulation of the inferior aspect of the VIM and the superior aspect of the zona incerta (Hidding et al., 2019). Proposed mechanisms for DBS-induced ataxia have been hypothesized to be related to antidromic stimulation of the cerebellar nodule via the uncinate tract from the subthalamic area (Reich et al., 2016), as well as secondary to plasticity changes in the cerebellum (Fasano and Helmich, 2019), possibly involving the stimulation of fibers to and from the red nucleus and inferior olive and/or fibers originating from the interpositus nucleus, bundled within the dentatothalamic fibers (Elble, 2014; Groppa et al., 2014).

Acute stimulation-induced ataxia is now viewed as a circuit disorder by many experts and thus may be due to functional disruption of cerebello-thalamo-cortical networks (Garcia et al., 2003; Fasano et al., 2010; Groppa et al., 2014). Models which have calculated the VTA have shown that ventrocaudal stimulation in the subthalamic area corresponds with more significant gait ataxia and correlates with positron emission tomography (PET) changes, in which hypermetabolism in the cerebellar nodule increases as stimulation-induced gait ataxia worsens. These effects tend to normalize by approximately 72 h after stimulation is deactivated (Reich et al., 2016). This finding suggests that stimulation-induced ataxia may be reversible with programming or discontinuation of the electrical current. Groppa et al.
(2014) speculate that when cerebello-thalamic input is disrupted by therapeutic stimulation, a second pathway "compensates" for the information lost. Furthermore, they propose that ataxia occurs when that "secondary pathway" is modulated and the compensatory mechanism is inhibited (Groppa et al., 2014). The tract (if there is a specific tract) responsible for stimulation-induced ataxia has not been clearly identified or agreed upon. Some authors have proposed the ascending limb of the uncinate fasciculus present in the subthalamic area as the critical tract (Reich et al., 2016). This pathway connects efferent fibers from the deep cerebellar nuclei to the thalamus (Elsen et al., 2013; Fine et al., 2014). Finally, a few genetic subtypes of ataxia have been associated with axonal loss in the uncinate fasciculus, providing further evidence to support this notion (Stezin et al., 2021).

Fasano and Helmich (2019) recently demonstrated the impact of acute thalamic DBS on gait ataxia in patients with ET, showing improvement with therapeutic stimulation and deterioration following supra-therapeutic stimulation (defined by increasing the amplitude and pulse width until decomposition of movement appeared on the finger-to-nose test). This finding suggests that cerebellar dysfunction in these patients may be differentially modulated with optimal versus supra-therapeutic stimulation, possibly through recruitment of a fiber system other than the DRTT, based on chronaxie characteristics (Groppa et al., 2014). This observation has in general been translated into small studies using lower pulse widths of 30 µs (Choe et al., 2018) and 40 µs (Moldovan et al., 2018), which have demonstrated a reduction in acute stimulation-induced ataxia while retaining tremor benefit. An important caution is that chronaxie estimates derived from extracellular stimulation have been difficult to relate to regional neuroanatomy and thus must be interpreted with caution (Grill et al., 2005; Elble, 2014).

For troubleshooting, a common core principle is that strategies should include reprogramming trials that last at least a week or two to adequately evaluate delayed benefits and waning of benefits. The evaluation usually begins by repeating a monopolar review, in which the clinician probes for benefit and side effects at each contact and uses this information to guide potential strategies. Potential reprogramming strategies should in general aim to move the stimulation field, in a relative sense, away from cerebellar fibers. Moving the active contact(s) dorsally is one such strategy. Another is using a bipolar configuration to narrow the VTA (Contarino et al., 2017). In select cases, current steering may lead to less ataxia compared to standard omnidirectional stimulation (Bruno et al., 2021; Hidding et al., 2022; Roque et al., 2022). In newer generation hardware, a lower pulse width (less than 60 µs) (Choe et al., 2018; Moldovan et al., 2018) can also be attempted. Because axon diameters and chronaxies differ across fiber populations, shortening the pulse width can achieve more selective activation of cerebellothalamic fibers, which may mediate tremor control, with less induction of ataxia (Kroneberg et al., 2019); a toy calculation below illustrates this pulse-width selectivity. If the monopolar review reveals low thresholds at active contacts and failure to maintain benefit, lead re-implantation may be considered.
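The pulse-width rationale can be illustrated with the classical Lapicque strength-duration relation, I_threshold(PW) = I_rheobase × (1 + chronaxie / PW). The Python sketch below is a toy calculation only: the rheobase and chronaxie values are hypothetical, and, as cautioned above, chronaxie estimates from extracellular stimulation map only loosely onto specific tracts.

```python
# Toy illustration of why shorter pulse widths can be more selective,
# using the classical Lapicque strength-duration relation:
#   I_threshold(PW) = I_rheobase * (1 + chronaxie / PW)
# Rheobase and chronaxie values below are hypothetical placeholders.

def threshold_mA(rheobase_mA: float, chronaxie_us: float, pulse_width_us: float) -> float:
    """Current needed to reach threshold at a given pulse width."""
    return rheobase_mA * (1.0 + chronaxie_us / pulse_width_us)

fibers = {
    "short-chronaxie fibers (e.g., tremor-suppressing)": (1.0, 50.0),   # (rheobase mA, chronaxie µs)
    "long-chronaxie elements (e.g., side effects)":      (1.0, 200.0),
}

for pw in (30, 60, 120):  # pulse widths in µs
    t_target, t_other = (threshold_mA(r, c, pw) for r, c in fibers.values())
    print(f"PW={pw:>3} µs  target={t_target:.2f} mA  "
          f"other={t_other:.2f} mA  ratio={t_other / t_target:.2f}")

# The threshold ratio grows as the pulse width shrinks: short pulses
# recruit the short-chronaxie fibers at relatively lower current, which
# is the separation that pulse-width reduction tries to exploit.
```

Running this prints threshold ratios of roughly 2.9, 2.4, and 1.9 at 30, 60, and 120 µs respectively, i.e., the relative sparing of the long-chronaxie elements is greatest at the shortest pulse width, under these assumed values.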
In addition to reprogramming, another strategy is turning off the stimulation at night, as this may potentially affect the onset of stimulation-induced ataxia (Rebelo et al., 2018). Finally, if ataxia or clumsiness emerges slowly and chronically, the effect is more likely disease progression and is less amenable to programming strategies.

Gait and balance impairment

Impairments of balance and gait in patients with ET were reported by clinicians long before a formal association was explored or DBS was developed (Critchley, 1972; Singer et al., 1994). It has been shown that patients with ET may have difficulties with tandem gait and balance confidence, and may require significantly greater time to perform the Timed Up-and-Go relative to controls (Earhart et al., 2009). Worsening of pre-existing or new-onset gait and balance impairment following thalamic DBS affects between 5 and 50% of patients with ET (Benabid et al., 1998; Pahwa et al., 2006; Earhart et al., 2009; Kroneberg et al., 2019). However, others have presented contradictory evidence showing that DBS has no adverse effect on gait and balance in either unilateral or bilateral stimulation (Earhart et al., 2009; Ramirez-Zamora et al., 2016). Age, disease severity, and preoperative gait difficulties are considered risk factors for gait and balance impairment following DBS surgery (Newland et al., 2022). Despite adequate tremor control, patients may experience changes in gait either as an early acute or as a delayed side effect following DBS activation.

The acute phenomenon of gait and balance impairment with DBS is believed to be caused by stimulation-induced network dysfunction, specifically antidromic cerebellar activation; however, the exact mechanism remains elusive (Earhart et al., 2009). More posterior and medial stimulation is believed to activate cerebellothalamic tracts, leading to gait disturbance, especially when stimulating below the intercommissural line (Murata et al., 2003; Kim et al., 2021). The persistence of stimulation-induced gait impairment after turning off stimulation has led to the suggestion of a possible "microlesion effect" as the cause (Roemmich et al., 2019). Temporal circuit plasticity is also a possible etiology, considering the 72-h delay observed in some cases before the gait improves following discontinuation of DBS (Earhart et al., 2009).

Management of imbalance can be challenging, as reduction in stimulation parameters commonly leads to tremor recurrence or has no effect on balance (Ramirez-Zamora et al., 2016). Considering the probable etiological overlap between gait/balance impairments and the cerebellar ataxic features described in the "Ataxia" section, the common core strategies to troubleshoot this side effect are the same as described above for ataxia.

Beyond the previously suggested strategies, dual VIM + PSA stimulation has been reported as an efficacious strategy to mitigate gait disturbances by Kim et al. (2021), and reducing stimulation frequency from 170-185 Hz to 130 Hz after optimizing tremor control was also effective in improving balance difficulties while maintaining tremor control, as demonstrated by Ramirez-Zamora et al. (2016).
Habituation versus disease progression

Habituation to stimulation, also referred to as "tolerance" (Benabid et al., 1996), is a hotly debated topic in the field of neuromodulation. Chronic high-intensity stimulation has been hypothesized to induce detrimental plastic effects on tremor networks over time that may ultimately lead to decreased symptomatic control (Pilitsis et al., 2008). Alternatively, sparse post-mortem findings mildly support a biological adaptation to stimulation (Peters and Tisch, 2021). There is much debate about the relative contributions of natural disease progression and habituation to the gradual loss of DBS efficacy over time. Characterizing and quantifying how much of the overall worsening is due to loss of the stimulation effect, plastic effects, or disease progression is challenging (Peters and Tisch, 2021).

In some studies, "habituation" has been found to occur in as many as 73% of patients with a mean follow-up of 56 months (Shih et al., 2013), and as early as 10 weeks post-implantation (Barbe et al., 2011). Another study showed that non-DBS-treated ET controls had tremor worsening over time similar to that of patients tracked with the DBS on and off; on this basis, Favilla et al. (2012) made a strong argument that much of the chronic worsening in ET DBS may be related to disease progression. A retrospective analysis conducted by Tsuboi et al. (2020) assessed the long-term effects of VIM DBS in 97 patients with essential or dystonic tremor, for as long as 13 years in some patients. In this study there were sustained benefits for both types of tremor.

Whether the effect observed is called habituation or disease progression, overcoming it is challenging. Worsening typically manifests as a loss of the initial DBS benefit in reducing tremor, and simply increasing the stimulation current may worsen tremor severity or induce stimulation-related side effects. Changing the active contact and stimulation parameters may lead to better tremor control, although these effects may not be sustained (Barbe et al., 2011). Studies have compared standard stimulation to weekly or daily rotating stimulation, with mixed effects (Barbe et al., 2011; Seier et al., 2018; Petry-Schmelzer et al., 2019).

The initial evaluation should ideally include ruling out the presence of iatrogenic tremor caused by excessive stimulation (Fasano and Helmich, 2019). Fasano and Helmich suggest reductions in pulse width, followed by reduction in amplitude. Once stimulation-induced cerebellar tremor is ruled out, the next step should be increasing the frequency and then increasing the amplitude (Fasano and Helmich, 2019). Our experience is that once progression of ataxia and tremor has set in, there are limited management strategies to mitigate them.

In challenging cases, clinicians may attempt to widen the therapeutic window and allow for higher stimulation amplitudes. These strategies include using a bipolar lead configuration, switching the polarity of an existing bipolar setting, applying interleaved stimulation, utilizing directional leads, or shortening pulse widths. In some rare cases, adding an additional stimulation contact may also have a possible benefit, particularly if it is placed near the border of the thalamus (Picillo et al., 2016; Fasano and Helmich, 2019). Another suggested strategy is closed-loop DBS (Kronenbuerger et al., 2006; Yamamoto et al., 2013).
There may be a maladaptive response to long-term stimulation that leads to some of the stimulation-induced side effects (Contarino et al., 2017). Therefore, having some time off stimulation has been studied in the form of DBS holidays (Garcia Ruiz et al., 2001) or turning the stimulation off at night (Hariz et al., 1999). It should be noted that DBS holidays have been associated in some series with a prominent and debilitating rebound tremor despite symptomatic improvements. Paschen et al. (2019) recently observed that this rebound phenomenon tends to reach a plateau 30-60 min after DBS has been turned off, though the authors note that it is not always present. Furthermore, when utilized, the ideal duration of the DBS holiday is not known and, in our practice as in most expert practices, we do not recommend a DBS holiday.

The last option for a reduction in DBS benefit over time is surgical revision. This can involve removal and repositioning of a lead, or adding a new lead without removing the old one, often referred to as "rescue surgery" (Koller et al., 2001). Secondary leads have been added to many targets, including the Vop, PSA, or cZI (Yu et al., 2009; Oliveria et al., 2017). A key consideration for repeating surgery is the trajectory (a more vertical trajectory may be useful for head tremor and helpful for avoiding tracts leading to adverse events). Another consideration is whether the issue is ataxia or tremor; this therapy may be best when treating ataxia. Finally, distal tremor is easier to capture than proximal tremor, so a careful examination should be pursued prior to making any surgical decisions. For a visual summary of these strategies see Figure 1, and for an example of their implementation see Figure 3.

Conclusion

Stimulation-induced side effects are common in ET patients treated with thalamic DBS. Additionally, most ET DBS patients experience some progression of disease or worsening of their tremor over time. As we learn more about the brain networks implicated in ET, we can build strategies to increase the therapeutic window for stimulation management without compromising tremor control. Understanding the pathophysiology of these ET DBS side effects will likely empower refined programming strategies and improved surgical planning.
FIGURE 3 Example of how to troubleshoot ataxia implementing the strategies in Figure 1 and using the order proposed in Figure 2.
Minerva: a light-weight, narrative image browser for multiplexed tissue images

Summary

Advances in highly multiplexed tissue imaging are transforming our understanding of human biology by enabling detection and localization of 10-100 proteins at subcellular resolution (Bodenmiller, 2016). Efforts are now underway to create public atlases of multiplexed images of normal and diseased tissues (Rozenblatt-Rosen et al., 2020). Both research and clinical applications of tissue imaging benefit from recording data from complete specimens so that data on cell state and composition can be studied in the context of overall tissue architecture. As a practical matter, specimen size is limited by the dimensions of microscopy slides (2.5 × 7.5 cm, or ~2-8 cm² of tissue depending on shape). With current microscopy technology, specimens of this size can be imaged at sub-micron resolution across ~60 spectral channels and ~10⁶ cells, resulting in image files of terabyte size. However, the rich detail and multiscale properties of these images pose a substantial computational challenge (Rashid et al., 2020). See Rashid et al. (2020) for a comparison of existing visualization tools targeting these multiplexed tissue images.

In this paper we describe a new open-source visualization tool, Minerva, which facilitates intuitive real-time exploration of large multiplexed images on the web. Minerva employs the OpenSeadragon ("OpenSeadragon," n.d.) framework to render images at multiple resolutions and makes it possible to pan and zoom across images in a process analogous to Google Maps. However, tissues contain many specialized structures recognizable by pathologists and histologists but not necessarily by many other scientific or medical users. To capitalize on specialized histology expertise we require software that mimics the current practice in which a pathologist sits alongside a colleague and reviews a specimen by moving from point to point and switching between high and low magnifications. Minerva is designed to generate precisely these types of interactive guides or "stories".
The author of a story creates specific waypoints in the image, each with a text description, position, zoom level, and overlaid shape annotations. In the case of highly multiplexed images, a subset of channels is chosen for display at each waypoint (typically 4-8 superimposed channels). Authors also add interactive single-cell data scatterplots, bar charts, heatmaps, and cell outlines, with two-way linked navigation between the plots and points in the image. Minerva is deployed simply and inexpensively via static web hosting. See Figure 1 for a schematic of the workflow and system components.

Minerva is not designed to solve all analytical and visualization problems encountered in multiplexed tissue imaging. Instead, it is a publication tool specialized to the task of making data shareable and broadly intelligible without requiring specialized software on the user side. As such, Minerva is designed to be one component in an ecosystem of interoperable, open-source software tools.

Minerva comprises two components, Minerva Story and Minerva Author. Minerva Story is a single-page web application for presenting an image and its narrative to end users. OpenSeadragon ("OpenSeadragon," n.d.) is used to render tiled JPEG image pyramids overlaid with the author's narrative text, graphical annotations, and data plots. Audio presentation of the narrative text is optionally provided through integration with Amazon's Polly text-to-speech service. The image pyramid URL and all narrative details are loaded from an independent JSON story definition file. Minerva Story can be hosted through GitHub Pages or any other web host supporting static content, such as Amazon S3.

Minerva Author is a desktop application for constructing narratives (stories) for Minerva Story. It is a JavaScript React web application with a Python Flask backend, packaged with PyInstaller as a native application. To create a narrative, the author first imports an image in standard OME-TIFF pyramid or SVS format. Both RGB images (brightfield, H&E, immunohistochemistry, etc.) and multi-channel fluorescence images (immunofluorescence, CODEX, CyCIF, etc.) are supported. Fluorescence image rendering can be controlled through per-channel contrast adjustment and pseudocoloring. The author then adds one or more story waypoints. For each waypoint, the author can type a text description, draw polygon or arrow annotations to highlight specific cells, regions, and histological features, and choose specific image channels to present. See Figure 2 for the Minerva Author interface. After the waypoints are complete, Minerva Author renders the input image into one or more RGB JPEG image pyramids and produces a JSON story definition file. Finally, an author uploads the JPEG images (vastly smaller than the raw image data) and the JSON file to a web host along with the Minerva Story HTML files. Story waypoints can also be augmented with interactive linked data visualizations. Adding data visualization currently requires manually editing the JSON file, but a version of Minerva Author is in development for adding these types of visualizations natively.

Minerva offers two approaches to exploring a narrative. First, in the author-driven approach, users can progress through a story along a linear path using forward and back navigation buttons, allowing an efficient introduction to and expert overview of the data. Second, in a free-exploration approach, the user is free to move to any position or zoom level and select any channel grouping or segmentation mask.
Users can also take a hybrid approach by following a story and then departing from it to freely explore or skip between waypoints. By returning to a story waypoint, the narrated overview can be resumed, much as one follows an audio guide in a museum. Minerva supports creating deep links directly to any position and zoom level in an image simply by copying and sharing the current URL from the browser. Users can optionally write a text note and draw a custom shape annotation that is automatically presented to the recipients.

We have identified multiple applications for Minerva: visualizing cell-type classifiers in image space, validating results of unsupervised clustering of single-cell data, manual scanning for spatial patterns, assessing quality of antibody staining, obtaining second opinions from collaborators, sharing high-resolution primary image data alongside published manuscripts, and creating educational content for medical trainees. Minerva is also being used by national consortia to build tissue atlases, and we plan to add it to existing genome browsers such as cBioPortal (Cerami et al., 2012) and thereby facilitate joint exploration of genomic and histological data.

Detailed documentation with step-by-step instructions for using Minerva, tutorial videos, exemplar data, and details on software testing are located alongside the source code on the Minerva wiki on GitHub. A wide variety of exemplary Minerva stories can be found at https://www.cycif.org/software/minerva.
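For concreteness, below is a minimal Python sketch of how a story definition file of the kind Minerva Author produces might be assembled programmatically. The actual schema is not specified in this paper, so every field name, value, and URL in the example is a hypothetical placeholder for illustration, not Minerva's real format; consult the project documentation for the true schema.

```python
# Hypothetical sketch of writing a Minerva-style story definition file.
# All keys and values below are illustrative placeholders, not the
# actual Minerva JSON schema.
import json

story = {
    "image": {"pyramid_url": "https://example.org/tissue/"},  # hypothetical host
    "waypoints": [
        {
            "name": "Tumor-immune boundary",          # hypothetical waypoint title
            "text": "Note the band of CD8+ cells.",   # narrative description
            "position": [0.42, 0.58],                 # pan target (normalized x, y)
            "zoom": 8.0,                              # zoom level
            "channels": ["DNA", "CD8", "PD-L1"],      # 4-8 channels are typical
            "annotations": [
                {"type": "arrow", "at": [0.45, 0.55]}  # overlaid shape annotation
            ],
        }
    ],
}

# The static story file is then uploaded alongside the JPEG pyramids.
with open("story.json", "w") as f:
    json.dump(story, f, indent=2)
```

Because the story file and image pyramids are plain static assets, this kind of generated JSON can be hosted on GitHub Pages or Amazon S3 exactly as described above, with no server-side code.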
Under Western Eyes as an Illustration of the Consequences of Loneliness

The protagonist of Joseph Conrad's novel Under Western Eyes, Razumov, is a man who suffers from loneliness. Although, at first, he possessed the advantages of youth, education, and health to make his life fruitful and enjoyable, he could not escape becoming a victim of his own wrongdoings, which can be said to have happened due to his lack of sharpness and decisiveness. When he ceases his agonizing fear of confronting himself and his own wrongdoings, he realizes that he is a shameful person. Together with shame there comes punishment, which is justified by Razumov himself. Being aware of the fact that he can become neither Ziemianitch nor Haldin, he finally internalizes the idea of being "no one," as pointed out by Miss Haldin at the end of the novel. As suggested by Miss Haldin, all humans will be pitied in the end, no matter which ideology they come from. In this sense, being "no one" serves as a fitting categorization for Razumov, who looked for a place for himself in life: at the beginning of the novel, through material success and, in the second half of the novel, through feelings. Razumov is the representation of an ordinary man who is in search of a place for himself and who has his own agitations driven by past experiences. In a world that is described through the binaries of the good and the bad, he is the representation of the man who stands alone, without a strong adherence to a point of view in life, and will end up being categorized as "no one."

The Polish-British author Joseph Conrad is renowned for novels like Heart of Darkness, The Secret Agent, Lord Jim and Under Western Eyes, which address profound themes of human existence and nature. Conrad's much acclaimed novel Under Western Eyes depicts the political turmoil of nineteenth-century Russia. The protagonist of the novel, Razumov, is a young university student in St. Petersburg, Russia. Although he is a much admired, young, and healthy man, as a result of his unsound decisions and deluded vision he experiences unfortunate events which follow one another, and Razumov finds himself entrapped in an unfortunate structure from which he is unable to escape. In my essay, I will seek to identify the elements of loneliness experienced by Razumov which eventually lead to the destructive finale of Razumov's story.

In Under Western Eyes, the character Razumov is a lonely man. He lives in a state of alienation from his own country and his own people. He does not have an interest in the welfare of the state or the society he lives in. As it is pictured in the novel: "Razumov was one of those men who, living in a period of mental and political unrest, keep an instinctive hold on normal, practical, everyday life. He was aware of the emotional tension of his time; he even responded to it in an indefinite way. But his main concern was with his work, his studies, and with his own future" (Conrad, 2015, p. 9). Thus, it can be claimed that he can be characterized as a selfish man detached from the community he lives in.

In his search for honor and virtue, Razumov drifts away from the righteous path.
Razumov is after fallacious ideals. While he seeks the ways of being called a good and an honorable man, what he is really after is an important place for himself in society and making a name for himself through winning the silver medal. In the novel, this is openly stated:

He hankered after the silver medal. The prize was offered by the Ministry of Education; the names of the competitors would be submitted to the Minister himself. The mere fact of trying would be considered meritorious in the higher quarters; and the possessor of the prize would have a claim to an administrative appointment of the better sort after he had taken his degree. The student Razumov in an access of elation forgot the dangers menacing the stability of the institutions which give rewards and appointments. (Conrad, 2015, p. 9)

His desire to secure a name and a place for himself in society is also apparent in this statement of his:

But a celebrated professor was a somebody. Distinction would convert the label Razumov into an honoured name. There was nothing strange in the student Razumov's wish for distinction. A man's real life is that accorded to him in the thoughts of other men by reason of respect or natural love. Returning home on the day of the attempt on Mr. de P--'s life Razumov resolved to have a good try for the silver medal. (Conrad, 2015, p. 11)

As a lonely person, he is detached from what goes on around him in the country. His mistake of spying on the Russian revolutionists is likewise in line with his desire for fame or, in other words, a place for himself in society. However, he does not find redemption in this attempt either, as "his position brings him no real fame-only the infamy of a fall made more infamous by the fact that it was not deserved but bureaucratically imposed by the same bureaucracy he devotedly served" (Davidson, 1977, p. 26).

In his journey of inviting Haldin into his house, then giving him in, then spying and, finally, making his epic confession, Razumov seems completely unable to decide which action will bring him good; or, putting it more plainly, what is good for himself. As Michel states:

According to his conservative principles he delivers up the assassin Haldin. But he finds he has betrayed himself as well when he followed these dictates. He is conscious of his mistake almost immediately. But society condones it, for the Prince who is his natural father and the General to whom he reports Haldin's whereabouts agree on the moral soundness of Razumov's action. (Michel, 1961, p. 132)

Right after the moment when he realizes that he has done something wrong, he attempts to do something else to fix the former problem. However, the outcome does not change, and every step he takes leads to destruction.
Through love and trust Razumov attains self-knowledge and realizes that in betraying Haldin he has betrayed himself. His contempt for others, a sense of scorn which now extends even to himself, has become a viper in his soul which can only be exorcised by confession. After Razumov recognizes this point and abases himself before Miss Haldin, everything else is explanatory and not dramatic necessity. When Conrad failed to develop this change in Razumov as the sole climax of the plot, as the psychological inevitability of Razumov's story, then he committed many grievous errors esthetically, the worst of which is the ending. (Karl, 1959, p. 325-326)

The reason why Razumov fails to produce an action that will serve his own welfare as well as others' is that his actions are not in the service of pure goodness. And this is what brings Razumov to his downfall. The assassinated Mr. de P.'s statement about sin and stability is quite interesting in this sense:

In the preamble of a certain famous State paper he had declared once that "the thought of liberty has never existed in the Act of the Creator. From the multitude of men's counsel nothing could come but revolt and disorder; and revolt and disorder in a world created for obedience and stability is sin. It was not Reason but Authority which expressed the Divine Intention. God was the Autocrat of the Universe...." It may be that the man who made this declaration believed that heaven itself was bound to protect him in his remorseless defence of Autocracy on this earth. (Conrad, 2015, p. 7)

In Mr. de P.'s perspective, action is nothing but sin in this world that is created for stability. This stability requires the person to do nothing in an attempt to find goodness. In other words, he should sit and wait under the autocracy of others and let them decide what is right for him. In Razumov's case, Razumov "has discovered, for example, that the consequences of his decisions are more complex and problematic than he had initially anticipated" (Cousineau, 1986, p. 29). At this point, a question arises: did Razumov choose to act or not when he gave away Haldin? From my perspective, Razumov chose to act, but his motivations were wrong, and this drove him further and further away from the chance of attaining good. Thus, it can be claimed that "Razumov redeems himself 'by acknowledging the demonic self as his own and giving himself over to the course of action that it suggests'" (Cousineau, 1986, p. 28-29). And, like a boomerang, in each and every attempt to achieve action he is bound to return to his starting point of inaction. The idea of stability turns into his curse, from which he is unable to run away. "Razumov thought: 'I am being crushed-and I can't even run away.' Other men had somewhere a corner of the earth-some little house in the provinces where they had a right to take their troubles. A material refuge. He had nothing. He had not even a moral refuge-the refuge of confidence. To whom could he go with this tale-in all this great, great land?" (Conrad, 2015, p. 24).

Razumov's giving in of Haldin is an action done not for the sake of goodness but with another agenda, and it can also be described as an outcome of panic and fear, which fails to qualify as a truly good reaction. Thus, "His betrayal, suffused by his anguish and attacked by his reason, becomes the more sordid because he stands by it. He sees even the very moment of his becoming safe (by virtue of the suicide of someone assumed to be the betrayer) as being absurd" (Michel, 1961, p.
132). His denial of the fact that he has done something wrong makes his diverging from the true path much more likely. He says:

Betray. A great word. What is betrayal? They talk of a man betraying his country, his friends, his sweetheart. There must be a moral bond first. All a man can betray is his conscience. And how is my conscience engaged here; by what bond of common faith, of common conviction, am I obliged to let that fanatical idiot drag me down with him? On the contrary-every obligation of true courage is the other way. (Conrad, 2015, p. 28)

The fact that Razumov lacks a moral direction is the reason behind the fatal maneuvers he makes. "In murdering Haldin, he has also murdered time, and the slain dimension cuts him off from the world of light as much as the slain man" (Gurko, 1960, p. 446). Lost in space and time, thus pathless and timeless, Razumov is aware that he is a solitary man whose lack of a path leads him to a dire circumstance. Gurko comments on this lack of direction in his life in this way:

Though a student for some years, he has made no friends, his air of forbidding aloofness discouraging contact. Paradoxically, this very air is taken as a mark of intellectual profundity and moral purity, as the sign of "an unstained, lofty and solitary existence." Unknown to himself, Razumov has acquired a reputation as a man in whom one could have confidence. His isolation, and the unintended respect and admiration which it accidentally breeds, are to be the very elements that plunge him into tragedy. (Gurko, 1960, p. 445)

His loneliness comes with birth, as he lives without any bond to a mother, father, or any relative:

Officially and in fact without a family (for the daughter of the Archpriest had long been dead), no home influences had shaped his opinions or his feelings. He was as lonely in the world as a man swimming in the deep sea. The word Razumov was the mere label of a solitary individuality. There were no Razumovs belonging to him anywhere. His closest parentage was defined in the statement that he was a Russian. Whatever good he expected from life would be given to or withheld from his hopes by that connexion alone. This immense parentage suffered from the throes of internal dissensions, and he shrank mentally from the fray as a good-natured man may shrink from taking definite sides in a violent family quarrel. (Conrad, 2015, p. 9)

And he acquires his loneliness as an incurable illness. That is why he sees that his every attempt is nothing but a mere trial in vain. The memory of the previous year's prize winner serves as proof of Razumov's hypothesis that everything he achieves is bound to be nothing but a volatile attempt:

He was a quiet, unassuming young man: "Forgive me," he had said with a faint apologetic smile and taking up his cap, "I am going out to order up some wine. But I must first send a telegram to my folk at home. I say! Won't the old people make it a festive time for the neighbours for twenty miles around our place." Razumov thought there was nothing of that sort for him in the world. His success would matter to no one. (Conrad, 2015, p. 9)

As Davidson claims, Razumov thinks that "the very fact that he has little, no family or position, ostensibly justifies Haldin who jeopardizes what little he has, his lonely independence and his hope for future fame. Not surprisingly, when he cannot immediately escape from the threat that Haldin represents, Razumov informs on him and so assures his capture and execution" (Davidson, 1977, p.
25). In the depth of the immensity of his suffering Razumov cries out:

"You are a son, a brother, a nephew, a cousin-I don't know what-to no end of people. I am just a man. Here I stand before you. A man with a mind. Did it ever occur to you how a man who had never heard a word of warm affection or praise in his life would think on matters on which you would think first with or against your class, your domestic tradition-your fireside prejudices?... Did you ever consider how a man like that would feel? I have no domestic tradition. I have nothing to think against. My tradition is historical. What have I to look back to but that national past from which you gentlemen want to wrench away your future? Am I to let my intelligence, my aspirations towards a better lot, be robbed of the only thing it has to go upon at the will of violent enthusiasts? You come from your province, but all this land is mine-or I have nothing. No doubt you shall be looked upon as a martyr some day-a sort of hero-a political saint. But I beg to be excused. I am content in fitting myself to be a worker. And what can you people do by scattering a few drops of blood on the snow? On this Immensity. On this unhappy Immensity! I tell you," he cried, in a vibrating, subdued voice, and advancing one step nearer the bed, "that what it needs is not a lot of haunting phantoms that I could walk through-but a man!" (Conrad, 2015, p. 46)

However, Razumov is not completely blind to the very fact that he is also dishonoring himself through his actions. Even when he was on duty in favor of the autocracy, he knew that this was his finale:

Moreover, the more capably he served, the more he would dishonor himself in his own eyes by making others, like himself, victims of a misplaced trust. He desired to achieve renown and believed he possessed the qualities-intelligence and dedication-necessary to do so. Yet Conrad shows that, even as Razumov attempts to cope with the difficult situations that are forced upon him, he must increasingly perceive the degree to which he is dishonoring himself. His rationality thus serves primarily to reveal the extent of his failure. (Davidson, 1977, p. 25)

From this perspective, it can be claimed that the journey to ultimate goodness should take place within the person's soul rather than on external ground. Thus, if a person wants to be good, he should first acknowledge that he should be directed by his soul. Haldin puts a finger on how important the soul is in one's life and warns Razumov:

Men like me leave no posterity, but their souls are not lost. No man's soul is ever lost. It works for itself-or else where would be the sense of self-sacrifice, of martyrdom, of conviction, of faith-the labours of the soul? What will become of my soul when I die in the way I must die-soon-very soon perhaps? It shall not perish. Don't make a mistake, Razumov. (Conrad, 2015, p. 16)

Haldin's belief in the eternity of the human soul represents the idea that the labors of a person will not evaporate the minute he dies. The labors, and their ultimate effects, will not perish even after the body is rotten. Haldin says: "The Russian soul that lives in all of us. It has a future. It has a mission, I tell you, or else why should I have been moved to do this-reckless-like a butcher-in the middle of all these innocent people-scattering death-I! I!... I wouldn't hurt a fly!" (Conrad, 2015, p.
16). In this perspective, all the sacrifices made through all these killings are done for a greater purpose: the future welfare of the state. Haldin states that this is the responsibility of every citizen and calls Razumov to act and help the revolutionists in their aim. Thus, the sins that are committed may be redeemed in the eyes of a great power, and all the wrongdoings can be forgiven, for they were leading people to a greater and virtuous purpose. That is the reason why Haldin is ready to die when the moment comes; he is not after living a long life, as he thinks that the dimension he will go to right after he dies carries much more importance than the material world he lives in right now. Haldin consoles Razumov by saying: "Why be anxious for me? They can kill my body, but they cannot exile my soul from this world. I tell you what-I believe in this world so much that I cannot conceive eternity otherwise than as a very long life. That is perhaps the reason I am so ready to die" (Conrad, 2015, p. 44). This is why he does not even care enough to hate the people who torture him on earth. He says: "Haunt it! Truly, the oppressors of thought which quickens the world, the destroyers of souls which aspire to perfection of human dignity, they shall be haunted. As to the destroyers of my mere body, I have forgiven them beforehand" (Conrad, 2015, p. 45).

Could Razumov have been saved from his troubles if he had followed the instructions given by Haldin? As a man devoted to his discourse in order to create a better future for the citizens of his country, risking all that he has, Haldin shows courage, which is an important element in this foretold journey. While Haldin can be associated with strong feelings and irrationality, Razumov is just the opposite. Razumov "has been a faithful believer in the intellectual life and has always tried to regulate his activities in accordance with a strict logic of profit and loss" (Karl, 1959, p. 316). However, as each of them plays the role that is predestined for the other one, they shift places. "As Razumov later points out to Haldin, the latter has family connections to fall back upon, while he, Razumov, has no one; he is just 'a man with a mind'" (Karl, 1959, p. 315). Razumov says: "I have no domestic tradition. I have nothing to think against. My tradition is historical. You [Haldin] come from your province, but all this land is mine-or I have nothing" (Conrad, 2015, p. 47). Thus, in this shift, there is a tie of brotherhood between them because "after Haldin leaves to fall into the police trap, Razumov again identifies himself with the now equally isolated revolutionary and in their common rootlessness they become spiritual brothers" (Karl, 1959, p. 315). In this context, logic does not bring salvation; a strong belief, a strict discourse, and devotion are the only ways of salvation for the human being. However late it is, Razumov, in the end, understands this. As Karl states: "Once Razumov recognizes that a pact with logic is a pact with the devil, he becomes spiritually cleansed, and his confessions, first to Miss Haldin and then to the revolutionaries, are the fruits of his conversion" (Karl, 1959, p. 317).
As a result of his ultimate belief in the eternity of the soul, Haldin thinks that his life should have meaning and utility, and, in his attempt to make Razumov's life inherit a meaning, such a conversation occurs between the two:

"Kirylo Sidorovitch," said the other, flinging off his cap, "we are not perhaps in exactly the same camp. Your judgment is more philosophical. You are a man of few words, but I haven't met anybody who dared to doubt the generosity of your sentiments. There is a solidity about your character which cannot exist without courage." "That is what I was saying to myself," he continued, "as I dodged in the woodyard down by the river-side. 'He has a strong character, this young man,' I said to myself. 'He does not throw his soul to the winds.'" (Conrad, 2015, p. 12)

However, Razumov does not seem to be very hopeful in this respect. Razumov questions how his life can be defined: "What was his life? Insignificant; no good to anyone; a mere festivity. It would end some fine day in his getting his skull split with a champagne bottle in a drunken brawl. At such times, too, when men were sacrificing themselves to ideas. But he could never get any ideas into his head. His head wasn't worth anything better than to be split by a champagne bottle" (Conrad, 2015, p. 60). When he is asked for help by Haldin, the first image that comes to his mind regarding his future is far from desirable: "He saw his youth pass away from him in misery and half starvation-his strength give way, his mind become an abject thing. He saw himself creeping, broken down and shabby, about the streets-dying unattended in some filthy hole of a room, or on the sordid bed of a Government hospital" (Conrad, 2015, p. 16).

Quite interestingly, Razumov, who at the beginning disregarded his spiritual side, experiences his sufferings first in this spiritual side of his. As Madran claims: "His spiritual collapse begins with his moral conflicts. His tragedy begins in his soul, and the external action only serves to reveal his psychological alienation and loneliness. Razumov must pass through an excruciatingly painful split in his soul in order to arrive at an understanding of himself" (Madran, 2006, p. 239). Thus, the hallucinating and mentally imbalanced Razumov is a result of this painful split in his soul: "Conrad makes the reader analyze Razumov's conflicts by his inner voices. The dilemma he has is triggered with the hallucinations he sees" (Yağlıdere, 2013, p. 98). An example of his hallucinations can be the one in which he saw Haldin:

This hallucination had such a solidity of aspect that the first movement of Razumov was to reach for his pocket to assure himself that the key of his rooms was there. But he checked the impulse with a disdainful curve of his lips. He understood. His thought, concentrated intensely on the figure left lying on his bed, had culminated in this extraordinary illusion of the sight. Razumov tackled the phenomenon calmly. With a stern face, without a check, and gazing far beyond the vision, he walked on, experiencing nothing but a slight tightening of the chest. After passing he turned his head for a glance, and saw only the unbroken track of his footsteps over the place where the breast of the phantom had been lying. (Conrad, 2015, p.
At this point, a question arises: what prevented Razumov from acting in the direction of the good? What led him to his loneliness? Why did he become a victim of despotism? Panichas claims that fear is the ultimate answer to those questions: A man may destroy everything within himself. But he cannot destroy fear. Indeed, from the moment of his encounter with Haldin it is fear that possesses and drives Razumov in all of his actions, his moods, feelings, and decisions that would permanently, even fatally, affect him and also the lives of those who come into any contact with him. Increasingly the external world presses against Razumov's world of solitude and the sense of order that it seems to provide him. His isolation defines and strengthens his control over his life. (Panichas, 1998, p. 361) This orderly life of his is shattered by the appearance of Haldin. "In a sense Haldin is the destroyer of Razumov's ordered, if not innocent, world. Extremism, in a word, now invades Razumov's private world; and he feels overpowered by its antagonist spirit; indeed, this can even be termed the spectre of ideology casting a dark shadow over human existence" (Davidson, 1977, p. 361). After Haldin is arrested by the authorities, Razumov is depicted as no longer seeking order in his life. This can be observed in the way he behaves in his rooms: Razumov turned away brusquely and entered his rooms. All his books had been shaken and thrown on the floor. His landlady followed him, and stooping painfully began to pick them up into her apron. His papers and notes, which were kept always neatly sorted (they all related to his studies), had been shuffled up and heaped together into a ragged pile in the middle of the table. This disorder affected him profoundly, unreasonably. He sat down and stared. He had a distinct sensation of his very existence being undermined in some mysterious manner, of his moral supports falling away from him one by one. He even experienced a slight physical giddiness and made a movement as if to reach for something to steady himself with. He did not attempt to put his papers in order, either that evening or the next day-which he spent at home in a state of peculiar irresolution. This irresolution bore upon the question whether he should continue to live-neither more nor less. (Conrad, 2015, p. 58) From that point on, his life is invaded by extremes, such as the extremity of his feelings and affections. When he goes to Geneva and befriends the revolutionaries, his life starts to change uncontrollably. Although Razumov never intends to build close relations with other people, he is, by accident or by force, made to form a bond of friendship with the circle of the revolutionaries.

Caught up in these extremes of Russian conduct, Razumov's dream of pursuing his private life has been shattered. Driven into the role of a government spy, he now finds himself thrown into the most intimate contact with others. Yet each relationship is poisoned by duplicity. Befriended by the students in St. Petersburg, he uses them shamefully as pawns to help his sham escape. Accepted by the revolutionary circle in Geneva, he betrays them in long reports on their activities to their enemies at home. (Gurko, 1960, p. 447)
Now that he has found friendship, his past lack of affection blinds him, and he stands on the verge of a fatal mistake. Although he was previously forced into sham relations, when he encounters love, things change for Razumov: Love is one of the sentiments in Conrad which releases men from the suffocation of narcissism and the emptiness of non-involvement. It is by no means the only one: friendship, duty, honor, patriotism, even a diffusely warmhearted generosity, feelings intricately dissected in the other novels, have a similar cathartic effect.... It forces Razumov to examine himself as he is, free from the bondage of vanity and the desperation of loneliness. (Gurko, 1960, p. 451) In the circle of the revolutionaries, it is especially his feminine surroundings that have this shocking effect upon Razumov: In Geneva, Razumov encountered that feminine presence which had been excluded from his life in St. Petersburg. This meeting coincided with his discovery of utopian aspirations that made his earlier ambitions seem banal by comparison. Psychologically, he experienced the dissolution of the barrier which prevents access to infantile memories and recovered the dream of a lost paradise concealed within them. (Cousineau, 1986, p. 38) And when he starts to acknowledge and even respect the bond created between himself and those women, among them especially Miss Haldin, for whom he feels deep affection, everything starts to become disjointed and centerless. "Only Razumov makes in Under Western Eyes a significant redemptive choice by respecting the bond of love he has come to feel for Natalie Haldin. His severest temptation is to trick her, to betray her trust in him as he betrayed her brother's" (Michel, 1961, p. 135). His feelings urge him to act, but the Razumov who acts is not a reasonable and sound one; he is deluded and unstable. He has lost control because of his strange and extreme feelings.

By occupation he is, ironically, a student of philosophy. Yet he is continually misjudged and misjudging.... Yet these various illusions are all interrelated by Conrad's manipulating the events of the novel so that the manner in which others are deceived about Razumov finally forces him to see that he was also equally deceived about himself. Such a process begins with Haldin's misjudgment. His intrusion into Razumov's life entails, for the latter, an impossible dilemma but one that still must be immediately resolved. (Davidson, 1977, p. 24) The wrongness of his direction begins with his first wrong action: turning Haldin in. He believes that he is self-sufficient and self-contained, that he is capable of acting solely according to the dictates of reason. However, Razumov forgets that reason does not create as much as it discovers the conditions of human happiness. In the interest of self-protection and self-delusion, he goes in search of the peasant sledge driver, Ziemianitch, but he cannot wake him from his drunken sleep. He beats him unmercifully. It is Razumov's anger at the failure of a man on whom Haldin depended and on whom Razumov also now depends to extricate himself from the position he is in. (Madran, 2006, p. 237-238)
His unjust actions continue with the beating of Ziemianitch, which is nothing but an act of rage. "Razumov turns Haldin over to the police but before doing so betrays his own avowed convictions by trying to help Haldin escape; when he finds the carriage driver Ziemianitch, Razumov flies into a rage which leads him in the end to the police" (Gurko, 1960, p. 449).

In this respect, it would not be wrong to claim that both Ziemianitch and Haldin embody Razumov's sense of entrapment, as both of them signify the wrongdoings he has committed. As Panichas claims: "Between the two he was done for. Between the drunkenness of the peasant incapable of action and the dream-intoxication of the idealist incapable of perceiving the reason of things, and the true character of men" (Panichas, 1998, p. 362). At this point, he immediately realizes that he is neither of those two men. He can be neither a peasant deprived of action like Ziemianitch nor a dreamy idealist like Haldin. With his identity crushed between these two very different figures, he asks himself in which direction he should turn: Now, since his position had been made more secure by their own folly at the cost of Ziemianitch, he felt the need of perfect safety, with its freedom from direct lying, with its power of moving amongst them silent, unquestioning, listening, impenetrable, like the very fate of their crimes and their folly. Was this advantage his already? Or not yet? Or never would be? (Conrad, 2015, p. 207) This problem constitutes Razumov's biggest challenge in life: he is a person who lacks a firm sense of direction.

Things and men have always a certain sense, a certain side by which they must be got hold of if one wants to obtain a solid grasp and a perfect command. The power of Councillor Mikulin consisted in the ability to seize upon that sense, that side in the men he used. It did not matter to him what it was-vanity, despair, love, hate, greed, intelligent pride or stupid conceit, it was all one to him as long as the man could be made to serve. The obscure, unrelated young student Razumov, in the moment of great moral loneliness, was allowed to feel that he was an object of interest to a small group of people of high position. (Conrad, 2015, p. 225) Razumov is under a heavy burden, for he must find a direction of his own, though this is not an easy task. His choice of side must be one that will earn him the praise of others. In his unfortunate position, Razumov stands on unstable ground.

And there was some pressure, too, besides the persuasiveness. Mr. Razumov was always being made to feel that he had committed himself. There was no getting away from that feeling, from that soft, unanswerable, "Where to?" of Councillor Mikulin. But no susceptibilities were ever hurt. It was to be a dangerous mission to Geneva for obtaining, at a critical moment, absolutely reliable information from a very inaccessible quarter of the inner revolutionary circle. There were indications that a very serious plot was being matured.... The repose indispensable to a great country was at stake.... A great scheme of orderly reforms would be endangered.... The highest personages in the land were patriotically uneasy, and so on. In short, Councillor Mikulin knew what to say. This skill is to be inferred clearly from the mental and psychological self-confession, self-analysis of Mr. Razumov's written journal-the pitiful resource of a young man who had near him no trusted intimacy, no natural affection to turn to. (Conrad, 2015, p. 226)
At the threshold of an important decision, Razumov faces the problem of choice. But, unfortunately, as he makes the worst choice of all, he moves further away from being a good man. Razumov is no longer guided by reason; he is directed only by a false sense of what should be good for himself and himself alone, yet he still cannot escape harming himself, too. Thus, it can be said that the exaggeration of certain faculties in his life affects him negatively and brings him to his sad finale. Blinded first by an excessive use of reason and then by an excessive use of feeling, he eventually inclines in the wrong direction. He is also aware of this: "It was the world-those officers, dignitaries, men of fashion, officials, members of the Yacht Club. The event of the morning affected them all. What would they say if they knew what this student in a cloak was going to do? Not one of them is capable of feeling and thinking as deeply as I can. How many of them could accomplish an act of conscience?" (Conrad, 2015, p. 29).

The question to be asked at this point is this: was Razumov acting according to his desires, or did he start to pretend he desired what he committed only after the deed was done? Razumov may have done the latter, as he was the puppet of his past, because at the very stroke of midnight he jumped up and ran swiftly downstairs as if confident that, by the power of destiny, the house door would fly open before the absolute necessity of his errand. And as a matter of fact, just as he got to the bottom of the stairs, it was opened for him by some people of the house coming home late-two men and a woman. He slipped out through them into the street, swept then by a fitful gust of wind. (Conrad, 2015, p. 263) History demands to be defended. Thus, devotion to his own past action is also a necessity for Razumov, as he is in a position where he cannot even react against his own past actions, although he is aware of their deficiencies: Of course he was far from being a moss-grown reactionary. Everything was not for the best. Despotic bureaucracy... abuses... corruption... and so on. Capable men were wanted. Enlightened intelligences. Devoted hearts. But absolute power should be preserved-the tool ready for the man-for the great autocrat of the future. Razumov believed in him. The logic of history made him unavoidable. The state of the people demanded him. "What else?" he asked himself ardently, "could move all that mass in one direction? Nothing could. Nothing but a single will." (Conrad, 2015, p. 26-27) And he is also aware that his past will not be defended by others, even after his death. "It passed through his mind that there was no one in the world who cared what sort of memory he left behind him. He exclaimed to himself instantly, 'Perish vainly for a falsehood!... What a miserable fate!'" (Conrad, 2015, p. 27).

His fatal mistake lies in this divided nature of his. His reason symbolizes the autocratic view and his feelings symbolize the revolutionary point of view. "We are made aware of Razumov's moral predicament, no less than his moral isolation, condemned as he is by both 'the lawlessness of autocracy' and 'the lawlessness of revolution'" (Panichas, 1998, p. 364), and when he acts according to either of these, he acts in the most excessive way. As Cousineau puts it:
The narrator's reservations about revolutionary activities notwithstanding, we are led to feel that Razumov has come to a recognition of his past errors and, hence, to a deepening of his moral consciousness. Razumov's psychological development, however, seems to proceed in the opposite direction. Briefly, and with some simplification, we may say that the Razumov whom we meet in the first part of the novel has been initiated into the world of adult reality, as evidenced by his willingness to adapt himself to the desires of others. (Cousineau, 1986, p. 29) Razumov says: "Did it ever occur to you how a man who had never heard a word of warm affection or praise in his life would think on matters on which you would think first with or against your class, your domestic tradition-your fireside prejudices? . . . Did you ever consider how a man like that would feel? I have no domestic tradition" (Conrad, 2015, p. 46). Thus, his former commitment to the autocratic authorities was a secure point and a comfort zone for him: "His joining the secret police at the invitation of Mikulin and with the encouragement of the prince is the logical outcome of his struggle to guarantee his position in the world by submitting himself to the representatives of paternal authority" (Cousineau, 1986, p. 30).

When he pretended to be a man protecting autocracy, he suppressed his feelings, and in the circle of the revolutionaries he started to act as if he were an actual revolutionary. When this realization struck him, he started talking to himself in the empty room in this fashion: He imagined himself accosting the red-nosed student and suddenly shaking his fist in his face. "From that one, though," he reflected, "there's nothing to be got, because he has no mind of his own. He's living in a red democratic trance. Ah! You want to smash your way into universal happiness, my boy. I will give you universal happiness, you silly, hypnotized ghoul, you! And what about my own happiness, eh? Haven't I got any right to it, just because I can think for myself?" (Conrad, 2015, p. 222) However, this realization was what he needed the most: Conrad's conception of the individual is, ironically, a person who, once thrown out of society, must recognize the terms of his existence and then try to re-enter or else be overcome by a hostile world. His way of re-entrance, in so far as he has a choice, can be through conquest or renunciation. Razumov makes the latter choice, and paradoxically his renunciation leads to both his destruction and acceptance, in each case by the same people. (Karl, 1959, p. 313-314) When he learns to disregard his fear of lacking the society's approval, he takes responsibility for his actions; but this does not mean that his wrong actions will be forgiven. Razumov's present position is explained by Karl in this manner: He becomes a helpless man exposed upon a craft which is at everyone's mercy and, because of his realization of guilt, a man unable to function for himself. The everyday world is left behind: food, clothes, marriage, the nice ties of societal intercourse, even the leisurely and relaxed moments a person intermittently allows himself-all these necessities of sane living are pushed into the background. (Karl, 1959, p. 315) Razumov evaluates and then regrets his past action after falling in love with Miss Haldin.
He then experiences the feeling of shame. Razumov is also aware that shame will not bring salvation; thus he administers his own punishment by deciding to confess his guilt to Miss Haldin and by acting on that decision. As Razumov's resolution to confess becomes stronger, the rain increases in intensity; as the storm cleanses him physically, so his confession is to cleanse him spiritually; as he nears Laspara's house where the revolutionaries are meeting, a single clap of thunder heralds his arrival; and after he is deafened by Nikita and thrown into the street, the violence of the outer world can no longer touch him-his confession has truly led to serenity of mind and spirit. (Karl, 1959, p. 326) What led him to confession is his ability to sympathize with Miss Haldin. "Razumov apparently achieved a double perspective on himself that led to a fuller understanding of the implications of his plot and a more general awareness of what adhering to it would indicate about his own confused nature. In other words, he recognized himself in her and Haldin in himself. Must she be, like him, a victim?" (Davidson, 1977, p. 27). Miss Haldin is the one who brings out the shameful Razumov, who finally finds the courage to confront himself. "Razumov looked behind a veil to see what the extent of Natalia's suffering would be and what that suffering might mean as an index to his own nature. Natalia, however, cannot return the act" (Davidson, 1977, p. 28). Natalia functions as the trigger of this predestined confession. "Ultimately, Nathalie is the force that wrenches from Razumov the truth of his fateful involvement in Haldin's life. She insists on hearing the full "story" of his involvement, even as Razumov has been agonizing to relate it to her, fitfully, fatefully. His final words to her have far-reaching consequences" (Panichas, 1998, p. 369). Those far-reaching consequences are far from bringing salvation, as mentioned above. They function as the required and desired punishment for Razumov. Madran states that,
Impact of universal mass vaccination with monovalent inactivated hepatitis A vaccines – A systematic review

ABSTRACT The WHO recommends integration of universal mass vaccination (UMV) against hepatitis A virus (HAV) in national immunization schedules for children aged ≥1 year, if justified on the basis of acute HAV incidence, declining endemicity from high to intermediate, and cost-effectiveness. This recommendation has been implemented in several countries. Our aim was to assess the impact of UMV using monovalent inactivated hepatitis A vaccines on incidence and persistence of anti-HAV (IgG) antibodies in pediatric populations. We conducted a systematic review of literature published between 2000 and 2015 in PubMed, Cochrane Library, LILACS and IBECS, identifying a total of 27 studies (Argentina, Belgium, China, Greece, Israel, Panama, the United States and Uruguay). All except one study showed a marked decline in the incidence of hepatitis A post introduction of UMV. The incidence in non-vaccinated age groups decreased as well, suggesting herd immunity but also rising susceptibility. Long-term anti-HAV antibody persistence was documented up to 17 y after a 2-dose primary vaccination. In conclusion, introduction of UMV in countries with intermediate endemicity for HAV infection led to a considerable decrease in the incidence of hepatitis A in vaccinated and in non-vaccinated age groups alike.

Introduction Most data on the incidence of acute HAV infection and prevalence of immunity cited in the literature are relatively old. According to World Health Organization (WHO) estimates, there were 126 million cases of acute hepatitis A in 2005. 1,2 Acute hepatitis A-related morbidity and mortality increase with age. In children aged <6 years, ≈70% of infections are asymptomatic; if illness does occur, it is typically anicteric. In contrast, in older children, adolescents and adults, infection often leads to clinically overt acute hepatitis. 3,4 Acute hepatitis A in adults may lead to prolonged incapacitation and, rarely, to acute liver failure in previously healthy individuals and in patients with chronic liver disease. 5 There is no specific treatment for acute hepatitis A except for supportive care and liver transplantation in the rare cases with liver failure. 6 The virus is transmitted through the fecal-oral route, either through person-to-person contact or through contaminated food or water. 6 The highest rates of infection are found in areas with poor sanitary conditions and hygienic practices and lack of access to clean water. 7,8 Other risk factors for acquiring HAV include intravenous drug abuse and men having sex with men (MSM). 9 Improvements in sanitation and access to clean water reduce viral circulation and infection, and therefore the risk of waterborne HAV transmission and the overall rates of transmission. This reduction can be observed in the absence of vaccination as well as when vaccination programs are in place. The first commercially produced hepatitis A vaccine was launched in 1992. 10 Both inactivated and live attenuated vaccines against hepatitis A are currently available. 11 A live attenuated vaccine is mainly used in China; most other countries use inactivated vaccines. 12 Several monovalent inactivated hepatitis A vaccines are available, which are licensed for children aged one year or older (Table S1). [11][12][13][14] The WHO considers that HAV vaccines of different brand names are interchangeable.
11 The antigen content differs between vaccines, 14,15 however, all are considered safe and immunogenic. 13,[16][17][18][19][20] Long-term persistence of antibodies has been shown with 2-dose vaccination schedules in adults. 21,22 Areas with high viral transmission rates have a lower rate of severe morbidity and mortality than areas with lower viral transmission rates, as there are few susceptible adults in areas with high transmission rates. 2,23 However, epidemiologic shifts from high to intermediate levels of HAV circulation, resulting from improvements in sanitation and hygiene, are paradoxically associated with an increase in susceptibility to infection due to decreasing immunity in the population, as well as with more symptomatic disease due to older age at first infection. 7,10 The impact of vaccination can therefore be confirmed by a decline of reported symptomatic cases, of fulminant hepatitis cases and of liver transplants. 24 In these settings, the WHO recommends the integration of HAV vaccination into the national immunisation schedule for children aged one year and above, if indicated on the basis of incidence of acute hepatitis and consideration of cost-effectiveness. 1 Most countries that have introduced hepatitis A vaccination in their immunisation programs use the available monovalent vaccines. Combined vaccines that include hepatitis A and B or hepatitis A and typhoid have also been developed. However, with the exception of Quebec in Canada 25 and Catalonia in Spain, 26 where the combined hepatitis A and B vaccine is used in the pediatric immunisation programmes, these are mainly intended for use in adult travelers or patients with specific risks such as chronic liver diseases. 27 Furthermore, hepatitis B vaccination has been introduced as a birth dose, monovalent or combined with other antigens, since the late 1990s or early 2000s in most countries. This review is therefore focused on the use of monovalent hepatitis A vaccine in the universal mass vaccination (UMV) setting. Single-dose inactivated hepatitis A vaccines have been introduced in the national immunisation program in Argentina, and additional countries in Latin America are considering adopting a similar protocol. This option seems to be comparable in terms of short- and intermediate-term effectiveness, and is less expensive and easier to implement than the classical 2-dose schedule. 1,24 However, until further long-term experience has been obtained with a single-dose schedule, a 2-dose schedule may be preferable in individuals at substantial risk of contracting hepatitis A and in immunocompromised individuals. 1 Following an increase in the number of HAV outbreaks in the 1990s, Israel was the first country to introduce nationwide UMV for 18-month-old toddlers using 2 doses of Havrix™. 28 Additional countries that introduced UMV programs for hepatitis A include, among others, Argentina, 24 Bahrain, 29 Brazil, 30 China, 31 Greece, 32,33 Panama, 34 the US 35 and Uruguay 36 ; as well as regions of Belarus (Minsk City), 37 Canada (Quebec), 25 Italy (Puglia) 38 and Spain (Catalonia). 39 The objectives of this systematic review were to: (1) summarize data on the impact of monovalent inactivated hepatitis A vaccines in the context of UMV on the incidence of acute hepatitis A; (2) assess the impact of UMV on parameters other than incidence (e.g., indirect effects such as herd immunity); (3) summarize data on the long-term persistence of anti-HAV (IgG) antibodies in pediatric populations.
Inclusion and exclusion criteria In this systematic review only peer-reviewed primary research articles were included; review articles were excluded, but the reference lists of systematic reviews were screened to identify additional relevant primary articles. The gray literature was not reviewed. For review objectives 1 and 2, only observational studies conducted in a setting with UMV with monovalent, inactivated hepatitis A vaccines were included (Table S1). Studies from settings where hepatitis A vaccination was only implemented at the regional level (for example in Puglia, Italy 38 or Minsk City, Belarus 37 ), or from settings in which live attenuated hepatitis A vaccines or only combined hepatitis A vaccines were used in the UMV programs, were excluded. Furthermore, studies in at-risk populations, outbreak studies, modeling studies and economic evaluations were excluded, as were studies that did not present incidence or prevalence baseline data (i.e., data from the era prior to the introduction of UMV). For review objective 3, studies were only included if they were conducted with monovalent, inactivated hepatitis A vaccines in children (at the time of primary vaccination) and provided follow-up data for a minimum of 5 y.

Selection process Articles were selected in 3 steps. Firstly, titles and abstracts identified through the search strategy were screened to identify potentially relevant articles. All titles and abstracts were screened in duplicate by 2 independent researchers. Any disagreements were resolved by the 2 reviewers by discussing the title and abstract; in case any doubts remained, the full text was screened to ascertain whether the article answered one of the research questions. Secondly, the full text of the selected articles was screened, keeping in mind the inclusion and exclusion criteria described above, to determine whether it answered one of the review questions. If any aspects of the methodology were unclear, a comment was placed in the results table. Thirdly, for articles that presented duplicate data, the article that presented the most complete data (e.g., longer follow-up) was included.

Results The search resulted in 3313 unique hits, of which 27 were included in this systematic review (Fig. 1). In total, 10 articles were included for review objective 1, 15 for review objective 2 and 10 for review objective 3. Some articles presented data for more than one review objective.

Objective 1: Impact of UMV on HA incidence Reduction in incidence Ten studies provided data on incidence of acute hepatitis A before and after the introduction of hepatitis A UMV programs; all but one study (in Greece) found a marked decrease in acute hepatitis A incidence after UMV was implemented (Table 1). Declines were independent of the brand of hepatitis A vaccine used in the programs; the number of doses given; the target age at first vaccination, which ranged from 12 to 24 months; or the attained vaccination coverage (range 25%-96.8%). After the introduction of UMV, the percent reduction in the incidence of acute hepatitis A was 88% in Argentina, >95% in Israel, 93% in Panama and 96% in Uruguay, going from incidence rates ranging from 6.0 to 142.4 per 100,000 population before vaccine introduction to a range of 0.4 to 7.9 per 100,000 population.
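To make the arithmetic behind these figures explicit, the sketch below shows how such a percent reduction is derived from pre- and post-UMV incidence rates. This is a minimal Python illustration; the function name and the example rates are invented here and are not taken from any of the reviewed studies.

```python
# Minimal sketch (not from the reviewed studies): percent reduction in
# acute hepatitis A incidence, from pre- and post-UMV rates per 100,000.

def percent_reduction(rate_before: float, rate_after: float) -> float:
    """Percent decline in incidence after UMV introduction."""
    return 100.0 * (rate_before - rate_after) / rate_before

# Hypothetical example: a country going from 50.0 to 6.0 cases per 100,000.
print(f"{percent_reduction(50.0, 6.0):.0f}% reduction")  # prints: 88% reduction
```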
In Greece, a UMV program was initiated in 2008; however, due to the low endemicity level (<3.0 per 100,000 population) registered since the late 1980s, the program has not had a significant impact on the notification rate of acute hepatitis A cases. 40 (Table 2). 24 A study that looked at hepatitis A outbreaks in day care centers in the Southern District of Israel showed that no more outbreak-related acute hepatitis A cases were reported. 41 Hepatitis A vaccination was implemented in some US States as of 1999, with the rate of hepatitis A-related ambulatory healthcare visits among enrollees going from 20.9 in 1996-1997 to 8.7 in 2004. 42 The age-adjusted hepatitis A mortality rate decreased significantly from 0.51 in 1990-1995 to 0.28 in 2000-2004. 43 The 2011 hepatitis A incidence rate was the lowest ever recorded for the United States; data from the National Inpatient Survey have shown a reduction in the hepatitis A hospitalization rate from 0.64 in 2004-2005 to 0.29 in 2010-2011, 44 although the relative rate of hospitalized hepatitis A cases among overall acute hepatitis A cases increased. In Greece, the number of HA-related hospital admissions per 1000 hospital admissions among children dropped from 77.3 (95% CI 58.7-95.9) in 1999 (the year of introduction of the vaccine in the private market) to 18.5 (95% CI 8.2-28.9) in 2013. 45 Furthermore, the outbreaks in 2013 among Roma populations did not spread to the general population. 40

Indirect effects A decline in acute hepatitis A incidence was seen in all age groups after the introduction of UMV in Israel in 1999, 28,46,47 as well as in the US, where vaccination was introduced in 1999 in some States. 35,48,49 Such a drop was also recorded in Argentina in 2005 24 and in Panama in 2007 50 (Table 3). Declines in incidence were generally highest in the age groups that contained the most vaccinated children. 24,28,35,51 Incidence rates also dropped among children too young to be vaccinated in the programs. 28,35,50 In most studies, the smallest declines in acute hepatitis A incidence were noted in the oldest investigated age groups. 24,28,35,50 Similarly, a drop in hepatitis A-associated hospitalization rates was observed in non-vaccinated age groups in the US. 44 In settings where many adults are likely to have natural immunity from prior infection, the drop in incidence in age groups not targeted by the UMV programs suggests a remarkable degree of herd immunity.

Objective 3: Long-term persistence of anti-HAV antibodies Of the 10 included studies that reported on persistence of anti-HAV (IgG) antibodies more than 5 y after vaccination of a pediatric population, 2 studies were performed in Argentina, one in Belgium, 2 in China, one in Israel and 4 in the US (Table 4). In six studies, authors reported that children who received booster vaccinations after the primary immunisation schedule were excluded from the follow-up analyses. Follow-up among the included studies ranged from 5 to 17 y. In the study with the longest follow-up, 87 to 100% (depending on the vaccination schedule) of the children whose antibody levels were measured at follow-up were found to be seroprotected up to 17 y after vaccination. 52 The vaccination schedule, the number of doses, the antibody status of the mother and the age at vaccination were all found to influence the level of the geometric mean concentration (GMC) of anti-HAV antibodies.
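Since the persistence results that follow are expressed as GMCs with 95% confidence intervals, a brief sketch may help make the quantity concrete. The GMC is the antilog of the mean of log-transformed titers, with the confidence interval computed on the log scale; the Python code below is a minimal illustration under that standard definition, and the titer values in it are invented, not drawn from the reviewed studies.

```python
import math
import statistics

def gmc_with_ci(titers_miu_ml):
    """Geometric mean concentration and a normal-approximation 95% CI,
    both computed on the log scale, as is standard for antibody titers."""
    logs = [math.log(t) for t in titers_miu_ml]
    mean = statistics.mean(logs)
    se = statistics.stdev(logs) / math.sqrt(len(logs))
    return (math.exp(mean),
            math.exp(mean - 1.96 * se),
            math.exp(mean + 1.96 * se))

# Hypothetical anti-HAV titers (mIU/mL) measured at a follow-up visit:
gmc, lo, hi = gmc_with_ci([120.0, 310.0, 95.0, 540.0, 210.0, 160.0])
print(f"GMC {gmc:.0f} mIU/mL (95% CI {lo:.0f}-{hi:.0f})")
```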
In the study with the longest follow-up, children who received the third dose of hepatitis A vaccine at month 12 rather than month 2 of the vaccination schedule had a higher GMC after 17 y (354 mIU/mL [95% CI 142-880] vs. 129 mIU/mL). 52 In another study, the GMC at 5 y of follow-up was significantly higher among those who had received 2 doses of hepatitis A vaccine compared to those who had received only one dose (592 mIU/mL [95% CI 480-729] vs. 123 mIU/mL [95% CI 111-135]). 53 In 2 studies, the presence of maternal antibodies was significantly associated with lower GMCs at 6 and 15 y of follow-up among infants vaccinated at ages 2 and 6 months, respectively. 54,55 One of these studies also showed that the GMC was higher among children vaccinated at age 12 or 18 months compared to those vaccinated at age 6 months. 55

Discussion In 2012, Ott et al. reviewed the literature on the long-term protective effect of hepatitis A vaccines, 56 and new studies have been published since then. 52,53,55,[57][58][59] To our knowledge, the present communication is the first systematic review that examines the overall impact of universal mass vaccination with inactivated hepatitis A vaccines.

Impact of UMV programs The goal of the hepatitis A UMV programs in countries with intermediate endemicity for HAV is to protect individuals from infection and disease and to reduce virus circulation. Most UMV programs are aimed at very young children, as they represent the reservoir of the infection and an important vehicle in the transmission of HAV. 28,60-63 UMV programs are generally based on 2-dose vaccination; however, Argentina and Brazil have decided to introduce one-dose-only programs. 24,30 This review focused only on countrywide UMV with monovalent inactivated vaccines. In China, hepatitis A vaccination was introduced into the routine childhood program in 2008. However, as 92% of the 16 million children aged 18 months are vaccinated annually with a single dose of live attenuated vaccines, data from China were beyond the scope of this review. 64,65 Likewise, some areas implemented vaccination only in a particular region of a country (e.g., Puglia, Italy 38 ; or Catalonia, Spain, where a combined hepatitis A and B vaccine was used 39 ) or only in one city (e.g., Minsk, Belarus 37 ), and were not included. All but one of the reviewed studies that looked at the incidence of acute hepatitis A showed a marked decline in incidence after the introduction of hepatitis A UMV programs in countries with intermediate levels of endemicity, as defined elsewhere. 66 However, reductions in the rate of transmission are also attributable to the improved hygiene resulting from clean water access. It has in fact been shown that hepatitis A virus (HAV) is associated with inadequate water and sanitation, and that increases in clean water access lead to a reduced risk of waterborne HAV transmission. 67 The incidence in non-vaccinated age groups was also found to decrease, likely indicating a strong impact of vaccination programs on herd immunity. 24,28,35,46,48,49 However, an increase in the proportion of acute hepatitis A cases hospitalized in the United States was reported; this could be explained by the increasing age of the susceptible population, which is predominantly adults, who are more prone to clinically overt and severe disease. 44,51 Greece is the only country where the introduction of UMV for hepatitis A did not show the strong impact on the incidence of hepatitis A reported in other countries.
Unlike in other countries, UMV in Greece was implemented when at least a third of children at the national level had already been vaccinated in the private market; specifically, data from the Athens metropolitan area showed a vaccine uptake >50% prior to the UMV introduction. The low impact of UMV seen in Greece is likely due to the fact that Greece was already a country with low endemicity at the time UMV was introduced, 66 and that children in the main high-risk group, the Roma population, are not vaccinated at the recommended level to prevent transmission. 40 The evaluation of the impact of hepatitis A vaccination has been carried out mostly using national or state-wide passive surveillance systems, based primarily on laboratory-confirmed or epidemiologically linked cases of acute hepatitis A, 24,35,36,40,[47][48][49][50] while 2 Israeli studies used health insurance organization data. 46,68 None of the systems ascertained asymptomatic infections or cases of acute hepatitis A disease that did not seek medical care. None of the authors reported any changes in underreporting over time. If these factors had any effect on the outcome, it is likely they led to an underestimation of the reduction in acute hepatitis A, rather than an overestimation. Outbreaks have often been the direct trigger for the introduction of UMV programs, such as the 2003-2004 outbreaks in Argentina. 24 In some areas, the overall incidence of acute hepatitis A was already declining in the decade prior to the introduction of routine vaccination programs, but maintained its cyclic pattern. 28,35,48 For example, Singleton et al. state that increasing rates of in-home running water perhaps contributed to somewhat lower HAV incidence in the later pre-licensure period in Alaska. 48 However, the decline in incidence after the introduction of UMV was unprecedented in magnitude. No major changes in water or sanitation infrastructure were reported that coincided with the introduction of UMV and to which the decrease could be attributed. 24,28,48 Furthermore, the decline cannot be entirely explained by the cyclical nature of the disease, as it was accompanied by shifts in the relative age distribution of acute hepatitis A to older age groups 24,35,44,46,68 and declines were larger in vaccinating than in non-vaccinating states of the US. 35 Finally, epidemic peaks have been disappearing. The decline in incidence was sustained over time; studies in this review included up to 8 y of data post-UMV introduction, 48,49,68 during which no increase in acute hepatitis A incidence was observed. Vaccination coverage varied widely among geographical areas, reaching over 95% of young children in Argentina, but only 25% in US states where vaccination was "considered" from 1999. Efforts to target children at highest risk in certain areas of the US might explain the high impact of UMV despite this low vaccination coverage. 35 The impact of UMV on acute hepatitis A incidence was evident despite limited vaccine coverage rates in some countries. Due to the heterogeneity of the data, a meta-analysis was not performed. The periods compared in the before vs. after comparisons differ too much between the studies, especially in terms of years since the introduction of UMV. Additionally, Vizzotti et al. 24 describe a 1-dose vaccination program while the other publications refer to 2-dose vaccination programs. As with any systematic literature review, this review is subject to the limitations of the included articles.
Almost all included studies showed a decline in incidence following the introduction of hepatitis A UMV programs. As these studies were conducted in settings with intermediate endemicity, the results should be interpreted within this context, and differences in the surveillance systems should also be considered when interpreting the data. In the study from Greece, a country with low endemicity, no such decline was seen. Improved hygiene over the past century has led to low endemicity in much of the developed world. The resulting high susceptibility may increase the risk of outbreaks when exposure does occur, and this has been the cause of recent large outbreaks through food contamination, e.g., through frozen pomegranate arils in the United States 69 and frozen berries in Europe. 70 However, this does not necessarily indicate that mass immunization programs with hepatitis A vaccine should be introduced in low-endemic countries.

Long-term persistence In this review, evidence of long-term persistence of anti-HAV (IgG) antibodies in children for up to 17 y following vaccination with monovalent inactivated vaccines was found. 52 However, these data were generated following vaccination with 3 doses of an inactivated hepatitis A vaccine containing 360 EU per dose (HAVRIX™ 360 EU), which is no longer in use. Long-term immune memory is important so that children vaccinated under childhood vaccination programs will still be protected by the time they reach the age at which disease is likely to be symptomatic. 12,71 Infants vaccinated before the age of one year appear to have a lower antibody response. 54,55 This supports the target age of one year or older for children immunized in UMV programs. A recent model-based assessment of vaccine-induced immune memory against HAV in adults suggests that anti-HAV antibodies will persist for at least 20 y in >95% of vaccinees. 22 A limitation in the interpretation of most long-term persistence studies is that a fraction of the children who received the primary vaccination series also received an additional booster dose before the last follow-up. Indeed, boosters complicate the interpretation of follow-up data. For instance, 5 of the included studies reported that children who received booster doses were excluded [52][53][54][55]72 ; this results in an overestimation of the GMCs and of the percentage of children who were seropositive years after the primary vaccination series, as boosters were likely given precisely to those children whose antibody levels dropped below a certain threshold. The interpretation is further hampered by the fact that most studies did not report how many children were excluded for this reason. For these studies, it can be concluded that anti-HAV antibodies can persist up to the time of last follow-up (6 to 14-15 years). 54,58,59,[72][73][74] For the 2 studies that did report how many children were excluded due to the receipt of a booster dose, it can be concluded that antibodies persist to the time of last follow-up (5 and 17 y) in the majority of children who were not lost to follow-up. 52,53,55,57 A limitation of all studies on long-term persistence of antibodies and immune memory is the large number of participants lost to follow-up over time. Additionally, the possibility that the children received a 'natural booster' due to exposure to circulating HAV cannot be excluded, especially in the early years of the vaccination programs.
However, there is also no proof that the long-term persistence of antibodies was the result of natural boosting. 54 Furthermore, seroprotection against HAV was defined as a concentration of at least 5, 10, 20, or 33 mIU/mL depending on the individual assays and vaccines used in the included studies. However, the true lowest level of anti-HAV that confers protection is unknown and might be even lower than the detection limit of a particular assay. 71,75 Information on long-term persistence after administration of a single dose of vaccine in children is limited, but has been documented in adults. [76][77][78] This information would suggest that protective anti-HAV antibody levels after a single dose of inactivated hepatitis A vaccine can persist for up to 11 y. A recent publication also suggests that antibody titers are lower and antibodies decay faster in younger children (aged 1-7 years). 74 Long-term persistence data after one dose in children would therefore be valuable, especially as not all children are vaccinated twice, either because the second dose is missed or because only one dose is recommended, e.g., in Argentina and Brazil. The impact on disease incidence and other related health outcomes, as well as the long-term antibody persistence provided by vaccination, are all critical considerations of vaccine program impact; however, every country must also assess the cost-effectiveness of the program in deciding for or against the implementation of hepatitis A UMV. The reported data present a certain heterogeneity in terms of epidemiology and reporting systems; therefore, the data should not be read as a comparison of the impact of immunization programs between the different countries but solely as a descriptive assessment of the country-by-country outcomes.

Conclusion Introduction of UMV with monovalent inactivated hepatitis A vaccines in countries with intermediate endemicity for HAV infection led to a considerable decrease in the incidence of hepatitis A in vaccinated and in non-vaccinated age groups alike.

Abbreviations
UMV universal mass vaccination
HAV hepatitis A virus
WHO World Health Organization
MSM men having sex with men
GMC geometric mean concentration

Disclosure of potential conflicts of interest LDM and CM are employees of the GSK group of companies and hold stock options or restricted shares. EB and ALS are/were employees of Pallas Health Research and Consultancy BV, which received a grant from the GSK group of companies for this research during the conduct of the study and also outside the submitted work. DS does not have anything to declare.
Preparation and catalytic performance of high dispersion Y zeolite treated with alkali solution

Y zeolite slurry contains a lot of colloid, and pretreatment of the Y zeolite slurry with alkali solution can separate Y zeolite nanoparticles and Si sol effectively. These nanoparticles were characterized by scanning electron microscopy, particle size distribution, X-ray diffraction, thermogravimetric analysis, and NH3 temperature-programmed desorption. After integrating the Y zeolite in a fluid catalytic cracking catalyst, the performance of the catalyst containing this nano-zeolite was evaluated by cracking a mixed feed of Xinjiang vacuum gas oil and vacuum residue in a fixed-fluidized-bed reactor. This catalyst is favorable for the production of light oil by catalytic cracking of the mixed feed.

Introduction In recent years, the demand for catalysts with high activity and activity stability has been growing strongly in China. As domestic feedstocks of the chemical industry become more and more inferior, Y zeolites, which are the major rate-controlling components in most modern hydrocracking catalysts, should remain extremely stable during cracking. Nowadays, almost all zeolitic fluid catalytic cracking (FCC) catalysts are prepared by ion exchange with rare-earth (RE) cations, owing to the high activity and proper hydrothermal stability in many reactions. Y zeolite is widely used in the petrochemical field as a catalytic material, and modification is usually needed to improve its hydrothermal stability and catalytic performance [1][2][3]. There is a high content of gel in the Y zeolite crystallization slurry that cannot be efficiently removed by the zeolite filtration and separation process, which leads to the Y zeolite's low degree of crystallinity and its undesirable dispersion and stability [4]. Therefore, before modifying NaY zeolite, it is critical to remove the gel that is attached to the surface of the Y zeolite.

Experimental Experiment raw material The chemical materials used in this experiment are shown in Table 1. Preparation of high dispersion Y zeolite During the filtration and isolation of the NaY crystalline system, the NaY zeolite was first washed with 0.1 M alkaline solution 2-5 times at 80°C, and then the zeolite filter cake was washed with water amounting to 3-5 times the weight of the zeolite on a dry basis to thoroughly remove residual inorganic salt ions, especially silicate ions, in order to prepare a high-performance NaY zeolite. With conventional NaY and novel NaY zeolite as raw materials, we prepared rare earth modified Y zeolite through a traditional two-cycle ion-exchange and calcination process. In the catalyst preparation process, the Y zeolite, a kaolin matrix (Suzhou Kaolin Company, China), and an alumina sol matrix were mixed together and shaped by spray-drying to obtain a microspheroidal catalyst. We thus prepared the CAT-1 sample containing the conventional Y zeolite and the CAT-2 sample containing the novel Y zeolite, respectively. Characterization The degree of crystallinity and unit cell size (UCS) were analyzed with a Rigaku D/MAX-3CX diffractometer. The pore properties of the catalysts were tested in a Coulter Omnisorp 360 analyzer. The samples were first outgassed at 300°C for 4 h at a vacuum of 1.33 × 10-7 Pa. The particle morphology and particle size of the samples were observed by SEM, carried out on an S-4800 (Hitachi Company, acceleration voltage of 0.5-20 kV).
FT-IR spectroscopy was performed on a Nexus FT-IR instrument (Nicolet Co., USA) to study the acidity of the zeolite. Pretreatment of the sample was carried out in the cell at 300°C under vacuum for 2 h. Then purified pyridine vapor was adsorbed on the zeolite at room temperature. The excess pyridine was removed under vacuum over two consecutive (1 h) periods of heating at 200 and 350°C, respectively, each of them followed by IR measurements. The particle size distribution of the zeolites was observed by a laser particle size analyzer with a measuring range from 0.05 to 500 μm.

Evaluation device The fixed fluidization bed FCC unit (FFB for short) was used to evaluate the catalysts' performance; it was designed by Luoyang Petrochemical Engineering Corporation Ltd, Sinopec [5].

Evaluation method At the beginning of the experiment, a certain amount of fluidization gas is blown into the fixed fluidization bed FCC unit, and then the vacuum pump starts to load 200 g of catalyst into the reactor. After the feedstock is loaded into the gauge tank and heated to a set temperature, the water pump starts to send distilled water into the vaporizing oven to generate superheated steam. The steam is first used to replace the fluidization gas and then used as atomizing and stripping steam. When the temperature of each part of the reactor has been stable for 5 min and the feedstock has reached the given temperature, the oil pump is run on reflux for 2 min to ensure an accurate input quantity of feedstock. The feedstock flows to the bottom of the reactor along the central axis, from top to bottom, through the feed pipes. The feedstock then contacts the high-temperature catalyst through the nozzle and the cracking reaction starts. After half an hour of steam stripping, oxygen is let in to regenerate the spent catalyst. The condensing system separates the reaction products into liquid and gas. The tests were carried out under typical conditions for FCC units: cracking temperature 500°C, catalyst-to-oil mass ratio 4.0, weight hourly space velocity 15 h-1. Prior to an FFB test, CAT-1 and CAT-2 were steam-deactivated at 800°C for 10 h in a fluid bed with 100% steam. The chemical composition of the product FCC gasoline was determined by an online GC-MS. The feedstock (shown in Table 2) was a mixture of 70% Xinjiang vacuum gas oil (VGO) and 30% Xinjiang vacuum tower bottoms (VTB).

Results and discussion Particle size and SEM analysis By improving the zeolite's dispersion and reducing its particle size, the zeolite's surface area can be efficiently increased; the decrease in particle size also makes the zeolite better dispersed in the FCC catalyst. Therefore, heavy oil molecules pre-cracked by the matrix react much more readily at the zeolite's active centers. This shortens the reaction route of the heavy oil molecules. At the same time, as the zeolite's particle size decreases, its inner pores are shortened; therefore, the reactants, reaction products, and coke precursors diffuse more easily, which leads to a decrease in coke generation and an acceleration of heavy oil conversion. Figures 1 and 2 compare the particle sizes of the high dispersion Y zeolite and the conventional Y zeolite. From Fig. 1, we can see that the high dispersion Y zeolite has a smaller particle size and is relatively well distributed. There are clear boundaries between the crystalline grains, without adhesion. The nano-scale Y zeolite is smaller than the conventional one. On the other hand, the conventional Y zeolite's particles are unevenly distributed due to adhesion.
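The D(0.5) and D(0.9) values quoted below are volume-weighted percentiles of the laser-diffraction particle size distribution, i.e., the sizes below which 50% and 90% of the particle volume lies. As a minimal illustration, the Python sketch below interpolates these percentiles from a cumulative distribution; the size bins and cumulative fractions are invented for the example and are not the measured data of this work.

```python
# Minimal sketch, assuming D(0.5) and D(0.9) are volume-weighted percentiles
# read off the cumulative particle size distribution from the laser analyzer.

def d_percentile(sizes_um, cum_vol_frac, q):
    """Linearly interpolate the particle size at cumulative volume fraction q."""
    for i in range(1, len(sizes_um)):
        if cum_vol_frac[i] >= q:
            f0, f1 = cum_vol_frac[i - 1], cum_vol_frac[i]
            s0, s1 = sizes_um[i - 1], sizes_um[i]
            return s0 + (s1 - s0) * (q - f0) / (f1 - f0)
    return sizes_um[-1]

# Hypothetical cumulative volume distribution (sizes in micrometers):
sizes = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
cum   = [0.05, 0.20, 0.55, 0.80, 0.95, 1.00]
print(f"D(0.5) = {d_percentile(sizes, cum, 0.5):.2f} um")  # ~1.86 um
print(f"D(0.9) = {d_percentile(sizes, cum, 0.9):.2f} um")  # ~6.67 um
```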
From Table 3, we can see that, compared with the conventional Y zeolite, the novel zeolite's median particle size D(0.5) has decreased by 0.422 μm and its D(0.9) has decreased by 6.029 μm. Table 4 shows the comparison between the hydrothermal and thermal stability of the novel and the conventional Y zeolite.

Thermal and hydrothermal stability From Table 4, we can see that, compared to the conventional Y zeolite, the novel Y zeolite maintained good relative crystallinity after 2 h of hydrothermal deactivation, and its collapse temperature increased by 13°C. This indicates that the novel Y zeolite has good hydrothermal stability and thermal stability.

Acidity Table 5 shows the comparison between the novel Y zeolite and the conventional zeolite in terms of acidity. Table 5 shows that, compared with the conventional Y zeolite, the novel Y zeolite shows excellent selectivity for Brønsted acid sites. Table 6 gives the evaluation results for CAT-1 and CAT-2, which were tested on the fixed fluidization bed FCC unit. From Table 6, it is clear that, compared with CAT-1, CAT-2, which contains the novel Y zeolite, could increase the gasoline yield by 2.39 percentage points; decrease the heavy oil yield by 1.87 percentage points; and increase the conversion rate, LCO yield, and total liquid yield by 2.53, 1.72, and 1.63 percentage points, respectively. Therefore, the novel Y zeolite features strong heavy oil conversion ability and an increased total liquid yield. Figure 2 is the mechanism graph of the separation of NaY zeolite and Si gel. At the end of the thermal crystallization of silica-alumina zeolites, the zeolite needs to be separated from the crystallization mother liquor. In industrial practice, vacuum filtration is adopted to separate the zeolite crystals from the inorganic-salt-rich crystallization mother liquor. During the separation, a large amount of residual inorganic salt ions is left on the filter cake. In industrial settings, distilled water is often used to wash off these inorganic salt ions. When filtering and separating the zeolite crystallization system, there is a large quantity of unreacted silicates among the residual inorganic salt ions left on the filter cake. The silicate ion is a special ion that is very sensitive to pH value. Under normal circumstances, the higher the pH value of a system, the more stable the silicate is; but when the system's pH value decreases, silicates become very active and form gel. When the zeolite filter cake is washed, the system pH value is low; therefore, the silicates are active and form a large quantity of amorphous gel. This gel not only blocks the filter cloth and slows filtration, but also significantly influences the zeolite's physical and chemical properties, such as degree of crystallinity, dispersion, and stability. Therefore, after pretreatment of the NaY zeolite with alkaline solution and subsequent separation from the mother liquor, NaY zeolite and Si gel can be effectively separated.

Conclusions (1) There is a large amount of gel in the Y zeolite crystallization slurry; pretreatment of the NaY zeolite with alkaline solution can effectively separate the Y zeolite from the Si gel. (2) The high dispersion Y zeolite has a smaller particle size and is relatively well distributed. There are clear boundaries between the crystalline grains, without adhesion. On the other hand, the conventional Y zeolite's nanoparticles are unevenly distributed, with adhesion.
a C/C0 is the relative crystallinity, based on the peak height between 23° and 24.5° (2θ). Compared to the conventional Y zeolite, the novel zeolite's median particle size D(0.5) has decreased by 0.422 μm and its D(0.9) has decreased by 6.029 μm. (3) The novel Y zeolite maintained good relative crystallinity after 2 h of hydrothermal deactivation, and its collapse temperature increased by 13°C. This indicates that the novel Y zeolite has good hydrothermal and thermal stability. (4) The novel zeolite shows excellent selectivity for Brønsted acid sites. (5) While maintaining the coke and dry gas yields, CAT-2, which contains the novel Y zeolite, increased the gasoline yield by 2.39 percentage points, decreased the heavy-oil yield by 1.87 percentage points, and increased the conversion, LCO yield, and total liquid yield by 2.53, 1.72, and 1.63 percentage points, respectively. The novel Y zeolite therefore features strong heavy-oil conversion ability and an increased total liquid yield.
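The D(0.5) and D(0.9) values compared above are simply percentiles of the cumulative particle size distribution reported by the laser analyzer. A minimal sketch of how such percentiles can be read off by linear interpolation is shown below; the size grid and cumulative fractions are hypothetical placeholders, not the measured data:

```python
import numpy as np

# Hypothetical cumulative undersize distribution from a laser particle
# size analyzer: size in micrometres vs. cumulative volume fraction.
size_um = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0, 12.0, 20.0])
cum_frac = np.array([0.02, 0.10, 0.35, 0.55, 0.78, 0.90, 0.96, 1.00])

def percentile_diameter(q):
    """Interpolate the diameter below which a fraction q of the volume lies."""
    return np.interp(q, cum_frac, size_um)

d50 = percentile_diameter(0.5)   # median particle size, D(0.5)
d90 = percentile_diameter(0.9)   # 90th-percentile size, D(0.9)
print(f"D(0.5) = {d50:.2f} um, D(0.9) = {d90:.2f} um")
```

The measured D(0.5) and D(0.9) values in Table 3 are the analogous outputs for the real distributions.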
v3-fos-license
2014-10-01T00:00:00.000Z
2004-09-23T00:00:00.000
17484200
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jbiol.biomedcentral.com/track/pdf/10.1186/jbiol14", "pdf_hash": "a918e84321473970968ffd027b48560788e0ceee", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42675", "s2fieldsofstudy": [ "Biology" ], "sha1": "a918e84321473970968ffd027b48560788e0ceee", "year": 2004 }
pes2o/s2orc
Hide and seek: the secret identity of the phosphatidylserine receptor Phosphatidylserine on the dying cell surface helps identify apoptotic cells to phagocytes, which then engulf them. A candidate phagocyte receptor for phosphatidylserine was identified using phage display, but the phenotypes of knockout mice lacking this presumptive receptor, as well as the location of the protein within cells, cast doubt on the assignment of this protein as the phosphatidylserine receptor. The problem Apoptosis, or programmed cell death, is a normal physiologic process for orderly removal of effete cells. As a process, apoptosis fell below the notice of cell biologists for quite some time, in part because cells dying an apoptotic death in vivo vanish almost immediately from view. They vanish because they are promptly engulfed, either by a neighbor or by a professional phagocytic macrophage; within the confines of the resulting phagosome, the dying cell digests itself from the inside out while the engulfing cell digests it from the outside in. To orchestrate its disappearance, the suicidal cell must make its intentions known to its neighbors, triggering signaling pathways that activate the engulfment machinery of the phagocyte. Investigation of the molecules involved in apoptotic cell recognition is a growing and industrious scientific subfield of the larger apoptosis research enterprise, with a host of specific proteins identified and cloned. One salient feature of the proteins identified is that most are receptors or enzymes residing on the surface of the phagocytic cell, along with a burgeoning number of bridging molecules from the extracellular fluid ( Figure 1). Almost none of the molecules identified is a feature of the apoptotic cell surface. One molecule that universally distinguishes apoptotic cells is known, however, although it has the disadvantageous property that it cannot be cloned: it is the phospholipid phosphatidylserine (PS), which newly appears on the surface of cells undergoing apoptosis. Although the mechanistic details of PS appearance on the apoptotic cell surface have not been completely resolved, the basic outlines are clear ( Figure 1). PS is normally sequestered in the inner leaflet of the plasma membrane bilayer by an active transporter, the aminophospholipid translocase (APLT). In apoptotic cells this enzyme is downregulated, and an enzyme activity called scramblase is upregulated; scramblase acts to randomize the lipids between the two leaflets of the plasma membrane, bringing PS to the surface [2]. This rearrangement is universal, occurring in vivo in a wide variety of cell types [3,4] and in both vertebrate and invertebrate organisms [5]. The ready conclusion is that PS has something to do with apoptotic cells identifying themselves to phagocytes, a conclusion borne out by the finding that masking PS on the apoptotic cell surface with the PS-binding protein annexin V blocks phagocytosis [6]. Moreover, many of the known bridging and receptor molecules bind to PS with varying degrees of specificity ( Figure 1), including serum-derived protein S [7], milk fat globule protein MFG-E8 [8], and the scavenger receptors such as SR-BI [9]. Nevertheless, because the inhibition of engulfment of apoptotic cells by vesicles of PS is stereospecific [10], in the absence of bridging molecules there must be a receptor on phagocytes that directly and specifically recognizes PS itself. 
Identifying the phosphatidylserine receptor The first experimental evidence for the existence of a PS receptor came from Fadok and co-workers [11]. Their approach began with the production of monoclonal antibodies against 'stimulated' macrophages whose recognition of apoptotic cells is inhibited by PS vesicles in a stereospecific fashion [10], in contrast to 'unstimulated' macrophages, whose uptake of PS-expressing target cells is insensitive to PS vesicles (even though it is sensitive to annexin V) [6]. One monoclonal antibody, mAb 217, was selected because it bound preferentially to unfixed stimulated macrophages, and this binding was inhibited by PS vesicles. The antigenic target of mAb 217 would thus appear to have the hallmarks of a PS receptor: it is on the cell surface, it recognizes PS, and, as the authors went on to show, mAb 217 blocks engulfment of apoptotic cells [11]. What exactly might this receptor do? This question was examined in more detail in a later paper from the same laboratory [12], where an experimental system was established that allowed the binding step of the uptake of an apoptotic cell to be distinguished from the engulfment step. Bound cells or particles not expressing PS were engulfed only upon addition of PS vesicles, but coating target cells with PS vesicles did not result in significant binding to macrophages. The authors concluded that the PS receptor did not bind PS-expressing targets tightly (tethering), but that low-affinity binding could nonetheless stimulate engulfment. Curiously, it was reported that addition of mAb 217 would itself induce uptake of previously bound bystander cells, in contrast to the earlier observation that pre-incubation with the antibody prevented uptake of subsequently added apoptotic cells [11], suggesting perhaps that signal transduction induced by occupancy of the PS receptor is transient. The next step was to identify the molecule corresponding to the PS receptor. On western blots, mAb 217 reacted with a protein of an apparent molecular weight of 70 kDa. Treatment of cellular extracts with a mixture of four O-glycosidases reduced the size of the protein to roughly 50 kDa, suggesting that the target of the antibody is glycosylated and is thus presumably a cell-surface protein. When a phage display library expressing proteins from mouse macrophages was panned with mAb 217, half of the one dozen phages sequenced contained identical sequences, from a protein in the sequence databases with a theoretical molecular weight of 47 kDa [11]. Information from the databases suggested that the gene encoding the protein was highly expressed in heart, skeletal muscle, and kidney, with lower levels of expression in many tissues. Figure 1 legend: Phosphatidylserine (PS) is a central player in the recognition and engulfment of apoptotic cells. PS may be recognized by a variety of tethering receptors (shown as a single entity in green) and bridging molecules (shown as a single entity in pink) that help tether the apoptotic target to the phagocyte. The PS receptor signals to a pathway that leads to engulfment, for example by rearranging elements of the cytoskeleton (shown as cross-hatching). The proteins that correspond to the PS receptor, the aminophospholipid translocase (APLT), and the scramblase are unknown, as are the functions of ABCA1 and the PS exposed on the surface of the phagocyte. PSRp denotes the protein encoded by the psr gene, which is found within the nucleus.
Sequence analysis suggested that the protein contained one potential transmembrane domain, although this hydrophobic region was interrupted by an aspartic acid residue in mammals and two glutamic acid residues in the Caenorhabditis elegans (nematode) homolog. Two pieces of evidence linked this gene to the engulfment of apoptotic cells. One was that expression of the mammalian gene, denoted psr, or its nematode homolog PSR-1 [13], in lymphoid cells conferred an inefficient capability to bind apoptotic targets, and perhaps to engulf them as well, notable in view of the conclusion (mentioned above) that the PS receptor does not contribute to the tethering step of engulfment [12]. The second piece of evidence was that transfecting cells with small-interfering RNA corresponding to this gene, so as to decrease expression, reduced both binding of mAb 217 to transfected cells and phagocytosis of apoptotic targets by transfected cells; whether binding of apoptotic targets by transfected cells was affected was not reported. PS receptor knockouts What happens when the psr gene is deleted? Wang and colleagues [13] examined this question in C. elegans, and found that, in their words, "in a time course analysis of cell corpses during development, in almost all embryonic stages, more cell corpses were observed in psr-1 embryos than in wild type embryos… On average, cell corpses of psr-1 embryos persisted for 55% longer than those of wild type animals." In these studies, expression of the protein recognized by mAb 217 was not compared in wild-type and knockout animals. The first report of the effects of deletion of the psr gene in mammals came from the laboratory of Richard Flavell [14]. These investigators concluded that the protein encoded by the gene is required for clearance of apoptotic cells in the mouse. They observed lung developmental abnormalities and occasional brain hyperplasia, which were "associated with increased numbers of nonphagocytosed apoptotic cells". They also adoptively transferred fetal liver cells from knockout mice into lethally irradiated hosts and found that fewer of the stimulated macrophages recovered from these animals were able to engulf apoptotic cells compared with wild-type controls. Reactivity with mAb 217 was not compared between cells from knockout and wild-type animals. The second report of a knockout of this gene in mice appeared earlier this year [15], and this study noted severe developmental defects in definitive erythropoietic and T-lymphopoietic lineages. Reduced numbers of macrophages and apoptotic cells were observed in fetal livers of knockout versus wild-type animals; the fraction of apoptotic cells that were not phagocytosed was higher in the knockouts. Once again, reactivity with mAb 217 was not compared between cells from knockout and wild-type animals. In contrast to these studies, in the third report of a knockout of this gene by Lengeling and colleagues [1], which appears in these pages, reactivity with mAb 217 was compared between cells from knockout and wild-type animals, and no difference was observed; in each case, staining was restricted to the plasma membrane of fetal-liver-derived macrophages. This finding was foreshadowed by recent reports that the product of the psr gene is actually a nuclear protein, as judged from localization of green fluorescent protein (GFP) constructs as well as immunolocalization with antibodies directed specifically against the protein encoded by the psr gene [16,17]. 
Sequence analysis also indicates the presence of nuclear localization sequences in the encoded protein. Western blot analysis by Lengeling and colleagues, using a commercial antibody generated against a peptide present in the protein encoded by the psr gene, indicated that this protein does disappear from the knockout mouse. Lengeling and colleagues also document that deletion of this gene causes "perinatal lethality, growth retardation and a delay in the terminal differentiation of kidney, intestine, liver, and lungs during embryogenesis." On the other hand, in a variety of assays, no defect was observed in vivo or in vitro in the clearance of apoptotic cells in the knockout mouse. Where do we stand? The simplest interpretation of these studies is that the psr gene does not encode a PS receptor; rather, the gene appears to encode a nuclear protein that plays a role in development and differentiation, perhaps as a regulatory protein related to the iron-oxidase family of proteins [16]. The experimental link between the PS receptor and the presumptive psr gene is mAb 217 binding to an epitope in phage display. This methodology has the potential to identify weak crossreacting epitopes, and the Lengeling study has shown that mAb 217 does weakly cross-react with a peptide within the protein encoded by the psr gene. This cross-reactivity could explain the reactivity of mAb 217 with recombinant PSR protein expressed in bacteria [13], although it is not clear whether it can also explain the results using RNA interference (RNAi) [12]. If the cloned gene does indeed encode a critical regulator of hematopoietic differentiation, it is perhaps not surprising that defects were observed in macrophage function in worms and mammals. If this interpretation is correct, this gene should no longer be the subject of concentrated attention from those who study the clearance of apoptotic cells. More importantly, mAb 217 remains as an important foundation for renewed attempts to identify the genuine PS receptor. In the meantime, the product of the psr gene will live on in the databases identified as encoding a PS receptor. In doing so, it joins a rogue's gallery of functions without molecules and molecules without functions that are linked to PS (Figure 1). In a very similar story, there is a protein identified in the databases as the phospholipid scramblase that is not that protein [18], and another identified as an aminophospholipid translocase [19] that is probably not the one responsible for transport and sequestration of PS at the plasma membrane. At the same time, there is a protein, known as ced-7 in nematodes [20] or ABCA1 in mammals [21], that is required for engulfment of apoptotic cells and that seems to be involved in phospholipid movements [22] but whose function is unclear. Finally, PS itself poses some puzzles, since engulfment seems to require its exposure not only on the apoptotic target, but at lower levels on the phagocyte surface as well [23]. Why this should be the case is a mystery. All in all, PS seems to have an involved secret life whose molecular outlines are frustratingly well hidden.
v3-fos-license
2020-11-04T14:06:12.493Z
2020-11-04T00:00:00.000
226238671
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2020.554941/pdf", "pdf_hash": "c72ba2797332e10a3cfe857b58483a59570969f2", "pdf_src": "Frontier", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42676", "s2fieldsofstudy": [ "Biology" ], "sha1": "c72ba2797332e10a3cfe857b58483a59570969f2", "year": 2020 }
pes2o/s2orc
Soy Isoflavones Accelerate Glial Cell Migration via GPER-Mediated Signal Transduction Pathway Soybean isoflavones, such as genistein, daidzein, and the daidzein metabolite S-equol, are widely known as phytoestrogens. Their biological actions are thought to be exerted via the estrogen signal transduction pathway. Estrogens, such as 17β-estradiol (E2), play a crucial role in the development and functional maintenance of the central nervous system. E2 binds to the nuclear estrogen receptor (ER) and regulates morphogenesis, migration, functional maturation, and intracellular metabolism of neurons and glial cells. In addition to binding to the nuclear ER, E2 also binds to the G-protein-coupled estrogen receptor (GPER) and activates the nongenomic estrogen signaling pathway. Soybean isoflavones also bind to the ER and GPER. However, the effect of soybean isoflavones on brain development, particularly glial cell function, remains unclear. We examined the effects of soybean isoflavones using an astrocyte-enriched culture and astrocyte-derived C6 clonal cells. Isoflavones increased glial cell migration. This augmentation was suppressed by co-exposure with G15, a selective GPER antagonist, or by knockdown of GPER expression using RNA interference. Isoflavones also promoted actin cytoskeleton rearrangement via increased actin polymerization and cortical actin, resulting in an increased number and length of filopodia. Isoflavone exposure increased the phosphorylation levels of FAK (Tyr397 and Tyr576/577), ERK1/2 (Thr202/Tyr204), Akt (Ser473), and Rac1/Cdc42 (Ser71), and the expression levels of cortactin, paxillin, and ERα. These effects were suppressed by knockdown of the GPER. Co-exposure of isoflavones with the selective RhoA inhibitor rhosin, the selective Cdc42 inhibitor casin, or the Rac1/Cdc42 inhibitor ML-141 decreased the effects of isoflavones on cell migration. These findings indicate that soybean isoflavones exert their action via the GPER to activate the PI3K/FAK/Akt/RhoA/Rac1/Cdc42 signaling pathway, resulting in increased glial cell migration. Furthermore, in silico molecular docking studies to examine the binding mode of isoflavones to the GPER revealed the possibility that isoflavones bind directly to the GPER at the same position as E2, further confirming that the effects of the isoflavones are at least in part exerted via the GPER signal transduction pathway. The findings of the present study indicate that isoflavones may be an effective supplement to promote astrocyte migration in developing and/or injured adult brains. INTRODUCTION Soybean isoflavones are a natural class of flavonoids produced exclusively by the legume family (1). They are well-known phytoestrogens that can bind and modulate the action of nuclear receptors, including the estrogen receptor (ER), thyroid hormone receptor, androgen receptor, pregnane X receptor, and aryl hydrocarbon receptor (2)(3)(4)(5)(6). Binding of isoflavones to receptors exerts various effects at the molecular, cellular, and organ levels (7). In addition, isoflavones can also affect other pathways by modulating membrane receptors, protein kinases, transcription factors, chromatin remodeling, and antioxidant systems, and by altering some enzyme activities (8,9). Genistein, daidzein, and S-equol, a daidzein metabolite, are the isoflavones that have been most intensively studied. This wide variety of actions indicates that isoflavones act via several different signaling pathways.
Recent studies have shown that 17β-estradiol (E2) activates the G protein-coupled estrogen receptor (GPER; also known as GPR30), which then initiates several intracellular signal transduction pathways, such as the epidermal growth factor receptor-mediated pathway, to activate extracellular signal-regulated kinase 1/2 (ERK1/2)- and/or Akt-mediated pathways (10)(11)(12)(13). In addition to E2, isoflavones may also interact with the GPER. In vitro, activation of the GPER by isoflavones has been demonstrated to trigger cell signaling pathways and growth factor receptor cross-talk (14,15). Our previous study showed that S-equol can activate the GPER to increase p-ERK1/2, inducing proliferation, growth, and differentiation in both neurons and astrocytes during cerebellar development (14). The Kd (dissociation constant) of E2 for the GPER is 3-6 nM, while the half-maximal effective concentration (EC50) values of the isoflavones for the GPER, based on functional dose-response studies, are 133 nM for genistein, <1 nM for daidzein, and 100 nM for S-equol (16). Based on these findings, we hypothesized that isoflavones would affect the GPER signaling pathway and alter cellular function. Estrogen plays a key role in the development and functional maintenance of the central nervous system (CNS) through genomic responses (via the ER) and rapid nongenomic responses via the GPER (17,18). The GPER is highly expressed in the CNS, including in glial cells (19). GPER knockout mice showed altered anxiety levels and stress responses (18), and this phenotype could not be fully rescued by estrogen treatment (18). These results indicate the involvement of the GPER in the normal development of the CNS. However, the role of the GPER in the function of each subset of cells remains unclear. Glial cells are essential for brain functioning during development and in the adult brain and have been shown to play a significant role in neuronal migration, proliferation, differentiation, and synaptogenesis (20). Glial cells comprise astrocytes, oligodendrocytes, and microglia, among which astrocytes are the most abundant cell type in the CNS (21). Astrocytes most likely migrate to their final destination shortly after their birth in the ventricular or subventricular zone; cortical gray matter astrocytes were found to migrate along radial glia processes, whereas white matter astrocytes migrated along the developing axons of neurons (21,22). Astrocytes are activated in the injured or diseased CNS and begin to proliferate and migrate. This process is known as astrogliosis (21). The high level of GPER expression in astrocytes may affect their physiological response during development or in the adult brain. Cell migration is a critical process in both physiological and pathological settings. The Rho family of GTPases is the core regulator of cell migration (23). In the CNS, Rho GTPase family members, such as RhoA, Rac1, and Cdc42, play fundamental roles in a wide variety of cellular processes, including rearrangement of the actin cytoskeleton, cell polarity, and control of dynamic astrocyte morphology (24)(25)(26). Deletion of Rac1 and Rac3 in cerebellar granule neurons (CGNs) led to severe impairment of radial migration of CGNs, defects in the internal granule layer, and decreased cerebellum size (27). Cdc42 knockout mice also showed impaired radial migration of CGNs, disturbed alignment of Bergmann glia in the Purkinje cell layer, and aberrantly aligned Purkinje cells (24).
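As a brief aside on the affinity figures quoted above, the sketch below evaluates a one-site occupancy model, f = C/(C + EC50), at an arbitrary ligand concentration. The assumption of a Hill coefficient of 1 is ours, and the daidzein entry uses the reported upper bound:

```python
# Fractional response of a one-site receptor model, f = C / (C + EC50),
# assuming a Hill coefficient of 1 (our simplifying assumption).

def fractional_response(conc_nM, half_max_nM):
    return conc_nM / (conc_nM + half_max_nM)

half_max = {            # values quoted in the text (nM)
    "genistein": 133.0,
    "daidzein": 1.0,    # reported only as < 1 nM; upper bound used here
    "S-equol": 100.0,
    "E2 (Kd)": 4.5,     # midpoint of the 3-6 nM Kd range for the GPER
}

for ligand, hm in half_max.items():
    f = fractional_response(10.0, hm)   # e.g. at 10 nM ligand
    print(f"{ligand:10s}: ~{100 * f:.0f}% of maximal GPER response at 10 nM")
```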
In addition, astrocytes lacking Cdc42 were still able to form protrusions, although they were unable to migrate in a directed manner toward the scratch/wound (26). Since isoflavones may bind to the GPER in astrocytes, these results raise the possibility that isoflavones affect astrocyte migration via the RhoGTPase signaling pathway. Our previous study showed that S-equol, a daidzein metabolite, activates the GPER to induce F-actin rearrangement, leading to increased astrocyte migration during cerebellar development through as yet unknown mechanisms (14). The present study examined the effects of isoflavones on the migration of glial cells using astrocyte-enriched cultures of cerebral cortex and astrocyte-derived C6 clonal cells in wound healing and cell migration/invasion assays. We also examined changes in the actin cytoskeleton by labeling F-actin with phalloidin. Our findings revealed that isoflavones induced F-actin rearrangement and accelerated cell migration. These effects were reduced by the GPER inhibitor, G15, or by short interfering RNA (siRNA) knockdown of GPER. Furthermore, activation of the GPER by isoflavones activated the PI3K/Akt signaling pathway, which induces RhoGTPase activity to accelerate cell migration. The results of our in silico molecular docking study revealed a common possible binding site of the isoflavones on the GPER. Primary Culture of Mouse Cerebral Cortex Astrocytes The animal experimentation protocol in the present study was approved by the Animal Care and Experimentation Committee, Gunma University (19-024, 17 December 2018), and all efforts were made to minimize animal suffering and the number of animals used. A primary culture of mouse cerebral cortex astrocytes was prepared as previously described (29,30) with slight modifications. Pregnant C57BL/6 mice were purchased from Japan SLC (Hamamatsu, Japan). Briefly, postnatal day 1 mouse cerebral cortices were dissected and digested with 2.5% trypsin (Wako, Japan) in Hank's balanced salt solution (Wako) for 30 min with continuous shaking at 37°C. Cells were resuspended in astrocyte culture medium (high-glucose DMEM, 10% heat-inactivated FBS, and 1% penicillin/streptomycin), and 10-15 million cells were plated on 10-cm dishes coated with Collagen I (Iwaki, Japan). Cells were incubated at 37°C in a CO2 incubator. On day 3 in vitro (DIV3), the astrocyte culture medium was replaced with phosphate-buffered saline (PBS). Dishes were then shaken by hand for 30-60 s until only the adherent monolayer of astrocytes was left. The PBS was then replaced with fresh astrocyte culture medium. Astrocytes were harvested on DIV7 using 0.25% trypsin/1 mM disodium EDTA (Wako) and then plated in 12- or 24-well dishes. Cells were used for the cell invasion assay or F-actin staining. In Vitro Wound Healing (Scratch) Assay C6 cells were plated in 24-well plates and cultured until confluent. Prior to making a scratch, cells were serum-starved in FBS-free DMEM for 6 h. A wound was created by scratching the monolayer with a 200-µl pipette tip. Floating cells were washed away using PBS. Serum-free DMEM and/or isoflavones, E2, G15, U0126, LY294002, rhosin, casin, and/or ML-141 were added to the wells and incubated for a further 24 h. At 0 and 24 h, live-cell staining was performed using Cellstain-Hoechst 33258 solution (Dojindo Molecular Technologies, Inc., Japan) according to the manufacturer's protocol. Images of the scratched area were taken at 0 and 24 h.
The cells were then visualized using a fluorescence microscope (Keyence BZ9000, Keyence Corporation of America, Itasca, IL, USA). Cell migration was determined at the edges of the wound, and the percentage migration was calculated as the ratio between the migrated distance and the initial width of the wound. Matrigel Invasion Assay In vitro invasion assays were performed using a 24-well Millicell hanging cell culture insert and a Corning Matrigel matrix according to the manufacturer's instructions. In brief, astrocytes were seeded at a density of 1 × 10⁵/ml in serum-free DMEM in the upper chamber. The lower chamber was filled with serum-free DMEM and/or isoflavones, E2, G15, U0126, LY294002, rhosin, casin, and/or ML-141. After 16-18 h of incubation, noninvading cells in the upper chamber were removed with a sterile cotton swab. The filters from the inserts were fixed with 4% paraformaldehyde (PFA) and stained with DAPI. The cells were then inspected using a laser confocal scanning microscope (Zeiss LSM 880, Carl Zeiss Microscopy GmbH, Jena, Germany). The number of invaded cells on the lower surface of the filter was counted. Filopodia Formation and Cortical F-Actin Score Index Astrocytes were cultured on poly-L-lysine-coated coverslips and serum-starved in DMEM for 24 h. The cells were then treated with either isoflavones or E2 for 30 min, washed with PBS, and fixed with 4% PFA, followed by blocking with 2% FBS. The cells were incubated with CytoPainter Phalloidin-iFluor 594 reagent (Abcam, Cambridge, UK), nuclei were stained with DAPI, and the cells were visualized under a laser confocal scanning microscope (Zeiss LSM 880, Carl Zeiss Microscopy GmbH). The degree of cytoskeletal rearrangement was examined using FiloQuant in ImageJ Fiji (NIH) or the cortical F-actin score (CFS) index (31). The CFS index was determined based on at least three independent experiments. Briefly, F-actin cytoskeletal reorganization for each cell was scored on a scale ranging from 0 to 3, based on the degree of cortical F-actin ring formation: 0, no cortical F-actin, normal stress fibers; 1, cortical F-actin deposits below half the cell border; 2, cortical F-actin deposits exceeding half the cell border; and 3, complete cortical ring formation and/or total absence of central stress fibers. A minimum of 50 cells was examined from each group in each independent experiment, and the CFS index for treated astrocytes was the average score of the counted cells ± standard error of the mean (SEM). Immunocytochemistry Analysis of Protein Phosphorylation and F-Actin Formation Cultured cells were exposed to isoflavones or E2 for 30 min, rinsed three times with PBS, fixed with 4% PFA, and blocked with 2% FBS. Cells were then incubated with rabbit monoclonal anti-phospho-Akt. RNA Interference Assay Astrocyte-enriched cultures were transfected with siRNAs for ERα (Thermo Fisher Scientific), ERβ (Thermo Fisher Scientific), GPER (Integrated DNA Technologies, Inc., Coralville, IA, USA), or negative control RNAs (nontargeting control [catalog no. SIC001; Sigma-Aldrich] or negative control DsiRNA [catalog no. 51-01-14-03; Integrated DNA Technologies, Inc.]), using Lipofectamine RNAiMAX reagent (Thermo Fisher Scientific) according to the manufacturer's protocol. The list of siRNA sequences used in this study is given in Table 1. Briefly, siRNA-lipid complexes [1 nM of control siRNA (scrambled RNA), ERα, ERβ, or GPER siRNA] were incubated for 20 min and then added to astrocytes at approximately 80% confluency in 35-mm dishes.
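Two of the quantifications described above reduce to simple arithmetic. The sketch below illustrates both the percentage-migration calculation for the wound healing assay and the mean ± SEM computation of the CFS index; the wound widths and per-cell scores are hypothetical placeholders, not measured data:

```python
import numpy as np

# Percentage migration in the wound-healing assay, per the definition above:
# distance migrated relative to the initial wound width.
def percent_migration(width_0h_um, width_24h_um):
    return (width_0h_um - width_24h_um) / width_0h_um * 100.0

print(f"{percent_migration(800.0, 350.0):.1f} % closure")  # hypothetical widths

# Cortical F-actin score (CFS) index: mean of per-cell scores (0-3 scale,
# >= 50 cells per group), reported as mean +/- SEM.
scores = np.random.default_rng(0).integers(0, 4, size=60)  # placeholder scores
cfs_mean = scores.mean()
cfs_sem = scores.std(ddof=1) / np.sqrt(scores.size)
print(f"CFS index = {cfs_mean:.2f} +/- {cfs_sem:.2f} (n = {scores.size} cells)")
```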
After 16-24 h of transfection, the cells were subjected to the Matrigel invasion assay. The efficacy of the siRNA knockdown was verified by quantitative real-time PCR (qRT-PCR). Total RNA was extracted using the SuperPrep cell lysis and RT kit for qPCR (TOYOBO Bio-Technology, Japan) according to the manufacturer's instructions. qRT-PCR was performed using THUNDERBIRD SYBR qPCR mix (TOYOBO) as per the manufacturer's instructions on a StepOne RT-PCR System (Thermo Fisher Scientific). The list of primers used in this study is given in the supplementary information 1 (SI.1). qRT-PCR was performed as follows: denaturation at 95°C for 20 s, followed by amplification at 95°C for 3 s and at 60°C for 30 s (40 cycles). All experiments were repeated three times, using independent RNA preparations, to confirm the consistency of the results. All mRNA levels were normalized to that of Gapdh. In Silico Analysis of Ligand-Receptor Binding In silico molecular docking analysis was performed as described previously (3), with slight modifications. All in silico calculations were performed on a Dell XPS 8930 with an Intel Core i7-8700 CPU @ 3.2 GHz, 16 GB DDR4 2666 MHz RAM, and an NVIDIA GeForce GTX 1060 6 GB, running the Windows 10 Professional operating system. Molecular structures for genistein (PubChem CID 5280961), daidzein (PubChem CID 5281708), S-equol (PubChem CID 91469), and E2 (PubChem CID 5757) were downloaded from PubChem (https://pubchem.ncbi.nlm.nih.gov/) in sdf format. The encoding sequence for GPER was retrieved from the UniProt database (accession no. Q6FHU6), and the sequence in FASTA format was submitted to the I-TASSER website (https://zhanglab.ccmb.med.umich.edu/I-TASSER/), a specialized server for building three-dimensional (3D) models of seven-transmembrane domain receptors (32)(33)(34). The I-TASSER server yielded five models from 10 different templates (3oduA, 4mbsA, 4n6hA2, 4yayA, 5nddA, 5t1aA, 5vblB, 5zbhA, 6d26A, and 6me6A). The protein conformation was refined through molecular dynamics (MD) simulations performed with the GROMACS package (35). The three-dimensional structure files of the GPER and the ligands were opened and modified with Discovery Studio structure-based design software, version 4.0 (BIOVIA/Accelrys Inc., San Diego, CA, USA). Water molecules and other substructures (bound molecules/ligand molecules) were removed from the coordinate file before docking. GPER models 1-5 were used for the docking of genistein, daidzein, S-equol, or E2. Polar hydrogen atoms were added to the 3D structure of the GPER, and a GPER input file in pdbqt format was generated using AutoDockTools of MGLTools (http://autodock.scripps.edu/resources/adt). Docking coordinates were determined through a grid box in PyRx-Python Prescription 0.8 Virtual Screening software for Computer-Aided Drug Design (http://pyrx.sourceforge.net/), with AutoDock 4 and AutoDock Vina used as the docking software (36). A blind docking strategy was utilized to include all possible ligand binding sites. MD simulations of the molecular complexes were carried out for each starting pose using the AMBER ff99SB-ILDN force field (37) for the protein and GAFF (38) for the ligand. After an initial period of equilibration, conformational sampling was performed in the isobaric-isothermal ensemble in explicit water for 10 ns, with Cl− counterions added to obtain an overall neutral system. The system was first equilibrated for 2.5 ns, and structures were afterwards sampled every 0.5 ns to evaluate the binding energy and the ligand location.
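As an aside on the knockdown-verification qPCR described above, with Gapdh as the normalizer, knockdown efficiency is typically quantified by the standard 2^-ΔΔCt method. A minimal sketch follows; the Ct values are hypothetical placeholders, not measured data:

```python
# Standard 2^-ddCt quantification of knockdown efficiency, normalized to
# Gapdh as in the text. All Ct values below are hypothetical placeholders.

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    d_ct = ct_target - ct_gapdh                  # normalize to housekeeping gene
    d_ct_ctrl = ct_target_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct - d_ct_ctrl
    return 2.0 ** (-dd_ct)                       # fold change vs. control siRNA

rel = relative_expression(ct_target=26.8, ct_gapdh=18.1,   # GPER siRNA well
                          ct_target_ctrl=24.2, ct_gapdh_ctrl=18.0)
print(f"GPER mRNA remaining: {100 * rel:.0f}% -> knockdown ~{100 * (1 - rel):.0f}%")
```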
At the end of the MD simulations, the binding modes and affinities of the ligands were estimated from the structures of the protein-ligand complexes obtained every nanosecond. The binding energy was evaluated using the AutoDock Vina energy evaluation function in score-only mode. LigPlot+ v.1.4 (http://www.ebi.ac.uk/thornton-srv/software/LigPlus/) was used to determine the interactions in the GPER-ligand complexes with the best affinity scores. Binding affinity was expressed as a binding free energy (kcal/mol). Statistical Analysis Data are expressed as mean ± SEM of three individual experiments performed in triplicate. One-way or two-way ANOVA followed by Bonferroni's multiple comparison tests was performed using GraphPad Prism version 8.3.1 for Windows (GraphPad Software, San Diego, USA, www.graphpad.com). All p-values < 0.05 were considered statistically significant. Isoflavone Increased Cell Migration Extensive research using flat, two-dimensional (2D) glass and plastic cell migration analyses has elucidated the detailed molecular and biophysical mechanisms of the migration process in cultured cells. However, most cells migrating through tissues undergo 3D migration under the physical constraints of the surrounding cells and extracellular matrix (39,40). Therefore, we examined the effects of isoflavones or E2 (Figure 1A) on 2D and 3D migration using wound healing and invasion assays, respectively. Representative photomicrographs of cells stained using live-cell Hoechst staining in wound healing assays with 10 nM isoflavones or E2 for 24 h are shown (Figure 1B). In the wound healing assay, genistein, daidzein, S-equol, and E2 increased cell migration of C6 astrocyte clonal cells without an evident concentration dependency. Genistein accelerated cell migration at concentrations of 1 and 10 nM, whereas this effect decreased at 100 nM. Daidzein accelerated cell migration in a concentration-dependent manner and reached a peak at 100 nM. S-equol (1 nM) showed the greatest acceleration (Figure 1C). These results indicate that isoflavones accelerated 2D cell migration, but in a concentration-independent manner. We also examined 3D astrocyte migration using an invasion assay. Representative photomicrographs of invaded astrocytes stained with DAPI after isoflavone or E2 exposure are shown (Figure 1D). Isoflavone and E2 exposure accelerated astrocyte migration in a dose-dependent manner, except for genistein, which showed the greatest acceleration at 10 nM (Figure 1E). These results indicate that isoflavones and E2 can induce cell migration in C6 clonal cells and astrocytes. Isoflavones Increased Cell Migration via the GPER Pathway Isoflavones are known phytoestrogens that activate the estrogen-mediated signaling pathway via the nuclear ER and the GPER. To examine further whether isoflavones affect cell migration via the ER and GPER, we used siRNAs against ERα, ERβ, or GPER to knock down their expression. Knockdown of GPER in astrocytes significantly reduced isoflavone- and E2-accelerated cell migration (Figures 2A, B and Figure SI.1). On the other hand, knockdown of ERα also significantly, but weakly, reduced genistein- or E2-accelerated cell migration (Figure 2C and Figure SI.1). Knockdown of ERβ likewise weakly reduced S-equol- or E2-accelerated cell migration (Figure 2D and Figure SI.1).
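As an aside on the Statistical Analysis procedure applied to group comparisons like those above, a one-way ANOVA followed by Bonferroni-corrected pairwise tests can also be reproduced outside Prism. A minimal sketch with placeholder data:

```python
import numpy as np
from scipy import stats

# Placeholder migration data (three replicates per group), mimicking the
# design above: control vs. isoflavone vs. isoflavone + GPER siRNA.
groups = {
    "control": np.array([100.0, 96.0, 104.0]),
    "genistein": np.array([148.0, 141.0, 152.0]),
    "genistein+siGPER": np.array([108.0, 101.0, 112.0]),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Bonferroni-corrected pairwise t-tests (3 comparisons -> multiply p by 3).
names = list(groups)
n_comparisons = 3
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = stats.ttest_ind(groups[names[i]], groups[names[j]])
        p_adj = min(1.0, p * n_comparisons)
        print(f"{names[i]} vs {names[j]}: adjusted p = {p_adj:.4f}")
```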
Furthermore, co-exposure with the GPER inhibitor G15 (10 nM) significantly reduced isoflavone- or E2-accelerated cell migration in the cell invasion (Figure 2E) and wound healing (Figure 2F) assays. These results indicate that isoflavones and E2 accelerate cell migration mainly via activation of the GPER. Acceleration of Cell Migration by Isoflavones Was Associated With F-Actin Induction The ability of cells to migrate requires complex molecular events, initiated by the assembly of F-actin, that alter cellular morphology to allow movement through interstitial submicron-sized pores in tissues (40,41). To further examine the mechanisms involved in isoflavone-induced acceleration of cell migration, we visualized F-actin with Phalloidin-iFluor 594 reagent. First, we examined filopodia formation using FiloQuant in ImageJ Fiji. Isoflavone or E2 exposure for 30 min increased filopodia formation (Figure 3A, upper panel). Quantitative analysis showed an increase in the number and length of the filopodia (Figure 3A, lower panel). We then examined the formation of stress fibers and cortical actin filaments using the cortical F-actin score (CFS) index, determined for each cell on the 0-3 scale described above. Isoflavones or E2 attenuated stress fibers and increased cortical actin filaments in astrocytes after a 30-min exposure (Figure 3B, upper panel). The CFS index significantly increased after exposure to 10 nM isoflavones or E2, and these effects were reduced by knockdown of GPER (Figure 3B, lower panel). In addition, we examined the effects of isoflavones and E2 on focal adhesion proteins related to actin reorganization. We found that 10 nM isoflavones or E2 increased the protein expression levels of vinculin, cortactin, and paxillin, and these effects were reduced by knockdown of GPER (Figure 3C). We also found that both isoflavones and E2 increased the protein expression level of ERα, and this increase was reduced after silencing of GPER. However, there were no significant changes in talin-1 and α-actinin protein expression levels after exposure to isoflavones or E2 (Figure 3C). These results indicate that exposure to isoflavones induced F-actin formation, which may have accelerated the migration of astrocytes. Isoflavones Accelerated Cell Migration via the GPER/PI3K/FAK/Akt Pathway Activation of the GPER can induce FAK, Akt, and ERK phosphorylation signaling. To examine the downstream targets of GPER activation by isoflavones, we performed Western blot analysis to measure the phosphorylation of FAK, Akt, and ERK1/2 after knockdown of GPER. Isoflavones or E2 increased pFAK, pAkt, and pERK1/2 protein levels, and these effects were reduced after knockdown of GPER (Figure 4A). Moreover, an immunofluorescence study showed increased F-actin expression concurrent with pAkt, but not with pERK1/2 (Figure 4B and Figure SI.2). To examine this further, we cultured cells with isoflavones and either the PI3K inhibitor LY294002 or the ERK1/2 inhibitor U0126 prior to performing the wound healing and invasion assays. LY294002 suppressed isoflavone- or E2-accelerated cell migration in C6 cells.
No significant effects were observed after co-exposure of isoflavones with U0126 (Figures 4C, D). However, we found a significant difference in cell invasion after co-exposure of E2 and U0126 (Figure 4D). These results indicate that isoflavones increased cell migration via the GPER/PI3K/FAK/Akt pathway. Isoflavone-Activated p-Akt Led to Increased RhoGTPase Levels Cell movement depends on the action of activated Rho GTPases on actin. RhoA, Rac1, and Cdc42 play major roles in the actin polymerization that leads to cell movement. We examined the effects of isoflavones on Rho GTPase signaling using Western blot, wound healing assay, and immunocytochemistry analyses. Western blot analysis showed that 10 nM isoflavones or E2 increased protein levels of pRac1/Cdc42 (Figures 5A, B and Figure SI.3). The phosphorylation of Rac1/Cdc42 significantly decreased after knockdown of GPER or co-exposure with LY294002 (Figure 5A). Immunocytochemistry analysis also showed an overlap between F-actin and pRac1/Cdc42 (Figure 5B). Co-exposure with the RhoA inhibitor (rhosin), the Rac1/Cdc42 inhibitor (ML-141), or the Cdc42 inhibitor (casin) significantly suppressed isoflavone- or E2-accelerated cell migration in the wound healing and cell invasion assays (Figures 5C, D). These results indicate that exposure to isoflavones increased Rho GTPase activity to induce F-actin formation and the subsequent activation of cell motility. Potential Binding of Isoflavones to the GPER To investigate the plausible binding modes of isoflavones to the GPER, we generated in silico binding models in a molecular docking study with AutoDock Vina. Since the crystal structure of the GPER remains unknown, the 3D protein structure was predicted using the I-TASSER website. The encoding sequence for GPER was retrieved from the UniProt database (accession number Q6FHU6) and submitted to the I-TASSER website, a specialized server for building 3D models of seven-transmembrane receptors. (Figure legend: Data are expressed as mean ± SEM and are representative of at least three independent experiments. ****p < 0.0001, ***p < 0.001, **p < 0.01, *p < 0.05 indicate statistical significance measured using Bonferroni's test compared with control (−); ####p < 0.0001, ###p < 0.001, ##p < 0.01 indicate statistical significance measured using Bonferroni's test.) The I-TASSER server yielded five predicted models from 10 different templates. Before the MD simulations, several geometrical observables, such as the area per lipid, the root mean square deviation (RMSD) of the heavy atoms with respect to the starting conformation, and the atomic fluctuations, were evaluated to determine whether the systems had reached equilibrium. RMSD is a standard measure of structural similarity between two structures and is commonly used to assess the accuracy of structure modeling. The MD simulations showed that, after reaching equilibrium, the RMSD values were 8.4 ± 4.5, 5.6 ± 0.19, 5.5 ± 1.2, 5.0 ± 0.13, and 5.9 ± 1.1 Å for GPER, GPER-genistein, GPER-daidzein, GPER-S-equol, and GPER-E2, respectively. The 3D docking results for the GPER showed that the isoflavones and E2 possess a similar binding pose under blind docking procedures. The isoflavones and E2 could form a hydrogen bond with Glu 329 and share amino acid residues with equivalent 3D positions relative to the residues in the first plot, as shown by the red circles and ellipses (Figures 6A-D).
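The RMSD figures quoted above summarize how far each simulated structure drifts from its starting conformation. The basic computation, once two coordinate sets have been superposed (e.g., by a Kabsch fit, which is assumed to have been done already), is sketched below with placeholder coordinates:

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two (N, 3) coordinate sets.
    Assumes the structures are already superposed (e.g. by a Kabsch fit)."""
    diff = coords_a - coords_b
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Placeholder: reference heavy-atom coordinates vs. a perturbed snapshot.
rng = np.random.default_rng(1)
ref = rng.normal(size=(500, 3))
snapshot = ref + rng.normal(scale=0.3, size=ref.shape)
print(f"RMSD = {rmsd(ref, snapshot):.2f} (same length units as the input)")
```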
Additional possible hydrogen bonds were also found between genistein and Arg 169 and Arg 253 (Figure 6A), daidzein and Arg 169 (Figure 6B), S-equol and Thr 330 (Figure 6C), and E2 and Thr 330 (Figure 6D). The binding affinities of genistein, daidzein, S-equol, and E2 for the GPER were -8.8, -8.6, -8.9, and -8.3 kcal/mol, respectively. The docking poses of the isoflavones and E2 bound to the GPER in the 3D model are also shown in SI.4. These results indicate that isoflavones may bind directly to the GPER to accelerate cell migration. DISCUSSION The present study examined the effects of isoflavones and E2 on glial cell migration. We previously reported that S-equol, a daidzein metabolite, activates the GPER to induce F-actin rearrangement, leading to increased astrocyte migration during cerebellar development, although the mechanisms remained unclear (14). This study reveals a novel mechanism by which isoflavones (genistein, daidzein, and S-equol) promote cell migration via the GPER, which may play an important role not only during brain development but also after brain injury. We found that isoflavones increased 2D and 3D glial cell migration of primary astrocytes and C6 clonal cells. Isoflavone-accelerated cell migration was suppressed by knockdown of GPER expression or co-exposure with a GPER inhibitor. Isoflavone exposure also increased the phosphorylation of FAK and Akt, downstream targets of the GPER, leading to increased phosphorylation in RhoGTPase signaling, including Rac1 and Cdc42, which play a major role in F-actin formation. In silico analysis revealed that these isoflavones may interact directly with the GPER. Our results thus show a novel action of isoflavones in promoting glial cell migration via the GPER signaling pathway. The estrogenic activity of isoflavones has been well demonstrated (16,42). Genistein exhibits >20-fold higher affinity for ERβ than for ERα. Binding of isoflavones to ERs leads to shuttling of the ligand-ER complex to the nucleus and induces the transcription of target genes via the classical genomic pathway (16). In addition, recent studies have shown that isoflavones can interact with the GPER and mediate rapid cellular signaling in neurons and endothelial cells (43)(44)(45). The present study also showed that the effects of isoflavones may be exerted by binding to the GPER to accelerate cell migration, since GPER knockdown or co-exposure with a GPER antagonist at least in part inhibited their action on astrocyte migration. On the other hand, knockdown of the nuclear ERs led to a weaker effect than that seen after GPER knockdown. These results indicate that the accelerated migration of astrocytes by isoflavones is mainly exerted via the GPER. This action may differ slightly from that of E2, since E2 action was inhibited by knockdown of both the nuclear ERs and the GPER. We were unable to clarify the mechanisms involved in these differences. One possibility may be a difference in the affinities for the ER and the GPER. The findings of our in silico study revealed a higher affinity of the isoflavones for the GPER compared with E2. However, further studies, including crystallization of the GPER, are required to confirm this difference. The GPER belongs to the family of seven-transmembrane-spanning GPCRs and specifically binds estrogens, thereby activating intracellular signaling cascades (15,16).
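The Vina binding free energies reported above can be translated into approximate dissociation constants through ΔG = RT ln Kd; because docking scores are only rough estimates, the resulting Kd values should be read as order-of-magnitude figures:

```python
import math

R_KCAL = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15          # temperature, K (our assumption of ambient conditions)

def kd_from_dg(dg_kcal_per_mol):
    """Dissociation constant implied by a binding free energy: Kd = exp(dG/RT)."""
    return math.exp(dg_kcal_per_mol / (R_KCAL * T))

# Binding free energies quoted in the text (kcal/mol).
affinities = {"genistein": -8.8, "daidzein": -8.6, "S-equol": -8.9, "E2": -8.3}
for ligand, dg in affinities.items():
    print(f"{ligand:10s}: dG = {dg} kcal/mol -> Kd ~ {1e9 * kd_from_dg(dg):.0f} nM")
```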
In addition to cell migration, the GPER regulates various cellular functions, such as apoptosis, autophagy, proliferation, and differentiation, via a wide variety of signal transduction pathways, including Ras/ERK (46,47), PI3K/Akt (47)(48)(49), receptor tyrosine kinase (16), PLC-mediated (50), and cAMP-mediated (16) pathways. The GPER induces rapid cellular effects, including the production of cAMP, the mobilization of intracellular calcium, and the activation of kinases such as ERK and PI3K, as well as of ion channels and endothelial nitric oxide synthase (eNOS) (16). In addition, the ERs (especially ERα) also activate such pathways. However, in cells that express both ERα and the GPER, there is a possibility of crosstalk or squelching between receptors (16,51). The GPER also mediates estrogenic regulation of actin polymerization involving the SRC-1 and PI3K/mTORC2 pathways in the hippocampus of female mice (49). In addition, the GPER acts via the PLCβ-PKC and Rho/ROCK/LIMK/cofilin pathways to regulate F-actin cytoskeleton assembly, thereby enhancing TAZ nuclear localization and activation and leading to increased cell migration and invasion (50). These varied GPER-activated pathways regulating diverse cellular functions indicate the profound implications of the GPER under physiological and pathophysiological conditions. In the present study, although isoflavones increased the phosphorylation of both ERK1/2 and Akt, inhibition of the PI3K/Akt pathway significantly suppressed the cell migration of astrocytes. These results indicate that, although various signal transduction pathways may be activated by isoflavones via the GPER, the Akt signaling pathway plays the major role in accelerating cell migration. Each signal transduction pathway of the GPER may play a distinct role in cellular function. The small GTPases of the Rho family (RhoA, Rac1, and Cdc42) appear to be at the heart of the initial signals leading to cellular polarization and to stress fiber, filopodia, and lamellipodia formation in migrating cells (40,41,52). Classic RhoGTPases are regulated by the opposing actions of Rho-specific guanine nucleotide exchange factors (GEFs) and GTPase-activating proteins (53). PI3K activates Rac and Cdc42 via activation of PIP3-regulated GEFs, and inhibition of Cdc42 in Ras-transformed cells decreased Akt signaling, leading to reduced migration/invasion (54). In addition to PI3K activating Rac and Cdc42, FAK can also influence the activity of RhoGTPases through direct interaction with, or phosphorylation of, the protein activators or inhibitors of RhoGTPases (55). It has been reported that estrogen activates FAK tyrosine phosphorylation (Tyr 397/576/577) via Src, which then regulates Cdc42 and the Cdc42 effector neuronal Wiskott-Aldrich syndrome protein (N-WASP) (56). N-WASP is a scaffold protein that links upstream signals to activation of the Arp2/3 complex, leading to actin nucleation and the rapid formation of an actin network at the leading edge of the cell (55,56). Paxillin and cortactin are known to be direct targets of FAK in the regulation of focal adhesion dynamics that promote cell motility and invasion (55). In the present study, it is highly possible that activation of the PI3K/Akt axis and FAK induced the activation of Rac1, Cdc42, and focal adhesion proteins, leading to accelerated cell migration. Cdc42 is essential for the formation of protrusions leading to an elongated morphology. Deletion of Cdc42 in astrocytes revealed that the cells were still able to form protrusions, but in a nonoriented manner (26).
Consequently, the astrocytes failed to migrate in a directed manner toward a scratch. Rac, on the other hand, is essential for both the development and the maintenance of protrusions during migration. Rac1 also plays a role in local restructuring of the cytoskeleton in coordination with surface expansion, leading to astrocyte stellation (57). Isoflavones activated Rac1/Cdc42 and increased filopodia and cortical actin (Figure 5). Moreover, co-exposure of isoflavones with ML-141, a Rac1/Cdc42 inhibitor, or casin, a Cdc42 inhibitor, significantly suppressed isoflavone-accelerated cell migration, indicating the involvement of Rac1/Cdc42 in this process. In addition, rhosin, a RhoA inhibitor, also suppressed migration, indicating its involvement. These results are consistent with our hypothesis that isoflavones bind to the GPER (Figure 6) and activate PI3K/Akt axis signaling pathways to induce activation of RhoGTPases, resulting in F-actin formation and activation of astrocyte cell migration. Astrocytes contribute to physiological brain function on many levels, including maintaining normal neurotransmitter uptake, synapse formation, regulation of the blood-brain barrier, and development of the CNS (21,25). Astrocytes become dynamic migratory cells under certain physiological and/or pathological conditions (21,26,40). Astrocyte migration requires the coordination of complex signaling pathways governing actin polymerization, delivery of membrane to the leading edge, and formation of attachments at the leading edge to provide traction, together with contraction and disassembly of attachments at the rear (21,26). Cell migration also depends on the mechanical and chemical interactions between cells and their extracellular environment. The mechanical interaction depends on the polarization, adhesion, deformability, contractility, and proteolytic ability of the cells (58). Our study showed that exposure to isoflavones increased the 2D and 3D migration of astrocytes by activating F-actin formation. Filopodia and stress fiber formation significantly increased after exposure to isoflavones. F-actin filaments in migrating cells are polarized, with their plus (barbed) ends toward the cell periphery against the plasma membrane, resulting in the formation of filopodia or lamellipodia, anchored focal adhesions, and extension at the front of the cell (40,59). Filopodia are thin, finger-like, highly dynamic, actin-rich membrane protrusions that extend out from the cell edge; their extension is driven by the linear polymerization of actin filaments (41). During 3D migration, the mode of migration depends on the extracellular environment, in which cells adopt round, amoeboid shapes and extend lamellipodia. Extensive studies have revealed that amoeboid migration does not require focal adhesion-dependent force transmission, but instead relies on the global retrograde flow of cortical actomyosin (40). Cortical actin networks align along cell-cell junctions, supporting both stable and dynamic contacts in stationary epithelia and during collective cell migration (58). In summary, isoflavone-induced cell migration and F-actin rearrangement are processes that cannot be separated. However, further studies are required to understand how isoflavones induce cell migration as a result of F-actin rearrangement. The effect of isoflavones on human health remains controversial.
While studies into the use of phytoestrogens as dietary supplements have reported various health benefits, such as antioxidant, anti-inflammatory, anti-cancer, and neuroprotective effects, there is concern about potential adverse effects as a result of modulating or disrupting endocrine function (8,16,60). However, most studies showing adverse effects used higher doses of soy isoflavones than those found in the plasma of populations that regularly consume soy. Isoflavone dose is therefore crucial when examining the effects of isoflavones on human health. For example, isoflavone dose affects cancer risk: genistein was demonstrated to enhance cell proliferation in MCF-7 cells at concentrations of 10-100 nM, whereas at higher concentrations (>20 µM) it inhibited MCF-7 cell growth (8,61). The total isoflavone plasma concentration in Asian populations consuming a traditional diet, including soy-based foods, is in the range of 525-775 nM. In contrast, the total isoflavone plasma concentration in European countries was found to be <10 nM in individuals with a nonvegetarian diet and 79-148 nM in those with vegetarian or vegan diets (62). For the practical application of isoflavones to human health, studies using doses of isoflavones that represent such plasma concentrations should be undertaken. The doses of isoflavones used in the present study ranged from 1 to 100 nM. Since this was an in vitro study, the results cannot be compared directly with in vivo conditions. However, our results highlight the novel possibility that isoflavones activate astrocytes, indicating that they could be a useful supplementary compound during brain development or in the injured brain. In summary, our results showed that exposure to physiological concentrations of isoflavones increased cell migration via direct binding to the GPER and subsequent activation of the PI3K/FAK/Akt/RhoGTPase signaling pathway, which induces F-actin formation (Figure 7). The present study highlights the potential use of isoflavones as an effective supplement to promote astrocyte migration during brain development or after brain injury. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
v3-fos-license
2016-05-12T22:15:10.714Z
2012-10-15T00:00:00.000
15900342
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/archive/2012/396590.pdf", "pdf_hash": "a6b6a60c844cbba55717180cfdec14f5414eb63f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42679", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "sha1": "a6b6a60c844cbba55717180cfdec14f5414eb63f", "year": 2012 }
pes2o/s2orc
Fungi and Mycotoxins from Pre- and Poststorage Brewer's Grain Intended for Bovine Intensive Rearing The aim of the study was to determine the mycobiota and the natural levels of mycotoxins such as aflatoxin B1 (AFB1), ochratoxin A (OTA), fumonisin B1 (FB1), and deoxynivalenol (DON) present in pre- and poststored brewer's grains intended for bovine intensive rearing. Eighty percent of poststored samples had counts higher than 1 × 10⁴ colony-forming units per gram (CFU/g). Cladosporium spp. and Aspergillus spp. were isolated at high frequencies. Aspergillus flavus was the most prevalent species isolated. Seventy percent of prestored and 100% of poststored samples showed AFB1 levels above the recommended limit (20 μg/kg), whereas OTA levels were below the recommended limit (50 μg/kg), and neither pre- nor poststored samples showed natural contamination with FB1 or DON. The presence of mycotoxins in this substrate indicates the existence of contamination. Regular monitoring of feeds is required in order to prevent chronic and acute toxic syndromes related to this kind of contamination. Introduction The use of agroindustrial residues as a food supplement for animal production plays a significant economic role due to the availability and versatility of these materials. Brewer's grains (a beer industry residue) are an interesting alternative feed for animal production, being a rich source of protein and fiber at a low price [1,2]. Inadequate management of raw materials during storage can result in excessive moisture or dryness, condensation, heating, leakage of rainwater, and insect infestation, leading to undesirable growth of fungi [3]. Worldwide, the contamination of animal feed and the potential contamination of animal meat by mycotoxins represent a serious hazard to humans and animals. Mycotoxins are toxic, chemically diverse secondary metabolites produced by a wide range of fungi, mainly by the Aspergillus, Penicillium, and Fusarium genera [4]. Due to the diversity of their toxic effects and their synergistic properties, mycotoxins are considered a risk to the consumers of contaminated foods and feeds [5]. Aflatoxins (AFs), the fungal metabolites produced by some strains of A. flavus and A. parasiticus, are of great concern because of their detrimental effects on the health of humans and animals, including carcinogenic, mutagenic, teratogenic, and immunosuppressive effects [6,7]. Ochratoxin A (OTA) is one of the most common and dangerous mycotoxins in food and feed, naturally produced by A. ochraceus, A. carbonarius, and the A. niger aggregate mainly in tropical regions, and by P. verrucosum in temperate areas [8][9][10]. This toxin is potently toxic, with nephrotoxic, hepatotoxic, teratogenic, carcinogenic, and immunosuppressive effects demonstrated in all mammalian species [11]. Fumonisins (FBs), produced by Fusarium verticillioides and F. proliferatum, occur worldwide and are predominantly found in maize and in maize-based animal feeds. Fumonisin B1 (FB1) is the most common and the most thoroughly studied, causing toxicities in animals such as equine leukoencephalomalacia (ELEM) and porcine pulmonary edema (PPE), diseases long associated with the consumption of mouldy feed by horses and pigs, respectively [12]. Deoxynivalenol (DON), or vomitoxin, is a commonly occurring mycotoxin produced primarily by F. graminearum and F. culmorum [13]. This toxin can cause vomiting, feed refusal, gastrointestinal irritation, and immunosuppression [14].
Previous studies performed in Brazil determined the fungal biota as well as the presence of different mycotoxins in brewer's grain and barley rootlets intended for cattle and pigs [15][16][17][18]. However, there are no data on fungal and mycotoxin contamination of brewer's residue stored on farms, in a manner similar to trench-type silos, for 3 months. Therefore, the aims of this work were to determine the occurrence of mycobiota and to evaluate the incidence of AFs, OTA, FBs, and DON in pre- and poststorage brewer's grains.

Characteristics of Storage. Brewer's grains were transported from the brewery to the farms in trucks, deposited in five structures similar to trench-type silos (large pits dug 1 m into the ground), and covered with a black plastic sheet. Storage and compaction were carried out over 3 months, and the brewer's grains were kept sealed until use. Material for animal feeding was removed by shovelling.

Samples Source. Brewer's grain samples were collected from 2 bovine intensive rearing (feedlot) farms in São Paulo State, Brazil. Samples were collected at two times: day zero (0, immediately after deposit) and after 90 days of storage (before feeding to animals). To guarantee correct sampling, each trench-type structure was notionally divided along its length into three equal parts with four sections each: upper, lower, border, and middle. Six subsamples (500 g) were collected from each section to obtain a 3 kg composite sample. A total of 100 samples (3 kg each) of brewer's grains were taken: 50 at day 0 and 50 at day 90. Samples were properly packed in bags and immediately sent to the laboratory, where they were processed for physical and mycological analyses and kept at −4 °C until mycotoxin analyses.

Physical Properties of Samples. The pH and dry matter percentage of 100 g of each sample were determined according to Ohyama et al. [19].

Mycological Analysis. The quantitative enumeration of fungi as colony-forming units per gram of feed (CFU/g) was performed using the surface-spread method described by Pitt and Hocking [20]. Ten grams of each sample were homogenized in 90 mL of distilled water for 30 min in an orbital shaker. Serial dilutions (from 10⁻² to 10⁻⁵) were made, and 0.1 mL aliquots were inoculated in duplicate onto dichloran rose bengal chloramphenicol agar (DRBC), for estimating total culturable fungi [21], and dichloran 18% glycerol agar (DG18), which favours the development of xerophilic fungi. The plates were incubated at 25 °C for 5-7 days. All samples were also inoculated onto Nash and Snyder agar (NSA) to enumerate Fusarium species [22]. Nash-Snyder plates were incubated at 24 °C for 7 days under a 12 h cold white/12 h black fluorescent light photoperiod. Only plates containing 10-100 CFU were used for counting, with results expressed as CFU per gram of sample. On the last day of incubation, individual CFU/g counts for each colony type considered to be different were recorded. Colonies representative of Aspergillus and Penicillium were subcultured in tubes containing malt extract agar (MEA), whereas Fusarium spp. were subcultured on plates containing carnation leaf agar (CLA). Fungal species were identified according to Klich [23], Nelson et al. [22], and Samson et al. [24].
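The quantitative workflow just described (plate counts scaled to CFU/g, later log₁₀(x + 1)-transformed for analysis, plus the isolation-frequency and relative-density summaries defined under Results below) reduces to simple arithmetic. The following Python sketch is ours, not the authors'; the example values are hypothetical, and it assumes the 10 g in 90 mL homogenate constitutes the 10⁻¹ dilution and that 0.1 mL is plated, as stated above.

```python
import math
from collections import Counter

def cfu_per_gram(colonies, dilution_exponent, plated_volume_ml=0.1):
    """CFU/g from a plate count: colonies on the 10^-d dilution plate,
    scaled back through the plated volume and the dilution factor."""
    if not 10 <= colonies <= 100:
        raise ValueError("count outside the 10-100 CFU window; use another dilution")
    return colonies * (1.0 / plated_volume_ml) * 10 ** dilution_exponent

count = cfu_per_gram(42, 3)  # 42 colonies on the 10^-3 dilution plate
print(count, round(math.log10(count + 1), 2))  # 420000.0 CFU/g; 5.62 after log10(x+1)

# Isolation frequency and relative density from hypothetical isolation
# records, one entry per (sample_id, genus, species).
records = [
    (1, "Aspergillus", "A. flavus"),
    (1, "Cladosporium", "Cladosporium sp."),
    (2, "Aspergillus", "A. flavus"),
    (2, "Aspergillus", "A. fumigatus"),
    (3, "Mucor", "Mucor sp."),
]
n_samples = 3

# Isolation frequency: % of samples in which each genus was present.
samples_with_genus = {}
for sample, genus, _ in records:
    samples_with_genus.setdefault(genus, set()).add(sample)
isolation_frequency = {g: 100 * len(s) / n_samples
                       for g, s in samples_with_genus.items()}

# Relative density: % of isolates of each species among those of its genus.
genus_totals = Counter(g for _, g, _ in records)
relative_density = {(g, sp): 100 * n / genus_totals[g]
                    for (g, sp), n in Counter((g, sp) for _, g, sp in records).items()}

print(isolation_frequency)  # Aspergillus present in 2 of 3 samples -> 66.7%
print(relative_density)     # A. flavus is 2 of 3 Aspergillus isolates -> 66.7%
```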
The results were expressed as isolation frequency (the percentage of samples in which each genus was present) and relative density (the percentage of isolates of each species among those of the same genus).

Mycotoxins Detection and Quantification. Aflatoxin B1 and OTA Determination. AFB1 was extracted and determined according to Soares and Rodriguez-Amaya [25]. Quantitative evaluation was made using high-performance liquid chromatography (HPLC). The detection limit of the technique for AFB1 was 1.0 µg/kg.

Fumonisin B1 Determination. A commercially available enzyme-linked immunosorbent assay (ELISA) plate kit (Beacon Analytical Systems Inc., Portland, USA) was used for the extraction and quantification of FB1. Mycotoxin extraction and testing were performed according to the manufacturer's instructions. A 20 g portion of each sample was extracted with 100 mL methanol:water (70:30, v/v) for 3 min in a blender jar. The mixture was filtered through Whatman No. 4 filter paper (Whatman, Inc., Clifton, NY, USA), and an aliquot was taken and placed into a culture plate. The detection limit of the technique was 0.3 µg/g.

Deoxynivalenol Determination. An ELISA tube kit (Beacon Analytical Systems Inc., Portland, USA) was likewise used for the extraction and quantification of DON. Mycotoxin extraction and testing (ELISA) were performed according to the manufacturer's instructions. A 20 g portion of each sample was extracted with 100 mL distilled water for 3 min in a blender jar. The mixture was filtered through Whatman No. 4 filter paper (Whatman, Inc., Clifton, NY, USA), and an aliquot was taken and placed into a culture tube. The detection limit of the technique was 0.5 µg/g.

Statistical Analyses. Statistical analysis of the data was performed with a general linear mixed model. Fungal counts were transformed to log₁₀(x + 1). Means obtained from CFU/g and mycotoxin analyses were compared using Fisher's protected LSD test.

Table 1 shows the physical properties of the brewer's grain samples. Mean pH levels ranged from 5.7 to 6.0 in prestored brewer's grain, while pH values of poststored samples ranged from 4.5 to 5.0. In both types of samples, dry matter values were from 39.7 to 41%.

Mycological Survey. Table 2 shows fungal counts from pre- and poststored brewer's grains in different culture media. Total fungal counts from prestored samples had means ranging from 1.7 × 10³ to 2.9 × 10³ CFU/g in DRBC and 1.5 × 10³ to 1.8 × 10³ CFU/g in DG18. Eighty percent of poststored samples had counts higher than 1 × 10⁴ CFU/g, with means varying from 2.5 × 10⁴ to 2.3 × 10⁵ CFU/g in DRBC and from 6.2 × 10³ to 1.5 × 10⁵ CFU/g in DG18. There were significant differences between pre- and poststored brewer's grain samples. No statistically significant differences were found among the different layers of the silo for prestored brewer's grain in DRBC and DG18, while there were significant differences among fungal counts from poststored samples (P < 0.05). Figure 1 shows the isolation frequency (%) of fungal genera from pre- and poststored brewer's grain samples. Cladosporium spp., Aspergillus spp., Mucor spp., and yeasts were isolated at high frequencies; Eurotium spp., Penicillium spp., and Alternaria spp. were isolated at low frequencies. Fusarium spp. were isolated only from poststored brewer's grain samples. Figure 2 shows the relative density of isolated Aspergillus spp., Penicillium spp., and Fusarium spp. from pre- and poststored brewer's grain samples. Three Aspergillus species were isolated.
Aspergillus flavus was the prevalent isolated species, followed by A. fumigatus and A. terreus. Aspergillus flavus was isolated at relative densities of 50% and 78% in pre- and poststored samples, respectively, while A. fumigatus and A. terreus were isolated from both pre- and poststored samples. Penicillium citrinum was the only species isolated within this genus. Fusarium verticillioides was only present in prestored brewer's grain samples. Table 3 shows the AFB1 and OTA levels found in pre- and poststored brewer's grain samples. Pre- and poststored samples did not show natural contamination with FB1 or DON. Four percent of pre- and poststored samples were contaminated with AFB1 at levels that varied from 10 to 35 µg/kg and 24 to 47 µg/kg, respectively. Seventy percent of prestored and all poststored samples (100%) showed AFB1 levels over the recommended limit (20 µg/kg). None of the analyzed prestored samples showed detectable OTA levels, while 5% of poststored samples were contaminated, with an average level of 9.8 µg/kg; none of these samples, however, had OTA levels over the recommended limit (50 µg/kg). No statistically significant differences were found between pre- and poststored brewer's grain for AFB1 and OTA contamination (P > 0.05).

Discussion
The mycobiota and the natural occurrence of AFB1, OTA, FB1, and DON in pre- and poststored brewer's grain were studied. Physical properties of the brewer's grain samples showed no difference in dry matter between pre- and poststored brewer's grains. Dry matter content is one of the main factors for well-preserved samples; the ideal values of this parameter are between 26 and 38%, with pH around 4.0 [27]. The physical factor that ensures preservation is pH. The pH difference between pre- and poststored samples is due to acidification, by the microorganisms present in this ecosystem, of the carbohydrates present in the raw material. In this work, the substrate acidified over time, and pH values in poststored brewer's grains were between 4.5 and 5.0 after 90 days of storage. In this study, the average fungal colony counts from all prestored brewer's grain samples were lower than the maximum proposed limit (1 × 10⁴ CFU/g) [26]. However, poststored brewer's grain samples had high values, above the maximum recommended limits. These results suggest a high fungal activity that could affect the palatability of feed and reduce nutrient absorption by the animals, indicating a low-quality substrate [28,29]. Simas et al. [15] and Rosa et al. [17] studied the same substrate intended for dairy cattle feed; they found mean counts of 1 × 10³ CFU/g and 6 × 10⁵ CFU/g in potato dextrose agar and DRBC media, respectively. Cavaglieri et al. [18] obtained counts ranging between 1 × 10³ and 1 × 10⁶ CFU/g in DRBC; however, they studied another waste derived from the processing of barley intended for pigs (barley rootlets), and that substrate was stored for between 8 and 15 days, whereas in this study the storage period was 90 days. In this work, Cladosporium spp. and Aspergillus spp. were the most prevalent genera isolated from pre- and poststored samples. Similar percentages of Aspergillus spp. were found by Cavaglieri et al. [18] in barley rootlets; in addition, they found Fusarium spp. as the prevalent genus. In this study, the scarce presence of Fusarium spp. may be the result of brewer's grain storage and processing conditions.
These conditions may have favoured the development of storage and contaminant fungi instead of the field fungi, which include the genus Fusarium and are more frequently found on recently harvested grain than on processed and stored grain [30]. Several studies have shown that the Aspergillus and Penicillium genera predominate in brewer's grains, e.g., Simas et al. [15], Rosa et al. [17], and Gerbaldo et al. [31]. A high frequency of yeasts was also found; their significance in this substrate is not known. In this study, A. flavus was the most prevalent species, followed by A. fumigatus and A. terreus. These results agree with those of Gerbaldo et al. [31], who reported high percentages of A. flavus and A. fumigatus in brewer's grains intended for pigs in Argentina. Rosa et al. [17] found the A. niger aggregate as prevalent, followed by A. ochraceus, A. terreus, and A. flavus, in dairy cattle feed. Penicillium citrinum was the only species of the Penicillium genus isolated. Previous studies in the same substrate have demonstrated a high frequency of P. citrinum together with P. funiculosum, P. janthinellum, P. rugulosum, and P. viridicatum [17]. Fusarium verticillioides was isolated at low frequency in our study. Cavaglieri et al. [18], studying barley rootlets as feed for pigs, found F. verticillioides as the only species within the Fusarium genus, but at high frequency. Other researchers did not identify Fusarium species in the same substrate as this work [15,17,31]. Scientific reports on the contamination of brewer's grain with mycotoxins in Brazil are scarce. Simas et al. [15] studied the presence of AFB1 and OTA in this substrate. In this study, the AFB1 levels found in prestored samples were higher than those obtained by Simas et al. [15]; considering the vast territory of Brazil, this may be due to different climatic conditions between the two states. Regulations on standard products in the animal feed sector establish that the current maximum permitted level for AFB1 is 20 µg/kg [26]. In this work, 75% and 100% of the samples contaminated at 0 and 90 days of storage, respectively, showed AFB1 levels higher than the recommended limits for feedstuffs. OTA was observed only in poststored samples. Rosa et al. [17] found higher amounts of OTA in samples of brewer's grains. In this work, OTA levels were below the recommended limit of 50 µg/kg [26]. The presence of this mycotoxin in this substrate indicates the existence of contamination, a fact that would require periodic monitoring. Brewer's grain samples did not show FB1 or DON contamination. Our results do not agree with Batatinha et al. [16] and Cavaglieri et al. [18], who found FB1 in brewer's grains and barley rootlets at levels that ranged from 198 to 295 µg/kg and from 564 to 1383 µg/kg, respectively. Preharvest contamination of the barley crop is possible: barley could support F. verticillioides/F. proliferatum growth when grain is remoistened during the germination and malting process, and growth might even continue during storage prior to use, provided that the water activity remained high. The malting process requires water to allow barley germination, and if fumonisins were present, they could be diluted during the steeping process. No information is available on DON in this substrate; although this toxin was not detected here, this is the first study to investigate its presence.
The presence of mycotoxins in these substrates indicates the existence of contamination. Inadequate storage conditions promote the proliferation of mycotoxin-producing fungal species. Regular monitoring of feeds is required in order to prevent chronic and acute toxic syndromes related to this kind of contamination.
Nitrogen Cycling from Increased Soil Organic Carbon Contributes Both Positively and Negatively to Ecosystem Services in Wheat Agro-Ecosystems

Soil organic carbon (SOC) is an important and manageable property of soils that impacts on multiple ecosystem services through its effect on soil processes such as nitrogen (N) cycling and on soil physical properties. There is considerable interest in increasing SOC concentration in agro-ecosystems worldwide. In some agro-ecosystems, increased SOC has been found to enhance the provision of ecosystem services such as the provision of food. However, increased SOC may also increase the environmental footprint of some agro-ecosystems, for example by increasing nitrous oxide emissions. Given this uncertainty, progress is needed in quantifying the impact of increased SOC concentration on agro-ecosystems. Increased SOC concentration affects both N cycling and soil physical properties (e.g., water holding capacity). Thus, the aim of this study was to quantify the contribution, both positive and negative, of increased SOC concentration to ecosystem services provided by wheat agro-ecosystems. We used the Agricultural Production Systems sIMulator (APSIM) to represent the effect of increased SOC concentration on N cycling and soil physical properties, and used model outputs as proxies for multiple ecosystem services from wheat production agro-ecosystems at seven locations around the world. Under increased SOC, we found that N cycling had a larger effect on a range of ecosystem services (food provision, filtering of N, and nitrous oxide regulation) than soil physical properties. We predicted that food provision in these agro-ecosystems could be significantly increased by increased SOC concentration when N supply is limiting. Conversely, we predicted no significant benefit to food production from increasing SOC when soil N supply (from fertiliser and soil N stocks) is not limiting. The effect of increasing SOC on N cycling also led to significantly higher nitrous oxide emissions, although the relative increase was small. We also found that N losses via deep drainage were minimally affected by increased SOC in the dryland agro-ecosystems studied, but increased in the irrigated agro-ecosystem. Therefore, we show that under increased SOC concentration, N cycling contributes both positively and negatively to ecosystem services depending on supply, while the effects on soil physical properties are negligible.
INTRODUCTION
Soils provide multiple ecosystem services that meet human needs (Robinson et al., 2014). In agro-ecosystems these services include both provisioning services, such as food production, and regulating services, such as the filtering of nutrients (Millennium Ecosystem Assessment, 2005; Dominati et al., 2010). Soils are therefore a critical natural capital stock (Dominati et al., 2016). The characteristics of soils that influence their capacity to provide ecosystem services can be either inherent or manageable (Dominati et al., 2010). Inherent soil properties, such as soil texture, result from soil formation conditions and change little over timescales of hundreds of years. Manageable properties, such as organic carbon content or pH, are those more easily modified by management or natural variability on shorter timescales. Land use and management primarily impact these manageable characteristics of soils, and through this the capacity of soils to contribute to ecosystem services provision. As a manageable property, soil organic carbon (SOC) contributes to ecosystem services through its effect on multiple soil processes and functions. SOC affects nutrient cycling and soil fertility status: decomposition of soil organic matter releases nutrients, including nitrogen (N), into the soil (Havlin et al., 1990; Hoyle et al., 2011; Murphy, 2015). Thus, a soil with a higher SOC concentration releases more organic N to the soil than a soil with a lower SOC concentration (Aggarwal et al., 1997; Kusumo et al., 2011; Murphy, 2015). In addition, SOC affects multiple soil physical properties: an increase in SOC concentration decreases bulk density (Adams, 1973; Manrique and Jones, 1991; Tranter et al., 2007), generally increases soil water holding capacity (Vereecken et al., 1989; Wosten et al., 1999; Saxton and Rawls, 2006), and has a variable effect on hydraulic conductivity (Vereecken et al., 1990; Saxton and Rawls, 2006; Weynants et al., 2009). Many agricultural soils have been significantly depleted of SOC stocks (Cole et al., 1993; Davidson and Ackerman, 1993; Lal, 2004). There is therefore considerable interest in increasing SOC concentrations in agro-ecosystems globally, both to sequester carbon for climate change mitigation and to improve soil quality so as to enhance productivity and agro-ecosystem sustainability (Reeves, 1997; Post and Kwon, 2000; Lal, 2004; Smith, 2008).
These reflect improvements in key desirable ecosystem services, both regulating (e.g., carbon sequestration) and provisioning (e.g., crop productivity). However, in order to better inform investments in increasing SOC, it is essential to test and quantify how increased SOC concentration is likely to contribute to various ecosystem services across a range of agro-ecosystems. Food production is an ecosystem service that is affected by SOC concentration. Increased SOC concentration has been linked with a direct increase in food production, although this varies with land use, soil type, environmental conditions, and management practice (Barzegar et al., 2002; Lal, 2006; Zhang et al., 2012). Other ecosystem services affected by SOC are the regulating services of flood mitigation and water recharge. These services are related to the infiltration into, storage in, and transmission of water through the soil profile, and so the role of SOC in increasing soil water storage (Gupta and Larson, 1979; Hudson, 1994; Saxton and Rawls, 2006) and changing soil hydraulic conductivity (Vereecken et al., 1990; Saxton and Rawls, 2006) is important in determining the provision of these services. While increased SOC concentrations can positively influence ecosystem services deemed beneficial, increased SOC may also increase the environmental footprint of a given agro-ecosystem. The increase in organic N associated with an increase in SOC concentration (Hoyle et al., 2011; Murphy, 2015) influences the filtering-of-N ecosystem service, which refers to the capacity of soils to store and retain N. While this increase in soil N can be beneficial for crop growth, it can increase net losses of N from agro-ecosystems to groundwater aquifers or rivers (Beckwith et al., 1998; Knappe et al., 2002) and have negative environmental impacts such as eutrophication in downstream water bodies (Carpenter et al., 1998). Furthermore, increased SOC can also increase nitrous oxide emissions from agro-ecosystems (Qiu et al., 2009; Burgin et al., 2013). Quantifying both the positive and negative effects of SOC on the provision of ecosystem services is essential to provide a more comprehensive understanding of the impact of SOC on agro-ecosystems (e.g., Swinton et al., 2007; Zhang et al., 2007; Power, 2010). However, few studies have quantified the effect of SOC on multiple ecosystem services. Ghaley and Porter (2014) found that increased SOC concentration in a wheat production system increased food production and carbon sequestration for a site in Denmark, but did not quantify impacts on other ecosystem services such as nitrous oxide regulation. Balbi et al. (2015) found that reductions in manure application to cropping systems in Spain provided the ecosystem service benefit of reducing loss of N to the environment, at the expense of yield and carbon sequestration. Furthermore, as described above, increased SOC concentration affects both N cycling (the store of N in soil) and soil physical properties (e.g., soil water holding capacity), but the relative contribution of these soil attributes to ecosystem services provision has not previously been quantified for multiple ecosystem services across a range of agro-ecosystems. We hypothesised that the effect of increased SOC concentration on both N cycling and soil physical properties (e.g., via soil water holding capacity) would contribute both positively and negatively to the ecosystem services provided by wheat agro-ecosystems.
The aim of this study was thus to quantify the effect of specific soil attributes (N cycling and soil physical properties), as affected by increased SOC concentration, on proxies for ecosystem services (food provision, water recharge, flood mitigation, filtering of N, and nitrous oxide regulation) in a diverse range of wheat cropping agro-ecosystems from around the world. Wheat agro-ecosystems were chosen as the focus of the study because wheat is an agricultural crop of global significance, with a large proportion of the world's population relying on wheat as their main source of nutrition and energy (FAO, 2015).

Overview
The Agricultural Production Systems sIMulator (APSIM; v.7.7; www.apsim.info; Holzworth et al., 2014) was used to simulate soil carbon, N, and water dynamics, as well as proxies for ecosystem services related to these dynamics, in a wheat cropping system at seven sites around the world. Simulations of wheat production in agro-ecosystems that had been previously parameterised and validated were used, with slight modifications to make them applicable to this study. Proxies for ecosystem services likely to be influenced by N status and/or soil physical properties were selected from the available model outputs. At each site, simulations were undertaken at two SOC concentrations: the SOC concentration measured in the soils at the sites under long-term agriculture, and a higher SOC concentration reported in the literature or estimated from simulations of management practices that aim to increase SOC. Four scenarios were simulated to isolate the relative effect of SOC [(a) measured SOC and (b) increased SOC] on soil properties [(a) N cycling and (b) soil physical properties, including soil water holding capacity, bulk density, and saturated hydraulic conductivity]. While N cycling is dynamically linked to SOC in APSIM v7.7, there is no dynamic link between SOC and soil physical properties in this version of the model. A quantitative framework was therefore developed to relate the values of the main parameters affecting soil water in APSIM to SOC concentration.

Site Descriptions
The seven study sites covered a range of soil textures, SOC contents, water management regimes, and climates (Table 1). Six sites were dryland agro-ecosystems and one (New Delhi) was irrigated. Wheat crops had been grown at all sites, and sufficient information was available to allow configuration and, where necessary, testing of the model. Further details about the sites can be found in the publications relevant to each site (Table 1). These sites were selected because they represented a diverse range of wheat production agro-ecosystems and had publications either detailing previous successful modelling or providing sufficient information to allow model parameterisation and testing.

Ecosystem Services and Outcome Proxies
This study focussed on five proxies for ecosystem services (yield, drainage, infiltration, loss of N via deep drainage, and nitrous oxide emissions) for which SOC effects on N status and/or soil water holding capacity could be quantified (Table 2). For the filtering of N and nitrous oxide regulation services, APSIM could not provide direct measures of the services, so we used N loss via deep drainage (kg N ha⁻¹) and nitrous oxide emissions (kg N ha⁻¹); these are not measures of the services themselves but measures of outputs from the system, used here as proxies for the services (the mapping is summarised in the sketch below).
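For reference, the service-to-proxy mapping just described can be captured in a small lookup structure. The sketch below is our illustration in Python (which the study itself does not use; the analysis used APSIM and R), and the proxy names are descriptive labels rather than APSIM output variable names.

```python
# Ecosystem services and the model-output proxies used for them (Table 2).
# Names and units follow the text; they are labels, not APSIM identifiers.
ECOSYSTEM_SERVICE_PROXIES = {
    "food provision":           ("yield",                     "kg ha-1"),
    "water recharge":           ("drainage",                  "mm yr-1"),
    "flood mitigation":         ("infiltration",              "mm yr-1"),
    "filtering of N":           ("N loss via deep drainage",  "kg N ha-1"),
    "nitrous oxide regulation": ("nitrous oxide emissions",   "kg N ha-1"),
}

for service, (proxy, unit) in ECOSYSTEM_SERVICE_PROXIES.items():
    print(f"{service}: proxy = {proxy} ({unit})")
```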
In addition, to better understand the effect of increased SOC concentration on nitrous oxide emissions, we determined the number of days that soil water was above the Drained Upper Limit (DUL; equivalent to field capacity), as high soil moisture content (below saturation) facilitates denitrification.

Model Description
APSIM is a deterministic, daily time-step modelling framework capable of simulating plant, soil, climate, and management interactions (Holzworth et al., 2014). It includes modules for soil N and carbon dynamics (SoilN; Probert et al., 1998), soil water dynamics (SoilWat; Probert et al., 1998), surface organic matter (SurfaceOM; Probert et al., 1998), and a range of crops (e.g., Wheat; Wang et al., 2003). All modules are one-dimensional and driven by meteorological data. APSIM dynamically simulates changes in SOC and the resultant effect on N cycling (including soil mineral N dynamics), which in turn influences N supply to crops and N losses via deep drainage or denitrification. Carbon inputs to soils affect carbon flows between the carbon pools in APSIM, which in turn affect the corresponding N flows, calculated using the C:N ratio of the receiving N pool (a minimal numerical sketch of this bookkeeping is given below). This functionality is central to the SoilN module in APSIM (Holzworth et al., 2014). APSIM (v7.7) does not currently have the inbuilt capacity to dynamically simulate the effect of SOC on soil physical properties such as DUL, the Lower Limit (LL15), and bulk density. The simulations were therefore modified (Section "Representing the Effect of SOC on Soil Physical Properties in APSIM") so that the values of these parameters varied in response to changes in SOC.

General Model Parameterisation
The Brigalow, Liebe, Wageningen, Balcarce, and New Delhi sites were parameterised with the soil and crop parameter values and management practices previously used to simulate these sites (Table 1, Tables S1, S2, and Figure 1). For consistency, for the Wageningen, Balcarce, and New Delhi sites, the number and thickness of layers defined in the soil modules were adjusted from previous studies so that there were three 0.1 m deep soil layers in the top 0.3 m of the soil; these changes had no effect on key output variables. The Pendleton and Canterbury sites were parameterised (Table 1, Tables S1, S2, and Figure 1) using published information (Table 1; Supplementary Material Section 4). Management operations were specified to reflect common practice in each region, with a wheat cropping rotation followed by bare fallow simulated at all sites (Figure 1). To avoid long-term changes in soil model parameters during the simulations, SOC, mineral N, water, and surface residue values were reset annually to initial values. Reset dates were specified for each site to take into account the site and its agro-ecological conditions (Figure 1). Following the approach of Asseng et al. (2013), parameters for SOC, mineral N, water, and surface residue were reset to measured values of conditions at sowing for the Wageningen, Balcarce, and New Delhi sites. For the other sites, measurements of conditions at sowing were not available; thus, mineral N, water, and surface residue parameters were reset during the fallow, with sufficient time to allow soil, water, and surface residue dynamics to establish prior to sowing. The date of annual output of non-yield parameters was 31 December. The simulation time frame depended on the availability of reliable climate data (Table 3).
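The C-to-N bookkeeping noted above can be illustrated numerically. The sketch below is a simplified, hypothetical illustration of the principle (N moves with C according to the C:N ratio of the receiving pool), not APSIM's actual SoilN code; the pool C:N ratios, carbon-retention efficiency, and flow values are invented for the example.

```python
def net_n_mineralisation(c_flow, cn_source, cn_receiving, retained=0.4):
    """Net N mineralised (+) or immobilised (-) when carbon moves between
    pools. A fraction `retained` of the C flow is kept by the receiving
    pool (the remainder is respired as CO2); the receiving pool demands N
    at its C:N ratio, while N arrives with the C at the source pool's C:N
    ratio. All values are illustrative, not APSIM's calibration.
    """
    n_supplied = c_flow / cn_source             # kg N/ha arriving with the C
    n_demanded = retained * c_flow / cn_receiving  # kg N/ha needed by the pool
    return n_supplied - n_demanded

# e.g. 120 kg C/ha moving from a manure-like pool (C:N = 15) into a humic
# pool maintained at C:N = 10: 8.0 supplied - 4.8 demanded = 3.2 kg N/ha
# released as mineral N. A wider source C:N would flip the sign
# (immobilisation).
print(net_n_mineralisation(120.0, 15.0, 10.0))
```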
For the Brigalow and Liebe sites, daily climate data were obtained from the Australian Bureau of Meteorology (via the SILO database, https://www.longpaddock.qld.gov.au/silo/; Jeffrey et al., 2001) for the meteorological stations nearest to the sites. On-site measurements of climate data were available for the Canterbury and Pendleton sites, with missing data in-filled with data from nearby meteorological stations. Climate data for the Balcarce, New Delhi, and Wageningen sites are described by Asseng et al. (2013). (Simulation periods by site, per Table 3, were 1981-2010, 1963-2012, 1972-2014, 1963-2012, 1980-2010, 1930-2010, and 1980-2010; see Tables S1, S2 for additional information about site-specific parameter values.)

Scenarios
Four scenarios were simulated to determine the effect of N cycling and soil physical properties, as affected by increased SOC, on the ecosystem service proxies (Table 4); a sketch of the resulting factorial design is given at the end of this section. In the Control scenario, the soil was simulated with the measured SOC. In the second, the Nitrogen Cycling scenario, SOC was increased and that increase affected only N cycling; in this scenario the APSIM parameter for SOC was changed. In the third, the Soil Physical Properties scenario, increased SOC affected only soil physical properties and hence water supply to crops; in this scenario the APSIM parameters bulk density, LL15, DUL, saturation, saturated hydraulic conductivity, and wheat lower limit were changed. To overcome the separation between SOC and soil physical properties in APSIM v7.7, we determined the effect that increased SOC would have on soil physical properties using a method external to the model (Section "Representing the Effect of SOC on Soil Physical Properties in APSIM") and modified the relevant parameters in the model to reflect the higher SOC. In the fourth, the Combined Properties scenario, SOC affected both N cycling and soil physical properties. Each scenario was simulated with seven N fertiliser rates (0, 50, 100, 150, 200, 250, and 300 kg N ha⁻¹). While some of these N fertiliser levels would not be sensible to apply in particular wheat agro-ecosystems, all seven rates were simulated across all sites to capture the full response of the scenarios to N fertiliser application. At each site, simulations were undertaken using the measured SOC (0.0-0.3 m soil depth) concentrations in the soils, and with site-specific higher SOC concentrations reported in the literature or estimated from simulations of management practices that aim to increase SOC (Table 3). This approach to obtaining estimates of higher SOC concentration was used, as opposed to increasing SOC by an arbitrary amount, to account for the differing soil carbon sequestration and storage capacities of different agro-ecosystems due to variation in climate, soils, and past management (Luo et al., 2014). For the Canterbury site, the higher SOC value was based on SOC accumulation in field studies of different long-term crop management regimes (Francis et al., 1992; Francis and Knight, 1993). For the other sites, the higher SOC values were based on the results of long-term (1,000 years, using repeated cycling of the existing meteorological record) simulations of the sites with management designed to increase SOC (e.g., manure application or cropping intensification), following the approach of Luo et al. (2014). Details of these simulations are given in the Supplementary Material Section 3.
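The scenario design described above amounts to a 4 × 7 factorial per site. The following is a minimal sketch of how such a run matrix could be enumerated; it is our illustration only (the flag names are hypothetical, and APSIM is not actually driven this way in the study).

```python
from itertools import product

# The four scenarios, encoded by which parameter groups are switched to
# their increased-SOC values (flag names are ours, not APSIM's).
scenarios = {
    "Control":                  {"n_cycling": False, "physical": False},
    "Nitrogen Cycling":         {"n_cycling": True,  "physical": False},
    "Soil Physical Properties": {"n_cycling": False, "physical": True},
    "Combined Properties":      {"n_cycling": True,  "physical": True},
}
n_rates = [0, 50, 100, 150, 200, 250, 300]  # kg N/ha

# One simulation per scenario x fertiliser-rate combination.
runs = [
    {"scenario": name, "n_rate": rate, **flags}
    for (name, flags), rate in product(scenarios.items(), n_rates)
]
print(len(runs))  # 28 runs per site
```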
Representing the Effect of SOC on Soil Physical Properties in APSIM
In the SoilWat module in APSIM, the primary parameters governing soil water dynamics are the water contents at saturation, DUL, and LL15, and the saturated hydraulic conductivity (Probert et al., 1998). Bulk density is also implicated because of its relationship with saturation (Dalgliesh and Foale, 1998). Our aim was thus to develop a system of equations that made these parameters a function of SOC (Supplementary Material Section 2). We used two different general approaches to link the parameters to SOC. For DUL and LL15, we used published pedotransfer functions (PTFs) that predict these water contents from SOC (and, in some cases, other soil parameters). For bulk density, saturation, and saturated hydraulic conductivity we used more mechanistic approaches. There are numerous PTFs reported in the literature that link DUL and LL15 to SOC (Table S3). Each PTF reflects the location and number of soils upon which it was developed (Cichota et al., 2013). Selecting a single PTF, as has been done previously (e.g., Porter et al., 2010), risks a model framework that is relevant only for the soils on which that PTF was developed. In an effort to provide a more generally applicable framework, we used an "ensemble" of 12 PTFs to develop equations making DUL and LL15 dependent on SOC. Values of DUL and LL15 were predicted with each PTF over a range of SOC values, and a function (which we term an ensemble PTF) was then fitted across that range (Supplementary Material Section 2.1); this function reflects not the absolute DUL or LL15 but the change in DUL or LL15 per unit change in SOC. For bulk density, we used the approach of Adams (1973), which relates bulk density to the amount and density of soil organic matter in the soil (Supplementary Material Section 2.2). The water content at saturation was calculated from bulk density (Supplementary Material Section 2.3). To estimate saturated hydraulic conductivity, we used the semi-mechanistic function of Saxton and Rawls (2006), which relates saturated hydraulic conductivity to (1) the water held at low suctions within the larger pores that most effectively conduct water and (2) the slope of the soil moisture characteristic (Supplementary Material Section 2.4). This function is based on the water contents at saturation, DUL, and LL15, and saturated hydraulic conductivity was calculated from these water contents at a given SOC value. These parameters were modified in the Soil Physical Properties and Combined Properties scenarios, in which the plant available water capacity in the top 0.3 m increased by between 2.5 and 16.7 mm, depending on the level of SOC increase and the soil texture at the site (Table 3).

Statistical Analysis
An analysis of variance was undertaken with the RStudio statistical package (v0.99.465) to test the simulated ecosystem service proxies for significant differences between group (four scenarios and seven N fertiliser levels) means within a given site. Simulation years were used as replicates in the analysis. Where required, data were log transformed to meet the assumptions of normality and homogeneity of variance. The post hoc Tukey's honest significant difference (HSD) test was used to test the ecosystem service proxies for significant differences (p < 0.05) between the means for all scenario comparisons at a given fertiliser rate.
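For concreteness, the following sketch shows how bulk density and saturation might be made functions of SOC along the lines of the relationships cited above (Adams, 1973, for bulk density; saturation from total porosity). It is our illustration, not the study's code: the van Bemmelen factor (1.72) converting SOC to organic matter, the mineral-fraction bulk density of 1.45 g/cm³, the organic matter density of 0.224 g/cm³, the 2.65 g/cm³ particle density, and the entrapped-air allowance are all assumptions for the example; the study's exact ensemble-PTF forms are in its Supplementary Material and are not reproduced here.

```python
def bulk_density(soc_pct, mineral_bd=1.45, om_density=0.224):
    """Adams (1973)-style bulk density (g/cm^3) from organic matter content.

    soc_pct: soil organic carbon (% by mass), converted to organic matter
    with the van Bemmelen factor (1.72). mineral_bd is the bulk density of
    the mineral fraction (site-specific; 1.45 is a placeholder) and
    om_density the bulk density of organic matter.
    """
    om_pct = 1.72 * soc_pct
    return 100.0 / (om_pct / om_density + (100.0 - om_pct) / mineral_bd)

def saturation(bd, particle_density=2.65, air_fraction=0.05):
    """Volumetric water content at saturation: total porosity (1 - BD/PD)
    less a small allowance for entrapped air (assumed here)."""
    return (1.0 - bd / particle_density) - air_fraction

# Higher SOC lowers bulk density and raises porosity/saturation.
for soc in (1.0, 1.5, 2.0):
    bd = bulk_density(soc)
    print(f"SOC {soc:.1f}%: BD {bd:.2f} g/cm^3, SAT {saturation(bd):.3f}")
```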
Food Provision Ecosystem Service Quantified Using Yield as a Proxy
Simulated mean wheat yield for the Control scenario varied between 464 and 5,783 kg ha⁻¹ depending on site and N fertiliser rate (Figures 2A-G). At all sites there was a significant (p < 0.05) effect of N fertiliser rate and of scenario, except at the Balcarce site, where the effect of scenario was significant at p = 0.06. There was a significant interaction between scenario and N fertiliser rate at the New Delhi, Pendleton, and Liebe sites (p < 0.05). Examples of the range of yield for the New Delhi and Pendleton sites are shown in Figures 3A,B. The response of yield to the scenarios and N fertiliser rates for the Pendleton site is generally representative of the response at the other dryland sites. In the Nitrogen Cycling scenario, higher SOC concentration significantly increased simulated wheat yields at low N fertiliser rates (i.e., 0 and 50 kg N ha⁻¹) at all sites (Figures 2A-G). However, with the exception of the Wageningen site (Figure 2G), at high N fertiliser rates (i.e., 200, 250, and 300 kg N ha⁻¹) higher SOC concentration had no significant effect on yields at any site (Figures 2A-G). For example, for the Pendleton site simulation with 0 kg N ha⁻¹, the mean yield in the Nitrogen Cycling scenario was 1,374 kg ha⁻¹ higher than the mean yield in the Control scenario (1,391 kg ha⁻¹), whereas with 300 kg N ha⁻¹ there was no difference in simulated yield (Figure 2F). In the Soil Physical Properties scenario, the effect on simulated yields was much smaller than for the Nitrogen Cycling scenario (Figures 2A-G). At low N fertiliser rates (i.e., 0 and 50 kg N ha⁻¹), the yields in the Soil Physical Properties scenario were generally either similar to or lower (although not significantly) than those in the Control scenario. The exceptions were the Pendleton site, where yields were significantly lower than the Control at 0 kg N ha⁻¹ (Figure 3B), and the New Delhi site, where yields were significantly higher at 50 kg N ha⁻¹ (Figure 3A). At high N fertiliser rates (i.e., 200, 250, and 300 kg N ha⁻¹), yields were slightly higher (although not significantly) than the Control at all sites except Wageningen. For example, for the Brigalow site with 0 kg N ha⁻¹, the mean yield in the Soil Physical Properties scenario was 95 kg ha⁻¹ lower than the mean yield in the Control scenario, whereas with 300 kg N ha⁻¹ it was 40 kg ha⁻¹ higher (Figure 2B). In the Combined Properties scenario, at low N fertiliser rates (i.e., 0 and 50 kg N ha⁻¹), higher SOC concentration significantly increased simulated wheat yields at all sites, to a similar degree as in the Nitrogen Cycling scenario (Figures 2A-G).

FIGURE 3 | Boxplots of simulated yield and nitrous oxide emissions for the Control and the Nitrogen Cycling, Soil Physical Properties, and Combined Properties scenarios, given increased soil organic carbon, for the New Delhi (irrigated) and Pendleton (dryland) sites and seven nitrogen fertiliser rates from 0 to 300 kg N ha⁻¹. The data displayed represent ecosystem service proxies simulated for 30 and 80 years, respectively. Boxes display the 25th and 75th quantiles, the line in the box indicates the median, and the whiskers extend from the minimum to the maximum data value. (A,B) display yield and (C,D) display nitrous oxide emissions.
The exception was the Brigalow site with 50 kg N ha⁻¹, where there was no significant effect of the scenario. However, at high N fertiliser rates, simulated yields were higher (although not significantly) than the Control at all sites. The magnitude of the increases was similar to those in the Soil Physical Properties scenario, except at the Wageningen site.

Water Recharge Ecosystem Service Quantified Using Drainage as a Proxy
Simulated mean drainage for the Control scenario was between 9 and 485 mm yr⁻¹ depending on site and N fertiliser rate (Figures 2H-N). For all scenarios, drainage was lower at the dryland sites (mean drainage between 8 and 309 mm yr⁻¹) than at the irrigated New Delhi site (mean drainage between 283 and 485 mm yr⁻¹). There was a significant effect (p < 0.05) of N fertiliser rate on drainage at the Balcarce, Canterbury, New Delhi, and Wageningen sites, but not at the Pendleton site (mean drainage at the Brigalow and Liebe sites was very small, between 8 and 30 mm yr⁻¹, and is not considered further here). While increased SOC concentration tended to decrease drainage at most sites, this was not significantly different from the Control for any combination of site, N fertiliser rate, and scenario (Figures 2H-N). Simulated mean drainage was between 0.03 and 60 mm yr⁻¹ lower than the Control for the Nitrogen Cycling scenario, between 7 mm yr⁻¹ higher and 69 mm yr⁻¹ lower for the Soil Physical Properties scenario, and between 1 mm yr⁻¹ higher and 111 mm yr⁻¹ lower for the Combined Properties scenario (Figures 2H-N). For the Combined Properties scenario, the largest decrease in drainage occurred at the irrigated New Delhi site, where mean drainage was between 73 and 111 mm yr⁻¹ lower than the Control (Figure 2L), whereas for the dryland sites the greatest decrease, between 17 and 22 mm yr⁻¹ below the Control, occurred at the Wageningen site (Figure 2N).

Flood Mitigation Ecosystem Service Quantified Using Infiltration as a Proxy
Simulated mean infiltration for the Control scenario varied between 314 and 833 mm yr⁻¹ depending on site and N fertiliser rate (Figures 2O-U). There was no significant effect of N fertiliser rate or scenario on annual infiltration at any site. Depending on site, scenario, and N fertiliser rate, mean infiltration was between 9 mm yr⁻¹ higher and 35 mm yr⁻¹ lower than the Control (Figures 2O-U). Increased SOC had the greatest effect on infiltration at the irrigated New Delhi site, where mean infiltration was between 3 and 35 mm yr⁻¹ lower than the Control (Figure 2S).

Filtering of N Ecosystem Service Quantified Using Loss of N via Deep Drainage as a Proxy
Simulated mean nitrate losses via deep drainage for the Control scenario varied between 0.2 and 96 kg ha⁻¹ yr⁻¹ depending on site and N fertiliser rate (Figures 2V-AB). There was a significant effect of N fertiliser rate on annual nitrate losses via deep drainage (p < 0.05) at the Balcarce, Canterbury, New Delhi, and Pendleton sites (Figures 2V,X,Z,AA). Simulated mean nitrate losses via deep drainage were between 1 kg ha⁻¹ yr⁻¹ lower and 27 kg ha⁻¹ yr⁻¹ higher than the Control for the Nitrogen Cycling scenario, between 0.1 kg ha⁻¹ yr⁻¹ higher and 24 kg ha⁻¹ yr⁻¹ lower for the Soil Physical Properties scenario, and between 2 kg ha⁻¹ yr⁻¹ lower and 21 kg ha⁻¹ yr⁻¹ higher for the Combined Properties scenario (Figures 2V-AB).
While the effect of the scenarios on annual nitrate losses via deep drainage varied, it was statistically significant only for the New Delhi site (Figure 2Z). At the New Delhi site, higher SOC concentration in the Nitrogen Cycling scenario significantly increased nitrate losses via deep drainage at the lower N fertiliser rates of 0, 50, and 100 kg N ha⁻¹ (Figure 2Z). In the Soil Physical Properties scenario, higher SOC concentration decreased (although not significantly) annual nitrate losses via deep drainage at all N fertiliser rates. In the Combined Properties scenario, higher SOC concentration significantly increased annual nitrate losses via deep drainage at N fertiliser rates of 0 and 50 kg N ha⁻¹.

Nitrous Oxide Regulation Ecosystem Service Quantified Using Nitrous Oxide Emissions as a Proxy
Simulated mean nitrous oxide emissions for the Control scenario varied between 0.04 and 6.7 kg N₂O-N ha⁻¹ yr⁻¹ depending on site and N fertiliser rate (Figures 2AC-AI). At all sites there was a significant effect of N fertiliser rate and scenario (p < 0.05) on nitrous oxide emissions. Increased N fertiliser rate tended to increase nitrous oxide emissions, with the exception of the Balcarce and New Delhi sites, where nitrous oxide emissions were lower at the 50 kg N ha⁻¹ fertiliser rate than at 0 kg N ha⁻¹. This counter-intuitive result can occur because low productivity due to N stress at the 0 kg N ha⁻¹ rate can leave more N in the soil available for environmental loss than at the 50 kg N ha⁻¹ rate, where yields are higher and a greater amount of soil N is therefore used. At the Liebe, New Delhi, and Pendleton sites, there was an interaction between scenario and N fertiliser rate (p < 0.05). Examples of the range of nitrous oxide emissions for the New Delhi and Pendleton sites are shown in Figures 3C,D. In the Nitrogen Cycling scenario, simulated mean annual nitrous oxide emissions were between 0.1 and 2.8 kg N₂O-N ha⁻¹ yr⁻¹ higher than the Control, depending on site and N fertiliser rate (Figures 2AC-AI). Higher SOC concentration significantly increased simulated nitrous oxide emissions across all N fertiliser rates at all sites, with the exceptions of the Wageningen site, where nitrous oxide emissions were significantly higher only at the 0 kg N ha⁻¹ fertiliser rate, and the Brigalow site, where nitrous oxide emissions were significantly higher only at N fertiliser rates between 0 and 200 kg N ha⁻¹. In the Soil Physical Properties scenario, simulated mean nitrous oxide emissions were between 0.01 and 0.8 kg N₂O-N ha⁻¹ yr⁻¹ lower than the Control (Figures 2AC-AI). However, Liebe and New Delhi were the only sites where nitrous oxide emissions were significantly affected by this scenario. While the simulated nitrous oxide emissions were significantly lower than the Control at the Liebe site, the values were extremely small (mean emissions between 0.04 and 0.20 kg N₂O-N ha⁻¹ yr⁻¹) and are not considered further here. For the New Delhi site, simulated nitrous oxide emissions were significantly lower than the Control at fertiliser rates between 100 and 300 kg N ha⁻¹ (Figure 3C). For the Combined Properties scenario, simulated mean nitrous oxide emissions were between 0.1 and 2.8 kg N₂O-N ha⁻¹ yr⁻¹ higher than the Control (Figures 2AC-AI).
Nitrous oxide emissions were significantly higher across all N fertiliser rates at the Canterbury, Liebe, New Delhi, and Pendleton sites (Figures 2AE,AF,AG,AH), except at the Liebe site with 250 and 300 kg N ha⁻¹ fertiliser, where there was no significant effect of this scenario. For the Balcarce site, nitrous oxide emissions were significantly higher at N fertiliser rates of 150 kg N ha⁻¹ and below (Figure 2AC). For the Brigalow site, nitrous oxide emissions were significantly higher at N fertiliser rates of 100 kg N ha⁻¹ and below (Figure 2AD). For the Control scenario, the mean number of days per year that soil water exceeded DUL was between 7 and 149, depending on site (Table 5; averages are for simulations run for between 30 and 80 years, depending on the site, for scenarios with seven N fertiliser rates). The scenarios reduced the number of days that soil water exceeded DUL by between 0 and 16 days yr⁻¹, depending on scenario and site.

DISCUSSION
We disaggregated the effects of SOC on soil N cycling and soil physical properties to gain greater insight into the mechanisms underlying the effect of increased SOC concentration on ecosystem services. We found that increased SOC concentration in wheat production agro-ecosystems provided limited increases in ecosystem services. It is expected that increased SOC will increase the N supply that contributes to grain yield and food provision (Aggarwal et al., 1997; Wani et al., 2003; Lal, 2006). Our results were consistent with that expectation, showing that with increased SOC concentration, N cycling was the major contributor to increased food provision (i.e., yields) when N was limiting (Figures 2A-G). However, when N was not limiting, as would often be the case in fertilised wheat production agro-ecosystems, N cycling from the increased SOC concentration provided no significant productivity benefit. This negligible effect of N cycling on yields at higher fertiliser rates was expected, as N fertiliser dominated the N supply at high fertiliser rates. When N limited crop growth, i.e., at low N fertiliser rates, the N supply to the crop was dominated by N derived from mineralisation of organic N, and our simulations showed the benefits of increased SOC concentration for crop production (Figures 2A-G). Importantly, N supply to crops from SOC is derived from the decomposition of soil organic matter, which can be considered "consumption" of the SOC natural capital as SOC stocks run down. In this study, SOC concentration was reset annually in the simulations. This was done to avoid the confounding effects of long-term changes (run down) in SOC that would have eventuated in many of the wheat cropping systems simulated (e.g., at low rates of applied N fertiliser, in the absence of organic matter applications). An artefact of our methodology is that the average effect of SOC on crop N supply and production simulated in this study will be greater than generally seen in agro-ecosystems, where rundown would commonly occur (Dalal and Chan, 2001; Lal, 2004). The "consumption" of the SOC natural capital means that, to derive the N-supply benefit in agro-ecosystems, SOC levels would need to be maintained through the use of management practices that increase SOC, such as minimum tillage, stubble retention, and/or additions of organic matter (Paustian et al., 1992; Rasmussen and Parton, 1994; Luo et al., 2014; Smith et al., 2016).
Even so, maintaining high SOC concentrations similar to those used in this study may be challenging in wheat agro-ecosystems, because the SOC inputs from primary production may not be sufficient to sustain those SOC levels. When all sites had the increased level of SOC concentration, we predicted that soil water holding capacity (0.0-0.3 m) would increase by less than 10 mm at six of the seven sites and by 16 mm at the New Delhi site (Table 3). The findings from this study indicate that, with increased SOC concentration, soil physical properties and their effect on soil water dynamics caused no significant change in yield productivity (the exceptions were the Pendleton site, where yields in the Soil Physical Properties scenario were significantly lower than the Control at 0 kg N ha⁻¹, Figure 3B, and the New Delhi site, where yields were significantly higher at 50 kg N ha⁻¹, Figure 3A). These findings contrast with the widely held view that increased plant available water capacity from SOC increases the productivity of farming systems (Duiker and Lal, 1999; Díaz-Zorita et al., 1999; Lal, 2006; Zhu et al., 2010; Hoyle et al., 2011; Barton et al., 2016; Williams et al., 2016). Many of these studies suggest (rather than provide empirical evidence) that increased water holding capacity from increased SOC plays a role in increasing productivity (Duiker and Lal, 1999; Zhu et al., 2010; Hoyle et al., 2011; Barton et al., 2016). However, some studies do provide evidence in support of this view: Williams et al. (2016) showed that increased water holding capacity from increased SOC reduced the temporal variability of maize yields, and Díaz-Zorita et al. (1999) found wheat yields were positively correlated with soil water retention in dry years. The contrasting conclusions between this study and those of Díaz-Zorita et al. (1999) and Williams et al. (2016) may reflect differences in scale or experimental design, and highlight the importance of multi-regional studies. Furthermore, it may be that the change in plant available water in this study was too small to show the benefits demonstrated in other studies. Further research is required to better understand the contribution of increased plant available water from increased SOC to crop productivity. The increased level of SOC concentration increased the environmental footprint of the agro-ecosystems. With increased SOC concentration, N cycling was the major contributor to nitrous oxide emissions (Figures 2AC-AI). Our results are consistent with other studies that have found nitrous oxide emissions increase with increased SOC (Li et al., 2005; Qiu et al., 2009; Ciais et al., 2010). While the increase in nitrous oxide emissions was relatively small, nitrous oxide is a potent greenhouse gas, and the extent of broadacre dryland cropping agro-ecosystems means small increases per unit area can lead to a considerable increase in nitrous oxide emissions for the agricultural sector globally. Conversely, there was no significant effect of soil physical properties, as affected by increased SOC concentration, on nitrous oxide emissions. High soil water contents contribute to increased denitrification, the process responsible for a large proportion of nitrous oxide emissions from soils (Weier et al., 1993; Smith et al., 1998).
While increased SOC concentration increased soil water holding capacity compared with the Control (Table 3), it decreased the number of days that soil water content exceeded the threshold for denitrification in the simulations (Table 5). Thus, the increased nitrous oxide emissions simulated with increased SOC resulted from the effect of SOC on N cycling. There was no significant effect of increased SOC on N losses via deep drainage at the dryland sites; mean annual drainage was up to 30 times lower at the dryland sites than at the New Delhi irrigated site (Figures 2H-N). At the New Delhi site, N cycling was the major contributor to increased loss of nitrate via deep drainage (Figure 2Z); conversely, there was no significant effect of soil physical properties on loss of nitrate via deep drainage at this site. This supports studies that have found that high loss of N via deep drainage is associated with N inputs (Di and Cameron, 2002). The quantity of nitrate lost via deep drainage (the filtering-of-N ecosystem service) is intimately linked with the quantity of water drained from the soil profile (the water recharge ecosystem service), which highlights the links between these services: on the one hand, water recharge is a desirable ecosystem service, but on the other, it may provide the catalyst for N loss via deep drainage, which can increase the environmental footprint of an agro-ecosystem. While this study considered the effect of increased SOC on N cycling and soil physical properties, other soil processes, such as cation exchange capacity and biological functions, are also influenced by SOC. Further research could quantify the effect of SOC on processes such as structural stability, cation exchange capacity, and biological processes. Our study focussed on wheat agro-ecosystems and the increases in SOC that might be achievable in those cropping systems. In other agricultural systems, larger increases in soil carbon are achievable; for example, in grazed pastoral systems considerable increases in SOC (Mg SOC ha⁻¹) are possible in a relatively short time (Soussana et al., 2010; Machmuller et al., 2015) and are likely to have greater impacts on ecosystem service provision. Furthermore, climate change is likely to impact the provision of ecosystem services from soils (Orwin et al., 2015). A similar approach to the one taken in this study could be used to investigate the effects of SOC on a wider range of agro-ecosystems under different climate change scenarios; such a study may enrich our understanding of the effects of SOC on ecosystem service provision. The results of this study indicate that few of the ecosystem services provided by wheat agro-ecosystems are likely to be significantly affected by increased SOC. Furthermore, this study found that both the benefits and the disadvantages to agro-ecosystems are likely to flow from the effect of SOC on N cycling, rather than from any effects of SOC on soil physical properties. Food provision may be significantly increased by increased SOC when N supply is limiting, which highlights that the ratio of contributions of natural versus added fertility to total yield is a major sustainability indicator. However, when N supply is not limiting, no significant benefit is likely to be derived. Increasing SOC is also likely to produce outcomes that increase the environmental footprint of agro-ecosystems via two processes identified in this study:
(a) Increased SOC led to significantly higher nitrous oxide emissions (nitrous oxide regulation ecosystem service) due to the effect of SOC on N cycling, although the extent of the increase was relatively small. (b) Increased SOC led to higher N losses via deep drainage (filtering of N ecosystem service) at low N fertiliser rates in irrigated agro-ecosystems. Our results showed little change in annual infiltration or drainage, and therefore the flood mitigation and water recharge ecosystem services are unlikely to be affected by increased SOC in dryland and irrigated wheat production agro-ecosystems.
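As a back-of-the-envelope companion to the water holding capacity figures quoted in this discussion (<10 mm at six sites, 16 mm at New Delhi), plant available water capacity over 0-0.3 m can be computed by summing the difference between the upper and lower soil water limits across layers. The layer values below are invented for illustration, using APSIM-style DUL/LL15 notation as an assumption; the study's actual limits are in Table 3:

```python
# Plant available water capacity (PAWC, mm) over 0-0.3 m:
# sum of (upper limit - lower limit) x layer thickness.
layers_m = [0.1, 0.1, 0.1]   # three 0.1 m layers down to 0.3 m
dul = [0.32, 0.30, 0.29]     # drained upper limit, m3/m3 (hypothetical)
ll15 = [0.15, 0.15, 0.16]    # lower limit, m3/m3 (hypothetical)

pawc_mm = sum(1000 * t * (u - l) for t, u, l in zip(layers_m, dul, ll15))
print(f"PAWC 0-0.3 m: {pawc_mm:.0f} mm")
```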
Social participation reduces depressive symptoms among older adults: An 18-year longitudinal analysis in Taiwan

Background Relatively little empirical attention has focused on the association between social participation and depressive symptoms amongst older adults in Asian nations, where persons over the age of 65 represent a rapidly growing segment of the population. This study explores the dynamic relationship between participation in social activities and trajectories of depressive symptomatology among older Taiwanese adults surveyed over 18 years. Methods Data are from a nationally representative sample of 1,388 adults aged 60-64 first surveyed in 1989 and followed over an 18-year time period for a total of six waves. Individual involvement in social activities was categorized into continuous participation, ceased participation before age 70, initiating participation in older adulthood, never participated, and dropped out before age 70. Two domains of depressive symptoms--negative affect and lack of positive affect--were measured using a 10-item version of the Center for Epidemiologic Studies-Depression Scale. Results Analyses using growth curve modeling showed that continuously participating or initiating participation in social activities in later life is significantly associated with fewer depressive symptoms among older Taiwanese adults, even after controlling for the confounding effects of aging, individual demographic differences, and health status. Conclusions These findings suggest that maintaining or initiating social participation in later life benefits the mental health of older adults. Facilitating social activities among older adults is a promising direction for programs intended to promote mental health and successful aging among older adults in Taiwan.

Background
Depression is one of the most common chronic mental health conditions among older adults in Chinese communities [1,2]. Symptoms of depression experienced in later life have serious implications for the health and functioning of older persons, as emotional distress is consistently associated with higher levels of cognitive [3,4] and functional impairment [5,6], and an increased risk of physical illnesses such as heart disease and stroke. Depressive symptoms also place older adults at increased risk for suicide [7][8][9][10], which can devastate families and communities. Growing evidence suggests that involvement in social activities improves the mental health of older adults. Research has demonstrated that socially active older adults have better health outcomes than their inactive counterparts, such as lower mortality rates [11,12], better physical functioning [13], and higher cognitive functioning [14,15]. Work in this area suggests that participation in social activities provides older adults with social support from informal social networks (i.e., relationships with other social group members and peers), which in turn benefits their emotional functioning [16,17]. Prior studies examining the relationship between social participation and mental health also suggest that various forms of social participation have psychological advantages for older adults.
For example, religious participation [18] and volunteer work [19] increased individual social resources (measured by meeting attendance and informal social interaction), which, in turn, lowered depressive symptom levels. Research by Li and Ferraro [20], Musick and Wilson (2003) [19], and Thoits and Hewitt (2001) [21] used longitudinal data from the Americans' Changing Lives study to explore the relationship between volunteering and depressive symptoms among older adults. These analyses each suggested that older adults psychologically benefited from sustained volunteering. Sugihara et al. (2008) [22] examined the role of social participation in mitigating psychological distress in a nationally representative sample of Japanese adults aged 55 to 64. They found that volunteer work was significantly associated with fewer depressive symptoms for both males and females. Although these studies involved analysis of longitudinal data and identified a relationship between a particular type of social participation and mental health, one of the outstanding questions raised by prior investigations was how the change and duration of social participation affected the development of psychological distress in later life. That is, variations in exposure to a protective resource (such as the continuity, initiation, or cessation of social participation) may differentially increase or decrease the likelihood of experiencing psychological distress among older adults [23]. This study seeks to increase our understanding of the complex association between social participation and depressive symptoms by analyzing this relationship from a lifecourse perspective. Our analysis uses a nationally representative sample of older adults from Taiwan, as Asian nations are less frequently the focus of empirical investigations of protective effects in mental health outcomes.

Methods
Sample and Data Collection
This study uses data collected from the Taiwan Longitudinal Study on Aging (TLSA), a nationally representative survey designed to study the impact of socioeconomic development on the physical and emotional well-being of the older adult population in Taiwan. This prospective, longitudinal study involved data collection from 1989 to 2007 for a total of six waves. Detailed information on the study methodology and TLSA data collection is provided by the Bureau of Health Promotion at the Department of Health in Taiwan (http://www.bhp.doh.gov.tw). The sample was derived using a multi-stage sampling framework. First, a total of 56 neighborhoods (defined as blocks or lins) were selected from nationwide administrative units. Second, individual persons aged 60 and older were selected within blocks, yielding the total original sample of N = 4,049. Participants were administered structured questionnaires in their homes by trained interviewers at baseline (1989), with follow-up surveys administered in 1993, 1996, 1999, 2003 and 2007. The response rates were 92%, 91%, 89%, 90%, 91% and 91% for each wave of data collection [24]. The analytic sample used for this study was restricted to those participants in the 60-64 year old age group at baseline with complete data on the short form (10 items) of the Center for Epidemiologic Studies-Depression scale from at least one follow-up. Our selection of this age group was based on three substantive considerations. First, this age group was not officially retired at baseline, which allowed us to examine changes in the relationship between participation and distress both before and after retirement.
Second, the life expectancy in 1989 in Taiwan was 71 years for males and 76 years for females [25]. As this study was based on the lifecourse perspective, we chose to examine the age group that was likely to survive for a large portion of the 18-year study. Third, the age of 70 also provided a point of reference prior to the period of the lifecourse when we expected the risk of morbidity and mortality in our sample to increase. This selection yielded a final analytic sample of 1,388 older adults aged 60-64 years at baseline, 1,174 in 1993, 1,047 in 1996, 960 in 1999, 800 in 2003, and 601 in 2007. Study attrition over time was experienced in part due to the longitudinal study design and the older adult sample. We assessed differences in social participation and individual characteristics between continuing participants and those lost to follow-up (results not tabled). The analyses indicated that continuing participants at the sixth wave were significantly more likely to have initiated participation in social activities in later life (OR = 1.43, p < 0.05) and had fewer physical limitations (OR = 0.61, p < 0.001) in comparison to the group lost to follow-up. The decline in sample size was primarily due to death.

Dependent variable
Depressive symptomatology was measured by a 10-item version of the Center for Epidemiologic Studies-Depression (CES-D) scale at each wave of the TLSA survey. The original 20-item CES-D [26] has been widely used in survey research to assess emotional distress in the general population, and has been demonstrated to have good validity and reliability when used with Asian populations [27][28][29][30]. Each of the 10 items was rated on a four-point scale (scored 0-3), indicating the frequency of experiencing each symptom in the past week. Responses were reverse-scored when necessary such that higher scores represented greater levels of symptom frequency. Based on prior analyses using this sample [30][31][32], two factors were identified from the 10 CES-D items: a negative affect domain and a lack of positive affect domain. The CES-D items adopted in the TLSA across waves are listed in the Appendix. More detailed information on the psychometric properties of these two domains can be found in Chiao et al. (2009) [29]. For the analysis, the items were summed within the two domains. The total score on the negative affect domain ranged from 0 to 24 with good internal consistency and reliability (α ranging from 0.79-0.87 across waves). The total score on the lack of positive affect domain ranged from 0 to 6 with an internal consistency reliability coefficient α of 0.79-0.95 across waves.

Explanatory variable
Social participation was operationalized using items that measured social engagement. Participants reported whether they participated in group activities through any one of six types of social organizations: hobby-related clubs, religious or church groups, political groups, retired or elderly-related associations, or volunteer groups. For each type of social activity, older adults were further asked how long they had participated in an organization at each wave. As the objective of this study was to assess the potential dynamics between participation in social activities and distress during older adulthood over time, we used both participation in at least one activity and participation duration to construct a measure of participation continuity by age 70, using survey information from waves 1 to 3.
The final social participation variable consists of the following five categories: (1) continuous social participation (from baseline to age 70); (2) ceased participation in older adulthood (between baseline and age 70); (3) initiating participation in older adulthood (after baseline); (4) never participated; and (5) dropped out before age 70.

Covariates
Age was included in all growth curve analyses as the time-varying covariate to assess change in depressive symptoms over the 18-year period. The relationship between aging and depressive symptoms has been reported as linear with a minor curvilinear effect [31]. Prior research has documented a robust association between physical limitations, chronic illness, and the mental health of older adults [33]. Therefore, indicators of physical health were assessed as covariates, measured by the presence or absence of physical disability and chronic illness, respectively. Physical disability was assessed from eight ADL and IADL items. The disability items assessed a person's difficulty with crouching, standing, stooping, lifting heavy objects, walking, climbing stairs, grasping small objects with their fingers, and taking a bus alone. We dichotomized disability status into those with no functional problems (coded "0") and those with at least one limitation (coded "1"), following an approach used frequently in prior studies [34][35][36][37]. Chronic illness was assessed as a dichotomous measure (0 = no; 1 = yes) indicating whether respondents had a medical diagnosis of at least one of the following five health problems: hypertension, diabetes, stroke, respiratory disease, and cardiovascular disease. Both disability and chronic illness status were based on health status reported at baseline. The socio-demographic variables included gender and ethnicity. Ethnicity was categorized as Fukianese, Hakka, and Mainlander (i.e., individuals who fled the communist government of the People's Republic of China). Socioeconomic status (SES) was assessed by measures of education, employment status, and home ownership. The presence of family members in the immediate environment can be a source of both stress and social support for older adults [38,39] and is also an important feature of Taiwanese and Asian societies. Therefore, family living arrangement was included in the analysis and was divided into two categories: living alone and living with extended family members. All socio-demographic indicators were based on measures obtained at baseline.

Analytic Strategy
All analyses were conducted using STATA [40]. Sampling weights and statistical procedures with robust standard errors were used throughout the analyses to correct for any potential biases. Bivariate tests (i.e., ANOVA and Bonferroni post hoc tests) were used to assess differences in the distribution of individual characteristics by type of social participation. Growth models, calculated using the gllamm commands in STATA, were used to model depressive trajectories over the six waves of the study separately for the negative affect domain and the lack of positive affect domain. Two-level growth models were specified with individuals at level 2 and age at level 1 [41]. We used a sequential modeling strategy for the multivariate portion of the analysis, progressively adjusting our growth curve models to assess the relationship between social participation and change in depressive symptoms over time.
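For orientation, the two-level growth model described here (and specified model-by-model in the next paragraph) can be sketched in a few lines. The study itself used gllamm in STATA; the Python analogue below is a simplified stand-in that omits the sampling weights and robust standard errors, and the file and column names (tlsa_long.csv, cesd_neg, age_c, participation, pid) are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per person-wave, with depressive symptom score,
# age centred at baseline (age_c), participation category, and person id.
df = pd.read_csv("tlsa_long.csv")
df["age_c2"] = df["age_c"] ** 2  # quadratic term for the curvilinear aging effect

# Two-level growth model: individuals at level 2 (random intercept and
# random slope on age), repeated measures at level 1.
model = smf.mixedlm(
    "cesd_neg ~ age_c + age_c2 + C(participation)",
    data=df,
    groups=df["pid"],
    re_formula="~age_c",
)
print(model.fit().summary())
```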
The first model included individual social participation and age to examine whether there was significant variability in depressive symptoms over time for the different categories of social participation. As suggested by prior research [31], the longitudinal relationship between aging and depressive symptoms is non-linear for older Taiwanese adults, and therefore we included a quadratic term in all growth curve models. The second model then added individual baseline controls (including health conditions, socio-demographic characteristics, and socioeconomic status) to the first model in order to assess the relative effect of social participation after adjusting for individual controls. As suggested by previous studies [22], an interaction term for individual social participation by gender was tested in all the models to explore any possible gender differences in depressive symptoms across the different participation categories.

Results
Table 1 summarizes the distributions of individual characteristics stratified by categories of social participation. Approximately two-thirds of the sample reported no physical disability at baseline. Bivariate tests indicated that older adults who had better physical health status were more likely to be represented among the group that reported continuous social participation before age 70 (by wave 3) (71%), in contrast with 59% of the group who reported never participating and 57% of the group lost to follow-up. A relatively small portion of the sample (17%) reported experiencing chronic illness at baseline; unsurprisingly, persons with chronic illness were disproportionately represented among the group lost to follow-up midway through the TLSA data collection (i.e., by the age of 70). Due to a large number of male migrants from China in 1949, males comprised the majority of the sample (63.33%), and as expected, males and females exhibited significantly different patterns of participation in later life. We also observed significant differences in patterns of social participation by ethnicity, education level, work status, homeownership, and family living arrangement, as reported in Table 1. The continuous group reported lower levels of depressive symptoms on both domains in comparison to the never participated group and the dropped-out group. Table 2 presents the results of the growth curve analysis for the negative affect domain. Model 1 shows a significant effect of social participation on negative affect, independent of aging. In comparison to those who never participated in social activities, persons who continued or initiated their social participation as older adults had a significantly lower level of depressive symptoms (β01 = -1.41, p < 0.001 and β02 = -1.13, p < 0.001, respectively) over time. We also observed a modest negative association between participation and negative affect over time (β03 = -0.71, p < 0.05) for those who ceased participation before age 70, suggesting that even unsustained social activity (as compared to no involvement) in older adulthood is psychologically beneficial in the long term. As for the aging effect, levels of depressive symptoms on the negative affect domain increased with age (mean linear growth rate = 0.18, p < 0.001), with the acceleration in the likelihood of symptoms diminishing slightly over time (mean quadratic growth rate = -0.004, p < 0.01). In order to assess the effects of confounding factors, the individual covariates were included in subsequent models.
Although the coefficients for those who continued or initiated their social participation remained significant, the magnitude of these associations was appreciably reduced after adjusting for individual differences in health status (i.e., presence of disability or chronic illness), socio-demographic characteristics, and socioeconomic status. The observed relationship between participation and negative affect for the group who ceased participation was rendered non-significant when health and socio-demographic differences were taken into account. Experiencing physical disability increased the level of negative affect over time. Conversely, the level of negative affect decreased with higher educational attainment, with full- or part-time employment (versus not employed), with home ownership (versus not), and with living with family members (versus living alone). Contingencies between gender and social participation were found to be non-significant (model not tabled), suggesting that the protective effect of social participation on negative affect did not differ between men and women. Table 3 presents the results for the lack of positive affect domain. Model 1 showed that the likelihood of a lack of positive affect was decreased by either continued or initiated social participation in older adulthood (β01 = -0.74, p < 0.001 and β02 = -0.48, p < 0.001, respectively), as compared to never participating. We also observed a modest negative relationship among those individuals who ceased participation in older adulthood; as before, this suggests that some participation in social activities in later life is better than none at all. The overall relationship between social participation and lack of positive affect persisted even after adjusting for individual differences (Model 2). As with the negative affect domain, the presence of a disability increased the tendency to experience a lack of positive affect over time and slightly attenuated some of the psychological benefit of participation on long-term depressive symptomatology. Our results demonstrated that lack of positive affect decreased with age, suggesting that this aspect of depressive symptomatology changes, at least in part, with the passage of time.

Discussion
This study suggests that, overall, social participation benefits the mental health of older Taiwanese adults. Our analysis also illustrates the dynamic nature of this relationship; that is, our results demonstrate that social involvement that is continued or initiated in later life is protective of mental health over and above individual differences in social circumstances and health status. However, our observation of a modest effect for those adults who had to cease social participation in their 70s also suggested that some involvement in social activities in later life is better for mental health than no social participation at all. This observed association was accounted for by individual differences (specifically, physical disability), suggesting that functional impairments are a major threat to social activity later in life. This study adds to the body of research showing the benefits of remaining socially active in later life, as demonstrated by lower depressive symptom levels for adults who reported continued involvement in social activities versus adults who reported no social participation.
As suggested by our analyses, making a continuous effort to participate in social activities in late life is a commitment to preserving the older person's mental health, even though such participation may vary across different types of social activities and may be driven by many other motivating factors, such as the desire to attend social functions and a search for emotional support. Our analysis also demonstrates that it is never too late to reap the psychological benefits of human interaction. Previous studies have suggested a gender difference in the relationship between social participation and mental health status among older adults [22]. However, our analyses did not yield any gender differences in either domain of depressive symptoms. This discrepancy may be due to the relatively small female subgroup (N = 51) that reported continuous social participation over the course of the study. In other words, the statistical power for detecting this particular relationship might be quite low. More work is needed to assess the potential gender differences in social participation and the effect of this dynamic variable on psychological distress among older Taiwanese adults [42]. Although our work provides important insights into the social aspects of mental health for older adults in non-Western countries using longitudinal data, we acknowledge that our approach has several limitations. First, the scope of this study was limited to social participation in general. As the Taiwanese place a high value on family, future research needs to specifically compare the influence of social participation through family versus community activities. Second, in order to maximize our analytic sample by minimizing the risk of attrition over time, we focused on a sample aged 60 to 64 years with 18 years of follow-up data, a relatively "young" and healthy group of older adults. This selected sample limits the generalizability of the results to the "youngest old," who are likely to have higher rates of social participation than members of older cohorts due to better health, less disability, and wider social circles. Third, the TLSA data are based on self-reported recall of social activities and depressive symptoms, raising the issue of recall bias. Fourth, the individual controls used in the analyses are based on baseline measures. Several of these variables, such as health status and family living arrangement, were likely to have changed over the 18-year period of study. Analysis of additional time-varying covariates was beyond the scope of this investigation, which focused on the time-varying nature of social participation and the relationship of this focal construct with mental health. The next logical step in this line of inquiry is to investigate the role of changes in social support, health, and disability status experienced by older adults in the pathway between social participation and emotional health.

Conclusions
This study extends prior research concerning the longitudinal relationship between aging and depressive symptomatology to address gaps in the empirical literature on the association between social participation and distress among older members of the Taiwanese population. Our analyses demonstrated that social participation is globally beneficial to the psychological health of older adults and, specifically, that continued or initiated social activities mitigate depressive symptoms that are likely to be experienced in later life.
This study also contributes to the growing interest in late-life social participation and mental health for a population that is not frequently the focus of empirical studies in mental health and aging. Public policy and healthcare interventions aimed at promoting social participation for older adults represent a promising avenue for maintaining good mental health among a growing segment of Taiwanese society.

Note
TLSA data are openly available and can be obtained for research use with the approval of the Bureau of Health Promotion at the Department of Health in Taiwan.
Comparative phylogenetic analysis of CBL reveals the gene family evolution and functional divergence in Saccharum spontaneum

The identification and functional analysis of genes that improve tolerance to low potassium stress in S. spontaneum is crucial for breeding sugarcane cultivars with efficient potassium utilization. Calcineurin B-like (CBL) protein is a calcium sensor that interacts with specific CBL-interacting protein kinases (CIPKs) upon plants' exposure to various abiotic stresses. In this study, nine CBL genes were identified from S. spontaneum. Phylogenetic analysis of 113 CBLs from 13 representative plants showed gene expansion and strong purifying selection in the CBL family. Analysis of CBL expression patterns revealed that SsCBL01 was the most commonly expressed gene in various tissues at different developmental stages. Expression analysis of SsCBLs under low K+ stress indicated that potassium deficiency moderately altered the transcription of SsCBLs. Subcellular localization showed that SsCBL01 is a plasma membrane protein, and heterologous expression in yeast suggested that, while SsCBL01 alone could not absorb K+, it positively regulated K+ absorption mediated by the potassium transporter SsHAK1. This study provided insights into the evolution of the CBL gene family and preliminarily demonstrated that the plasma membrane protein SsCBL01 is involved in the response to low K+ stress in S. spontaneum.

Background
Sugarcane cultivars (Saccharum spp.) are mainly grown in tropical and subtropical regions of the world. Potassium leaching and soil acidification are common in these regions, decreasing the soil potassium content in sugarcane cultivation regions, particularly where the cultivated soil layer is shallow. A lack of soil potassium adversely affects the yield and quality of sugarcane [1]. Calcineurin B-like (CBL) protein, a plant calcium-binding protein initially identified in Arabidopsis, is a member of a group of small proteins that are strongly homologous to the regulatory B subunit of calcineurin in yeast [2]. When the cells of plant roots perceive reduced K+ concentrations in the external environment, Ca2+ signals are generated and relayed by CBLs to activate K+ channel proteins and potassium transporters, enhancing the uptake and utilization of K+ [3]. Subsequent studies have revealed that rice (O. sativa L.) and Arabidopsis (A. thaliana) each possess 10 distinct CBL proteins [4,5]. Using comparative genomic methods, different numbers of CBLs have been identified in maize (Z. mays L.), sorghum (S. bicolor L.), poplar (Populus L.), and cotton [6][7][8][9]. All CBLs are characterized by the presence of at least three conserved calcium-binding EF-hand motifs (named after the E and F helices of parvalbumin). The EF-hand motif is defined by its helix-loop-helix secondary structure as well as by the ligands presented by the loop to bind the Ca2+ ion [10]. To relay Ca2+ signals, CBLs interact with target proteins, such as kinases, cytoskeleton-associated proteins, and metabolic enzymes, to regulate gene expression [11,12]. CBL-interacting protein kinases (CIPKs) are important target proteins of CBLs [3]. The CBL-CIPK complex has an indispensable role in plant responses to abiotic stresses such as salinity, potassium starvation, low temperature, and drought [2,13]. It also serves as an important signaling network regulating growth and development, the uptake and transport of NO3−, NH4+, and iron, H+ homeostasis, and reactive oxygen species (ROS) signal transduction in plants [13,14].
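The motif descriptions above are concrete enough to sketch as a crude sequence scan. The Python illustration below encodes only the textbook regularity that the 12-residue EF-hand loop begins with Asp and usually ends with Glu/Asp, plus the N-terminal M-G-C-X-X-S/T myristoylation/palmitoylation motif reported later in this paper for four SsCBLs; it is a rough filter for illustration, not a substitute for profile-based motif callers such as PROSITE or Pfam:

```python
import re

# Loose 12-residue EF-hand loop: Asp at position 1, Glu/Asp at position 12.
# The lookahead lets overlapping candidate windows be reported.
EF_LOOP = re.compile(r"(?=(D.{10}[DE]))")
# N-terminal myristoylation/palmitoylation motif M-G-C-X-X-S/T.
MYRISTOYL = re.compile(r"^MGC..[ST]")

def scan_cbl(seq: str) -> dict:
    """Return candidate EF-hand loop start positions and motif presence."""
    return {
        "candidate_ef_loops": [m.start() for m in EF_LOOP.finditer(seq)],
        "n_terminal_myristoylation": bool(MYRISTOYL.match(seq)),
    }

# Toy sequence (not a real CBL): myristoylation motif + a calmodulin-like loop
print(scan_cbl("MGCFHSKAADKDGDGKISAEEL" + "A" * 30))
```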
To date, studies on the CBL family have mainly focused on Arabidopsis, rice, and maize. In Arabidopsis, AtCBL1/9 was first reported to positively regulate K+ absorption by forming a CBL1/9-CIPK23 complex and activating the inward-rectifier K+ channel AKT1 (Arabidopsis K+ transporter 1) by phosphorylation [15,16]. The AtCBL1-AtCIPK23 complex also affects abscisic acid (ABA)-induced stomatal aperture and ROS signaling [17,18]. The AtCBL2-AtCIPK11 complex has been recognized as a negative regulator of the plasma membrane H+-ATPase (PMA) [19,20]. AtCBL2/3 was found to be involved in protecting plants from high Mg2+ toxicity by regulating the sequestration of Mg2+ in the vacuole, forming a multivalent network with AtCIPK3/9/23/26 in Arabidopsis [21]. AtCBL2/3-CIPK12 complexes localized on the tonoplast participate in controlling the germination of pollen grains and tube growth [22]. The AtCBL4-CIPK6 complex is a crucial regulator of the translocation of AtAKT2 from the endoplasmic reticulum to the PM [23]. Both AtCBL4 and AtCBL10 are involved in the salt stress response through activation of AtCIPK24. AtCBL4 is primarily expressed in root tissues under high-salt conditions [24], while AtCBL10 is expressed in leaves [25]. The AtCBL4-AtCIPK24 complex functions at the plasma membrane to drive the Na+/H+ exchanger SOS1 to extrude Na+ out of the cell to improve salt tolerance [24]. The AtCBL10-AtCIPK24 complex functions at the tonoplast and stimulates the Na+/H+ exchanger to sequester Na+ into the vacuole in the case of salt toxicity [25]. In rice, the homologous OsCBL4 protein has been found to interact with OsCIPK24 and participates in the SOS signaling pathway in response to salt stress [26]. The OsCBL1-OsCIPK23 complex also enhances OsAKT1-mediated K+ uptake in rice roots [27]. In maize, ZmCBL9 can interact with eight maize CIPKs, i.e., ZmCIPK8/9/15/23/24/31/32/39, to regulate responses to abiotic stresses including dehydration, salt, ABA, and low K+ levels [8].

Sugarcane is the chief sugar and biofuel feedstock crop, contributing 80 and 40% of the world's sugar and ethanol, respectively [28]. Sugarcane cultivars are interspecific hybrids of S. officinarum (2n = 8x = 80, x = 10) and the wild species S. spontaneum, which has many aneuploid forms (2n = 5x~16x = 40~128; x = 8) and various cytotypes [29]. S. officinarum contributes the high sugar content, while S. spontaneum contributes hardiness, disease resistance, and ratooning capacity; hybrids are backcrossed to S. officinarum to recover high sugar content as well as high biomass [29], a major breakthrough in sugarcane breeding history. Consequently, sugarcane cultivars are interspecific hybrids, polyploid and aneuploid, with around 80% of their chromosomal content derived from S. officinarum, 10~15% from S. spontaneum, and the remaining 5~10% from interspecific recombinants [30,31]. Genes of the CBL family have been reported to be engaged in the regulation of the low potassium response in Arabidopsis, rice, and maize [4,5,8], but their roles and mechanisms of regulation in sugarcane remain unknown. This study, based on the recently released S. spontaneum genome [32], comprehensively analyses the CBL family in S. spontaneum and other plants. The expression patterns of SsCBLs were monitored during development and in response to low K+ stress. SsCBL01 was selected for further functional analysis.
Taken together, this study performed a systematic analysis of the evolution of the CBL family and identified robust CBL gene candidates for the response to low K+ stress in sugarcane.

Recognition of CBL genes in S. spontaneum
A total of nine distinct SsCBLs were identified from the AP85-441 tetraploid S. spontaneum genome [32] after excluding alleles. Eight SbCBLs were correspondingly identified from S. bicolor, the closest relative of sugarcane, based on comparative genomics (Table 1). For consistency, these SsCBLs were named according to the CBL nomenclature in O. sativa [4] and phylogenetic relationships. Each SsCBL has 1 to 4 alleles, with a mean of 3, in the S. spontaneum genome (Table S1). Among the nine SsCBLs, SsCBL01 and SsCBL03 were located on chromosome 1; SsCBL05, SsCBL09, and SsCBL10 were located on chromosome 3; SsCBL04 and SsCBL06 were located on chromosome 7; SsCBL02 was located on chromosome 2; and SsCBL08 was located on chromosome 4. All SsCBLs except SsCBL09 have 8 exons, and they encode 211 to 294 amino acid residues. The predicted subcellular location of the SsCBLs was the plasma membrane (PM), consistent with their role as calcium sensors that sense and decode extracellular signals. In addition, some SsCBLs may also be located on the tonoplast or in the cytoplasm. Sequence alignment of SsCBLs with their S. bicolor orthologs showed that the identities ranged from 78.54 to 100%, with an average of 95.15% (Table 1). Pairwise protein sequence comparisons among SsCBLs showed that the lowest similarity was 47.2% (between SsCBL04 and SsCBL03/06), the highest similarity was 92.5% (between SsCBL02 and SsCBL03), and the average was 61.8% (Table S2). Multiple alignment of the SsCBL protein sequences revealed that all nine SsCBLs have three typical conserved EF-hand domains, and a conserved V-F-H-P-N motif is present at the end of the first EF-hand domain (Fig. 1). Four SsCBLs (SsCBL01, SsCBL04, SsCBL05, and SsCBL08) harbor conserved myristoylation and palmitoylation sites (M-G-C-X-X-S/T) in their N-terminal regions (Fig. 1), which have crucial roles in protein aggregation, stability, and trafficking [33,34].

Cis-element analysis of CBL genes in S. spontaneum
Analysis of the cis-elements in the upstream promoter regions of the SsCBLs was carried out. The most frequently identified cis-elements were light-responsive, phytohormone-responsive, and putative stress-responsive elements, as shown in Fig. S1. All SsCBLs except SsCBL05 contain an abscisic acid-responsive element (ABRE), and all SsCBLs except SsCBL01 contain a methyl jasmonate-responsive element (JARE). Other phytohormone-responsive cis-elements such as the ethylene-responsive element (ERE), auxin response factor (ARF), salicylic acid-responsive element (SARE), and gibberellin-responsive element (GARE) were identified in most SsCBLs. Putative stress-responsive cis-elements, such as the dehydration-responsive element (DRE), low-temperature-responsive element (LTR), and the MYB transcription factor binding site involved in drought inducibility, were also identified. The G-box cis-element involved in light response was enriched in the promoters of five of the nine SsCBLs (SsCBL01, SsCBL02, SsCBL03, SsCBL06, and SsCBL08).

Analysis of Ka/Ks values of the CBLs
To analyze the selection pressure on SsCBL genes during evolution, the ratios of nonsynonymous to synonymous substitution rates (Ka/Ks) between SsCBLs and their orthologous genes in S. bicolor were estimated. The results revealed that the Ka/Ks values of CBL gene pairs between
S. spontaneum and S. bicolor were less than 1 (Fig. S2), which suggests that the CBL gene family went through strong purifying selection after the split between S. spontaneum and sorghum, and that these genes are functionally conserved.

Phylogenetic analysis of CBL genes in S. spontaneum and other representative angiosperms
To better understand the phylogenetic evolution of the CBL gene family, a total of 112 CBL genes identified from 12 representative angiosperms (8 monocotyledons, 3 dicotyledons, and Amborella trichopoda) were used to construct a phylogenetic tree, with a CBL gene from C. reinhardtii as the outgroup (Fig. 2). The 112 CBL genes included 9 from S. spontaneum, 8 from S. bicolor, 12 from Z. mays, 9 from S. viridis, 10 from S. italica, 10 from O. sativa, 9 from B. distachyon, 8 from A. comosus, 10 from A. thaliana, 9 from V. vinifera, 13 from S. lycopersicum, and 5 from Amborella trichopoda (Fig. 3). The results suggested that the total count of CBL genes in a plant species is not proportional to the size of its genome. The 112 CBL genes from the various species could be categorized into four clades (I, II, III, IV) (Fig. 2). A. trichopoda, the earliest-diverging angiosperm, contained 5 CBL genes, whereas in dicots and monocots the total count of CBL genes varied from 8 to 13, suggesting that the CBL gene family has undergone gene expansion, mainly attributable to the whole-genome duplications (WGDs) in both the dicot and monocot lineages (Figs. 2 and 3). Clades I, III, and IV contained CBL genes from all 12 angiosperms, suggesting that the ancestors of these genes predated the diversification of the angiosperms. Moreover, the CBLs from S. spontaneum often clustered together with those from gramineous plants, especially S. bicolor. These results are consistent with S. bicolor being the closest relative of sugarcane.

Exon/intron organization of the CBL family in S. spontaneum and other angiosperms
To analyze the intron-exon structure and evolution of the CBL genes in different species, cDNA sequences were mapped onto their genomic sequences. The exon number of the CBL family in these species varied from 1 to 12, with most CBLs (76 out of 113, 67.3%) containing 8 exons, indicating that the last common ancestor (LCA) of CBLs in angiosperms had 8 exons (Fig. 2, Fig. S3). CBL09 and its orthologous genes in clade II had an extra exon compared with the other clades, which presumably originated from an exonization event, a main driving force causing exon-intron structural differences in homologous genes [35]. The CBL gene size in clade IV varied the most among the four clades. The CBL gene structure within each subfamily was similar, and the exon sizes of the examined CBLs remained relatively conserved. However, the sizes of the genes varied widely, mainly owing to different intron sizes or intron insertions (Fig. 2). The intron phases at the splice sites of these CBL genes were analyzed, with phases 1, 2, and 0 representing splicing occurring after the 1st, 2nd, and 3rd nucleotide of the codon, respectively [36]. The results indicated that the splicing phases of almost all introns in the same relative positions were identical, suggesting that the splicing phases of the CBL genes were highly conserved during evolution (Fig. 2).

Expression analysis of SsCBLs in different tissues at different stages
Transcriptome profiles of CBLs in
S. spontaneum were analyzed based on RNA-seq data to investigate the expression patterns of all SsCBL genes in different tissues at different stages (Fig. 4A, Table S3). SsCBL01 showed the highest expression level among all SsCBLs; moreover, its expression level at the pre-mature and mature stages was higher than at the seedling stage. Both SsCBL03 and SsCBL06 showed a higher degree of expression in the stem compared to the leaf. The SsCBLs in clade I and clade II, including SsCBL04/05/08/09/10, all showed very low or undetectable levels of expression in the tissues examined, indicating that their role in growth and development is rather limited.

Expression analysis of SsCBLs during the circadian rhythms
CBLs have been reported to respond to light in Arabidopsis [37]. To analyze the expression patterns of SsCBLs during diurnal cycles, transcriptome profiles of mature leaves of S. spontaneum were investigated at 2 h intervals within 24 h and at 4 h intervals within another 24 h. The expression levels of SsCBLs in clade I and clade II were very low or undetectable (Fig. 4B, Table S4), further supporting their limited role, consistent with the expression pattern at different developmental stages. It is noteworthy that the expression of SsCBL02 was regulated by circadian rhythm, with the peak expression level in the morning and the lowest in the middle of the day (Fig. 4). This can be explained by the effects of light on phytohormone activity; the hormone cycle is directly related to the calcium signaling cascade, which plays a feedback role in the hormone signaling pathway [38].

Expression analysis of SsCBLs under K+-deficient stress
CBLs play vital roles in decoding the signals triggered by environmental stimuli, such as K+ deprivation or salt stress, in Arabidopsis, rice, and maize [13,27,39]. Accordingly, we investigated the expression patterns of the nine SsCBLs under K+ starvation by RT-qPCR. Generally, low K+ stress moderately altered the transcription of SsCBLs (Fig. 5).

Fig. 5 The relative expression levels of SsCBL genes measured by RT-qPCR under low K+ stress in the sugarcane hybrid YT55. The relative expression level is shown as the mean of three biological replicates and three technical replicates; * and ** respectively represent a significant difference at p ≤ 0.05 and p ≤ 0.01 based on Student's t-test.

The nine SsCBLs could be categorized into two groups based on their expression patterns. One group contained SsCBL01/03/04/05 and showed increased expression at 6 h, with expression decreasing as low K+ stress was prolonged. The other group contained the SsCBL02/06/08/09/10 genes, which showed reduced expression under low K+ stress. The expression level of SsCBL01 increased significantly at 6 h but decreased at 12 h, 24 h, 48 h, and 72 h under K+ starvation, suggesting that SsCBL01 may play an important role in the response to low K+ stress in S. spontaneum. These results also indicated that calcium signaling is an early event in the stress signaling pathway [38].

Subcellular localization of SsCBL01
SsCBL01 harbors a conserved myristoylation site in its N-terminal region, which is one of the characteristics of cell membrane localization. To confirm whether SsCBL01 localizes to the cell membrane, a fusion construct of SsCBL01 with green fluorescent protein (GFP) was used for transient transformation of rice protoplasts.
The results showed that the control GFP was detected throughout the cells, while the SsCBL01-GFP fluorescence signal was observed only on the plasma membrane, suggesting that SsCBL01 is a plasma membrane protein (Fig. 6). This is also consistent with the subcellular localization of the homologous CBL1 gene in Arabidopsis and rice [18,27].

Fig. 6 SsCBL01 localized at the plasma membrane. SsCBL01 protein was fused with GFP. OsMAC1 protein was fused with mCherry. SsCBL01-GFP was individually expressed (A-D) or co-expressed with OsMAC1-mCherry (E-G) in rice protoplasts. The OsMAC1-mCherry was used as a plasma membrane localization marker. Scale bar = 10 μm.

Functional analysis of SsCBL01 in the defective yeast mutant
Expression pattern analysis showed that SsCBL01 may have an integral role in the response to stress caused by low K+ levels. Heterologous expression of SsCBL01 in yeast showed no obvious difference in growth between the yeast strain transformed with SsCBL01 and that with the empty vector (Fig. 7), suggesting that SsCBL01 alone could not absorb K+. Our previous study showed that SsHAK1 could partly recover K+ absorption under K+ starvation in the yeast mutant [40]. To study the regulatory effect of SsCBL01 on SsHAK1, we co-transformed yeast with both genes and observed their growth in SC/−ura medium with 100 mM, 10 mM, and 0 mM KCl. The results showed that the growth of the yeast transformed with both SsHAK1 and SsCBL01 was better than that of the yeast transformed with SsHAK1 alone at 100 mM, 10 mM, and 0 mM KCl (Fig. 7). All these results suggested that SsCBL01 alone could not absorb K+ but could promote K+ absorption by SsHAK1.

Discussion
As calcium sensors in plants, the CBL gene family has been found to regulate potassium stress responses in Arabidopsis, rice, and maize [4,5,8]. However, the CBL gene family in S. spontaneum had not yet been characterized. This work identified nine CBL genes from S. spontaneum; these genes, together with 103 orthologous CBL genes from 11 other plant species and an outgroup, were used to create a phylogenetic tree to study their evolution. Analysis of the expression patterns of SsCBLs in various tissues at individual developmental stages, during the circadian rhythms, and on exposure to low K+ stress was performed to study the functional divergence of SsCBLs. Heterologous expression in yeast revealed that SsCBL01 positively regulates K+ uptake by SsHAK1.

Evolution of the CBL gene family in S. spontaneum and representative angiosperms
WGD or polyploidy is considered a key trigger in the evolution of angiosperms [41,42]. Extensive analysis of the entire Arabidopsis genome sequence revealed two recent WGDs (named α and β) in the cruciferous lineage and a triplication event (γ) that may be shared by all core dicotyledonous plants [43,44]. Monocots have experienced two WGDs (named σ and ρ) [45], and recent research found that pineapple (A. comosus) underwent one fewer WGD event (ρ) than the grasses (Poaceae) [46]. Angiosperms underwent an even earlier ancient WGD event (ε) during evolution [47]. A. trichopoda has attracted much attention since it is the earliest-diverging lineage among the angiosperms. The above WGD information in angiosperms, along with the phylogenetic analysis of the 112 CBL genes, made it possible to study the gene evolution. The CBL genes from the 12 angiosperms can be divided into four clades in descending order of age: clade I, clade II, clade III, and clade IV.
The CBL genes in the four clades were distributed unevenly, and CBL family members from dicotyledonous and monocotyledonous species always clustered in different subfamilies within all four clades. These results indicated that the CBLs might have undergone divergent evolution to adapt to drastic changes in the environment. The gene structure in clade I was the most conserved among the four clades, consistent with clade I being the most ancient clade. In clade II, all angiosperms except A. trichopoda contained CBL genes, indicating that the last common ancestor (LCA) of the CBLs in clade II originated after the divergence of A. trichopoda from the other angiosperms. The gene expansion seen in clades I, II, and IV was mainly ascribed to the γ WGD in the dicot and the σ WGD in the monocot lineages; in addition, the ρ WGD also led to CBL expansion in clades I and II in the grasses (Poaceae). Clade III contained only one CBL family member, CBL01, and all 12 representative angiosperms, including A. trichopoda, had this gene, suggesting the importance and conservation of CBL01. In clade IV, the CBL gene structure varied the most among the four clades, which was also consistent with the phylogenetic analysis showing clade IV as the latest branch to arise during evolution.

Gene expression and functional divergence of CBLs in Saccharum
Expression pattern analysis can shed light on the potential functions of the CBL gene family. In our study, the CBL genes in clade I and clade II were either not expressed or expressed at very low levels in tissues at different developmental stages and in response to circadian rhythms, while the CBL genes in clade III and clade IV were the predominantly expressed genes. Clades I and II appeared earlier than clades III and IV based on the rooted phylogenetic tree. These results suggested that functional redundancy and divergence may have occurred in the CBL family during evolution. According to the phylogenetic analysis, SsCBL01 is an important and conserved gene. This was further confirmed by transcriptome analysis, since SsCBL01 was the predominantly expressed gene among the nine SsCBL genes at different developmental stages as well as during circadian rhythms. The expression of SsCBL02 is regulated by the circadian rhythm. In Arabidopsis, AtCBL2 transcription was also found to be influenced by illumination, and AtCBL2 responded to light signals by interacting with the SNF1-related protein kinase AtSR1 [37]. These results indicated that SsCBL02 may be involved in signal transduction in response to light. As a whole, K+ starvation had a moderate effect on the expression levels of the SsCBL genes. The CBL genes in cotton showed similar expression patterns under K+ deficiency [6]. These results indicated that multiple SsCBL genes most likely regulate the response to low K+ stress in S. spontaneum. Strikingly, the expression of AtCBL1 and AtCBL9 was found to be stable under low-potassium conditions [17]. Therefore, those SsCBLs not induced by potassium deficiency may also be involved in the adaptation to potassium deficiency in S. spontaneum. The expression of AtCBL10 in roots was observed to be moderately reduced under potassium deprivation [48]. These results suggest that the constitutive expression of some CBL genes may be sufficient to transmit calcium signals to downstream targets in response to low potassium stress in plants. The potassium content in roots and shoots was measured at 0 h, 6 h, 12 h, 24 h, 48 h, and 72 h under low potassium stress.
Interestingly, the potassium content in both the shoots and roots did not change significantly under low K+ stress within 72 h (Fig. S4). This could be explained by reduced K+ concentrations in the external environment triggering a Ca2+ signal in the cells of the plant root, which is then recognized and relayed by CBLs to activate K+ channel proteins or potassium transporters, leading to enhanced uptake and transport of K+ in the plants over the short term [3,16].

Gene structure of SsCBLs and function of SsCBL01 in response to low K+ stress
All SsCBL genes except SsCBL09 have eight exons, similar to CBL genes in other monocots and eudicots, which suggests conservation of CBL gene structure across plant species. Moreover, all SsCBL genes contain three typical conserved EF-hand motifs, which bind Ca2+ to transduce calcium signals. These properties are also strongly reminiscent of those in Arabidopsis and rice [5]. The conserved structure of CBL family members in different plants may imply similar modes of interaction with their target proteins, such as CIPKs. Studies have shown that CBLs play key roles in the response to low potassium stress in Arabidopsis and rice [16,27]. In this study, low K+ stress had a moderate effect on the transcription of SsCBLs, with SsCBL01 expression up-regulated and then down-regulated under K+ starvation. Heterologous expression of SsCBL01 in yeast showed that SsCBL01 alone could not mediate K+ uptake. Further studies found that SsCBL01 could regulate SsHAK1 and enhance SsHAK1-mediated K+ uptake under both low and normal potassium supply conditions. In Arabidopsis, rice, and barley, CBL1 has been shown to interact with CIPK23 and phosphorylate AKT1, thereby activating its K+ uptake function [16,27,49]. Lee et al. [50] found that AtCBL1 could interact with different CIPKs, such as CIPK6, CIPK16, and CIPK23, to activate AtAKT1. The CBL1-CIPK23 complex can also phosphorylate AtHAK5 and up-regulate its activity to facilitate K+ absorption in the roots of Arabidopsis [51]. In this study, it was found that SsCBL01 could enhance the K+ absorption-promoting activity of SsHAK1. However, whether this enhancement is due to an interaction between SsCBL01 and SsCIPKs in S. spontaneum needs further investigation.

Conclusions
Sugarcane cultivars are polyploid interspecific hybrids with large and complex genomes. In this study, nine SsCBL genes were identified in S. spontaneum using comparative genomics. The systematic evolutionary analysis revealed that the CBL gene family has undergone gene expansion and strong purifying selection. Gene structure and protein sequence analysis showed that SsCBLs are conserved and contain three typical calcium-binding EF-hand domains. Expression pattern analysis in different tissues at different developmental stages and during the circadian rhythms revealed the functional divergence of SsCBLs. Functional verification in yeast suggested that the cell membrane-localized SsCBL01 alone could not absorb K+ but positively regulated K+ absorption by the potassium transporter SsHAK1. In summary, this study provides a comprehensive view of the CBL gene family and robust candidate SsCBLs for further study of functional mechanisms in S. spontaneum.

Methods
Plant materials, growth conditions and yeast mutant strain
S. spontaneum SES208 (2n = 8x = 64, from Jisen Zhang's laboratory at Fujian Agriculture and Forestry University, FAFU) and the commercial sugarcane hybrid Yuetang 55 (YT55, bred by the Institute of Bioengineering, Guangdong Academy of Sciences) were used in this study. SES208 was grown in plastic pots with soil in a greenhouse following standard growing procedures. YT55 was grown in plastic pots with an improved Hoagland nutrient solution. The yeast mutant R5421 (Saccharomyces cerevisiae) is a potassium-uptake-deficient strain mainly used for the identification of potassium transporters, potassium channels, or sodium pumps. The strain grows normally in medium containing 100 mM K+, and rather slowly in medium containing 5-10 mM K+. However, when the K+ concentration in the medium was less than 0.5 mM, R5421 cells were not able to grow. SES208 tissues were sampled to investigate the patterns of expression at various stages of development and at different points in the diurnal cycle. To analyze the patterns of expression at different developmental stages, samples including leaf and stem from 35-day-old plants of SES208 (seedling stage), and leaf roll, leaf, upper stem (i.e., Stem3), central stem (i.e., Stem6), and lower stem (i.e., Stem9) from plants at 9 (pre-mature stage) and 12 (mature stage) months of age were collected. The internodes on the S. spontaneum stalk were numbered from top to bottom. A previously described approach was followed for the collection of tissues [52], and three biological replicates were collected for each sample. To investigate the expression patterns in response to circadian rhythms, mature SES208 plant leaves were gathered at 2 h intervals within the first 24 h, starting from 6:00 a.m. on March 2, 2017. In the next 24 h, collection was carried out at 4 h intervals. The tissue collection method was the same as described previously [46], and three biological replicates were collected for each sample. To investigate the expression pattern under low potassium stress, the hybrid sugarcane variety YT55 was grown at a normal potassium level (3.0 mM KCl) for 20 days in plastic pots with Hoagland nutrient solution (the solution was changed once a week) under greenhouse conditions (temperature: around 26°C; relative humidity: 60-80%; light cycle: 12-12 h light-dark), and subsequently transferred into K+-deficient nutrient solution (0.1 mM KCl) for exposure to low K+ stress. Mixed root tissues from six plants in a pot (one biological replicate; a total of three biological replicates were gathered) were sampled at 0 h, 6 h, 12 h, 24 h, 48 h, and 72 h after the start of the stress treatment, then frozen immediately in liquid nitrogen and stored at −70°C for total RNA extraction. The potassium contents in the roots and shoots of YT55 under low potassium stress at 0 h, 6 h, 12 h, 24 h, 48 h, and 72 h were measured. The root and shoot tissues were sampled (six plants in a pot represented one biological replicate; a total of three biological replicates were gathered). These samples were placed in an oven at 105°C, fixed for 2 min, and then dried to constant weight at 70°C. Plant samples were crushed in a mill and then passed through a 20-mesh sieve. A 0.2 g sample was weighed and digested by the H2SO4-H2O2 wet-ashing method on a cooker. After digestion, the volume was adjusted to 250 mL.
The potassium content (mg/g, dry weight) of each treated sugarcane plant tissue was measured using a Model 425 flame photometer (Sherwood, UK).

Sequence analysis of the CBL family from S. spontaneum

Prediction of the properties of the CBL proteins from S. spontaneum and sorghum (S. bicolor) was carried out using ExPASy (https://web.expasy.org/compute_pi/). TBtools software was used to extract the GFF3 files of CBL gene family members, and the gene structures were mapped and visualized with TBtools [53]. The subcellular location of the CBL proteins was predicted with the online tool WoLF PSORT (https://www.genscript.com/wolfpsort.html). To analyze the type and distribution of cis-elements in CBLs, the 2 kb sequence in the upstream promoter region of each gene was selected and submitted to the online tool PlantCARE (http://bioinformatics.psb.ugent.be/webtools/plantcare/html/) for the prediction of cis-elements in promoters.

Phylogenetic and evolutionary analysis

The 113 CBL protein sequences from 12 angiosperms and an outgroup were aligned using ClustalW 2.0 by pairwise and multiple alignments with default parameters [54]. A phylogenetic tree of the CBL family was created based on the alignment results using MEGA 7.0 [55]. The evolutionary history was inferred using the Maximum Likelihood method based on the JTT matrix-based model [56]. The reliability of the internal branches of the phylogenetic tree was assessed using 1,000 bootstrap trials, with the percentages displayed next to the branches. The Ka (non-synonymous substitution rate), Ks (synonymous substitution rate), and the Ka/Ks ratio of the eight pairs of CBL orthologs from S. spontaneum and sorghum were estimated using the Easy_KaKs calculation program (https://github.com/tangerzhang/FAFUcgb/tree/master/easy_KaKs).

Expression profiling of CBLs in S. spontaneum based on RNA-seq and RT-qPCR

To investigate the pattern of expression at various stages of development and across the circadian rhythm, RNA from all tissues was extracted with an RNA extraction kit and used to create cDNA libraries according to the TruSeq protocol. The Illumina HiSeq 2500 platform was used to sequence the RNA-seq libraries with 100 nt paired-end reads. The S. spontaneum AP85-441 genome was used as the reference genome [32]. The raw reads were first filtered by trimming the adapters and the low-quality sequences, including reads with more than 10% unknown bases and those in which more than 50% of the nucleotides had a Q-value ≤5. The resulting clean reads were then mapped to the reference sequences. Quantitative analysis of RNA-seq was performed with Trinity (https://github.com/trinityrnaseq/trinityrnaseq/wiki); the transcriptional expression level of each gene was quantified as an FPKM value (fragments per kilobase of exon per million fragments mapped) using RSEM within Trinity [57]. These data can be downloaded from http://sugarcane.zhangjisenlab.cn/sgd/html/index.html. To assess the expression patterns of CBLs in YT55 under low K+ stress, 1 μg of RNA extracted from the roots of YT55 under low K+ stress was reverse-transcribed to cDNA using the PrimeScript RT reagent kit (Monad Biotech, Suzhou, China). Subsequently, real-time quantitative PCR (RT-qPCR) was carried out using the cDNA and SYBR Green Realtime PCR Master Mix (TOYOBO, Japan) on an ABI 7500 real-time PCR system.
The specific RT-qPCR primers for the CBLs (Table S5) were designed with the Integrated DNA Technologies online tool (https://sg.idtdna.com/PrimerQuest/Home/Index), and two constitutively expressed genes, β-actin and eukaryotic elongation factor 1a (eEF-1a), were used as internal controls to normalize the gene expression levels. Three technical replicates were conducted for each sample. The relative expression levels of each SsCBL gene in samples treated with low K+ stress for different times were calculated using the 2^(-ΔΔCt) method [58]; a computational sketch of this calculation is given at the end of this section.

Subcellular localization

Based on the CDS sequence of SsCBL01, primers (Table S6) containing restriction enzyme sites were designed to amplify the full-length cDNA of SsCBL01. cDNA synthesized by reverse transcription of the YT55 RNA sample under low potassium stress for 6 h was used as the template. The PCR product was recovered and fused in frame with the coding region of green fluorescent protein (GFP) in the pBWA(V)HS-Glosgfp vector to create the SsCBL01-GFP fusion construct controlled by the CaMV 35S promoter. The ligation product was transformed into competent E. coli DH5α cells. Positive clones were chosen for PCR amplification and confirmed by sequencing, and the GFP fusion construct was then employed for transient transformation of rice protoplasts. The transmembrane protein OsMAC1 was fused with mCherry [59], and the OsMAC1-mCherry construct was used as a plasma membrane localization marker. GFP and RFP fluorescence signals were observed using a confocal laser microscope (Nikon C2-ER) after dark culture on MS medium at 28°C for 48 h, with excitation at 488 nm/640 nm using an argon laser and emission bands at 510 nm/675 nm.

Expression vector construction and heterologous expression of SsCBL01 in yeast

The full-length cDNA of SsCBL01 was cloned from YT55 cDNA. The PCR product was recovered and ligated into the yeast expression vector pYES3/CT using In-Fusion enzyme (TaKaRa Biotechnology Co., Ltd., Dalian, China). The ligation products were transformed into competent E. coli JM109 cells, and monoclonal colonies were selected by PCR. The plasmid confirmed to carry SsCBL01 and the empty vector pYES3/CT were each transformed into competent cells of the yeast mutant strain R5421, and positive colonies were screened on SD/−trp medium. Positive yeast clones were incubated in liquid medium overnight to saturation before adjusting the concentration to OD600 = 0.8. Yeast strains (R5421) carrying the empty vector pYES3/CT or SsCBL01 were serially diluted and inoculated on SD/−trp media containing 0 mM, 10 mM, or 100 mM KCl for 3-5 days.
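As a companion to the RT-qPCR analysis above, the following is a minimal sketch of the 2^(-ΔΔCt) relative-expression calculation [58]. It is not code from the original study; the function name and the Ct values are illustrative, and Ct means are assumed to have already been averaged across the three technical replicates.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt: fold change of a target gene in a treated sample vs. an
    untreated control, normalized to a reference (internal control) gene."""
    d_ct_treated = ct_target - ct_ref              # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl    # normalize control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical mean Ct values: SsCBL01 and the eEF-1a internal control in
# roots at 6 h of low-K+ stress vs. the 0 h control.
fold = relative_expression(ct_target=24.1, ct_ref=18.3,
                           ct_target_ctrl=25.0, ct_ref_ctrl=18.2)
print(f"SsCBL01 relative expression at 6 h: {fold:.2f}-fold vs. 0 h")
```

With these illustrative values, ΔΔCt = −1.0, corresponding to a 2.0-fold up-regulation relative to the 0 h control.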
Evaluation of AAPM Reports 204 and 220: Estimation of effective diameter, water-equivalent diameter, and ellipticity ratios for chest, abdomen, pelvis, and head CT scans

Abstract

Purpose
To confirm AAPM Reports 204/220 and provide data for the future expansion of these reports by: (a) presenting the first large-scale confirmation of the reports using clinical data, and (b) providing the community with size surrogate data for the head region, which was not provided in the original reports, and additionally providing measurements of the patient ellipticity ratio for different body regions.

Method
A total of 884 routine scans were included in our analysis, including data from the head, thorax, abdomen, and pelvis for adults and pediatrics. We calculated the ellipticity ratio and all of the size surrogates presented in AAPM Reports 204/220. We correlated the purely geometric-based metrics with the "gold standard" water-equivalent diameter (DW).

Results
Our results and AAPM Reports 204/220 agree within our data's 95% confidence intervals. Outliers to the AAPM reports' methods were caused by excess gas in the GI tract, exceptionally low BMI, and cranial metaphyseal dysplasia. For the head, we show lower correlation (R2 = 0.812) between effective diameter and DW relative to other body regions. The ellipticity ratio of the shoulder region was the highest at 2.28 ± 0.22 and that of the head the smallest at 0.85 ± 0.08. The abdomen pelvis, chest, thorax, and abdomen regions all had ellipticity values near 1.5.

Conclusion
We confirmed AAPM Reports 204/220 using clinical data and identified patient conditions causing discrepancies. We presented new size surrogate data for the head region and for the first time presented ellipticity data for all regions. Future automatic exposure control characterization should include ellipticity information.

| INTRODUCTION

Dose from computed tomography (CT) has always been a general concern in the medical community. 1,2 This is primarily due to the growing number of CT examinations 3 and the high dose from CT relative to other imaging modalities. 2,4 It is always a challenge for radiologists and medical physicists to establish adequate image quality with the lowest radiation exposure to the patient, in agreement with the ALARA (As Low As Reasonably Achievable) principle. 5 Unfortunately, in CT, the current scanner output dose metrics, such as the volume CT dose index (CTDIvol), do not reflect the dose the patient actually receives. 6-8 The CTDIvol only represents the system's radiation output for a very specific set of conditions in a cylindrical acrylic polymethyl methacrylate (PMMA) phantom with a diameter of 16 or 32 cm in a contiguous axial or helical examination. 4,7,9-12 Ideally, a method would exist to normalize these dose values to make them reflect the dose a patient actually receives. The American Association of Physicists in Medicine (AAPM) Report 204 12 introduced the concept of a size-specific dose estimate (SSDE). The SSDE is a patient size-corrected estimate of patient dose which uses a surrogate for patient size to scale the scanner-reported CTDIvol. 12 Many previous studies have used and/or evaluated size surrogates to estimate patient size, which include body weight, body mass index (BMI), age, cross-sectional diameter, effective diameter, and combinations of these parameters, for individual dose adaptation for adults 13 and for pediatric CT scans of the torso and truncated axial images. 8,24-27
The size surrogates of AAPM Report 204, however, are based only on patient geometry and do not consider the different attenuation of various tissue types. For example, the lung was considered a caveat 28 because of its much lower density compared to water or PMMA, therefore reducing the attenuation of the patient's chest significantly relative to the 32 cm reference CTDIvol phantom. This limitation was addressed in detail in AAPM Report 220, 29 and the sole use of the water-equivalent diameter (DW), which considers tissue attenuation in addition to patient geometric size, is recommended for calculations of SSDE. The use of DW had been proposed before AAPM Report 220. 13,18,30,31 Wang et al. 30 demonstrated that the use of DW is more accurate for calculating SSDE in thoracic CT compared to the geometric size surrogates, but DW and the geometric size surrogates both perform and correlate well for the abdomen and pelvis. AAPM Report 220 collected experimental data acquired using cylindrical phantoms and Monte Carlo simulations. The analysis assumed that the collection of a limited number of different size elliptical phantoms and the family of Monte Carlo phantoms used was intended to span what is seen clinically. Ikuta et al. 25 evaluated DE and DW and found good correlation; however, their method differed from AAPM Report 220 in that they used four slices separately, corresponding to the lung apex, the superior aspect of the aortic arch, the carina, and immediately superior to the diaphragm, without averaging for the thorax and abdomen. However, the AAPM 204/220 reports allow the use of the center of the scan range, calling it a "shortcut" relative to averaging a size surrogate over the entire scan range. Modern automatic exposure control (AEC) systems, as discussed by Leng et al. 32 and others, can also vary the dose angularly about the patient. 6,36,37,40-46 The data shown in Fig. 1 were collected using the scan parameters listed in Table 1 for the routine adult abdomen pelvis dataset, which used angular dose modulation. The ellipticity ratio is involved in setting the angular dose modulation value; however, to our knowledge there is only one paper in the literature reporting ellipticity values. 42 Therefore, in this paper, we report the ratio of LAT to AP for multiple body regions, including the head, for hundreds of patients. We do not report on how this value influences a CT scanner's dose modulation since that is highly vendor dependent and "black box" in nature. However, there are several papers in our field that are actively "reverse engineering" vendors' AEC algorithms for research and clinical purposes. 47-49 The ellipticity data we report here can be included in such efforts. As motivated in the previous paragraphs, the purpose of this paper is to confirm AAPM Reports 204/220 and provide data for the future expansion of these reports by: (a) presenting the first large-scale confirmation of the reports using clinical data, and (b) providing the community with size surrogate data for the head region, which was not provided in the original reports, and additionally providing measurements of the patient ellipticity ratio for different body regions.

2.A | Experimental data collection

A total of 884 patients were included in our analysis. The patients' data were collected from three different examination types and binned into six different sets (Table 1).
For the purpose of analysis, we calculate the AP, LAT, and DE for each slice in each dataset and then report the average over all slices for each patient or each subset of patient data as defined in Table 1. We define the ellipticity ratio as r = LAT/AP. The variable r is calculated for every slice and then averaged over all slices in a given dataset for each patient as described in Table 1. We also report the standard deviation in r and the minimum and maximum r values observed for each dataset shown in Table 1. AAPM 204 uses a second-order fit to relate DE to AP or LAT. The authors of AAPM 204 use a first-order fit to relate DE to AP + LAT. We believe the reason that a second-order fit gave a better result for AP or LAT was the phantoms used in the AAPM study. For a fixed ellipticity ratio, DE should be proportional to AP or LAT. The relationship between DE and LAT (or AP, with a simple substitution using r = LAT/AP) is DE = sqrt(AP × LAT) = sqrt(k) × LAT, where k = 1/r. In other words, for a fixed ellipticity ratio, a first-order fit should be adequate to relate DE to LAT or AP. The AAPM 204 report, however, includes cylindrical phantoms (r = 1) and some elliptical phantoms of a fixed r but varying size. This is why we believe the authors used a second-order fit between DE and AP or LAT: not because the underlying relationship between DE and AP or LAT warranted this, but because the combination of varying r values made their data nonlinear. Therefore, we chose to use a first-order fit of our clinical data since it includes hundreds of patients with varying r values. We assumed a given body region in a human would have a distribution of r values with a mean that would be characteristic of that body region. Furthermore, as seen in our results, the second-order fits of the AAPM 204 phantom data fall within our confidence intervals.

[Table 1. Experimental data collection of human patients for routine adult abdomen and pelvis, adult chest, adult head, and pediatric abdomen pelvis cases (the pediatric data included five different protocols, hence the range in NI, pitch, and slice thickness). † Denotes datasets derived from the adult chest dataset scan range. The Noise Index (NI) refers to a vendor-specific automatic exposure control setting. Other vendor-specific reconstruction options were set as follows: "PLUS" mode, recon kernel of "STANDARD" for the body and "SOFT" for the head, and an ASiR level of 40%.]

2.C | Water-equivalent diameter

Previous studies express the x-ray attenuation of a patient in terms of a water cylinder with a water-equivalent diameter (DW). 12,13,30-32,35,36,50 In other words, DW represents the diameter of a cylinder of water that contains the same total x-ray attenuation as that contained within the patient's axial cross section, and it depends on both the cross-sectional area of the patient and the attenuation of the contained tissues. This method of calculating DW was described in AAPM Report 220 and implemented here with the equation DW = 2 × sqrt[(mean CT number/1000 + 1) × A_ROI/π], where the mean CT number is computed within the patient ROI of area A_ROI in the reconstructed image. We grouped the data as defined in Table 1 and plotted DW as a function of DE for each subset (a computational sketch of the per-slice size metrics follows below). All plots were fitted using a linear fitting routine (the polyfit function in MATLAB; The MathWorks Inc., Natick, MA, USA). We applied a first-order linear fit and linear regression (R2) to all data points combined, with 95% confidence intervals for all data points. A 95% confidence interval indicates a 0.95 probability that the interval contains the true population mean.
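To make the size metrics concrete, below is a minimal sketch of how DE, the ellipticity ratio r, and DW (per the AAPM 220 equation above) could be computed for a single axial slice. This is our illustration rather than the authors' code: it assumes the slice is a 2D NumPy array of Hounsfield units with isotropic pixel spacing, that axis 0 is the AP direction, and that the couch or head holder has already been masked out.

```python
import numpy as np

def size_metrics(hu_slice, pixel_mm, threshold_hu=-300):
    """Geometric and attenuation-based size surrogates for one axial CT slice."""
    patient = hu_slice > threshold_hu                 # crude patient segmentation
    ap = np.any(patient, axis=1).sum() * pixel_mm     # anterior-posterior extent (mm)
    lat = np.any(patient, axis=0).sum() * pixel_mm    # lateral extent (mm)
    d_e = np.sqrt(ap * lat)                           # effective diameter (AAPM 204)
    r = lat / ap                                      # ellipticity ratio
    # Water-equivalent diameter (AAPM 220):
    # DW = 2 * sqrt[(mean_HU/1000 + 1) * A_ROI / pi]
    area_mm2 = patient.sum() * pixel_mm ** 2
    mean_hu = hu_slice[patient].mean()
    d_w = 2.0 * np.sqrt((mean_hu / 1000.0 + 1.0) * area_mm2 / np.pi)
    return ap, lat, d_e, r, d_w

# Sanity check: a 200 mm disk of water (0 HU) in air should give DW ~ DE ~ 200 mm.
yy, xx = np.mgrid[-256:256, -256:256]                 # 1 mm pixels assumed
phantom = np.where(np.hypot(yy, xx) <= 100, 0.0, -1000.0)
print(size_metrics(phantom, pixel_mm=1.0))
```

In the study, such per-slice values are then averaged over all slices in the scan range for each patient or dataset, as described above.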
We report the confidence interval in millimeters, and this number is the distance from the trend line to the confidence interval, so the range between the confidence bounds is double the reported confidence interval in millimeters. We considered points outside this confidence interval to be outliers, and we analyzed each of them to characterize deviations from the correlation shown in the AAPM reports that may be present in the clinic. Figure 4(b) shows the UW fit of DW as a function of DE with data points taken from AAPM Report 220 Table 1 for the abdomen and Table 2 for the thorax, and shows that these points fall within our 95% confidence interval.

| DISCUSSION

For all data excluding the head, we show in Fig. 4(a) that our linear fits of DE as a function of (AP + LAT)/2, LAT, and AP compare well to the results of AAPM Report 204. For DE as a function of AP or LAT, as shown in Fig. 4(a), we did not observe the same curvature as AAPM Report 204; however, the spread in our data, shown by the 95% confidence interval, could have been hiding such behavior. The physical phantoms used in AAPM Report 204 are compared against our fits in Table 2. Only the LAT comparison from the AAPM Report 204 data is outside our 95% confidence interval, for patient LAT dimensions over 400 mm. In Fig. 4(b), all AAPM Report 220 data points lie within our 95% confidence intervals for both the abdomen and thorax AAPM data. We obtained results agreeing with the phantom-based results of AAPM Reports 204 and 220 using a large set of patient data. Our dataset is the largest clinical dataset used for this purpose to date and has allowed us to identify a number of outlier cases not previously reported on in the literature. There were a few outlier cases that deviated from our fits and the correlation shown in the AAPM task group reports. Figure 6 displays the outliers seen in Fig. 3(a). Fig. 6(a) corresponds to the adult chest outlier in Fig. 3(a), which was also 20 mm below the fit line. This is due to the relatively higher ratio of lung space to soft tissue in the thorax compared to other adult chest scans (e.g., compare to Fig. 6(b)) and the relatively lower amount of subcutaneous fat relative to other adult chest patients of the same geometric size. The head outlier case shown in Fig. 6(c) was 23 mm above the fit line for all head scans. This case presents with cranial metaphyseal dysplasia (excess bone in the head); when compared to a "normal" adult head (compare to Fig. 6(d)), it is obvious that the excess bone is the reason for the higher DW relative to other heads of the same geometric size. We do not show the adult abdomen pelvis outliers that can be seen in Fig. 3(a). We analyzed these cases and noted that they were always below the fit line and corresponded to scans that included more of the thorax region than was typical for a routine abdomen pelvis scan. Clinically, this is warranted in some cases: (a) when a radiologist requests coverage into the thorax, or (b) for patients with lung bases that extend deep within the abdomen or, conversely, a diaphragm/liver dome that extends deep within the thorax.

[Figure 4. Comparison with AAPM Report 220 of DW as a function of DE: our fit for pediatric and adult abdomen pelvis data (blue) with 95% confidence intervals (blue dashed line) and our fit for adult thorax only (green) with 95% confidence intervals (green dashed line). In (b), AAPM Report 220 points for the abdomen (red asterisk) and thorax (red plus sign) are plotted over our fits.]
Therefore, when one scans an abdomen pelvis and includes more of the lungs than is typical for such a scan, DW will decrease. Ikuta et al. compared DW to DE for the thorax and abdomen and reported poor (R2 = 0.51) and good (R2 = 0.90) correlation in those regions, respectively. 25 Our correlation coefficients are much higher than the Ikuta result. We believe that the source of this difference is sample size. We analyzed on average 110 image slices for each of our chest datasets, whereas Ikuta looked at 50 patients and measured four slices per patient. The four slices corresponded to the lung apex, the superior aspect of the aortic arch, the carina, and immediately superior to the diaphragm. Ikuta et al. reported fitting statistics not on the average of their four measurements per scan, but for each measurement point individually. If we compare DW and DE for each point in our chest dataset individually (not plotted in this paper) and perform no examination averaging, our correlation coefficient drops from 0.937 to 0.589 for the adult chest data. This can be understood by looking at Fig. 3(b): the four measurement points taken by Ikuta et al. span the three different anatomical regions within a routine chest scan, the shoulders, thorax, and abdomen. These regions, for the same geometric size, do exhibit relatively large differences in DW. For the chest relative to the abdomen, we expected the DW to be much lower because of the thorax (air-filled regions of the lung). 28 We examined a few adult chest patients' scans and noticed that the shoulders and abdomen were included; it is necessary to include them in a routine adult chest procedure in order to ensure the lung apices and bases are covered. We therefore separated the chest region into subset regions of adult shoulders, adult thorax, and adult abdomen only, as shown in the Fig. 3 results. The takeaway is that the contributions from all body regions included within an examination must be considered when discussing patient size surrogates. This is especially true since x-ray attenuation will change drastically as one moves from the abdomen to the thorax and from the mid-thorax up into the lung apices (e.g., into the shoulders). 32 At such boundaries between patient body regions, vendors' AEC algorithms are likely to greatly change the tube output. We were also surprised to notice that the adult shoulder data appeared to have a much lower DW than the abdomen for the same DE, as shown in Fig. 3(b). Looking at the adult shoulder data is clinically relevant as this body region corresponds to cervical spine imaging, neck CTA imaging, and shoulder imaging. The shoulders are also included in the scanning of other body regions like the chest, as shown in the present analysis. One would expect the shoulders to have a higher DW relative to the abdomen for the same DE because of the bony anatomy of the shoulders and arms, albeit some air-filled regions could also be present due to the lung apices. However, we found that the DW for the shoulders is shifted to the right (e.g., a decreased DW value for a given DE) because the adult shoulders' LAT dimension is relatively large compared to the adult thorax and adult abdomen only, which inflates the geometric size estimate. For the head, we related DW to LAT, AP, and (AP + LAT)/2 in Fig. 5 and DW to DE in Fig. 3(a). We found poor correlation between the size surrogates for the head overall. We noted that our image processing steps for obtaining the geometric size-based metrics AP and LAT (from which DE is derived) included the ears and nose.
Therefore, for patients with ears protruding far from the head, the LAT measurement would increase, predicting the patient to be more attenuating than is desirable for the purposes of SSDE calculations. We noticed the same behavior for the nose and the AP length calculation. We also noted that the angle of the head (defined by a line connecting the orbits and ear canal, i.e., the orbital-meatal line) varied from patient to patient and affected the AP and LAT measurements. We confirmed that we were able to remove the head holder and couch from the geometric size measurements of AP and LAT, so size contributions from these non-patient objects were not present in our data; this was verified by manually reviewing the thresholded and segmented axial images described in Section 2.B. The relatively poor correlation (R2 = 0.812) for DW vs DE for the adult head scan in Fig. 3(a) was not surprising considering the correlation was similar to the one in the work by McMillan et al. (R2 = 0.87). In their work, they used the slice above the eyes (i.e., a single slice), differing from our use of the entire head scan range, which could explain their slightly better correlation coefficient. Another external comparison of our data can be made to that of Anam et al. 35 Our fits in Table 2 can be used to determine the LAT or AP dimension given an AP or LAT measurement, respectively. One limitation of our study is that we did not relate our patient size surrogates directly to dose as other studies have done; although this was not the purpose of this study, it is important to note. Such a comparison will be highly vendor dependent, as each vendor's AEC implementation will respond differently to the size surrogates presented in this paper and additionally to other influences like patient ellipticity and geometric magnification. We did not remove the couch or head holder when calculating DW. We think that this is fine for the following reasons. Our results as shown in Fig. 4(b) agree with the AAPM Report 220 results, and this agreement provides us confidence that a couch removal strategy is not required. Furthermore, Anam et al. 35 demonstrated that the couch has minimal impact on DW.

| CONCLUSION

Following AAPM Reports 204/220 using a clinical dataset containing 884 patients, we made the following specific conclusions:

1. We identified sources of outliers in our data that deviate from the trend lines shown in AAPM Reports 204/220, including: medical conditions causing excess bone formation inside the skull (cranial metaphyseal dysplasia), lack of subcutaneous fat relative to others in the patient population (low BMI), and deviations from typical scan ranges for a particular examination type (e.g., including parts of the thorax in an abdominal pelvis scan).

2. We applied the methodologies of the size surrogates of AAPM Report 204 and AAPM Report 220 to different body regions and age groups, including the head. The head has not previously been reported on using the framework of AAPM Reports 204/220. Our fit lines for DE and DW for the abdomen and chest agreed with AAPM 204 and 220 within our 95% confidence intervals.

3. For the first time to our knowledge, we report patient ellipticity values derived from clinical scans. We report values for adult chest, adult abdomen pelvis, adult head, pediatric abdomen pelvis, adult shoulder, adult thorax, and adult abdomen body regions.
Such a description of patient form/shape will be needed to understand and reverse engineer some CT vendors' "black box" AEC algorithms.

IRB STATEMENT

All data were collected under an IRB-approved protocol in a retrospective manner in which patient consent was waived.

ACKNOWLEDGMENTS

This work is supported by an equipment grant from GE Healthcare. The authors thank Dominic Crotty, Ph.D. and John Boudry, Ph.D. from GE Healthcare for discussions related to this work. The authors give special thanks to Sebastian Schafer, Ph.D., for translating the references written in German.

CONFLICT OF INTEREST

TPS receives research support, is a consultant, and supplied CT protocols under a licensing agreement to GE Healthcare. TPS is the founder of protocolshare.org. CSB receives research support from GE Healthcare. For CSB there are no other disclosures. TPS is on the MAB of iMALOGIX LLC.
Clinical impact of vivax malaria: A collection review

Background

Plasmodium vivax infects an estimated 7 million people every year. Previously, vivax malaria was perceived as a benign condition, particularly when compared to falciparum malaria. Reports of the severe clinical impacts of vivax malaria have been increasing over the last decade.

Methods and findings

We describe the main clinical impacts of vivax malaria, incorporating a rapid systematic review of severe disease with meta-analysis of data from studies with clearly defined denominators, stratified by hospitalization status. Severe anemia is a serious consequence of relapsing infections in children in endemic areas, in whom vivax malaria causes increased morbidity and mortality and impaired school performance. P. vivax infection in pregnancy is associated with maternal anemia, prematurity, fetal loss, and low birth weight. More than 11,658 patients with severe vivax malaria have been reported since 1929, with 15,954 manifestations of severe malaria, of which only 7,157 (45%) conformed to the World Health Organization (WHO) diagnostic criteria. Out of 423 articles, 311 (74%) were published since 2010. In a random-effects meta-analysis of 85 studies, 68 of which were in hospitalized patients with vivax malaria, we estimated the proportion of patients with WHO-defined severe disease as 0.7% [95% confidence interval (CI) 0.19% to 2.57%] in all patients with vivax malaria and 7.11% [95% CI 4.30% to 11.55%] in hospitalized patients. We estimated the mortality from vivax malaria as 0.01% [95% CI 0.00% to 0.07%] in all patients and 0.56% [95% CI 0.35% to 0.92%] in hospital settings. WHO-defined cerebral, respiratory, and renal severe complications were generally estimated to occur in fewer than 0.5% of patients in all included studies. Limitations of this review include the observational nature and small size of most of the studies of severe vivax malaria, the high heterogeneity of the included studies, which were predominantly in hospitalized patients (who were therefore more likely to be severely unwell), and a high risk of bias including small study effects.

Conclusions

Young children and pregnant women are particularly vulnerable to the adverse clinical impacts of vivax malaria, and preventing infections and relapse in these groups is a priority. Substantial evidence of severe presentations of vivax malaria has accrued over the last 10 years, but reporting is inconsistent. There are major knowledge gaps, for example, limited understanding of the underlying pathophysiology and the reason for the heterogeneous geographical distribution of reported complications. An adapted case definition of severe vivax malaria would facilitate surveillance and future research to better understand this condition.

What did the researchers do and find?

• We reviewed the literature on the clinical impact of vivax malaria focusing on children and pregnant women and performed a rapid systematic review of published evidence for severe disease and meta-analysis of selected studies.

• Vivax malaria in young children is associated with severe anemia and increased mortality. Infections in pregnancy lead to maternal anemia, prematurity, fetal loss, and low birth weight.
• We estimated the proportion of patients with World Health Organization (WHO)-defined severe disease as 0.7% [95% confidence interval (CI) 0.19% to 2.57%] in all patients with vivax malaria, and the mortality from vivax malaria as 0.01% [95% CI 0.00% to 0.07%] in all patients and 0.56% [95% CI 0.35% to 0.92%] in hospital settings.

What do these findings mean?

• Reporting of severe vivax malaria has increased dramatically over the last 10 years, but different case definitions have been applied.

• Limitations of our analysis include the high risk of bias of published studies, the majority of which are in hospitalized patients (who are therefore more likely to be severely unwell), while most vivax malaria is managed in primary care. As a result, estimates should be interpreted with caution.

• Prospective cohort studies using an adapted case definition of severe vivax malaria would improve estimates of incidence.

• Preventing vivax malaria and subsequent relapses in children and pregnant women is a priority to reduce morbidity and mortality.

Introduction

Like Plasmodium falciparum, the Plasmodium vivax parasite has an intraerythrocytic life cycle lasting approximately 48 hours from red cell invasion by merozoites to schizont rupture. In synchronous infections, schizogony triggers a febrile paroxysm every 48 hours, that is, on every third day by inclusive reckoning, hence the term "tertian fever" given to early descriptions of patients with vivax or falciparum malaria. Malaria caused by P. vivax was termed benign tertian malaria because the clinical course was generally considered to be mild compared to falciparum malaria, even though occasional descriptions of severe, including cerebral, presentations of vivax malaria have been reported since 1902 [1]. The notion that P. vivax only rarely causes severe malaria has been challenged over the last 15 years [2], and reports of severe disease have been increasing. More evidence for the negative impacts of vivax malaria on young children and pregnant women has emerged over the same time period. Here, we describe the clinical symptoms and major clinical impacts of malaria caused by P. vivax, taking a narrative approach but incorporating a rapid systematic review of the evidence for severe disease.

Rapid review methods

We searched PubMed for articles in English (including those with an English abstract only) documenting severe vivax malaria published until August 1, 2021, using a variety of terms to search title and abstract (S1 Table). Articles were screened by one reviewer (APP), with 20% screened by a second reviewer (PD or EAA). Extra references were identified from other systematic reviews, bibliographic searches, and searching in Google Scholar. This review is reported as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline (S2 Table).

Definitions

We included articles if the authors classified the episode as severe vivax malaria. We noted whether the cases met the case definition of severe (falciparum) malaria published by the World Health Organization (WHO) in 2014 [3], without taking parasite density into account in the definitions of jaundice or anemia (Table 1).

Statistical analysis

The proportion of patients with severe vivax malaria was estimated using a meta-analysis of single proportions, applied after logit transformation to data from studies with clearly defined denominators (case reports, case series, and studies exclusively in pregnant women were excluded); a minimal computational sketch of this approach is given below.
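The following sketch illustrates the kind of logit-scale random-effects pooling of proportions described above. It is our illustration, not the authors' code (the review does not specify its software), and the event counts are invented for demonstration. It uses the DerSimonian-Laird estimator for between-study variance and also returns the I2 statistic discussed next.

```python
import numpy as np
from scipy.special import expit  # inverse logit

def pooled_proportion(events, totals):
    """Random-effects meta-analysis of single proportions on the logit scale
    (DerSimonian-Laird); returns pooled % with 95% CI and the I2 statistic."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    # 0.5 continuity correction for studies with zero or all events
    adj = (events == 0) | (events == totals)
    p = (events + 0.5 * adj) / (totals + adj)
    y = np.log(p / (1 - p))                        # logit-transformed proportions
    v = 1 / (p * totals) + 1 / ((1 - p) * totals)  # approximate within-study variances
    w = 1 / v                                      # fixed-effect (inverse-variance) weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)                # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (v + tau2)                          # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    lo, hi = y_re - 1.96 * se, y_re + 1.96 * se
    return 100 * expit(y_re), 100 * expit(lo), 100 * expit(hi), i2

# Illustrative (not the review's) data: severe cases / total per study
est, lo, hi, i2 = pooled_proportion([3, 12, 1, 25], [410, 1500, 260, 2200])
print(f"pooled: {est:.2f}% [95% CI {lo:.2f}-{hi:.2f}], I2 = {i2:.0f}%")
```

Back-transforming the pooled logit and its confidence bounds with the inverse logit gives estimates expressed as percentages, as presented in the Results.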
Heterogeneity was assessed using the I2 statistic, which quantifies the proportion of total variability attributable to between-study differences [4]. Pooled estimates derived from the fixed-effect and random-effects meta-analyses were expressed as percentages and presented together with the associated 95% confidence intervals (CIs). The estimates were stratified by geographical region and by hospitalization status (when the information was available), defined as hospitalized only or studies in which the denominator was patients diagnosed in community/outpatient settings as well as inpatients. Similar meta-analyses were carried out to estimate the proportion of patients with the 3 leading complications of cerebral malaria, acute kidney injury (AKI), and pulmonary edema/acute respiratory distress syndrome (ARDS). Severe anemia was not included in the meta-analysis because our search strategy was not designed to capture the literature on vivax malaria and anemia comprehensively. Small-study effects were assessed using a linear regression test of funnel plot asymmetry, and bias-adjusted estimates were reported using the trim-and-fill method.

Risk of bias assessment

The following aspects of the studies regarding the patient selection and outcome assessment domains were considered: representativeness of the cohort (or of cases in case-control studies), the methodology used for ascertainment of exposure (microscopy, rapid diagnostic tests, and PCR), and whether the assessment of exposure and outcome was blinded (laboratory procedures blinded or not). Case series and case reports were not included in the risk of bias assessment.

Clinical features of vivax malaria

Studies of experimentally induced malaria and studies of returned United States servicemen provided early detailed descriptions of the clinical features of vivax malaria in nonendemic areas. In a case series of 195 nonimmune adult prison volunteers given sporozoite-induced infections of the Chesson strain of P. vivax by US researchers in the 1940s, the most common symptoms reported, apart from fever, were headache, anorexia, nausea, myalgia, abdominal pain, eye pain, chest pain, cough, asthenia, and dizziness [5,6]. Vomiting, splenomegaly, epistaxis, urticaria, diarrhea, edema (not specified), jaundice, and hepatomegaly were typical clinical observations. A mean drop in hemoglobin (Hb) of 2.9 g/dL was reported over the first 7 days of illness. Five participants who were observed over 15 days of illness experienced a mean Hb drop of 6.4 g/dL. By comparison, a systematic review of 1,975 patients of all ages in endemic areas with vivax malaria treated with chloroquine reported a maximum fall in mean Hb.

Clinical impact of vivax malaria in young children

Symptoms of vivax malaria in children are nonspecific. Fever is typical but not universal and is often accompanied by rigors, vomiting, loose stools, poor feeding and irritability in infants, and sometimes convulsions. Hepatosplenomegaly is common, with an enlarged spleen described more frequently than an enlarged liver [10-14]. In areas where the population acquires immunity following repeated vivax infections, asymptomatic parasitemia is well recognized, and a high proportion of this is submicroscopic [13,15-17]. In a longitudinal study from a low-transmission area along the Thai-Myanmar border from 1991 to 1992, vivax malaria was most common in young children, and 43% of infections in children aged 4 to 15 years were asymptomatic, compared to only 16% of falciparum infections.
The incidence of P. vivax declined with increasing age, while P. falciparum incidence peaked between 20 and 29 years [13]. A cohort study in 2006 of 264 children aged 1 to 3 years in a higher transmission setting in Papua New Guinea found that the incidence of P. vivax decreased with age, while the incidence of P. falciparum increased until 30 months of age and then remained stable. There was marked seasonal variation in P. falciparum infections, but not P. vivax infections, attributed to the propensity of P. vivax to relapse [18]. Relapse is common, and the periodicity depends on the infecting strain, the entomological inoculation rate, the number of sporozoites inoculated, the degree of immunity, and the treatment administered [19,20]. In Papua, Indonesia, young children experience a high number of recurrent episodes of vivax malaria early in life, and P. vivax in this area is highly chloroquine resistant. Observational data on 7,499 children below 5 years of age who presented to a large hospital between 2004 and 2013 and were diagnosed with vivax malaria showed that they were at greater risk of recurrent malaria than children with falciparum malaria [21]. Among 1,375 children with P. vivax infection admitted to hospital for parenteral antimalarial treatment, 28 (3.4%) died within 30 days. In the same analysis, the risk of dying within 30 days among children with P. vivax infection who were undernourished (weight-for-age or weight-for-height/length z scores below −3 SD of the standard reference) was 15.5% (15/97). Children were followed up for 1 year, and the risk of dying between 31 and 365 days after the initial presentation in children diagnosed with vivax malaria was 1.0% (95% CI 0.8 to 1.3). The risk increased to 3.9% (3/77) in undernourished children. In another study, from Venezuela, 26% of 78 children requiring hospitalization for vivax malaria were malnourished [22]. The relationship between malaria and nutritional status is complex and not well understood, with different studies suggesting that malnutrition may result from malaria, may increase susceptibility to malaria, or may be protective against malaria [21,23,24]. A serious consequence of repeated episodes of vivax malaria in children is severe anemia [25,26]. Another study from Papua, Indonesia found that infants aged 3 months or younger with vivax malaria were at higher risk of severe malarial anemia (Hb <5 g/dL) than infants from the same population with falciparum malaria (odds ratio 2.4; 95% CI 1.0 to 5.9; p = 0.041) [12]. Malaria caused by P. vivax has been associated with cognitive impairment and poor school performance. In a study of children aged 6 to 14 years in Sri Lanka, who were monitored for 6 years, repeated episodes (≥3) of vivax malaria were associated with impaired performance in language and mathematics compared to children experiencing fewer than 3 episodes [27]. The same research team went on to conduct a double-blind, placebo-controlled trial of chloroquine prophylaxis in 587 children, using attendance rates and language and mathematics scores to evaluate the intervention [28]. The children who received chloroquine experienced fewer episodes of malaria (a 55% reduction in incidence), were less likely to be absent from school, and scored higher in language and mathematics.

Clinical impact of vivax malaria in pregnancy

Vivax malaria in pregnancy is associated with fetal loss, maternal anemia, low birth weight, and preterm birth, with occasional reports of congenital infection [29,30].
Asymptomatic infection also occurs in pregnant women [31]. Much of the evidence for the impact of vivax malaria in pregnancy comes from large observational cohort studies of women receiving antenatal care in Asia and has accrued gradually [32,33]. A retrospective analysis of the records of 17,613 pregnant women in the first trimester who received antenatal care in clinics along the Thailand-Myanmar border, where falciparum and vivax malaria were coendemic, between 1986 and 2010 found that the odds of miscarriage increased in women with asymptomatic (adjusted odds ratio 2.70, 95% CI 2.04 to 3.59) or symptomatic (3.99, 3.10 to 5.13) malaria, with similar results for both P. falciparum and P. vivax [34]. A systematic review and meta-analysis examining the risk of stillbirth related to malaria in pregnancy, which included 59 studies, found that vivax malaria at the time of delivery, but not earlier during the pregnancy, increased the odds of stillbirth (2.8 [95% CI 0.8 to 10.2]) [35]. The relationship between vivax malaria and stillbirth was explored further in a retrospective analysis of a cohort of 61,836 women attending antenatal clinics along the Thailand-Myanmar border between 1986 and 2015, of whom 9,350 (15%) had malaria in pregnancy and 526 (0.8%) had stillbirths [30]. The hazard of antepartum (and not intrapartum) stillbirth increased 2.2-fold (95% CI 1.1, 4.3; p = 0.021) among women who had symptomatic vivax malaria in the third trimester. Between 2004 and 2006, a cross-sectional study in Papua, Indonesia examined the relationship between malaria at delivery and pregnancy outcomes in 2,601 women [37]. A decrease in birth weight associated with maternal P. vivax infection of 108 g (95% CI 17.5 to 199) was reported. P. vivax infection was also associated with an increased risk of moderate, but not severe, maternal anemia compared to no malaria, with a reduction in Hb concentration of 0.4 g/dL (95% CI 0.1 to 0.7 g/dL). A prospective observational study of 1,180 Brazilian pregnant women over 1 year from 2015 to 2016, 8% of whom had malaria (75% caused by P. vivax), showed that malaria episodes were associated with decreased birth weight and length and maternal anemia [38]. Birth weight z-scores showed a mean reduction of 0.35 (95% CI 0.14 to 0.57) in women with malaria in the second or third trimester compared to those with no malaria. The corresponding mean reduction in birth length z-scores was 0.31 (95% CI 0.08 to 0.54). Women with malaria in the second or third trimester also had a Hb concentration on average 0.33 g/100 mL lower (95% CI 0.05 to 0.62 g/100 mL). Of 637 cord blood samples tested by PCR, P. vivax was found in 4 cases. Increased infant mortality has been associated with maternal vivax malaria during pregnancy. In a prospective study from a low-transmission setting along the Thailand-Myanmar border of a cohort of 1,495 mothers followed weekly throughout pregnancy and then with their infants until 1 year of life, malaria during pregnancy (any species) increased neonatal mortality by causing low birth weight, and fever in the week before delivery was also associated with increased infant mortality [39]. To date, interventional chemoprevention studies in pregnancy have not demonstrated a reduction in P. vivax malaria-related adverse pregnancy outcomes.
In a randomized, double-blind, placebo-controlled trial of weekly chloroquine prophylaxis in 1,000 pregnant women on the Thai-Myanmar border, no women who received chloroquine had an episode of vivax malaria, compared to 10% of women in the placebo arm who experienced at least one [40]. Despite preventing episodes of vivax malaria, the proportions of women with anemia (hematocrit <30%) during the study were similar in the chloroquine and placebo arms at 71.2% (336/472) and 71.4% (342/479), respectively (p = 0.76). No difference was observed between the groups in mean birth weight, the proportion of low birth weight infants, or preterm delivery.

Placental changes with vivax malaria. Histopathological studies of placentas from women with vivax malaria during pregnancy have generally shown less evidence of intervillous inflammation or placental involvement in comparison to falciparum malaria [41-44]. In an ex vivo study reported from Thailand, P. vivax isolates were shown to adhere to chondroitin sulfate, the main receptor for adhesion of P. falciparum parasitized erythrocytes in the placenta [45].

Severe vivax malaria

Results of rapid systematic review. The search for all severe manifestations of vivax malaria yielded 2,053 results. After screening title, abstract, and full text and removing duplicates, 1,696 articles were excluded (S1 Fig), leaving 423 for quantitative synthesis. There were 208 case reports or case series, 100 articles reporting prospective studies, and 115 retrospective studies. Fewer than 7 articles were published per year until 2002; the numbers then gradually increased, peaking at 60 in 2013 (Fig 1). A total of 11 articles did not present the number of patients, only the number of severe complications; therefore, we cannot give the precise number of patients in our review. A total of at least 11,658 patients with at least 1 manifestation of severe vivax malaria were reported from 423 articles (Fig 2, S3 Table) across 45 countries, with over 60% of cases reported from 2 countries: India (43.9%, n = 5,114) and Indonesia (23.5%, n = 2,743). To confirm the diagnosis of P. vivax monoinfection, microscopy only was used in 35% (148) of articles, molecular methods were used to confirm microscopy findings in 20% (85), rapid diagnostic tests were used alone or in combination to confirm microscopy in 33% (139), PCR only was used in 4, and diagnostic methods were not available in 8% (35). There were only 5 cases with a reported parasite density in excess of 100,000/μL. Laboratory diagnostics to exclude other comorbidities or concurrent illness, e.g., blood culture and dengue rapid diagnostic tests, were performed to varying degrees of comprehensiveness in 224 (53%) articles and not done or not mentioned in 189 (44%).

Results of meta-analysis. A total of 92 studies were eligible for inclusion in the meta-analysis using both random-effects and fixed-effect models (S4 Table). Heterogeneity, assessed using the I2 statistic, was very high. From our risk of bias assessment focusing on 3 domains: representativeness of the population was at high or unclear risk of bias for 42% of studies; half of the studies were retrospective and judged to be at high risk of bias; and for ascertainment of exposure, which took diagnostic methods into account, 72% were at moderate or unclear risk of bias (S5 Table). Most of the studies in the meta-analysis were from Asia (n = 67), and the majority (69) were conducted in hospital settings.
In terms of patient numbers, of the 4,547 patients with severe malaria included, approximately one-third were from Latin America. Only 85 studies could be assessed in the meta-analysis to estimate the proportion with severe vivax malaria, because not all studies reported the number of patients with severe malaria, only the number of complications (Table 2). The random-effects estimate of the proportion of patients with WHO-defined severe disease was 0.7% [95% CI 0.19% to 2.57%] in studies that included all patients diagnosed with vivax malaria (inpatient and outpatient) and 7.11% [95% CI 4.30% to 11.55%] in studies of hospitalized patients (Table 2). Estimates using author definitions of severe malaria, which included severe thrombocytopenia without bleeding, were approximately 2-fold higher. Estimates adjusted for small-study effects for all proportions are shown in the Supporting information (S6 Table). There were 4 studies exclusively in pregnant women (S7 Table). In a random-effects meta-analysis of these studies, including 225 women, we estimated the proportion with severe malaria as 4.14% [95% CI 0.82% to 18.43%]. Further estimates of the proportion with severe malaria stratified by hospitalization status within each region are presented as a Supporting information table (S8 Table).

Cerebral malaria

We found 163 articles describing 1,502 patients with cerebral manifestations of vivax malaria. Among the cerebral complications in these patients, 645 (42.9%) fulfilled WHO criteria for cerebral malaria (S3 Table), 278 (18.5%) had seizures only (with no other features of cerebral malaria), and 579 (38.6%) had cerebral symptoms such as altered consciousness or sensorium, confusion, and/or low but unquantified Glasgow Coma Scores. Over 75% of cerebral malaria cases were reported from India (53.7%; 807/1,502), Colombia (11.9%; 178/1,502), or Indonesia (10.2%; 153/1,502) (Fig 2). A small number of articles described retinal changes, including hemorrhages, cotton wool spots, and papilledema, occurring in complicated vivax malaria [46]. Vivax malaria has also been reported as presenting as a cerebral infarct or cerebral venous sinus thrombosis [47,48]. From these reports, 90 articles were eligible for inclusion in the meta-analysis of cerebral vivax malaria. The random-effects estimate of the proportion of patients with WHO-defined cerebral malaria was 0.62% [95% CI 0.33% to 1.18%] among hospitalized patients and 0.02% [95% CI 0.00% to 0.13%] among patients from studies that included all patients diagnosed with vivax malaria (inpatient and outpatient) (Table 3).

[Table 3 notes: (1) Hb <5 g/dL for patients under 12 years old and <7 g/dL above that age. (2) Pulmonary edema defined by WHO criteria (CXR confirmed, or SpO2 <92% plus respiratory rate >30) or ARDS.]

Respiratory manifestations of severe vivax malaria

Of 1,202 patients with respiratory complications reported from 97 articles, 258 (21.5%) fulfilled WHO criteria for pulmonary edema (chest radiograph confirmed, or O2 saturations <92% with respiratory rate more than 30/min); 330 (26.6%) were reported to have ARDS; and 593 (48.6%) had other respiratory complications such as respiratory distress (not otherwise specified), pleural effusion, and pneumothorax (Fig 2). From the meta-analysis of respiratory complications in 87 studies (Table 3), the proportion of patients with WHO-defined pulmonary edema was 0.66% [95% CI 0.38% to 1.13%] in studies of hospitalized patients.
[Table 2. Estimating the proportion of vivax malaria cases classified as severe in studies eligible for inclusion in the meta-analysis. (a) Studies carried out exclusively among pregnant women are excluded; studies that include few or some pregnant women were not excluded. (b) Studies that predominantly reported data from outpatient settings were also included; estimates derived from meta-analysis of proportions. (c) "Other" includes studies that did not mention the settings and studies in which the number of patients who were hospitalized or treated as outpatients could not be reliably extracted. (d) Estimates stratified by settings within each region are presented in an additional file (S8 Table); when region and settings were adjusted using a multivariable logistic regression model with study as the random effect, the difference between the settings …]

The timing of onset of respiratory symptoms was reported relative to malaria treatment: symptoms developed before malaria therapy in 96 patients and after it in 84 patients, and for the remainder the time of onset was not specified.

Lung histology. Histological findings from one fatal case report from India included alveolar damage and hyaline membrane formation, typical of ARDS [49]. A Brazilian postmortem series of 13 patients found ARDS and pulmonary edema in 2 and 5 cases, respectively. The pathologist reported scattered parasitized red blood cells in capillaries in the lungs of one of the patients with ARDS.

Renal manifestations of severe vivax malaria

Among 1,694 patients with biochemical evidence of impaired renal function, 915 (54.0%) met WHO criteria for AKI (S3 Table), 233 (13.8%) were classified using the Kidney Disease: Improving Global Outcomes (KDIGO) [50] or Risk, Injury, Failure, Loss, End-Stage Renal Disease (RIFLE) [51] classifications, and 546 (32.2%) were categorized using local definitions with different urea or creatinine cutoffs. More than 75% (1,272/1,694) of renal complications were reported from India (Fig 2). From the random-effects meta-analysis of renal complications from 89 eligible articles, we estimated that 0.51% [95% CI 0.26% to 0.98%] of patients met WHO criteria for AKI in studies of hospitalized patients and 0.01% [95% CI 0.00% to 0.10%] in studies that included all patients diagnosed with vivax malaria (inpatient and outpatient) (Table 3).

Renal histology. We found reports of renal biopsy results from 45 patients with renal impairment and vivax malaria (S9 Table). The most common histological diagnosis was acute tubular necrosis (n = 20), with a second diagnosis reported in 9 cases; cortical necrosis was reported in 16, thrombotic microangiopathy in 12, and mesangiocapillary glomerulonephritis in 7 patients.

Mortality from vivax malaria

A total of 553 deaths were associated with severe vivax malaria (S3 Table). Of these, 334 were from 75 studies eligible for inclusion in the random-effects meta-analysis, and the overall mortality estimates are shown in Table 4. There were no reports of deaths from the African region.

[Table 4. Estimating mortality in studies eligible for inclusion in the meta-analysis.]

Postmortem evidence of severe malaria

In a retrospective review of the postmortem findings of 17 patients who had died with vivax malaria between 1996 and 2010 in a single centre in Brazil, it was concluded that vivax malaria was the probable cause of death or contributed to the fatal outcome in 13 cases, aged between 8 and 88 years [52]. Pulmonary edema or ARDS were the leading causes of death (n = 7), coexisting with splenic rupture in 2 patients.
The other causes of death were multiple organ dysfunction (n = 3, including one with a cerebrovascular accident), primaquine-induced hemolytic anemia in the context of glucose-6-phosphate dehydrogenase (G6PD) deficiency (n = 2), and isolated splenic rupture (n = 1). A total of 7 patients had comorbidities such as hepatic cirrhosis or pulmonary emphysema. Postmortem findings consistent with ARDS were reported from one fatal case in India [49]. Postmortem histology from one fatal case of "infection with (the) large tertian parasite" in 1902 [1] found a spleen containing numerous large parasites, tubular degeneration in the kidneys, pigment in lung capillaries but no parasites, and very little evidence of pigment or parasitized erythrocytes in the brain. Another postmortem study, of a Vietnamese patient with severe vivax malaria who died suddenly but was fully conscious on presentation, found no evidence of sequestration of parasitized erythrocytes in the cerebral microvasculature [53].

Pathophysiology of severe vivax malaria

Theories as to the pathophysiological processes underlying severe disease include immune dysfunction [54-59], parasite strain-specific virulence [60-62], inflammation triggered by cytokines, and endothelial cell dysfunction [63]. A proteomic study testing serum from Indian patients with vivax malaria of differing levels of severity, as well as controls, has given signals of possible involvement of oxidative stress pathways, cytoskeletal regulation, lipid metabolism, and complement cascades [64]. Waning immunity as transmission declines and the increasing resistance of P. vivax to chloroquine have been put forward as possible explanations for increasing reports of severe vivax malaria [54,65]. Increased expression of pvcrt-o (P. vivax chloroquine resistance transporter-o), pvmdr-1 (P. vivax multidrug resistance gene-1), and vir (variant interspersed repeats) genes has been shown in small numbers of patients with severe compared to uncomplicated vivax malaria [66-68]. Studies of patients with malaria have shown that P. vivax evokes relatively higher concentrations of pro- and anti-inflammatory cytokines, such as tumor necrosis factor (TNF), and other markers of host immune response than P. falciparum at similar parasite densities [69,70]. While these findings can explain the lower pyrogenic parasite densities of vivax infections compared to falciparum, TNF is not thought to play a causal role in coma and the cerebral manifestations of severe falciparum malaria [71]. Some in vitro studies have demonstrated cytoadhesive properties of P. vivax parasitized erythrocytes [72]; however, cytoadherence and/or sequestration have not been demonstrated in vivo, and there are no published postmortem studies of patients diagnosed with cerebral vivax malaria. Binding of uninfected erythrocytes to parasitized erythrocytes (rosetting) does occur, which may lead to impaired circulation [73,74]. Modest (2-fold) increases in peripheral parasite densities in severe disease are accompanied by larger (7-fold) increases in circulating parasite lactate dehydrogenase, suggesting the possibility of parasitized erythrocytes accumulating elsewhere [75]. Impairment of endothelial function has been shown in Malaysian patients with vivax malaria [63]. Endothelial injury and excessive inflammation could also explain ARDS presentations.
In a study of pulmonary gas transfer in Indonesian adults with falciparum or vivax malaria, diffusing capacity for carbon monoxide (DLCO) in patients with vivax malaria was 93% (95% CI 87 to 104) of predicted values at baseline and declined over the 2 weeks after treatment to 84% (95% CI 72 to 95) [76]. Evidence for cytoadherence in the pulmonary microvasculature is not compelling, although scattered parasitized red blood cells in pulmonary capillaries have been described in one patient postmortem, and P. vivax has been shown to adhere to human lung microvascular endothelial cells (HMVEC-L) in vitro, albeit to a lesser degree than P. falciparum [52,77]. Increased concentrations of platelet-associated IgG (PAIgG) have been reported in patients with thrombocytopenia and vivax malaria, although the significance of this is unclear [78]. It has also been postulated that increased levels of macrophage colony-stimulating factor in vivax and falciparum malaria may result in increased macrophage-mediated platelet destruction and thrombocytopenia [79]. The anemia associated with P. vivax malaria has been reviewed in detail by Douglas and colleagues [25]. Mechanisms by which malaria leads to anemia include destruction of parasitized erythrocytes, destruction of nonparasitized erythrocytes, and dyserythropoiesis. Unlike in falciparum malaria, it is unusual for parasite densities to exceed 2% of erythrocytes infected in acute vivax malaria [80]. P. vivax has a predilection for invading younger erythrocytes, which has been thought to be one reason why unrestricted parasite multiplication does not occur as it does for P. falciparum. However, the severity of anemia may be comparable, or worse. Collins and colleagues postulated that destruction of reticulocytes as they are produced could account for this [81]. However, Kitchen observed in another study, published in 1938, that even when reticulocyte numbers increase, only a fixed proportion will be infected; therefore, other factors may be more important [82]. Survival of uninfected red cells is shortened after an episode of malaria [81]. A mathematical modeling study using parasitemia and Hb data from patients receiving P. falciparum malariatherapy for neurosyphilis estimated that 8.5 nonparasitized erythrocytes were destroyed for each parasitized erythrocyte [83]. In malaria caused by P. vivax, the number may be even higher, given that parasite densities are lower. However, there is only indirect evidence for this. In a study of mean erythrocyte survival after P. falciparum or P. vivax parasitemia in 35 Thai patients who received transfusions of either labeled autologous or donor erythrocytes once their parasitemia had cleared, a similar reduction in erythrocyte survival was seen regardless of the infecting species, and the reduction did not appear to be antibody- or complement-mediated [84]. In a Colombian study comparing patients with uncomplicated and complicated (defined as platelets <50,000/μL, abnormal liver enzymes, or hypoglycemia) vivax malaria, there was an association between autoimmune antibodies and Hb concentrations in complicated disease. There was also a correlation with levels of atypical memory B cells, which have been hypothesized to contribute to anemia in malaria by secreting antibodies against phosphatidylserine on uninfected erythrocytes [85]. In malaria endemic areas, the degree of anemia associated with P.
vivax infections depends on transmission intensity, patient age (and by extension the degree of acquired immunity), strain relapse periodicity, and choice of antimalarial treatment; e.g., more slowly eliminated drugs will suppress the first relapse, allowing time for hematological recovery, while treatment failure due to resistance will impede recovery.

Discussion
The global burden of vivax infections is vast, with estimates ranging from 7 to 14 million cases annually, leading to a substantial clinical impact, particularly in young children and pregnant women [15,86]. While it is now generally accepted that vivax malaria may manifest as severe disease, uncertainties remain. Reports of severe vivax malaria have increased over the last decade, but it is unclear to what extent this represents a genuine increase in case numbers, increased recognition of the disease, or misattribution in patients with other diagnoses and incidental parasitemia. In Manaus, Brazil, where a consistent case definition has been used over time, the incidence of severe disease has been increasing [87]. Increasing chloroquine resistance might be expected to lead to more severe disease, although in Papua, Indonesia, switching from chloroquine to artemisinin combination therapy did not lead to a mortality reduction in patients hospitalized with vivax malaria, suggesting that relapse prevention may be more important for reducing deaths [88]. This is supported by another analysis of more than 20,000 patients with vivax malaria in the same location, showing that P. vivax infection was a risk factor for re-presentation to hospital and contributed to increased mortality [89]. There are major disparities in the frequencies with which certain complications of vivax malaria are reported from different parts of the world (Fig 2) and a near absence of reporting of severe vivax malaria from some Southeast Asian countries, e.g., Thailand, Myanmar, Vietnam, and Lao PDR. Strain-specific virulence has been described both in induced malaria studies [90] and in natural infections, such as in the former Union of Soviet Socialist Republics, where outbreaks of "fulminant malaria" caused by P. vivax were well described in the 1940s [91]. While this may be the explanation for the heterogeneous distribution of severe disease, the fact that asymptomatic vivax parasitemia is common means that a degree of misattribution is inevitable. This has been shown in individual fatal cases for whom exhaustive attempts were made to rule out other causes [52,92]. It is also suggested by the variable histological findings from patients with vivax malaria who have had a renal biopsy. Only about half of the studies included in this review described attempts to exclude other diagnoses, e.g., sepsis. A 1-year prospective study from Kolkata examined the rate of concomitant bacteremia with P. vivax parasitemia and found that 6/89 (6.7% [95% CI 3.1% to 13.9%]) of patients with P. vivax infection were bacteremic [93]. Our estimate of mortality resulting from P. vivax infection from the random-effects meta-analysis ranged from 0.01% [0.00% to 0.07%] (studies of all patients, i.e., outpatient and inpatient cases) to 0.56% [0.35% to 0.92%] in studies of hospitalized patients only. These results were similar to a case fatality rate estimate of 0.3% (353/46,411) from a fixed-effect meta-analysis in another systematic review, which used a modified WHO definition of severe malaria that included thrombocytopenia [94].
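As an aside on the interval estimates quoted in this section, the Kolkata bacteremia figure above (6/89, 95% CI 3.1% to 13.9%) is reproduced exactly by a Wilson score interval. The review does not state which interval method was used, so treating it as a Wilson interval is an assumption; a minimal sketch:

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half) / denom, (centre + half) / denom

lo, hi = wilson_ci(6, 89)
print(f"6/89 = {6/89:.1%}; 95% CI {lo:.1%} to {hi:.1%}")
# -> 6.7%; 95% CI 3.1% to 13.9%, matching the reported interval
```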
However, these estimates should be interpreted with caution given the risk of bias identified in the assessment of included studies. Detailed studies from a small number of research groups have so far not identified a common pathophysiological process underlying the different complications of vivax malaria. Sequestration of parasitized red blood cells in the microcirculation, which is the pathophysiological hallmark of severe falciparum malaria, has not been demonstrated. ARDS has been reported with infection by all Plasmodium species causing malaria in humans. Incidence rates of ARDS in severe falciparum malaria have been shown to vary between 2% and 25%, and ARDS in falciparum malaria is more likely to lead to a fatal outcome than ARDS with P. vivax [95,96]. The need for an adapted case definition for severe vivax malaria has been recognized for over a decade. Reporting by different groups is inconsistent, impeding the gathering of reliable incidence data and the conduct of research. A Brazilian study looking at predictors of intensive care unit (ICU) admission in patients with vivax malaria found that many of the criteria in the severe falciparum malaria definition were predictive, with the exception of hyperbilirubinemia [87]. The importance of thrombocytopenia as a diagnostic criterion for severe vivax malaria has been debated [97]. Lampah and colleagues from Papua, Indonesia, reported the mortality risk of P. vivax infection in patients presenting to a referral hospital with severe thrombocytopenia to be 1.5% (25/1,650) with platelet counts below 50 × 10⁹/L and 3.6% (6/168) below 20 × 10⁹/L, and proposed the latter threshold as a severity criterion [98]. The utility of this in routine practice would need to be demonstrated, since complete blood counts are not available in many outpatient settings where malaria is diagnosed and treated [98]. Conversely, incorporating platelet counts into case definitions of severe falciparum malaria, using >200,000 per μL to rule out severe malaria, has been proposed as a means to improve the specificity of clinical and parasitological diagnosis in a mathematical modeling study. The results suggested that one-third of 2,220 Kenyan children included in studies had been misdiagnosed as having severe malaria [99]. Very few articles in our review reported platelet counts associated with severe syndromes, so we were unable to explore this. Limitations of our rapid systematic review include restricting our search to the English language and omitting to search other databases such as the Literatura Latino-Americana e do Caribe em Ciências da Saúde (LILACS) database. A systematic review of the Brazilian published and gray literature in 2012 described a similar array of complications from P. vivax infection to those reported here [100]. The literature is dominated by case series and reports and by hospital-based studies, increasing the risk of bias toward more severely ill patients. Our formal risk of bias assessment of studies included in the meta-analysis indicated a moderate to high risk of bias. There was also evidence of small-study effects, which was shown in an earlier systematic review by Naing [101].

Conclusions
Vivax malaria has emerged from the shadow of falciparum malaria over the last decade with improved recognition of the negative clinical impacts associated with this relapsing infection. Preventing severe anemia associated with relapsing vivax malaria, particularly in very young children, is a priority to reduce morbidity and mortality in this group.
Progress has been slowed by low uptake of hypnozoitocidal treatment with 8-aminoquinoline drugs due to fears of hemolysis in patients with G6PD deficiency, compounded by lack of access to G6PD testing [102]. Prevention of infection with P. vivax in pregnancy may need to target young women preconception in order to prevent the risk of relapse during pregnancy and the consequent negative impacts of maternal anemia, increased fetal loss, and low birth weight [103]. Severe clinical presentations of vivax malaria are now well recognized, although knowledge gaps persist in understanding the underlying pathophysiology of different complications and the apparent heterogeneity in incidence worldwide. An adapted case definition of severe vivax malaria would facilitate surveillance and future research.
Transcriptional profiles of PBMCs from pigs infected with three genetically diverse porcine reproductive and respiratory syndrome virus strains

Porcine reproductive and respiratory syndrome virus causes reproductive failure in sows and respiratory disease in young pigs, and has been considered one of the most costly diseases to the worldwide pig industry for almost 30 years. This study used microarray-based transcriptomic analysis of PBMCs from experimentally infected pigs to explore the patterns of immune dysregulation after infection with two East European PRRSV strains from subtype 2 (BOR and ILI) in comparison to a Danish subtype 1 strain (DAN). Transcriptional profiles were determined at day 7 post infection in the three tested groups of pigs and analysed in comparison with the expression profile of the control group. Microarray analysis revealed differential regulation (>1.5-fold change) of 4,253 and 7,335 genes in groups infected with the BOR and ILI strains, respectively, and of 12,518 genes in pigs infected with the Danish strain. Subtype 2 PRRSV strains showed greater induction of many genes, especially those involved in innate immunity, such as interferon-stimulated antiviral genes and inflammatory markers. Functional analysis of the microarray data revealed significant up-regulation of genes involved in processes such as the acute phase response and granulocyte and agranulocyte adhesion and diapedesis, as well as down-regulation of genes involved in pathways engaged in protein synthesis, cell division, and B and T cell signaling. This study provides insight into the host response to three different PRRSV strains at a molecular level and demonstrates variability between strains of different pathogenicity. Electronic supplementary material: the online version of this article (10.1007/s11033-018-4204-x) contains supplementary material, which is available to authorized users.

Introduction
Porcine reproductive and respiratory syndrome (PRRS) is a viral disease of significant economic impact on the swine industry worldwide. Clinically, the disease manifests as reproductive disorders in sows and respiratory lesions and poor growth performance in growing pigs. The etiological agent of the disease, PRRS virus (PRRSV), is an enveloped, positive-sense single-stranded RNA virus classified in the order Nidovirales, family Arteriviridae. Two genotypes, Type 1 and Type 2, sharing approximately 60% genetic similarity, have been described [1]. A newly proposed classification denotes both types as different species within the Arteriviridae family (https://talk.ictvonline.org/taxonomy/). Type 1 can be further divided into at least four genetic subtypes, namely the Pan-European subtype 1 and subtypes 2, 3 and 4, represented by strains circulating in Eastern European countries [2]. Accumulating evidence suggests the existence of additional subtypes. However, genetic subtyping based on a small genomic fragment of ORF5 and ORF7 can be affected by genetic recombination [3-5]. PRRSV shows a remarkably high degree of genetic variation, translating into high antigenic and pathogenic variability. A high frequency of recombination also contributes to occasional changes in biological properties and virulence [6,7]. The most striking example is a highly pathogenic Type 2 PRRSV variant with a 30 aa discontinuous deletion within the nsp2 protein that emerged in 2006 in China, devastating the swine industry in several Asian countries [8,9].
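As background to the ORF5/ORF7-based subtyping mentioned above, the underlying quantity is pairwise nucleotide identity. The sketch below is a simplified illustration with hypothetical, pre-aligned fragments; real subtyping relies on full alignments and phylogenetic analysis, not raw identity alone.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity of two pre-aligned, equal-length sequences;
    columns where either sequence has a gap ('-') are skipped."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if "-" not in (a, b)]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Hypothetical aligned ORF5 fragments (illustration only)
frag_1 = "ATGTTGGGGAAATGCTTGACC"
frag_2 = "ATGTTGAGGAAATGTTTGACC"
print(f"{percent_identity(frag_1, frag_2):.1f}% nucleotide identity")
```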
Recent animal infection studies have indicated that some East European strains of PRRSV Type 1 are characterized by higher pathogenicity than the mild disease produced by infection with subtype 1 strains [10-12]. The differences in pathogenicity between PRRSV strains were hypothesized to originate from different degrees of immunomodulatory properties. The virus utilizes a range of mechanisms to influence the immune response, including weak stimulation of interferon production and slow development of cell-mediated immunity, inhibition of the expression of pro-inflammatory cytokines, and a weak and delayed neutralizing response [13]. As a result, the course of infection may advance to a chronic stage lasting up to 21 weeks [14]. Insufficient stimulation of an immune response and a high level of genetic variability also constitute key problems in the development of efficient vaccines. None of the currently used vaccines provides full protection against infection, and their effectiveness is often compromised against heterologous strains [15]. Deeper knowledge of the core mechanisms of host-pathogen interactions and the relevance of genetic diversity is necessary to facilitate the development of efficient tools for PRRS control. Interestingly, previous studies revealed that the pattern of cytokine expression after infection with the prototype subtype 3 strain Lena differed from that of subtype 1 strains [16,17]. Together with observations from a study on another subtype 3 strain, SU1-bel [11], where a greater interferon-γ response was noted, those results suggested that the observed higher pathogenicity may be a result of an enhanced inflammatory immune response. Therefore, the aim of the present study was to explore and compare the patterns of immune response after infection with two East European PRRSV strains, genetically classified as subtype 2 but showing different pathogenicity, and a classical Danish subtype 1 strain. For this, microarray-based transcriptomic analysis of PBMCs collected at the peak of viremia from pigs infected with both subtypes was performed.

Animals and infection with PRRSV strains
The blood samples used in the study were collected during an animal challenge experiment described elsewhere [18]. The pigs used in the study originated from a high-health pig herd maintained by the Institute and were free from infections with the following pathogens: encephalomyocarditis virus, hepatitis E virus, porcine circovirus type 1 and type 2, porcine cytomegalovirus, porcine epidemic diarrhoea virus, porcine parvovirus type 1, porcine respiratory coronavirus, PRRSV Type 1 and Type 2, influenza A virus, transmissible gastroenteritis virus, Actinobacillus pleuropneumoniae, Bordetella bronchiseptica, Brachyspira hyodysenteriae, Brachyspira pilosicoli and Brucella spp., as determined using in-house standard diagnostic methods. In short, piglets from three sows were divided into four experimental groups, ensuring that each litter was equally represented in every group. Three groups of 8-week-old pigs were infected, respectively, with one of three PRRSV Type 1 strains: the subtype 1 strain 18794 (DAN), isolated in 1993 in Denmark [19]; a Russian isolate, ILI6 (ILI); and a Belarusian isolate, BOR59 (BOR), from 2009, the latter two classified as subtype 2 strains based on ORF5 sequence.
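The inoculum titres given in the next paragraph are reported on a log10 scale; the corresponding absolute infectious doses per 2 ml inoculum follow directly, as in this minimal sketch (figures taken from the text):

```python
# Convert log10 TCID50/ml titres to total infectious dose per inoculum
titres_log10 = {"DAN": 5.4, "ILI": 6.8, "BOR": 4.4}  # log10 TCID50/ml
volume_ml = 2.0

for strain, titre in titres_log10.items():
    dose = volume_ml * 10**titre
    print(f"{strain}: {dose:.2e} TCID50 per 2 ml inoculum")
# DAN: 5.02e+05, ILI: 1.26e+07, BOR: 5.02e+04
```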
Pigs were inoculated intranasally with 2 ml of virus inoculum prepared from the 5th (titre of 5.4 log10 TCID50/ml), 3rd (titre of 6.8 log10 TCID50/ml) and 2nd (titre of 4.4 log10 TCID50/ml) passage in porcine alveolar macrophage culture for the DAN, ILI and BOR strains, respectively. The fourth, PRRSV-negative control group was sham-inoculated with 2 ml of Eagle's medium. Individual pigs were subjected to daily clinical examination and measurement of rectal body temperature. The severity of clinical lesions was assessed based on a scoring system adapted to PRRS [18]. Overall well-being, respiration, eye disorders and appetite were each scored from 0 (normal condition) to 3 (severe disorder). The scores for individual pigs were added up to a cumulative clinical score (CS) per day. During the experiment no mortality was recorded, and none of the animals displayed the acute clinical signs defined as endpoint criteria. Euthanasia of the pigs was performed on 22-24 dpi by intravenous injection of pentobarbital (50 mg/kg) followed by exsanguination by cutting the arteria axillaris. Blood samples were collected on -2, 0, 3, 7, 10, 14 and 21 days post infection (dpi) from the anterior vena cava. The peak level of viremia in all experimental groups was observed at 7 dpi. Moreover, at the same time point the most severe clinical lesions (measured by an objective clinical scoring system) were observed in the BOR group, while in the other groups no or very mild clinical lesions were observed, as described in detail in Stadejek et al. [18]. Therefore, samples collected at 7 dpi were submitted for further microarray study. EDTA-stabilized blood samples were collected from five pigs from each group, and peripheral blood mononuclear cells (PBMC) were isolated by gradient density centrifugation using Histopaque 1077 (Sigma-Aldrich), frozen and stored at -80 °C until further processing.

Microarray analysis
Transcriptional profiles of PBMC from infected and control piglets were analyzed using oligonucleotide microarrays specific for Sus scrofa from Agilent (ID 062763, 8 × 60 K format). Total RNA was extracted from PBMCs using the RNeasy kit (QIAGEN), and preparations with an RNA Integrity Number (RIN) from 7.5 to 10 were used. As the experiment was performed using pigs of highly uniform genetic and environmental background, RNA samples from the animals of each control and experimental group were pooled. Each pool was processed in four repeats using Two-Color Microarray-Based Gene Expression Analysis with Low Input Quick Amp Labeling (Agilent Technologies). Briefly, 50 ng of total RNA (equal amounts of RNA from each animal were pooled) was reverse transcribed to generate cDNA and then transcribed into Cy3-labelled cRNA (samples obtained from control animals) or Cy5-labelled cRNA (samples obtained from infected animals). After purification of the labelled RNA (Qiagen RNeasy Kit), the yield (ng of cRNA) and specific activity (pmol of Cy3 or Cy5/µg of cRNA) were quantified using a NanoPhotometer Pearl (Implen). For hybridization, a target solution containing 300 ng of Cy5-labeled pooled cRNA from infected animals and 300 ng of Cy3-labeled pooled cRNA from uninfected animals was prepared; fragmentation buffer was added and the mixture was incubated at 60 °C for 30 min. After stopping fragmentation, samples were hybridized on Agilent arrays for 18 h at 65 °C in Agilent hybridization chambers in an Agilent hybridization oven rotating at 10 rpm.
After hybridization the arrays were washed with 'GE wash buffer 1' for 1 min at room temperature and 'GE wash buffer 2' for 1 min at approximately 37 °C, with each chamber washed twice. After washing, slides were scanned using an Agilent G2505C scanner (US10353831). Images obtained after scanning were analyzed using Agilent Feature Extraction software (version 10.7.3.1). A detailed analysis including filtering of outlier spots, background subtraction from features and dye normalization (linear and LOWESS) was performed. The raw and processed data discussed in the study have been deposited in NCBI's Gene Expression Omnibus (GEO) with accession number GSE95213.

Microarray gene functional analysis
Analysis of microarray results was carried out using the GeneSpring GX10 expression analysis software (Agilent Technologies). Genes were determined to be differentially expressed if the fold change (FC) was greater than 1.5 in up- or down-regulation. Statistical differences in gene expression were determined with a Student's t-test at p ≤ 0.05, and a false discovery rate (FDR) < 0.05 was also used as a threshold. The lists of candidate genes identified by GeneSpring analysis were uploaded to the Ingenuity Pathway Analysis program (IPA; https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis/) to identify the most biologically relevant changes. Separate lists of up- and down-regulated genes were analyzed using Canonical Pathway analysis, which used a right-tailed Fisher's exact test to identify pathways enriched in the gene set compared to a reference set comprising all genes in the human genome.

RT-qPCR
To validate the microarray results, six genes (OAS1, CXCL2, IL-8, CXCL10, FOS and IL-4) were selected, covering genes found to be differentially expressed in each group of infected animals. Moreover, these genes have previously been described in the context of PRRSV infection. Results were normalized using β-actin as a reference gene. These quantitative real-time reverse-transcription PCRs were carried out using the pooled RNA samples subjected to the microarray study. Furthermore, to validate whether analysis of pooled samples reflects gene expression in the individual specimens, the relative expression of four selected genes (OAS1, CXCL2, IFN-α and IFN-β) was also assessed in RNA from the individual pigs by RT-qPCR. The list of primers used in the study is shown in Table 1. Primers specific for the β-actin gene were used as described elsewhere [20], while the primers for the target genes were designed using the Primer3Plus bioinformatics tool (http://primer3plus.com/cgi-bin/dev/primer3plus.cgi). As for the microarray experiment, RNA samples from each control and experimental group were pooled, ensuring the same concentration of RNA from each animal. Then 1.5 µg of each pooled sample was digested with DNase I Amplification Grade (Invitrogen) and then reverse transcribed using the NG dART RT kit (EURX). The synthesis of cDNA was performed at 47 °C for 50 min. The qPCR was performed in a Rotor-Gene Q (Qiagen). Each reaction mix contained 80 ng of cDNA, 2x QuantiTect SYBR Green Mix (Qiagen) and 10 µM of forward and reverse primers specific for each tested gene, adjusted to a total volume of 25 µl. Every qPCR reaction was performed in duplicate (technical replicates). The efficiency of each reaction was determined based on a serial dilution of cDNA template (100, 10 and 1 ng) and remained within a 90-110% range.
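Both the efficiency check above and the Pfaffl E-method applied in the next paragraph reduce to simple formulas: the amplification factor E is derived from the slope of a Ct-versus-log10(input) standard curve (E = 10^(-1/slope), with E = 2 corresponding to 100% efficiency), and the relative expression ratio is the efficiency-corrected fold change of the target gene over the reference gene. A minimal sketch with hypothetical Ct values (not the study's measurements):

```python
import numpy as np

def amplification_factor(log10_input, ct):
    """Amplification factor E from a standard curve: E = 10**(-1/slope)."""
    slope, _ = np.polyfit(log10_input, ct, 1)
    return 10 ** (-1.0 / slope)

def pfaffl_ratio(e_target, dct_target, e_ref, dct_ref):
    """Pfaffl relative expression ratio, with dCt = Ct(control) - Ct(infected)."""
    return (e_target ** dct_target) / (e_ref ** dct_ref)

# Hypothetical standard curve from the 100/10/1 ng dilution series
e = amplification_factor(np.log10([100, 10, 1]), [18.1, 21.4, 24.8])
print(f"E = {e:.2f} (~{e - 1:.0%} efficiency)")

# Hypothetical dCt values for a target gene and the beta-actin reference
print(f"relative expression ratio = {pfaffl_ratio(e, 3.2, 2.0, 0.1):.2f}")
```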
Relative gene expression levels were calculated using the E-method described by Pfaffl [21], and fold-change units were calculated by dividing the normalized expression values from infected animals by the normalized expression values in the controls. The same procedure was applied for the individual samples, where 80 ng of each individual RNA sample was used. Statistical calculations were performed with STATISTICA ver. 10 (StatSoft, part of Dell Software, USA). For correlation analysis, Spearman r correlation coefficients and P-values were determined, since these values were not normally distributed.

Microarray analysis
Microarray analysis revealed differential gene regulation (FC ≥ 1.5, p ≤ 0.05) of 12,518 genes in the DAN group, 7,335 genes in the ILI group and 4,253 genes in the BOR group compared to the control group (Table 2). Interestingly, in all groups the PRRSV nucleocapsid (N) protein-encoding gene was found to be highly expressed. To check the level of expression, the FC was calculated and found to differ significantly between groups, reaching 37.15 in the DAN group, 68.02 in ILI and 203.98 in the BOR group. These results corresponded to the peak of viremia measured in serum by qRT-PCR [18]. Comparison of the differentially expressed gene lists indicated that 802 genes were up-regulated and 1,033 genes were down-regulated in all three tested groups (Figs. 1, 2). Interestingly, the overlap including both down-regulated and up-regulated genes was significantly lower between the DAN/BOR and BOR/ILI groups than between the DAN/ILI groups.

Functional analysis of PBMC gene responses to PRRSV strains
The top 5 canonical pathways involving up- and down-regulated genes in each group are presented in Tables 3 and 4. Pathway analysis performed with up-regulated genes demonstrated that one pathway was common to all groups (Agranulocyte Adhesion and Diapedesis), and another two canonical pathways were the same in the ILI and BOR groups (Granulocyte Adhesion and Diapedesis, Leukocyte Extravasation Signaling) (Table 3). Interestingly, the top pathway in both the ILI and BOR groups was the same (Agranulocyte Adhesion and Diapedesis), while in the DAN group Acute Phase Response Signaling was the top process, involving the highest number of up-regulated genes. On the other hand, when down-regulated genes were analysed, two pathways were common to all groups, DAN, ILI and BOR (EIF2 Signaling and Regulation of eIF4 and p70S6K Signaling), one (Glucocorticoid Receptor Signaling) was identified in the DAN and ILI groups, and another, the mTOR Signaling pathway, was noted in the DAN and BOR groups (Table 4). An additional analysis was performed to determine the top 5 canonical pathways involved in immunological processes (Tables 5, 6). Most of these were the same as in the previous analysis. Agranulocyte Adhesion and Diapedesis was the pathway involving the highest number of up-regulated genes in the ILI and BOR groups, and was second on the list in the DAN group (Table 5). Two pathways were common only to the BOR and ILI groups: Granulocyte Adhesion and Diapedesis, and Leukocyte Extravasation Signaling. Production of Nitric Oxide and Reactive Oxygen Species in Macrophages was the process noted as common between the DAN and BOR groups.

Comparison analysis of canonical signaling pathway activation states induced by BOR, ILI and DAN infection
IPA was also used to compare the patterns of immune dysregulation observed after infection with the three different PRRSV strains.
The comparison analysis of up- and down-regulated genes simultaneously from the three groups of infected animals was carried out to gain a global outlook on the canonical pathways that were modulated. The results are presented as a heat map based on the activation z-score (Fig. 3), which represents the bias in gene regulation, predicting whether a particular pathway is in an activated or inactivated state. In general, most of the processes were down-regulated in the DAN group, in contrast to ILI and BOR. More similarities were observed between the ILI and BOR groups, although some pathways were regulated differently in the two groups. From the 20 canonical pathways presented in Fig. 3, the pathways with an applicable z-score were chosen for further analysis.

Quantitative RT-qPCR analysis of selected genes
To confirm the microarray analysis after infection with the three different PRRSV strains, six genes involved in the immune response were selected for analysis by RT-qPCR. Spearman's correlation analysis performed to compare the results of the microarray and RT-qPCR analyses revealed a strong correlation (R = 0.878225, p-value = 0.000002) between the fold changes of selected genes determined by the two methods (Table 7). Such concordance between the expression values determined by the two methods supports the reliability of the observed expression differences and allows accurate interpretation of the obtained data. Furthermore, statistical analysis confirmed a very strong correlation (R = 0.831, p-value = 0.001) between individual samples and pooled mRNA quantitation, which supports the validity of the pooling strategy applied in this study (Online Resource 2, Table S2) [22].

Discussion
The very high genetic diversity of porcine reproductive and respiratory syndrome virus, as well as its ability to strongly interfere with, modulate or inhibit numerous processes during the development of both innate and adaptive immunity, makes it a difficult subject for investigation. Despite many complex studies carried out over the years, understanding of the disease is still far from complete. The answer to such a challenge might be the use of high-throughput analysis tools, like DNA microarrays or deep sequencing, to study the transcriptional response of the host after virus infection. Most such studies have included only Type 2 PRRSV strains [23-25], while only a few have enrolled Type 1 strains [26]. Even recently published data on the virulence of subtype 2 strains of Type 1 showed that some of those strains may be characterized by high pathogenicity, which implies distinct mechanisms shaping the course of infection [18]. However, the transcriptional response to infection by subtype 2 strains of Type 1 PRRSV has not been studied before. The aim of the present study was to shed light on host-virus interactions during infection with PRRSV strains of subtype 1, commonly present in Central and Western Europe, and of the much less studied subtype 2. The biological material analysed in the present study was collected in the first animal experiment with the use of such strains [18]. Changes in the expression of genes involved in immunological processes were evaluated in each infected group of pigs and compared to control pigs. The clear variation in the number of differentially expressed genes between the BOR-infected group and the other two groups, ILI and DAN, indicates that different numbers of genes were involved in particular processes.
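The differential-expression counts compared above derive from the thresholds given in Methods (FC ≥ 1.5 in either direction, t-test p ≤ 0.05, FDR < 0.05). The paper does not specify which FDR procedure GeneSpring applied, so Benjamini-Hochberg is an assumption in the sketch below, which uses random placeholder data rather than the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder log2 expression: 1,000 genes x 4 replicates per condition
infected = rng.normal(loc=0.4, scale=0.5, size=(1000, 4))
control = rng.normal(loc=0.0, scale=0.5, size=(1000, 4))

fc = 2 ** (infected.mean(axis=1) - control.mean(axis=1))  # linear fold change
_, pvals = stats.ttest_ind(infected, control, axis=1)

# Benjamini-Hochberg step-up adjustment of the p-values
order = np.argsort(pvals)
scaled = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
fdr = np.empty_like(pvals)
fdr[order] = np.minimum(np.minimum.accumulate(scaled[::-1])[::-1], 1.0)

deg = (np.maximum(fc, 1 / fc) >= 1.5) & (pvals <= 0.05) & (fdr < 0.05)
print(f"{deg.sum()} genes pass all three thresholds")
```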
The Interferon Signaling pathway was significantly associated with up-regulated genes in the DAN and ILI groups, but not in the BOR group. Previous reports showed that PRRSV proteins, including nsp1, nsp2, nsp11, and N, have been identified and characterized as IFN antagonists [27,28]. Nsp1 has been considered a multifunctional protein regulating type I IFN responses [29-32], and nsp11 and the N protein have been described to suppress IFN-β induction by antagonizing IRF3 activation [33,34]. Interestingly, we did not observe up-regulation of the Interferon Signaling pathway in the BOR group, where the expression of the N protein-encoding gene reached the highest level. On the other hand, several genes showing a very high increase of expression in all or particular tested groups of animals (Table S1) have been identified as interferon-stimulated genes (ISGs): OAS1 (2′-5′-oligoadenylate synthase 1), MX1 (myxovirus (influenza virus) resistance 1), IFIT1, IFIT2 and IFIT3 (interferon-induced proteins with tetratricopeptide repeats 1, 2 and 3) and ISG15, which shows that all PRRSV strains used in the study activated an interferon-induced antiviral response to some degree. The best-studied ISGs so far are OAS1, MX1, IFIT1, ISG15, RNaseL and PKR [35]. Previous reports have already described some interactions between PRRSV and ISGs, such as inhibition of ISG15 [36] and PKR [37] functions. Unfortunately, still little is known about particular interactions between PRRSV and ISGs like OAS, IFIT1 or Mx1. Recently, Badaoui et al. [38] described up-regulation of OAS1 expression in LV-infected PAMs. In addition, Zhao and co-workers [39] observed in an in vitro system that PRRSV infection led to induction of OAS1, while knockdown of endogenous OAS1 increased PRRSV mRNA levels. In our analysis, other genes from the OAS family were also observed to be strongly up-regulated. The OAS2 gene showed increased expression in the BOR and ILI groups, while the OASL gene was found to be up-regulated in all three groups of infected animals. These observations suggest the involvement of OAS proteins in various mechanisms engaged in the defense against PRRSV infection, as also observed in studies of other viral infections [40-42]. Another ISG found to be highly up-regulated, especially in the BOR group (FC = 22.739) and ILI (FC = 12.287), but also in the DAN group (FC = 5.812), was the Mx1 gene, encoding a dynamin-like GTPase involved in the innate host defense against RNA viruses [43]. Recently, Overend and co-workers showed that Mx1 expression by infected PAMs was generally correlated with IFN-β production [44]. Additionally, recent studies have focused on the possibility of using the Mx1 gene as a potential DNA marker for PRRS resistance in pigs [45]. Further genes regarded as ISGs are those encoding the interferon-induced proteins with tetratricopeptide repeats 1, 2 and 3 (IFIT1, IFIT2 and IFIT3). Transcription of IFIT genes is usually triggered by viral and bacterial infections, mostly by type I IFNs (IFN-α/β) and type III IFNs (IFN-λs) [46,47]. Independently, IFIT genes are induced in cells infected with RNA viruses that are sensed by pattern recognition receptors [48].
In the present study, increased expression of IFIT1 was noted in the DAN (FC = 5.313) and ILI (FC = 3.983) groups, while IFIT2 up-regulation was observed in the BOR (FC = 11.521) and ILI (FC = 5.432) groups, as well as IFIT3, whose expression increase reached FC = 17.080 in BOR, FC = 10.152 in ILI and FC = 3.245 in the DAN group. Interestingly, up to now only IFIT3 has been linked to PRRSV infection, as an important modulator of innate immunity inhibiting virus replication in MARC-145 cells by induction of IFN-β [49]. In our study, the profile of IFIT expression differed between the subtype 1 strain and the subtype 2 strains, raising the question of whether the same mechanisms are utilized during infection with PRRSV strains from these genetic groups. The specific role of IFIT1 and IFIT2 is binding single-stranded RNA, thereby acting as a sensor of viral single-stranded RNAs and inhibiting expression of viral messenger RNA [50,51]. Some differences in antiviral activity between IFIT proteins were previously described for human parainfluenza virus type 3 [52]. Based on those observations, it seems that there are individual features, like distinct tertiary structures, which allow IFIT proteins to bind different partners and selectively affect host-virus interactions. At the time of blood collection at 7 dpi, the inflammatory response was already induced. One of the genes showing a high increase of expression in all three infected groups of animals was CXCL10 (chemokine (C-X-C motif) ligand 10). Its FC ranged from 10.297 in the ILI group to 12.280 in DAN and 18.799 in the BOR group. This gene encodes the proinflammatory chemokine CXCL10, which attracts leukocytes to the site of infection [53]. CXCL10 overexpression has already been observed for a highly pathogenic PRRSV (HP-PRRSV) isolate [54] and other Type 2 strains [55,56]. Previous studies showed that expression of proinflammatory cytokines like interleukin 1, interleukin 6 or tumor necrosis factor during PRRSV infection corresponds to the severity of infection [57]. In the present study, the overexpression of CXCL10 reached the highest level in the BOR group and remained at a similar level in both the DAN and ILI groups, which corresponds with the most severe clinical outcome of infection observed in BOR-infected pigs [18]. The other gene found to be highly up-regulated in all three groups, GZMA (granzyme A), with FCs of 26.066 in BOR, 15.222 in ILI and 19.020 in the DAN group, encodes an abundant protease expressed in all cytotoxic T-cells and NK-cells. GZMA induces caspase-independent cell death with morphological features of apoptosis when delivered into the target cell through the immunological synapse [58]. In another study, apoptotic cells were found in both B- and T-cell areas of lymphoid organs, suggesting that apoptosis might play a role in the impairment of the host immune response during PRRSV infection [59]. It cannot be excluded that, in addition to the caspase pathway of apoptosis, a caspase-independent mechanism of immune cell death is used by PRRSV to debilitate the immunological response to infection, which could explain the high up-regulation of GZMA gene expression observed in our analysis. Comparison of the pathways associated with up- or down-regulated genes between groups of animals infected with particular PRRSV strains revealed some differences, indicating that particular strains may utilize different mechanisms to interact with the host.
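Canonical-pathway p-values like the one quoted in the next paragraph are obtained from a right-tailed Fisher's exact test on a 2×2 table of pathway membership against differential expression, as described in Methods. A minimal sketch with hypothetical counts (the true pathway and reference-set sizes are not given in the text):

```python
from scipy.stats import fisher_exact

def pathway_enrichment_p(deg_in_path, deg_total, path_size, genome_size):
    """Right-tailed Fisher's exact test for over-representation of a
    pathway's genes among the differentially expressed genes (DEGs)."""
    table = [
        [deg_in_path, deg_total - deg_in_path],                  # DEGs
        [path_size - deg_in_path,                                # non-DEGs
         genome_size - path_size - (deg_total - deg_in_path)],
    ]
    _, p = fisher_exact(table, alternative="greater")
    return p

# Hypothetical: 60 genes of a 150-gene pathway among 4,253 DEGs drawn
# from a 20,000-gene reference set
print(f"p = {pathway_enrichment_p(60, 4253, 150, 20000):.3g}")
```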
Acute Phase Response Signaling was the most significantly up-regulated pathway (p-value = 8.43E−11) in the group of piglets infected with the subtype 1 strain DAN, while less significant up-regulation of the same pathway was observed in the BOR (p-value = 2.24E−05) and ILI (p-value = 5.52E−05) groups at this time post infection. This observation may indicate differences in the progress of infection or some time shift in its course. In both groups infected with subtype 2 strains, BOR and ILI, Agranulocyte Adhesion and Diapedesis was the most significantly up-regulated pathway, followed by Granulocyte Adhesion and Diapedesis and Leukocyte Extravasation Signaling. This result is not unexpected, since an inflammatory response induced by infection triggers the movement of leukocytes into body tissue towards the invader. Interestingly, the results of the IPA analysis were comparable between the two groups when only immunological processes were considered. The Integrin Signaling pathway was clearly associated with up-regulated genes in the BOR group, together with the ILK (integrin-linked kinase) signaling pathway. On the other hand, the Integrin Signaling pathway was strongly associated with down-regulated genes in the ILI group. Integrins are cell surface glycoproteins involved in cell-cell and cell-extracellular matrix interactions, inducing signalling across the cell membrane to regulate cell proliferation, activation, migration and homeostasis [60]. In a recent study, integrins were shown to be involved in the sensing of PRRSV-infected macrophages by plasmacytoid dendritic cells, which stimulated production of IFN-α [61]. Such a mechanism seems to allow the host to counteract PRRSV strategies aimed at suppressing type I IFN induction. One of the top five pathways identified as up-regulated in the DAN group was Tight Junction Signaling. This observation is analogous to the results obtained by Wysocki and co-workers, who also observed up-regulation of this pathway in the lungs of PRRSV-infected pigs [62]. Tight junctions are specialized membrane domains which keep adjacent cells close enough to prevent uncontrolled passage of small molecules, microorganisms and cells across the paracellular space [63,64]. They are also involved in the regulation of some cellular processes, including polarization, proliferation, differentiation and gene expression [65-67]. Our results suggest that PRRSV, too, may hijack tight junctions to use the cellular machinery to support its own replication. It has already been reported that some viruses are able to regulate the expression or localization of tight junction proteins to induce cell transformation or make their exit process more efficient. Reports have evidenced the importance of tight junctions for infection with different viruses, including reoviruses, influenza virus and human immunodeficiency virus 1 [68-71]. Among the most significant pathways associated with down-regulated genes, Eukaryotic Translation Initiation Factor 2 (EIF2) Signaling and Regulation of Eukaryotic Translation Initiation Factor 4 (eIF4) and p70S6K Signaling were identified in each tested group. The coincident down-regulation of mTOR Signaling in the DAN group and of PI3K Signaling in B Lymphocytes in the ILI and BOR groups indicates a clear influence on the inhibition of cellular protein synthesis. This picture, clearly observed in the ILI and DAN groups, is known to be one of the hallmarks of interferon signaling [72]. A similar picture was observed by Wilkinson and co-workers in pigs infected with a PRRSV Type 2 strain [73].
Additionally, the Cell Cycle: G1/S Checkpoint Regulation pathway, controlling the passage of cells from G1 into the DNA synthesis (S) phase, was associated with down-regulated genes in the BOR group. Zhou et al. have already shown that genes relevant to the cell cycle and DNA replication can be regulated by highly pathogenic PRRSV [74]. Similarly, Sun and co-workers observed in a microarray experiment that PRRSV nsp11 is able to regulate the cell cycle and DNA replication, and proved in an in vitro experiment that nsp11 induces a delay of cell cycle progression at the S phase. Such cell cycle arrest may be beneficial for the virus, since it can redirect the cellular replicative machinery towards viral replication, as can be observed for other DNA and RNA viruses [75]. Among the pathways significantly affected by down-regulated genes in the DAN and ILI groups, CD40 Signaling was also identified, and in all groups B Cell Receptor Signaling was down-regulated. Ligation of CD40 on the surface of dendritic cells controls the production of particular proinflammatory cytokines (IL-8, MIP-1α, TNF-α and IL-12), while its ligation on monocytes is required for stimulation of the production of IL-1α, IL-1β, TNF-α, IL-6, and IL-8, and for the rescue of circulating monocytes from apoptosis. The impairment of this pathway therefore affects both the cellular and the humoral immune response simultaneously. The impairment of B cell receptor signaling strongly affects the humoral immune response, since signals propagated through the B cell antigen receptor (BCR) are crucial to the development, survival and activation of B lymphocytes. Furthermore, BCR signaling is also linked with stimulation of the nuclear factor kappa B (NFκB) and PI3K/AKT signaling pathways, resulting in the nuclear accumulation of transcription factors and enhancement of protein synthesis; its down-regulation therefore impairs these processes, accompanying the already mentioned EIF2 and eIF4 pathways [76]. In the BOR group, additionally, the Antiproliferative Role of TOB in T Cell Signaling and T Helper Cell Differentiation processes were found to be significantly associated with down-regulated genes. The transducer of ERBB2 (TOB) is a negative regulator of T cell proliferation and cytokine transcription, which is constitutively expressed in unstimulated peripheral blood T lymphocytes and selectively expressed in anergic T cells. Down-regulation of TOB is necessary for T cell activation, which is crucial during infection [77]. However, when this whole process is down-regulated together with the T Helper Cell Differentiation process, the cellular immune response and cytokine signaling, crucial in the face of infection, are heavily impaired [78]. Comparison analysis of the canonical signaling pathway activation states induced by BOR, ILI and DAN infection showed some differences between strains. The Fcγ Receptor-mediated Phagocytosis in Macrophages and Monocytes, TREM1 Signaling and Chemokine Signaling pathways were activated by both strains from subtype 2 (BOR and ILI), while for the subtype 1 strain DAN their inhibition was observed. Fcγ Receptor-mediated Phagocytosis in Macrophages and Monocytes plays a role in defense against invading bacteria. Infection with PRRSV may sensitize pigs to secondary bacterial infection. It was shown that expression of FcγRIIB was up-regulated post-infection with PRRSV strains HN07-1 and BJ-4, but expression of the FcγRIIIA receptor was inhibited, which in consequence could suppress the phagocytic activity of granulocytes [79].
On the other hand, FcγR-mediated activation of monocyte-derived macrophages (MDM) is a potent mechanism of HIV-1 suppression [80]. The activation of the Fcγ Receptor-mediated Phagocytosis in Macrophages and Monocytes pathway after BOR and ILI infection may reflect the host's attempt to control viral replication. Although the whole pathway is in an inactivated state in the DAN-infected group, we observed up-regulation of Fc gamma receptors (subtype FcγRIA, FC = 3.7). TREM1 signaling leads to the induction of inflammatory processes such as cytokine production, degranulation of neutrophils and phagocytosis. Badaoui et al. [38] reported the activation of the TREM1 signaling pathway in response to infection with the highly virulent East European PRRSV strain Lena (subtype 3 of Type 1). Similarly, an increase of activity in the TREM1 pathway was noted for the ILI and BOR strains. Furthermore, in infection with Lena, as well as with the ILI and BOR strains, the expression of IL-8 and TLR4 was up-regulated. This finding could suggest a common mechanism playing a role in infection with genetically different East European strains and in their higher pathogenicity compared to classical PRRSV strains present in Central and Western European countries. Further IPA analysis revealed an additive effect of immunity-related genes engaged in the Chemokine Signaling pathway. An activation of this pathway was observed previously in a transcriptomic analysis of PBMCs after PRRSV vaccination [81]. Particularly interesting results were described for the cytokine CCL4 (MIP-1β), recently investigated in several PRRSV studies. Miller et al. [82] and López-Fuertes et al. [83] observed that the level of transcripts encoding CCL4 declined in porcine alveolar macrophages following infection with both the Type 2 strain VR-2332 and the European Type 1 PRRSV isolate 5710. In our study we observed the opposite effect, with up-regulated expression of CCL4 in the BOR (FC = 2.5) and ILI (FC = 1.8) groups. The same additive effect for this inflammatory mediator was noted in bone marrow-derived dendritic cells (BMDCs) pre-infected with PRRSV (IAF-Klop, a PRRSV genotype 2 strain) after a subsequent infection with S. suis (wild-type virulent S. suis serotype 2 strain P1/7) [84]. CCL4 is considered the most potent chemoattractant and mediator of virus-induced inflammation in vivo [85]; thus, activation of the Chemokine Signaling pathway and up-regulation of CCL4 can contribute to the higher virulence of the BOR and ILI strains compared to DAN observed in the animal experiment. Three pathways, IL-6 Signaling, IL-8 Signaling and p38 MAPK Signaling, were characterized as up-regulated only in the BOR group and non-altered or down-regulated for the two remaining strains. Interleukin 6 is a regulator of the acute-phase response and also a lymphocyte-stimulating factor, interleukin 8 plays a central role in the inflammatory process, and the p38 MAPK pathway is known to be necessary for the induction of different inflammatory cytokines in respiratory viral infections [86] and has been confirmed to be activated by PRRSV [87,88]. The activation state of the signaling pathways of two pro-inflammatory cytokines, IL-6 and IL-8, as well as the induction of the p38 MAPK pathway, suggests potential mechanisms responsible for the highest virulence of the BOR strain observed in the experimental infection study [18]. Although the samples for microarray analysis were pooled, the strong correlation of RT-qPCR results between pooled and individual samples confirmed the validity of the obtained results.
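The pooled-versus-individual validation above rests on a rank correlation of fold changes across methods; a minimal sketch using scipy, with placeholder fold-change values rather than the study's data:

```python
from scipy.stats import spearmanr

# Placeholder fold changes for the same genes measured by two methods
fc_microarray = [5.8, 22.7, 12.3, 2.5, 1.8, 0.6]
fc_rtqpcr = [6.4, 18.9, 10.1, 1.5, 2.9, 0.7]

rho, p = spearmanr(fc_microarray, fc_rtqpcr)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")
```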
Moreover, the results of the present study were consistent with the results of the experimental infection study, in which the group infected with the BOR strain developed acute clinical symptoms, in contrast to the moderate and mild disease seen with the ILI and DAN strains, respectively [18]. The highest FC values obtained in the BOR group for multiple genes (OAS1, Mx1, IFIT2, IFIT3, CXCL10, GZMA) involved in a range of immunological processes indicate the most pronounced inflammatory response. The BOR strain also seems to have a greater general influence on the cells' metabolic processes and signaling pathways (down-regulation of the cell cycle-related G1/S Checkpoint pathway, up-regulation of the Integrin and ILK Signaling pathways, dysregulation of TOB activity, and lack of up-regulation of the Interferon Signaling pathway, in contrast to the ILI and DAN strains). The presented results, describing the different transcriptional profiles of pigs infected with three PRRSV strains, create an important platform for further studies on the pathogenicity and immune mechanisms used by PRRSV strains of subtypes 2 and 1 to sabotage host immune activation. However, more studies are necessary to identify the full spectrum of pathways influenced by particular strains of PRRSV, especially in the context of the extensive genetic variability observed within PRRSV.
Factors Influencing Survival in Stage IV Colorectal Cancer: The Influence of DNA Ploidy

Objective. To evaluate the prognostic significance of microscopically assessed DNA ploidy and other clinical and laboratory parameters in stage IV colorectal cancer (CRC). Methods. 541 patients with histologically proven stage IV CRC treated with palliative chemotherapy at our institution were included in this retrospective analysis, and 9 variables (gender, age, performance status, carcinoembryonic antigen, cancer antigen 19-9, C-reactive protein (CRP), anaemia, hypoalbuminaemia, and ploidy (DNA index)) were assessed for their potential relationship to survival. Results. Mean survival time was 12.8 months (95% confidence interval (CI) 12.0-13.5). Multivariate analysis revealed that DNA indexes of 2.2-3.6 and >3.6 were associated with 2.94 and 4.98 times higher probability of death, respectively, compared to a DNA index <2.2. CRP levels of >15 mg/dL and 5-15 mg/dL were associated with 2.52 and 1.72 times higher risk of death, respectively. Hazard ratios ranged from 1.29 in patients with mild anaemia (Hb 12-13.5 g/dL) to 1.88 in patients with severe anaemia (Hb < 8.5 g/dL). Similarly, the presence of hypoalbuminaemia (albumin < 5 g/dL) was found to confer a 1.41 times higher probability of death. Conclusions. Our findings suggest that patients with stage IV CRC with a low ploidy score and low CRP levels, absent or mild anaemia, and normal albumin levels might derive the greatest benefit from palliative chemotherapy.

Introduction
More than 1 million individuals worldwide will be diagnosed with colorectal cancer (CRC) every year [1,2]. Approximately 35% of CRC patients present with stage IV metastatic disease at the time of diagnosis, and 20%-50% of those with stage II or III disease will progress to stage IV at some point during the course of their disease [3-5]. Stage IV CRC carries an unfavourable outcome, as the 5-year survival rate is less than 10% [4,5], with a median survival time of about 6-12 months [6,7]. In metastatic CRC, surgery and/or chemotherapy are used mainly with palliative intent. However, as treatment modalities in stage IV CRC are associated with significant complications and increased costs, there is a need to identify prognostic factors which may determine treatment response and survival. It is anticipated that such an approach could refine palliative management according to the likelihood of clinical benefit [8]. As part of our systematic search for prognostic factors in CRC, this study expanded our previous work [9] by evaluating the prognostic significance of DNA ploidy in addition to other clinicopathological factors in a cohort of patients with stage IV metastatic disease receiving palliative chemotherapy.

Patients and Data Sources. The population under study has been described thoroughly in a previous report [9]. Briefly, the medical records of 541 patients with histologically proven CRC (UICC stage IV) treated between 1998 and 2008 were retrospectively reviewed. All were consecutive non-selected cases from a single centre, and all patients were treated outside of clinical trials. No patients were candidates for surgical treatment (either curative or palliative); however, all received palliative chemotherapy according to established protocols.
Chemotherapy regimens were based on single-agent leucovorin-modulated 5-FU (Mayo Clinic or AIO regimens) or combination treatments of 5-FU (de Gramont or simple infusion, with leucovorin) with either oxaliplatin or irinotecan, or capecitabine, with or without bevacizumab or cetuximab. Records with complete data (for the parameters used as prognostic factors) were included in the analysis. Follow-up was continued until death from CRC or from any other cause, and patients who remained alive were censored as of January 1, 2009. Overall survival was the primary endpoint. This protocol was approved by the National and Kapodistrian University of Athens ethics committee.

DNA Measurements (Ploidy). For DNA measurements, the Feulgen staining technique was applied as previously described [10]. The nuclei of Feulgen-stained cells were evaluated for DNA ploidy using a Nikon Eclipse microscope (Nikon, Japan) connected to a Nikon CCD video camera and an IBM Pentium 4 PC running cell measurement software (Image-Pro Plus v. 5.1, Media Cybernetics Inc., Silver Spring, MD, USA). Areas of the Feulgen-stained sections containing pathological lesions, identified in adjacent H&E-stained slides, were selected for DNA content analysis. A total of 200-300 nuclei with clear boundaries appearing to have no loss of membrane integrity were analyzed in each tissue sample. Cytometry measurements were performed at a magnification of ×200 and calculated automatically according to the algorithms described previously, by measuring the nuclear integrated optical density (IOD), representing the cytometrical equivalent of DNA content [11]. The procedure was performed for all nuclei, and the overall mean represents the DNA content or DNA index (DI). The mean IOD of human lymphocytes (control cells) was used as the diploid standard (2c) and as the reference for DI calculation for targeted cells. DNA histograms were generated, and a tumour was classified as diploid if the DI ranged from 0.9 to 1.1 and the relevant DNA histogram revealed only 1 peak at 2c, and as aneuploid if either of these 2 criteria was absent.

Statistical Analysis. Descriptive statistics were calculated as means, medians, and standard deviations for quantitative parameters and counts/percentages for discrete factors. Overall survival was studied with the use of the Kaplan-Meier method. Survival differences between groups were studied with the use of the log-rank test. A multivariate Cox regression model was implemented to study the simultaneous effect of parameters on survival after taking into account the parallel effect of the remaining factors. Best model selection was based on manual and automated forward techniques. Results of regression analyses are displayed in the form of regression estimates tables. Hazard ratios of the outcomes under study were calculated for each parameter estimate, as well as 95% confidence intervals. Categorical covariates were compared with a predefined reference category. All analyses were performed at a significance level of α = 0.05 with the use of the statistical package SPSS 12.0.

Patients. A total of 541 patients were included in the study, with a median age of 61.00 years, a mean age of 60.33 years, and a standard deviation of 7.35 years. The frequencies of the clinical variables are shown in Table 1.
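The diploid/aneuploid call described in the DNA measurements paragraph above reduces to a conjunction of two criteria; a minimal sketch, with the histogram-peak criterion simplified to a precomputed boolean flag:

```python
def classify_ploidy(dna_index: float, single_peak_at_2c: bool) -> str:
    """Classify a tumour from image-cytometry DNA content: diploid only
    if the DNA index lies in 0.9-1.1 AND the DNA histogram shows a
    single peak at 2c; aneuploid if either criterion is absent."""
    if 0.9 <= dna_index <= 1.1 and single_peak_at_2c:
        return "diploid"
    return "aneuploid"

print(classify_ploidy(1.05, True))   # -> diploid
print(classify_ploidy(2.40, False))  # -> aneuploid
```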
Patients. A total of 541 patients were included in the study, with a median age of 61.0 years and a mean age of 60.33 years (standard deviation, 7.35 years). The frequencies of the clinical variables are shown in Table 1.

Survival Analysis. Survival data were collected for all patients. Based on the Kaplan-Meier method, the mean survival time was 12.8 months (95% confidence interval (CI) 12.0–13.5 months), with a median survival of 9.8 months (95% CI 8.8–10.8 months) (Figure 1).

Univariate Analysis. In the univariate analysis, CRP, Hb, albumin, and ploidy scores were related to survival outcome at a significance level of P < 0.001.

Multivariate Analysis. Factors found in the univariate analysis to have the strongest relation to survival were entered into the multivariate analysis model. Factors were added and excluded using the change in likelihood between models as the inclusion and exclusion criterion. Forward automated procedures resulted in the final model, which is described in Table 2.

Hazard Ratios of Risk Factors. The probability of death increased with increasing CRP at presentation: patients with CRP > 15 mg/dL had a 2.52 times higher risk of death, and patients with CRP 5–15 mg/dL a 1.72 times higher risk of death, than patients with CRP < 5 mg/dL (Figure 2(a)). Anaemia was also associated with an adverse outcome; in particular, hazard ratios ranged from 1.29 in patients who presented with mild anaemia (Hb 12–13.5 g/dL) to 1.88 in patients with severe anaemia (Hb < 8.5 g/dL) (Figure 2(b)). Similarly, patients with low albumin levels (<5 g/dL, hypoalbuminaemia) had a 1.41 times higher probability of death than those with normal albumin levels (Figure 2(c)). Finally, a high ploidy score was associated with the worst survival prognosis, as patients with ploidy scores of 2.2–3.6 or >3.6 had 2.94 and 4.98 times higher probability of death, respectively, compared with patients with a ploidy score <2.2 (Figure 2(d)).
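The survival analysis reported above maps onto standard tooling. Below is a minimal sketch using the Python lifelines library; the data file and the dummy-coded column names are hypothetical, chosen only to mirror the categories in the paper.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical data frame: one row per patient, follow-up in months,
# an event flag (1 = died, 0 = censored), and 0/1 dummies for each
# category against its reference group (DI < 2.2, CRP < 5 mg/dL, etc.).
df = pd.read_csv("stage_iv_crc.csv")  # assumed file layout

# Kaplan-Meier estimate of overall survival.
km = KaplanMeierFitter()
km.fit(df["months"], event_observed=df["died"])
print(km.median_survival_time_)  # ~9.8 months in the study cohort

# Multivariate Cox model; exp(coef) gives the hazard ratios reported
# in the paper (e.g. ~2.94 for DNA index 2.2-3.6 vs <2.2).
cox = CoxPHFitter()
cox.fit(df[["months", "died",
            "di_mid", "di_high",        # DI 2.2-3.6, DI > 3.6
            "crp_mid", "crp_high",      # CRP 5-15, CRP > 15 mg/dL
            "hb_mild", "hb_severe",     # anaemia categories
            "low_albumin"]],
        duration_col="months", event_col="died")
cox.print_summary()
```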
Discussion. This analysis, based on the individual data of 541 stage IV colorectal cancer patients treated with palliative chemotherapy, confirms the prognostic value of previously identified factors such as CRP, Hb, and albumin, and strengthens the existing data from other studies supporting the prognostic significance of DNA ploidy in stage IV colorectal cancer. CRP is synthesized by the liver and is a nonspecific but sensitive marker of inflammation. Its production is induced by proinflammatory cytokines such as interleukin-6 (IL-6), IL-8, and tumour necrosis factor alpha (TNF-α), and its levels have been positively correlated with weight loss, the anorexia-cachexia syndrome, extent of disease, and recurrence in many cancer types, including CRC [12]. Preoperatively elevated serum CRP levels have been shown to be associated with increased incidences of liver metastases, peritoneal carcinomatosis, histopathologic lymph node metastasis, intravascular invasion, and detrimental 1-, 2-, and 3-year survival rates in CRC [13], and these results have been supported by others [14]. Although there appears to be no difference in Dukes' stage between patients with normal or increased preoperative CRP levels [15], CRP has been shown to specifically influence survival in patients with Dukes' C and D tumours. In the advanced disease setting in particular, a recent analysis of a homogeneous cohort of 50 patients with peritoneal carcinomatosis demonstrated an association between elevated plasma CRP levels at the time of diagnosis and overall survival [16]. At the opposite end, Chung and Chang have argued for the lack of prognostic significance of CRP in CRC on the basis of a multivariate analysis of a rather small, heterogeneous cohort of 172 patients with CRC at various stages [17].

The association between anaemia and inferior survival has been widely validated in previous studies, including a multivariate analysis by Köhne et al. of 3,825 patients with stage IV CRC treated with palliative 5-FU-based chemotherapy in the setting of 22 multinational trials [18]. Serum albumin reflects the nutritional status and general condition of patients, including reserve capacity, and its predictive value for metastatic CRC outcome has been well documented [18–23].

The prognostic significance of DNA content (DNA ploidy or index) in CRC has been extensively investigated in the past, with controversial results. The majority of the reported studies employed flow-cytometry-derived DNA ploidy, and the American Society of Clinical Oncology Tumour Markers Expert Panel reviewed fifteen articles (encompassing 14 independent series) evaluating the prognostic role of flow-cytometry-derived ploidy in CRC to support its recommendation that this marker is unsuitable as a prognostic determinant owing to largely controversial results [24]. To some extent, the disparate results of DNA ploidy studies have been ascribed to the differing techniques employed (flow cytometry versus DNA image cytometry) and to the heterogeneity of the nuclear DNA content in colonic tumour cells; hence, image cytometry has generally been considered superior to flow cytometry, as only tumour cells are used for DNA measurement [25]. Despite its technical advantages, image cytometry has been applied in only a limited number of studies, which were particularly aimed at identifying patients with stage II disease at high risk of recurrence following curative resection and at assessing the survival benefit of adjuvant chemotherapy. One of these earlier studies, by Nori et al. [26], demonstrated that aneuploidy was associated with a significantly higher tumour recurrence rate (P = 0.024) and shorter overall survival (P < 0.002) but was hampered by the small patient number (n = 20). Subsequent studies by Kay et al. [27] and Buhmeida et al. [28] in larger patient cohorts (n = 168 and n = 253, respectively) demonstrated the prognostic significance of DNA image cytometry in stage II CRC and established this marker as a major determinant for administering adjuvant chemotherapy in stage II disease. These results were reiterated by a meta-analysis of 63 studies reporting outcomes in 10,126 patients, 60.0% of whom had chromosomal-instability-positive (CIN+, i.e., aneuploid/polyploid) tumours, which showed that patients with CIN+ CRC and stage II–III disease appear to have poorer overall survival and progression-free survival irrespective of whether they receive adjuvant therapy. In stage IV disease, the data were inconclusive owing to low patient numbers confounded by a high degree of heterogeneity [29].

The limitations of our study centre mainly on the retrospective nature of the analysis and the objectivity of the methodology applied to assess DNA ploidy. Despite these limitations, the study has clinical significance, as it validates the usefulness of a number of factors for assessing the likelihood of clinical benefit from palliative chemotherapy in stage IV CRC. Clearly, however, these results need to be evaluated in a prospective manner.

Conclusions. The present study represents a comprehensive analysis of the prognostic significance of a number of factors in a large cohort of patients with inoperable stage IV colorectal cancer receiving palliative chemotherapy.
Our analysis demonstrated that DNA ploidy, along with simple haematological and biochemical parameters such as Hb, CRP, and albumin, carries the most significant independent effect on the outcome of stage IV CRC.
Activity Profile and Energy Expenditure Among Active Older Adults, British Columbia, 2011–2012

Introduction. Time spent by young adults in moderate to vigorous activity predicts daily caloric expenditure. In contrast, caloric expenditure among older adults is best predicted by time spent in light activity. We examined highly active older adults to identify the biggest contributors to energy expenditure in this population. Methods. Fifty-four community-dwelling men and women aged 65 years or older (mean, 71.4 y) were enrolled in this cross-sectional observational study. All were members of the Whistler Senior Ski Team, and all met current American guidelines for physical activity. Activity levels (sedentary, light, and moderate to vigorous) were recorded by accelerometers worn continuously for 7 days. Caloric expenditure was measured using accelerometry, galvanic skin response, skin temperature, and heat flux. Significant variables were entered into a stepwise multivariate linear model consisting of activity level, age, and sex. Results. The average (standard deviation [SD]) daily nonlying sedentary time was 564 (92) minutes (9.4 [1.5] h) per day. The main predictors of higher caloric expenditure were time spent in moderate to vigorous activity (standardized β = 0.42 [SE, 0.08]; P < .001) and male sex (standardized β = 1.34 [SE, 0.16]; P < .001). A model consisting of only moderate to vigorous physical activity and sex explained 68% of the variation in caloric expenditure. An increase in moderate to vigorous physical activity by 1 minute per day was associated with an additional 16 kcal expended in physical activity. Conclusion. The relationship between activity intensity and caloric expenditure in athletic seniors is similar to that observed in young adults. Active older adults still spend a substantial proportion of the day engaged in sedentary behaviors.
Introduction. The amount of time people engage in physical activity tends to decrease with increasing age (1), leading to numerous functional and cardiometabolic sequelae. Older adults make up both the least active and most sedentary cohort in Western countries (2). A lack of energy expenditure from physical activity is considered one of the contributors to the growing worldwide rates of obesity (3). Activity profiles of differently aged populations show that the main contributor to caloric expenditure among young adults is time spent in moderate to vigorous physical activity (4), while light activity is the most important contributor to caloric expenditure among older adults (5,6). Both young adults (7) and older adults (8) spend large quantities of their waking hours engaged in sedentary behaviors, which has cardiometabolic consequences independent of the amount of time spent in leisure-time physical activity. What has not been examined, however, is the activity profile of older adults who meet guidelines (9) for physical activity (≥150 minutes of physical activity per week). The objective of this study was to measure (by accelerometer) the amount of time active older adults spend in sedentary behavior and to determine which intensity of activity best predicts daily caloric expenditure.

Methods. This was a cross-sectional observational study. The study was approved by the human subjects committee of the University of British Columbia, and all participants gave written informed consent.

Participants and recruitment. Fifty-five community-dwelling men and women aged 65 years or older were screened through their affiliation with the Whistler Senior Ski Team of British Columbia, Canada, via a study poster and information session. Participants were enrolled from October 2011 through June 2012. All participants had to be able to independently perform all basic activities of daily living, climb 1 flight of stairs, and walk 2 blocks without assistance. Current smokers, recreational drug users, and those with diabetes mellitus or cardiovascular disease (prior strokes, transient ischemic attacks, angina, myocardial infarction, or coronary revascularization in the previous 2 years) were not eligible. All participants had to meet current guidelines for physical activity (≥150 minutes of physical activity per week) (9).

Research procedures. Each participant made at least 1 study visit to allow researchers to collect demographic data and apply an accelerometer. SenseWear Pro armband triaxial accelerometers (BodyMedia, Sword Medical Limited) were fitted snugly around the left upper triceps and used to monitor levels of physical activity 24 hours per day for 7 full days. Participants were instructed to wear the device continuously, including during sleep, except when bathing or swimming. Minute-by-minute epoch data from the SenseWear Pro were analyzed using BodyMedia InnerView Research Software (version 5.1, BodyMedia, Inc). To be included in the analysis, participants were required to comply with wearing the accelerometer for at least 5 valid days, including 1 weekend day. A valid day was defined as at least 21 hours of recorded activity on the accelerometer.
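The wear-time compliance rule above (at least 5 valid days including 1 weekend day, where a valid day has at least 21 hours of recording) is easy to express in code. The sketch below assumes a per-day summary of wear hours and is purely illustrative.

```python
from datetime import date

MIN_VALID_DAYS = 5
MIN_WEEKEND_DAYS = 1
MIN_WEAR_HOURS = 21  # a "valid day" per the study definition

def is_compliant(daily_wear_hours: dict[date, float]) -> bool:
    """daily_wear_hours maps each calendar day to hours of recorded
    activity; returns True if the participant meets the inclusion rule."""
    valid_days = [d for d, h in daily_wear_hours.items() if h >= MIN_WEAR_HOURS]
    weekend_days = [d for d in valid_days if d.weekday() >= 5]  # Sat/Sun
    return (len(valid_days) >= MIN_VALID_DAYS
            and len(weekend_days) >= MIN_WEEKEND_DAYS)
```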
Measures. Accelerometer data were recorded in 1-second epochs. Average time per day spent at sedentary, light, and moderate to vigorous activity levels was recorded in minutes per day. On the basis of a systematic review of accelerometry practice for older adults (10), the following cut points were used: 99 counts or fewer per minute as sedentary time, 100 to 1,951 counts per minute as light physical activity, and 1,952 or more counts per minute as moderate to vigorous activity (11,12). The SenseWear Pro armband was also used to measure heat flux (the amount of heat dissipating from the body), galvanic skin response (the amount of evaporative heat loss), and skin temperature (an estimate of the body's core temperature). These parameters were then entered into proprietary algorithms to estimate caloric expenditure. The SenseWear Pro has been used to measure caloric expenditure due to physical activity in previous investigations in older adults (13) and has been validated against doubly labeled water techniques (14).
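As a concrete illustration of the cut points above, the following sketch (with hypothetical function names of our own) bins minute-level accelerometer counts into the three intensity categories and totals the minutes per category.

```python
def classify_minute(counts_per_minute: int) -> str:
    """Intensity category for one 1-minute epoch, using the cut points
    cited in the Methods (references 10-12)."""
    if counts_per_minute <= 99:
        return "sedentary"
    if counts_per_minute <= 1951:
        return "light"
    return "moderate_to_vigorous"

def minutes_by_intensity(counts: list[int]) -> dict[str, int]:
    """Total minutes in each category from a day's minute-level counts."""
    totals = {"sedentary": 0, "light": 0, "moderate_to_vigorous": 0}
    for c in counts:
        totals[classify_minute(c)] += 1
    return totals
```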
Statistical analysis. All measures of physical activity were normalized by the amount of time per day the accelerometer device was worn. Our primary response variable was caloric expenditure (energy expenditure per day, kcal). The 3 levels of physical activity (sedentary, light, and moderate to vigorous) and the predictors in the multivariate linear regression model were determined a priori. Previous investigations showed that age and sex are predictors of energy expenditure in older adults; these predictors were therefore also added to our initial model (11,15). Scatterplots were visually inspected for outlier data, and density plots were examined to identify data skewing. Any predictors that demonstrated skewing were logarithmically transformed (base 10) before the univariate and multivariate analyses. A tiered approach was used for the analysis, whereby the initial model consisted of all of our predictor variables. The data were fitted with a linear model using the least squares method, and the parameters (intercept and β coefficients) as well as scaled β coefficients were calculated using standard methods (16). A stepwise method was used to generate each successive regression model; the criterion for removal was the least significant predictor with a P value greater than .05. In each iteration of the stepwise regression, the least significant predictor was removed (17). After the removal of each predictor, an analysis of variance (ANOVA) was performed against the previous model to verify that there had been no significant change. To ensure the assumptions of the multivariate regression were met, variance inflation factors were examined for multicollinearity in each iteration of the model. A variance inflation factor greater than 10 was interpreted as an indicator of collinearity problems (16). Plots of residuals and a Q-Q plot were examined for our final minimum effective model. The R core software package version 3.0.1 was used for statistical analysis; a significance level of P < .05 was set (18).

Results. Fifty-five people were screened; 1 person was excluded because of a cardiovascular event in the previous 2 years. Of the 54 originally recruited, 2 participants withdrew; 1 participant did not meet the accelerometer compliance criteria; and 1 participant wore the monitor incorrectly. Data from 50 participants (23 men, 27 women) were used in the data analysis. The accelerometers were worn for an average of 98.4% (standard deviation [SD], 1.4%) of the study time. Participants spent an average (SD) of 159 (78)

None of the predictor variables demonstrated skewing, and no transformation was required before the analysis. In our final minimum effective model, moderate to vigorous physical activity was the only activity parameter significantly correlated with caloric expenditure. Time spent in sedentary activity and light activity was negatively associated with caloric expenditure, but these associations were not significant (Table 2). A multivariate regression model including 5 predictors (time spent in sedentary activity, time spent in light activity, time spent in moderate to vigorous activity, age, and sex) explained 73% of the variance in caloric expenditure (Model 1, Table 3). The highest variance inflation factor was 2.97 (for time spent in sedentary activity), indicating no issues of multicollinearity. Model 1 (Table 3) demonstrated positive associations of time spent in light activity, time spent in moderate to vigorous physical activity, and male sex with caloric expenditure, and negative associations of increasing age and time spent in sedentary activity with caloric expenditure. However, our minimum effective model (Model 4, Table 3), which included only moderate to vigorous physical activity and sex, explained 68% of the variance in caloric expenditure. Every extra minute spent in moderate to vigorous physical activity per day was associated with an increased caloric expenditure of 16 kcal. In addition, male sex was associated with a higher caloric expenditure (Model 4, Table 3). One participant had very high levels of activity and was therefore an outlier. A sensitivity analysis in which data for this participant were excluded showed that this exclusion had no effect on the results of the analysis.
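The backward-stepwise selection and the VIF check described above can be sketched with statsmodels (the paper used R; this Python sketch is ours, with a hypothetical data file and column names).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("activity_summary.csv")  # assumed per-participant summaries
predictors = ["sedentary_min", "light_min", "mvpa_min", "age", "male"]

def backward_stepwise(df, outcome, predictors, alpha=0.05):
    """Repeatedly drop the least significant predictor while its P value
    exceeds alpha, mirroring the paper's model-selection rule."""
    current = list(predictors)
    while True:
        X = sm.add_constant(df[current])
        fit = sm.OLS(df[outcome], X).fit()
        pvals = fit.pvalues.drop("const")
        if pvals.empty or pvals.max() <= alpha:
            return fit
        current.remove(pvals.idxmax())

fit = backward_stepwise(df, "kcal_per_day", predictors)

# VIFs for the initial model; values > 10 would flag collinearity.
X = sm.add_constant(df[predictors])
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
```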
Discussion. We demonstrated that an athletic older adult population spent a substantial portion of their waking hours in sedentary activity (approximately 9.4 hours per day). We also showed that the main contributor to energy expenditure in active older adults is the amount of time spent in moderate to vigorous activity. The most surprising finding of our study was that, despite exceeding the current guidelines for physical activity of 150 minutes or more per week (9), highly active older adults spent a large part of the day completely sedentary. The amount of sedentary time observed in our active population was comparable to that seen in sedentary adults over 60 years old (19), sedentary adults over 65 years old (6), middle-aged sedentary adults (11), adult men over 70 years old (20), and older adult men over 80 years old (20). This amount of sedentary time was surprising given that the physical activity level of our participants is observed in less than 5% of the older adult population (21). Although the contribution of different levels of activity to energy expenditure has been studied in sedentary young (4) and sedentary older (6) adults, this relationship had not been examined in active older adults. Previous work showed that the main contributor to energy expenditure in young adults is moderate- to vigorous-intensity activity (4). In contrast, light activity has been shown to be the main contributor to energy expenditure in older adults (5,6). Our participants demonstrated a relationship between activity intensity and caloric expenditure that was more in keeping with a younger population, with moderate-intensity activity predicting energy expenditure.

Although the reason for this "young" profile of activity and energy expenditure is unclear, some exercise intervention studies suggest an underlying mechanism. Unlike young adults, older adults may increase their physical activity through a shift from sedentary to light-intensity activities, because these activities tend to be better tolerated (22). Our participants clearly did not follow this pattern, perhaps because most of them were continuing an established pattern of high levels of physical activity as opposed to starting an exercise program from a previously sedentary state. Although regular leisure-time physical activity has many benefits (23,24), sedentary behavior has recently been identified as an independent risk factor for dyslipidemia, diabetes mellitus, obesity, and hypertension (23,25). Of even more concern, these associations persist after accounting for the level of moderate and vigorous physical exercise (26). These findings suggest that sedentary behavior may pose a risk for cardiometabolic disease that is distinct from physical exercise, or the lack thereof. Our study population was an extremely active group of individuals; despite this, they spent a large amount of time in sedentary behaviors. In fact, the time spent sedentary was similar to that observed in studies of average older adults (6). Although more work needs to be done, our results suggest that even active older adults could benefit from interventions (such as the use of standing desks) that would reduce sedentary time without interfering with their current high levels of moderate-intensity activity.

Our study has several potential limitations. The cross-sectional study design limits inference about causality. Prospective or interventional trials are needed to define the physiologic and behavioral factors involved in the associations observed in this study. In addition, our highly active study population and the study's small sample size make generalization of our results to less active populations problematic.
The impact of genetic counselling about breast cancer risk on women's risk perceptions and levels of distress

Women referred to a familial breast cancer clinic completed questionnaires before and after counselling and at annual follow-up to assess their risk estimates and psychological characteristics. The aims were to determine whether those who attended the clinic overestimated their risk or were highly anxious and whether counselling influenced risk estimates and levels of distress. Women (n = 450) at this clinic were more likely to underestimate (39%) than overestimate (14%) their risk. Mean trait anxiety scores were higher than general population data (t = 4.9, n = 1059, P < 0.001) but not significantly different from published data from other screening samples. Overestimators (z = 5.69, P < 0.0001) and underestimators (z = −8.01, P < 0.0001) reported significantly different risk estimates (i.e. increased accuracy) after counselling, but significant inaccuracies persisted. Over- (n = 12) and underestimators (n = 60) were still inaccurate in their risk estimates by a factor of 2 after counselling. Thirty per cent of the sample scored above the cut-off (5/6) for case identification on a screening measure for psychological distress, the General Health Questionnaire (GHQ). GHQ scores were significantly lower after counselling (t = 3.6, d.f. = 384, P = 0.0004), with no evidence of an increasing risk estimate causing increased distress. The risk of distress after counselling was greater for younger women and those who were more distressed at first presentation. The counselling offered was effective in increasing the accuracy of risk perceptions without causing distress to those who initially underestimated their risk. It is worrying that inaccuracies persisted, particularly as the demand for service has since reduced the consultation time offered in this clinic. Further work is needed to evaluate alternative models of service delivery using more sophisticated methods of assessing understanding of risk.

One concern in setting up the clinic was that counselling might distress women who learned that their risk of developing breast cancer was greater than they had previously thought. In the current state of knowledge, the information that can be given about individual risk and the efficacy of available risk management strategies is highly probabilistic. It was recognized that this uncertainty could also generate anxiety. Key issues were therefore to assess women's perceptions of their risk of developing breast cancer and the psychological morbidity associated with cancer risk counselling. This study was conducted against a background of data accruing from the US showing a substantial proportion of women with a family history of breast cancer with significant levels of psychological distress (Kash et al, 1992) and gross overestimates of their own cancer risk (Lerman et al, 1994a; Gagnon et al, 1996), even after risk counselling (Lerman et al, 1995). High levels of perceived susceptibility and associated anxiety have been shown to interfere with adherence to recommended surveillance programmes (Kash et al, 1992; Lerman et al, 1993). The concern has also been expressed that some women will deal with their concerns by making ill-considered requests for genetic testing or prophylactic surgery (Lerman et al, 1994b). In the UK, Lloyd et al (1996) compared 62 genetic counsellees (with a family history of breast cancer) with a matched group of attenders at a general practitioner's (GP) surgery.
They found these two groups of women to be similar in terms of the outcome measures used and concluded that the risk of breast cancer was not predictive of psychological morbidity. In that study, risk perceptions were recorded before counselling, but 58% of the women self-reported that they had previously overestimated their risk. Assessed after genetic counselling, they were found more likely to underestimate (48%) than overestimate (18%) their risk.

This paper presents an interim analysis of a subset of data from the first cohort of women attending the clinic to address three main questions: (1) What were the characteristics of the women who presented at the clinic? In particular, did they overestimate their risk, and were they highly anxious women? (2) Did counselling influence their perception of their risk of developing breast cancer? (3) Did cancer risk counselling cause distress? The study afforded the opportunity to explore some potentially explanatory intervening variables that might account for individual differences in risk perception and distress, i.e. beliefs about control of health and coping style in relation to information about threat to health. Preliminary findings are reported. Interpretation of the data presented requires some understanding of the clinical context from which they were collected.

Clinic setting. A multidisciplinary clinic was set up in Edinburgh in the autumn of 1992 to provide genetic counselling and breast screening for young (age < 50) women with a family history of breast cancer. The clinic is run in a breast screening centre in the community and at the time of the study was staffed by consultants in genetics and breast surgery, a senior registrar in oncology with training in genetics, a surgical registrar conducting research in genetics for an MD thesis, and a specialist nurse. The clinic was funded from a variety of sources to provide a research-based service of which this psychological assessment was an integral part. The criteria for referral at the time of this study were: (1) a first-degree relative with bilateral breast cancer, breast and ovarian cancer, or breast cancer diagnosed at age < 50 years; or (2) two first-degree relatives with breast/ovarian cancer at any age; or (3) a male first-degree relative with breast cancer. In practice, referrals were accepted for any woman with a history of concern to the referring agent. Of the first 200 women, the majority (>70%) were referred from hospital clinics. As the clinic became known, the proportion of referrals from GPs increased (to 55% of the total sample). Women were counselled individually. At the time of this study, two clinic appointments were usually offered. At the first, a detailed family history was taken and discussed with the geneticist, who gave some general educational information about breast cancer genetics. Clinical examination and mammography (where appropriate) were undertaken, and women were offered training in breast self-examination. Opportunity was given for breast cancer worries to be discussed with the breast surgeon/oncologist. Family history data were then verified (i.e. through hospital and public records by pedigree workers) and reviewed by clinic staff at a case conference, where risk estimates were assigned with reference to epidemiological data (Claus, 1991) and consensus was established on the advice to be given.
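The referral criteria above are essentially a set of Boolean rules over the pedigree, so they can be expressed directly in code. The sketch below is purely illustrative (the record structure and names are hypothetical, not part of the clinic's actual system).

```python
from dataclasses import dataclass

@dataclass
class Relative:
    degree: int                 # 1 = first-degree relative
    sex: str                    # "M" or "F"
    breast_cancer: bool
    bilateral: bool
    ovarian_cancer: bool
    age_at_diagnosis: int | None

def meets_referral_criteria(relatives: list[Relative]) -> bool:
    """Encodes the three 1992 clinic referral criteria described above."""
    first_degree = [r for r in relatives if r.degree == 1]
    # (1) one first-degree relative with bilateral breast cancer,
    #     breast + ovarian cancer, or breast cancer diagnosed before 50
    c1 = any(r.breast_cancer and (r.bilateral or r.ovarian_cancer or
             (r.age_at_diagnosis is not None and r.age_at_diagnosis < 50))
             for r in first_degree)
    # (2) two first-degree relatives with breast/ovarian cancer at any age
    c2 = sum(r.breast_cancer or r.ovarian_cancer for r in first_degree) >= 2
    # (3) a male first-degree relative with breast cancer
    c3 = any(r.sex == "M" and r.breast_cancer for r in first_degree)
    return c1 or c2 or c3
```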
At their second clinic visit, women were given this empiric risk estimate, and plans for their future management and subsequent follow-up were discussed with them and then communicated to their GPs by letter. Those not at significantly increased risk were discharged.

Sample. A consecutive series of 486 women newly referred to the clinic over a 27-month period were invited to take part in the study.

Risk estimate. Women were asked to select, from 12 response categories, the ratio (e.g. 1 in 2 to 1 in 100) which they believed to be (a) the risk for a woman in the general population and (b) their own lifetime risk of developing breast cancer (Evans et al, 1993, 1994).

Risk factors. Women were asked to identify any of a list of nine factors that they believed would increase the risk of breast cancer (adapted from Fallowfield et al, 1990). The factors were: being single, married without children, married with children, not having breastfed, taking the oral contraceptive pill, having relatives with breast cancer, being past the menopause, having been hit in the breast, and stress.

Psychological distress. The Spielberger State-Trait Anxiety Inventory (STAI; Spielberger, 1983) was used to measure anxiety proneness (trait) and current levels of generalized anxiety (state). Knight et al (1983) collected STAI data from a general population sample in an area of New Zealand with a strong history of immigration from Scotland. Their STAI trait and state anxiety scores, which are presented by age (in 10-year bands) and sex, offer more appropriate reference data for this study than those in the STAI manual, which are derived from employees in the US Federal Aviation Administration. Reference data are also available from a large series of women attending for breast screening, for whom STAI trait and state anxiety scores are presented by age (in 10-year bands from 30–69) and separately for women with normal breasts vs benign disease (Morris and Greer, 1982). More recent trait anxiety data are available from women aged > 50 in the tamoxifen prevention trial and from women with and without a family history of breast cancer undergoing routine screening (Thirlaway et al, 1996). The General Health Questionnaire (GHQ)-30 was used to screen for clinically significant levels of psychological distress and dysfunction (Goldberg and Williams, 1988). The manual for this instrument provides extensive comparative data derived from population surveys.

Psychological factors. The Health-related Locus of Control Scale (Wallston et al, 1978) was used to assess the extent to which the women attributed their health to internal (i.e. own behaviour), external (e.g. doctors), or chance factors. The nine items with the highest item-subscale correlations were selected (Marks et al, 1986). Although no descriptive data were available for comparison, this short form allowed the role of locus of control to be explored while keeping the burden on respondents to a minimum. The Miller Behavioural Style Scale (Miller, 1987) was designed to assess the propensity of people to seek out ('monitor') or avoid ('blunt') information about threatening events. The short form presents two scenarios from which respondents select their most likely reaction from a fixed choice of 'monitoring' and 'blunting' responses. Limited comparative data using this version are available from small samples of students and patients with recurrent cancer (Steptoe, 1989).
The short form was again considered adequate for an exploratory analysis of the role of these constructs within this study. Data about the women's health care attitudes and behaviour, which were also collected, will be reported elsewhere.

Procedure. The assessment package was posted to women with their clinic appointment and returned when they attended the clinic. Exceptions were the baseline state anxiety scale and GHQ-30, which were administered at the clinic before the consultation. Risk estimate, state anxiety, and GHQ-30 were reassessed at the clinic after the second consultation, at which risk counselling was undertaken. The measures were again sent to women for completion prior to their annual follow-up at the clinic.

Statistical analysis. Descriptive statistics were generated to describe the study population. Anxiety and distress data from the same patients on two occasions were compared using paired t-tests. Comparisons between two independent samples were made using two-sample t-tests. Personal risk estimates (transformed from ratio to percentage risk) from the same patients on two occasions were compared by the Wilcoxon matched-pairs test. Risk estimates from independent groups were compared by the Kruskal-Wallis test. The associations between explanatory variables and ordered groups based on the accuracy of patients' personal risk estimates (under-, close, and overestimates) were examined using the non-parametric trend test (Cuzick, 1985). The chi-squared test for trend was used to compare proportions across these ordered groups. Spearman's rank correlation coefficient (rs) was used to measure the association between change in risk estimate and change in GHQ score. The gamma statistic G was used as a measure of association for ordered variables when these data were cast in the form of a contingency table. Multiple logistic regression, with backward stepwise selection, was used to examine which variables were predictive of distress, using P ≤ 0.05 as the criterion for keeping variables in the model. Earlier models in the stepwise procedure were examined to assess the justification for retaining the exploratory variables, i.e. locus of control and monitoring/blunting, in future studies. The data were analysed using the Stata statistical package (Stata Corp., 1995).
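Most of the statistical analysis described above maps onto standard SciPy calls; the sketch below uses small, hypothetical arrays of pre- and post-counselling risk estimates (as percentages) purely to show the calls, and the Kruskal-Wallis line treats them as if they were independent groups for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical paired risk estimates (%) before and after counselling.
pre = np.array([50.0, 25.0, 10.0, 33.3, 5.0])
post = np.array([33.3, 20.0, 12.5, 25.0, 10.0])

# Paired comparison of risk estimates (Wilcoxon matched-pairs test).
w_stat, w_p = stats.wilcoxon(pre, post)

# Comparing risk estimates across independent groups (Kruskal-Wallis).
h_stat, h_p = stats.kruskal(pre, post)

# Association between change in risk estimate and change in GHQ score
# (Spearman's rank correlation).
ghq_change = np.array([-2, 0, 1, -3, 2])
rho, rho_p = stats.spearmanr(post - pre, ghq_change)
```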
The sample. Four hundred and eighty-six women referred to the clinic between October 1992 and January 1995 were sent baseline assessments, and 481 (99%) returned them when they attended the clinic. There are variable numbers of missing data per item in the forms returned over the three assessment points. For clarity, the denominator is therefore specified throughout. Baseline data were analysed to assess whether the characteristics of those attending the clinic changed year by year. In the absence of statistically significant time trends, the sample was treated as a single cohort. At the time of data analysis, a number of women were no longer in the study population. Of the original sample attending the clinic, 136 women (28%) were discharged. Ninety-five of them were not at sufficiently increased risk to warrant surveillance; of these, ten were discharged after the first clinic visit and 85 after the second. Twenty-eight women were discharged because of their age, a further 12 were discharged to the IBIS (International Breast Cancer Intervention Study) trial, and one woman was discharged following medical investigation. Of the original sample, 69 women (14%) failed to attend subsequent clinics: five withdrew from the clinic altogether, 30 failed to attend follow-up visits within the time frame of this study, and 34 were lost to follow-up (for example, they moved away, or there was an administrative failure). Consequently, at the time of data analysis, 281 women (58% of the original sample) were being kept under surveillance, but they were at different stages of follow-up. The numbers of women for whom data were available at the time of analysis were therefore variable across the assessment points. The sample size is specified throughout. The mean age of the women presenting to the clinic in the study period was 39.6 years (n = 481, s.d. = 9.2).

Preclinic risk estimate. Personal risk estimate. At baseline, data were available from 475 women to indicate their perception of their risk of developing breast cancer. Eighteen women (4%) believed it inevitable that they would develop breast cancer. Thirty women (6%) set their risk at ≤ 1:100. The distribution of risk estimates between these extremes is shown in Figure 1. Sixty-three per cent set their own risk at ≥ 2 × the general population risk, whatever they believed that to be. Thirty-one per cent believed their risk to be equal to or greater than the general population risk by a factor of < 2. Surprisingly, 33 women (7%) set their own risk lower than the general population risk.

Counselled risk. The risk ratio assigned by staff at the clinic ('counselled risk') was available from the case notes of 458 women at the time of this analysis. Four women (1%) were given a risk estimate of 1 in 2, and 77 women (17%) were given a risk estimate of less than 1 in 10, i.e. similar to the risk of the general population. Between these extremes, the risk ratios given to those attending the clinic were: ≥ 1 in 3, n = 92 (20%); ≥ 1 in 4, n = 122 (27%); ≥ 1 in 5, n = 87 (19%); ≥ 1 in 10, n = 76 (17%).

Impact of counselling on personal risk estimate. Immediately after risk counselling at the clinic, data were available from 363 women who could be categorized on the basis of the accuracy of their baseline risk estimates (over-, close, and underestimators). Pre- and post-counselling risk estimates were compared for each category (Figure 2A). Women with close estimates at baseline (n = 182) tended to report lower risk estimates after counselling (z = 1.74, P = 0.08). After counselling, their risk estimates were on average significantly lower than the counselled risks they had been given (z = 2.60, P < 0.01). After counselling, 11 women overestimated (≥ 2×) and 35 underestimated (≤ 0.5×) their risk relative to the counselled risk. Data were available from 171 women at all three time points, i.e. baseline, post-counselling, and follow-up. This subset of data, categorized by the accuracy of the women's initial risk estimates, is shown in Figure 2B. A significant shift in risk estimates, in the direction of increasing accuracy from baseline to post-counselling, was observed for both the overestimators (z = 3.7, P < 0.0002) and the underestimators (z = −6.4, P < 0.0001). There was no significant difference between post-counselling and follow-up risk estimates for underestimators or close estimators. Overestimators showed a significant shift (z = −2.8, P < 0.03), indicating that their risk estimates were increasing again over time after risk counselling.
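The over/close/under categorization used above (accurate within a factor of 2 of the counselled risk) can be made explicit in code. The helper below converts '1 in N' ratios to percentages and applies that factor-of-2 rule; the function names are ours, not the authors'.

```python
def ratio_to_percent(n: float) -> float:
    """Convert a '1 in n' risk ratio to a percentage (1 in 4 -> 25%)."""
    return 100.0 / n

def classify_estimator(personal_pct: float, counselled_pct: float) -> str:
    """'over' if the personal estimate is at least twice the counselled
    risk, 'under' if at most half of it, otherwise 'close'."""
    if personal_pct >= 2 * counselled_pct:
        return "over"
    if personal_pct <= 0.5 * counselled_pct:
        return "under"
    return "close"

# Example: a woman estimating 1 in 2 who is counselled a 1 in 8 risk.
print(classify_estimator(ratio_to_percent(2), ratio_to_percent(8)))  # "over"
```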
Two of the 171 women consistently overestimated their risk at each time point by a factor of ≥ 2, and 17 (10%) consistently underestimated their risk to the same degree over the three assessments. Some additional analyses were undertaken to assess the representativeness of these findings. Three subsets of data were compared from those who had (1) baseline data only, (2) baseline and post-counselling data only, and (3) data available at all three time points. These subsets differed in the median risk estimate at baseline (χ2 = 8.11, d.f. = 2, P = 0.02). Overestimators in subset 3 had much higher risk estimates than those in the other two subgroups (χ2 = 13.03, d.f. = 2, P = 0.002), which inflated the median risk in that subgroup. Precounselling, 9 of the 18 overestimators in subset 3 had believed it inevitable that they would develop breast cancer.

Factors influencing accuracy of initial risk perception. At baseline (n = 480), the median number of correct responses to the nine items concerning putative risk factors for breast cancer was 5. Ninety-seven per cent of the women recognized that having relatives with breast cancer is a risk factor. The next most frequently cited risk factor was stress (48%). Overestimators were more likely to identify stress (62%) as a risk factor (cf. 46% of close estimators and 47% of underestimators). Age and scores for trait and state anxiety, locus of control, and coping style (monitoring/blunting) collected at baseline are summarized for the sample as a whole and separately for over-, close, and underestimators (Table 1). The accuracy of women's initial risk estimates appeared to be related to age (z = 2.97, P < 0.005) and trait anxiety (z = 2.52, P < 0.01) but unrelated to state anxiety (z = 1.47, P = 0.14). Older women and those with higher trait anxiety were more likely to overestimate their risk. Those who overestimated their risk tended to have higher scores on the external locus of control scale, i.e. they were more likely to believe that their health depended on others, but this did not reach statistical significance.

Precounselling. The sample as a whole exhibited a higher mean trait anxiety score (Table 2) than that derived from women in a general population sample (Knight et al, 1983), but not one significantly different from scores reported from two samples of women attending a breast screening clinic (Morris and Greer, 1982). Baseline state anxiety scores collected when women first attended our clinic were significantly higher than the data from the general population but significantly lower than the data from the screening clinic sample (Table 2).

After counselling. State anxiety scores were available both at baseline and post-counselling for 384 women. After counselling, state anxiety scores were significantly lower than baseline scores for the same women (Table 2). These post-counselling scores were lower than scores from Morris and Greer's (1982) screening clinic sample (benign disease: t = −9.5, d.f. = 699, P < 0.001; healthy women: t = −6.7, d.f. = 656, P < 0.001) and similar to Knight et al's (1983) general population data (Table 2). Examined separately, under-, close, and overestimators all showed on average a reduction in state anxiety scores after counselling, but the difference was statistically significant only for overestimators (t = −2.38, d.f. = 49, P < 0.03). No significant relationship was observed between change in risk estimate and change in state anxiety score from baseline to post-counselling (rs = 0.05, P = 0.31).
Among women assessed at annual follow-up, state anxiety scores tended to be on average lower than at baseline (t = −1.70, d.f. = 179, P = 0.09). For under-, close, and overestimators examined separately, there were no significant differences between state anxiety scores at baseline and at annual follow-up.

(Table 2 notes: a, Knight et al, 1983; b, Morris and Greer, 1982; c, two-sample t-test against the mean trait anxiety score; d, paired t-test of baseline and post-counselling state anxiety scores; e, two-sample t-test against the mean state anxiety score at baseline.)

Psychological distress (GHQ-30). Precounselling. Four hundred and eighty-one women completed the GHQ-30 at baseline. Their mean score was 4.5 (s.d. = 6.2). These data were compared with general population data derived from respondents to a health and lifestyle survey in the UK (Cox et al, 1987). The mean GHQ score for 777 women in the survey aged 35–44 years was 3.8 (s.e. = 0.19). Younger (aged 25–34 years) and older (aged 45–54 years) women in this survey had GHQ scores that were similar to our sample: mean score = 4.4 (s.e. = 0.21) and 4.6 (s.e. = 0.21), respectively. GHQ data analysed separately for 174 under-, 211 close, and 65 overestimators showed no significant differences among the three groups at baseline (z = 1.45, P < 0.15), and the proportion of women exhibiting case-level distress was similar, i.e. 30% vs 28% vs 34% (χ2 for trend = 0.11, d.f. = 1, P < 0.80).

Changing risk estimates and distress. A key concern was that, if counselling increased women's perception of their risk, this would cause distress. Full data were available for 368 women assessed before and after risk counselling. There was no evidence of an association between change in risk estimate and change in GHQ scores (rs = 0.04, P < 0.50). The sample was divided on the basis of whether the woman's risk estimate increased, remained the same, or decreased after counselling relative to her baseline estimate. Changes in GHQ-30 scores of ≥ 3 points in either direction were recorded and tabulated against changes in risk estimate of ≥ 1 percentage point in either direction (Table 3). Again, there was no evidence that an increasing estimate of risk was associated with increased distress. Thirty per cent of this sample were less distressed after risk counselling, but this was not directly related to having a lower risk estimate.

Predicting post-counselling distress. Clinically, it is important to be able to predict which women are likely to show case-level distress so that appropriate intervention can be offered. A backward stepwise logistic regression analysis was undertaken using data from 363 women to determine the contribution of the variables assessed at baseline to predicting a GHQ score of > 5 after risk counselling. Variables entered into the model were: age, number of risk factors correctly identified, Spielberger anxiety scores, baseline GHQ-30, and scores from the locus of control and monitoring/blunting scales. Accuracy of initial risk estimate and change from baseline risk estimate were examined in earlier models which included the above variables, but neither was found to be associated with an increased risk of showing case-level distress. The final model is shown in Table 4. Younger age and a higher GHQ score at baseline were associated with an increasing risk of case-level distress, as assessed by the GHQ, after risk counselling.
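A minimal sketch of the kind of logistic model described above, using statsmodels; the data file and column names are hypothetical, and the backward-stepwise procedure is reduced to a single fit for brevity (the paper used Stata).

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("counselling_cohort.csv")  # assumed one row per woman

# Binary outcome: case-level distress after counselling (GHQ-30 > 5).
y = (df["ghq_post"] > 5).astype(int)

# Baseline predictors entered into the initial model in the paper.
X = sm.add_constant(df[["age", "risk_factors_correct",
                        "trait_anxiety", "state_anxiety", "ghq_baseline",
                        "internal_loc", "external_loc", "chance_loc",
                        "monitoring", "blunting"]])

fit = sm.Logit(y, X).fit()
print(fit.summary())  # coefficients; exp(coef) gives odds ratios
```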
At this stage in our understanding, we wanted to clarify whether any other variables suggested an effect warranting further study in the future. In this analysis, we therefore examined the order in which variables were removed from the model. The model before that reported in Table 4 included one locus of control variable, i.e. external locus of control. The coefficients for age and GHQ in this model are similar in magnitude to those in the final model reported. An increasing external locus of control tended to be associated with an increasing risk of exhibiting case-level distress, although this was not statistically significant at conventional levels (P < 0.11).

DISCUSSION. Before this familial breast cancer clinic was set up, women with a family history of breast cancer that was of concern to them or their GP had been referred to a symptomatic breast clinic. This offered clinical examination and mammographic screening, but risk assessment was not discussed. These women were referred on when this clinic was set up. The proportion of women referred directly from GPs increased as the clinic became established, but there was no evidence that this shifting practice made a significant difference in terms of time trends in the variables of interest for this study. Compliance with the baseline assessment was excellent, so the data presented adequately describe the first cohort of women to attend this clinic.

Do breast cancer genetic counselling clinics attract women who grossly overestimate their risk and who are highly anxious? The first cohort of women who attended this clinic in SE Scotland were aware of their increased risk relative to the general population (two-thirds of them set themselves at twice the population risk, whatever they conceived that to be), but they were more likely to underestimate their risk (39%) than to overestimate it (14%). These data echo the experience of the clinic in Manchester (Evans et al, 1993). It is important to avoid ascribing a spurious accuracy to the women's risk estimates collected using this methodology, the limitations of which we fully acknowledge. We also recognize that variations across studies in the way in which risk is calculated, and in how the accuracy of women's estimates is defined, make comparison difficult. Nonetheless, there is a stark contrast between our data and Lerman et al's trial (1995) of breast cancer risk counselling, in which two-thirds of the women grossly overestimated their own cancer risk. There may be important cross-cultural differences in cancer risk perception, but it is also important to be aware of how samples are derived. In Lerman's trial, women had been identified through a relative currently receiving treatment for breast cancer. It is likely that this would have increased their sense of their own susceptibility. There is a lack of consensus among clinicians about how best to communicate information about risk, e.g. in words (e.g. high, low) vs numbers (e.g. ratios or percentages). We elected to use ratios as the method of assessing risk estimates because the general population risk is commonly expressed in those terms (although personal risk information is typically given in a variety of ways, even within a single consultation) and because, for this reason, ratios were the form of the only comparable data available at that time (Evans et al, 1993). Epidemiological data continue to be analysed to provide genetic counsellors with more accurate numerical values for relative (Pharoah et al, 1997) and absolute risk (Pharoah and Mackay, 1998).
It remains unclear what these various numerical risk estimates mean to the women to whom they are assigned (Hallowell and Richards, 1997). Psychosocial research suggests that lay understanding of genetic risk derives more from concepts of family relationships than from scientific genetics (Richards and Ponder, 1996). More detailed reviews of the issues in cancer genetic risk counselling have recently been published (Bottorff et al, 1998; Cull, 1998). Further research is needed to clarify how different degrees of risk are communicated and can best be understood in this setting. In terms of knowledge of risk factors, the women in this sample were better informed than Fallowfield et al's (1990) sample of women attending for routine screening, but some misconceptions prevailed; for example, half of our sample believed that stress has been shown to increase the risk of breast cancer. It was beyond the remit of this study to assess how that belief impinged on their view of their own susceptibility, but it may be important in influencing how they elect to manage their own breast cancer risk.

This sample did have somewhat higher trait anxiety scores than the mean for a general population sample (Knight et al, 1983) but no higher than for women who elected to attend for routine mammography (Morris and Greer, 1982). More recently, Thirlaway et al (1996) reported trait anxiety scores from participants in the tamoxifen prevention trial and women in the National Breast Screening Programme (NBSP), separating those who did or did not have a family history of breast cancer and those who were or were not aware of the population risk of the disease. The 24 women in the NBSP with a family history who were aware of the population risk were the most anxious (mean trait anxiety score = 45.5, s.d. = 9.5). The other groups of women had anxiety scores similar to our data. State anxiety scores in our sample were no higher than for women attending for routine mammography and were unrelated to the women's baseline risk estimates. There was, then, no evidence that this genetic risk counselling clinic was attended by women who grossly overestimated their risk. On the contrary, they were almost three times more likely to underestimate than overestimate their risk. Nor was there evidence that they were more anxiety prone or acutely anxious than other women who attended for routine breast screening, although those who did overestimate their risk were more anxiety prone. The characteristics of those attending such clinics may change in future as public awareness increases and as referral criteria are more strictly applied to optimize access to services for those at highest risk. It is likely to be important for effective counselling to take account, in the clinic, of women's initial perceptions of their risk and of the factors that influenced them to attend (e.g. a close relative recently diagnosed with breast cancer), as counselling may need to be tailored accordingly.

Does risk counselling influence risk perception? From the point of view of service evaluation, an important finding of this study was that the risk counselling offered increased the accuracy of the women's perceptions of their own risk of developing breast cancer. Although risk estimates have limitations as a primary outcome measure, these data are encouraging in the light of the results of the only randomized trial of breast cancer risk counselling reported to date (Lerman et al, 1995).
In that trial, the risk estimates of the two-thirds of women who grossly overestimated their risk initially were not modified by the counselling offered in either arm. We have, however, no room for complacency. The overestimators in our sample reduced their risk perceptions significantly after counselling, but they continued to overestimate their risk to a statistically significant degree relative to the counselled risk, and their estimates appeared to be increasing again on longer-term follow-up. The underestimators as a group did increase their risk estimates significantly after counselling, but they continued to underestimate relative to the counselled risk. Applying these findings in the clinic, it would seem important to identify and understand the basis of such persistent misperceptions in order to find an effective means of correcting misunderstandings. Care is clearly needed in the analysis and interpretation of a dataset with so many missing data. Some women were discharged because they were not at sufficiently increased risk to warrant surveillance, some were lost to follow-up, and others had not been followed up by the time this analysis was undertaken. We were unable to demonstrate any significant time trends in the presenting characteristics of women attending the clinic in successive years. Compared with the subsets of data for women assessed pre- and post-counselling or at baseline only, the subset of women responding at all three assessment points (Figure 2B) displayed a higher median risk estimate at baseline, most notably among the overestimators. The numbers are small: there were 18 overestimators for whom annual follow-up data were available. Half of them had reported at baseline that they believed it inevitable that they would develop breast cancer (risk estimate = 100%), thereby inflating the median risk in that subset of the data. It is particularly reassuring, then, that even in this subset of women with more extreme views, the counselling available was effective in reducing risk estimates in a way that was sustained to annual follow-up. This cohort had the benefit of a model of service delivery that proved unsustainable in the face of growing demand. They had a first clinic visit which gave an opportunity for their family history and cancer-related concerns to be discussed and for general information to be given about breast cancer genetics. There was time for the women to assimilate this before the second consultation about their personal risk and risk management. We are continuing to assess the impact of evolving models of service delivery in SE Scotland on women's estimates of their risk of developing breast cancer and to develop methods of assessing their understanding of the concepts involved, which may be more clinically informative than risk estimates alone.

Does cancer risk counselling cause distress? Mean state anxiety and GHQ scores were significantly lower after counselling than before, confirming what many women volunteered at the clinic, i.e. that they felt reassured by being able to attend. There was no evidence that counselling caused anxiety or distress to those who were made aware that their risk of developing breast cancer was greater than they had previously thought. The risk of being significantly distressed after risk counselling was higher among women with higher GHQ scores at baseline.
This is a reminder that among those who attend a clinic like this will be a proportion of women who are already distressed by their experience of cancer in their families and whose concerns may warrant particular attention. The role of beliefs about control of one's health in the face of inherited susceptibility to a life-threatening illness warrants further study. Our preliminary findings suggest that believing that external or chance factors (rather than one's own actions) control health may be associated with a sense of greater risk and with greater distress. Elucidation of the relationship between these factors may suggest effective methods of intervention to reduce distress. For example, behaviours that increase the individual's sense of control over their health may be a helpful coping strategy in this context (Ingledew et al, 1996). In conclusion, it was reassuring in the light of the US experience to find that the counselling offered in this clinic was effective in increasing the accuracy of risk perceptions without causing distress to those who initially underestimated their risk. It is worrying that so many inaccuracies persisted, particularly as the demand for service has required a reduction in consultation time in this clinic. There is a need to develop more sophisticated methods of assessing people's understanding of the concepts involved in assigning familial cancer risk. Inaccurate risk perceptions may be attributable to misunderstandings of such complex and probabilistic information but may also reflect contrary beliefs arising from people's personal experience of cancer in the family. Evaluation of alternative models of delivering information about cancer risk should in future include objective assessment of the recipients' understanding of key items of the information given. Future studies also need to consider the influence of personal experience of cancer in the family on preconceptions about personal risk and levels of distress among those attending familial breast cancer clinics.
Pioglitazone Phases and Metabolic Effects in Nanoparticle-Treated Cells Analyzed via Rapid Visualization of FLIM Images
Fluorescence lifetime imaging microscopy (FLIM) has proven to be a useful method for analyzing various aspects of materials science and biology, like the supramolecular organization of (slightly) fluorescent compounds or the metabolic activity in non-labeled cells; in particular, FLIM phasor analysis (phasor-FLIM) has the potential for an intuitive representation of complex fluorescence decays and therefore of the analyzed properties. Here we present and make available tools to fully exploit this potential, in particular by coding via hue, saturation, and intensity the phasor positions and their weights both in the phasor plot and in the microscope image. We apply these tools to analyze FLIM data acquired via two-photon microscopy to visualize: (i) different phases of the drug pioglitazone (PGZ) in solutions and/or crystals, (ii) the position in the phasor plot of non-labelled poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs), and (iii) the effect of PGZ or PGZ-containing NPs on the metabolism of insulinoma (INS-1E) model cells. PGZ is recognized for its efficacy in addressing insulin resistance and hyperglycemia in type 2 diabetes mellitus, and polymeric nanoparticles offer versatile platforms for drug delivery due to their biocompatibility and controlled release kinetics. This study lays the foundation for a better understanding via phasor-FLIM of the organization and effects of drugs, in particular PGZ, within NPs, aiming at better control of encapsulation and pharmacokinetics, and potentially at novel anti-diabetic theragnostic nanotools.
Fluorescence lifetime imaging microscopy (FLIM), unlike traditional fluorescence microscopy where the contrast is given by emission intensity, delves into the time dimension of fluorescence decay. In the simplest implementation, the contrast is given by the fluorescence lifetime, the average time a fluorophore remains in the excited state before emitting a photon. This temporal information can unveil valuable insights into the microenvironment, molecular interactions, and biochemical properties of fluorophores, since fluorophores exhibit characteristic fluorescence decay patterns influenced by their surroundings. These decay patterns provide a quantitative means to assess molecular processes, such as energy transfer, molecular binding, and changes in local pH. In particular, FLIM has been widely used to analyze cells' metabolic states, starting with the pioneering work of Ragan et al. (1969) focused on the oxidation-reduction changes in flavin and pyridine nucleotides in perfused rat liver [28]. Lakowicz et al. (1992) conducted work on imaging the short-lived fluorescence of both free and protein-bound NADH, determining the changes in their ratio, a critical parameter for assessing cellular metabolism [29]. Ferri et al. (2020) recently exploited this parameter for investigating the metabolic response of Insulinoma 1E cells to glucose stimulation, and Azzarello et al. (2022) used it to examine the metabolic response of α and β cells to glucose in living human Langerhans islets [30,31].
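To make the lifetime contrast concrete, the following minimal sketch (in Python with NumPy, rather than the MATLAB used later in this work) computes the intensity-weighted mean photon arrival time of a decay histogram, a common fast proxy for the average lifetime; the arrays `t_ns` and `counts` are hypothetical stand-ins for a real single-pixel TCSPC decay, and the 12.5 ns window simply matches an 80 MHz pulse period.

```python
import numpy as np

def mean_lifetime(t_ns: np.ndarray, counts: np.ndarray) -> float:
    """Intensity-weighted mean photon arrival time (ns), a proxy for the
    average lifetime when the decay is close to mono-exponential."""
    return float(np.sum(t_ns * counts) / np.sum(counts))

t_ns = np.linspace(0.0, 12.5, 256)        # one 80 MHz period ~ 12.5 ns
counts = 1000.0 * np.exp(-t_ns / 2.5)     # fake 2.5 ns mono-exponential decay
print(mean_lifetime(t_ns, counts))        # ~2.5 ns, minus tail truncation
```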
However, the average lifetime is not always enough to characterize the decay patterns; one can study the complete decay curves, fitting them with multiexponential decays, in order to dissect the various contributions in each pixel of an image. However, the fitted parameters often carry large uncertainties, especially when the individual component lifetimes are close to each other (within a factor of 2 or 3) and when the measurements are noisy, as is typically the case for fast acquisitions. Moreover, each component is characterized by an amplitude and a lifetime, and all these parameters are not always easy to represent in an image. A valid alternative for representing FLIM data is given by the phasor approach (phasor-FLIM): this involves transforming the fluorescence decay data into points on a phasor plot, essentially a two-dimensional representation in the complex plane of the Fourier transform of the decay curve at a fixed frequency, with the real component G on the x-axis and the imaginary component S on the y-axis. Each pixel in a FLIM "image" is mapped to a position in the phasor plot, creating clusters of data points that represent various fluorescence decay characteristics. In the case of a monoexponential decay, expected for a simple pure compound in solution, the phasor falls along the universal semicircle (centered on (1/2, 0), with radius 1/2, taking the half with positive S), with shorter lifetimes more on the right; for bi- or multi-exponential decays, the phasor falls within this semicircle. When more than one compound is present in a pixel, the phasor falls in a position given by the average of the phasors of each compound weighted by the contribution of that compound to the total fluorescence intensity in the pixel; in simpler terms, the phasor position in such cases is given by the linear combination of those of the pure compounds. This aspect provides valuable and intuitive insights into the composition of the compounds within a pixel of an image, allowing for a more comprehensive understanding of complex molecular environments. For example, Stringari et al. (2011) utilized the phasor approach to distinguish various metabolic states of germ cells in live tissue, providing crucial insights into cellular metabolic function [2].
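As a concrete illustration of the transform just described (a sketch under the stated conventions, not the authors' implementation), the following Python function computes the first-harmonic phasor coordinates G and S of a single pixel's decay histogram; the 80 MHz repetition rate matches the setup described later in the Methods, while `decay` is a hypothetical input.

```python
import numpy as np

def phasor(decay: np.ndarray, f_rep: float = 80e6, harmonic: int = 1):
    """Phasor coordinates (G, S) of a decay histogram over one laser period."""
    T = decay.size
    t = (np.arange(T) + 0.5) / (T * f_rep)     # bin centers in seconds
    w = 2.0 * np.pi * harmonic * f_rep         # angular frequency
    total = decay.sum()
    g = np.sum(decay * np.cos(w * t)) / total  # real component G
    s = np.sum(decay * np.sin(w * t)) / total  # imaginary component S
    return g, s

# A mono-exponential decay should land (approximately, given the finite
# binning and the truncation at one period) on the universal semicircle:
tau = 2.5e-9
t = (np.arange(256) + 0.5) / (256 * 80e6)
g, s = phasor(np.exp(-t / tau))
print(g, s, (g - 0.5) ** 2 + s ** 2)           # last value ~ 0.25
```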
A recent study by Tentori et al. (2022) demonstrated the utility of phasor-FLIM in unveiling the supramolecular organization of doxorubicin (DOX) in the standard Doxoves® liposomal formulation (DOX®), underscoring its potential in elucidating the complex interplay between nanoparticles and biological systems. They determined that DOX® includes three different fractions: crystallized DOX (98%), free DOX (1.4%), and an unexpected liposomal-membrane-bound DOX (0.7%); the relative concentrations were determined through phasor-FLIM and spectroscopy on pure standards, demonstrating applicability for studying the state of encapsulated drugs from production to within living matter [15]. A bottleneck that still affects FLIM adoption by a broader audience is the lack of an easy-to-use tool for presenting the large amount of generated data in an intuitive way, particularly for researchers without extensive expertise in data analysis [32]. Some software packages have been developed to this aim: commercial ones, such as those provided by Becker & Hickl GmbH (Berlin, Germany), PicoQuant (Berlin, Germany), or Leica Microsystems (now part of Danaher, Washington, DC, USA), and a few freely distributed ones, such as SimFCS, FLUTE, PAM, and one developed by some of us [33-36]. However, they are still limited in representing the whole two-dimensional phasor plane in the FLIM image, either using a color scale that changes mostly in one direction within the phasor plot or just highlighting in the image a subset of the phasor plot.
From a biomedical point of view, polymeric nanoparticles (NPs), and in particular poly(lactic-co-glycolic acid) (PLGA) ones, have garnered significant interest as versatile drug delivery platforms due to their biocompatibility, controlled release kinetics, and tunable physicochemical properties [37-39]. Recent advancements in nanoparticle engineering, including surface modification and functionalization, offer opportunities to enhance drug encapsulation efficiency, target-specific delivery, and cellular uptake [40-51]. In particular, a drug that would benefit from an NP formulation is pioglitazone (PGZ), a thiazolidinedione derivative, which is a well-known pharmacological agent for addressing insulin resistance and hyperglycemia in type 2 diabetes mellitus (T2DM) patients. Metabolic disorders, such as T2DM, present a significant global health challenge, driving the exploration of innovative monitoring devices for diabetes management and of therapeutic approaches to mitigate associated complications and improve patient outcomes [52,53]. In this context, the PGZ mechanism of action involves the activation of peroxisome proliferator-activated receptor gamma (PPARγ), a nuclear receptor pivotal in regulating glucose and lipid metabolism, adipogenesis, and inflammation. Despite PGZ's clinical efficacy, challenges related to its pharmacokinetic properties, including low solubility and bioavailability, underscore the importance of exploring innovative drug delivery strategies to optimize therapeutic outcomes.
A recent study from Todaro et al. (2022) elucidated the most reliable synthesis method for PGZ-loaded PLGA nanoparticles, highlighting the potential of PLGA as a suitable nanocarrier material and demonstrating controlled release capabilities for PGZ [54]. The nanoprecipitation method employed for these formulations has proven effective, showcasing favorable attributes such as size, polydispersity index, encapsulation efficiency, drug loading, cost-effectiveness, and processing time [54]. The successful synthesis and characterization of PGZ-loaded PLGA NPs set the stage for a deeper investigation into the drug's intrinsic fluorescence signals. Among all the methods available for polymeric NP characterization [55-62], FLIM can play an important role in terms of manufacturing process control, and it can also help explain the drug's pharmacokinetic properties.
Here, we present and make available a tool (coded in MATLAB) that maps the whole phasor plot to different hue and saturation values and represents the distribution in the phasor plot with an intensity that accounts for each pixel's intensity in the image. The color map can be chosen in different ways, in order to characterize more than two possible coexisting decay characteristics of possibly different fluorescent molecules.
As a proof of concept, we apply this tool to the study of different phases of PGZ, and we analyze its effects (both when free and when encapsulated within NPs) on the metabolism of INS-1E cells, an insulinoma cell line. Indeed, the investigation presented in this article exploits PGZ's intrinsic fluorescence to collect information regarding its physical state through the application of FLIM, by analyzing the differences in the fluorescence decay of PGZ when existing in various states, such as dry crystalline, solvated, or dissolved forms. Furthermore, we present the observed phasor distributions for PLGA nanoparticles, empty or encapsulating PGZ. Finally, we investigate cellular NAD(P)H autofluorescence variations between its free and bound forms in the presence of PGZ-loaded PLGA nanoparticles.
Results
We developed an algorithm, implemented in MATLAB R2017B (and tested in version R2023B), which allows determining at a glance the position in the phasor plot of the phasor corresponding to each pixel in the image; in the phasor plot, the phasor distribution is weighted by the intensity in each pixel and is shown with colors corresponding to the ones in the image. The colors are determined so as to highlight the position of the phasors with respect to chosen "principal" points in the phasor plot, ideally corresponding to the positions of pure fluorophores, and in particular with respect to the segments joining them. The first examples of this representation are reported in Figure 1, using the same color coding in every image, with a light gray triangle superimposed on the phasor plots that highlights the considered principal points at its vertices; the choice of these principal points will be explained in the following section. Intensities are shown on a linear but also on a logarithmic scale, in order to appreciate the shape of the observed features, but also the less intense "tails" of the phasor distribution and the less intense pixels in the image, respectively.
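The intensity weighting of the phasor distribution described above can be sketched in a few lines of NumPy (an illustrative reimplementation, not the released MATLAB tool); `G`, `S`, and `I` stand for hypothetical per-pixel phasor coordinates and intensities, and the 0.02 × 0.02 bin size follows the default stated in the Methods.

```python
import numpy as np

# Hypothetical per-pixel phasor coordinates and intensity image.
rng = np.random.default_rng(0)
G = rng.uniform(0.2, 0.8, (512, 512))
S = rng.uniform(0.1, 0.5, (512, 512))
I = rng.poisson(50.0, (512, 512)).astype(float)

bins = np.arange(0.0, 1.0 + 0.02, 0.02)    # 0.02 x 0.02 phasor-plot "pixels"
hist, g_edges, s_edges = np.histogram2d(
    G.ravel(), S.ravel(), bins=(bins, bins),
    weights=I.ravel(),                      # weight each phasor by intensity
)
hist /= hist.max()                          # normalize to the maximum
log_hist = np.log10(np.clip(hist, 1e-2, None))  # log scale from 1% of max
```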
Phases of Pioglitazone (PGZ)
The data reported in Figure 1 derive from a detailed examination, employing FLIM, of free PGZ in different conditions. Initially, solid PGZ was examined on a glass slide, revealing a monoexponential decay characterized by a very short lifetime (phasor on the universal semicircle at high S and low G; Figure 1A); to this point, we assigned a hue of 1/3, corresponding to green. In parallel with what was observed for doxorubicin [5], we consider the observed short lifetime indicative of the crystalline nature of solid PGZ; the shortness of the lifetime is most probably caused by self-quenching of PGZ molecules strongly interacting with each other in the close packing of the crystal. We then analyzed PGZ dissolved in dimethylformamide (DMF), a solvent in which it is highly soluble, unlike in water (we did not observe enough fluorescence intensity from a saturated solution of PGZ in water to determine its decay characteristics with our setup). DMF-dissolved PGZ presents an almost monoexponential decay at a longer, intermediate lifetime (Figure 1B, time 0 from the positioning of a drop of DMF with dissolved PGZ on the microscope cover glass). We assigned to this phasor position a hue of 0 (and 1), corresponding to red. Notice that we also measured a pure DMF solution, finding no autofluorescence (and therefore positions in the phasor plot close to the origin of the axes, as expected for noise or for light uncorrelated with the pulsed laser reaching the detector). Leaving the drop at 37 °C, the DMF started to evaporate, the PGZ reached saturation, and crystals started to grow within the solution (Figure 1C,D). Unexpectedly, the phasor positions in the pixels corresponding to these observed crystals did not fall close to (or tend towards) the "green" point mentioned above; instead, they tended towards a different point in the phasor plot, corresponding to a longer multiexponential lifetime, to which we assigned a hue of 2/3, corresponding to blue.
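For reference, the phasor of a mono-exponential decay has the closed form G = 1/(1 + (ωτ)²), S = ωτ/(1 + (ωτ)²), which places any single lifetime on the universal semicircle. The short sketch below evaluates this at the 80 MHz repetition rate used in this work; the example lifetimes are arbitrary illustrations, not measured PGZ values.

```python
import numpy as np

def semicircle_point(tau_ns: float, f_rep: float = 80e6):
    """(G, S) of a mono-exponential lifetime at the first harmonic."""
    wt = 2.0 * np.pi * f_rep * tau_ns * 1e-9
    return 1.0 / (1.0 + wt**2), wt / (1.0 + wt**2)

for tau in (0.1, 0.5, 2.0, 4.0):   # ns; shorter lifetimes sit at higher G
    print(tau, semicircle_point(tau))
```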
We consider this a sign of a different crystal form, a "solvated crystal", where many DMF molecules (or possibly water impurities, since we did not use anhydrous DMF and all the experiments were performed in air, which contains some humidity) enter into the composition of the crystal and influence the interactions among PGZ molecules. Note that in Figure 1C, with intensities represented on a linear scale, the two populations of the bright but less abundant pixels containing solvated crystal and the dim but much more numerous pixels containing dissolved PGZ are both visible in the phasor plot (while the latter population would overwhelm the former when employing standard representations of the phasor plot), but the red color of this latter population is not appreciable in the image on the left. Instead, using a logarithmic intensity scale, the "red" part of the image is evident, and the crystals can be seen in their entirety; in the phasor plot, a "smear" connecting the "blue" and "red" principal points is visible, and this corresponds to pixels where the point spread function contained both part of a crystal and part of the solution. In Figure 1D, a shift of the average phasor towards the crystalline region is observable, indicating the progression of crystallization, evident also from the higher number of bigger crystals in the image on the left. Moreover, some low-saturation points, also towards a green color, start to appear (visible especially in the figure with the logarithmic intensity scale), indicating that some crystals more similar to the original ones begin to form. Subsequent analysis at 50 and 55 min revealed fluorescence arising predominantly from a crystalline sample, evidenced by a significant shift towards very short lifetimes in the phasor analysis (Figure 1E,F). These observations underscore the dynamics of PGZ dissolution and crystallization over time, allowing for the conceptualization of a triangle, with the three reference species (the solid form, the dissolved form, and the "solvated crystal" form of PGZ) at its vertices.
Phasors of PGZ-Loaded PLGA NPs
Subsequently, empty PLGA nanoparticles and PGZ-loaded PLGA NPs were synthesized and analyzed. The synthesis of these samples was carried out as previously described, with slight modifications [54]. The characterization of three independent formulations is shown in Table 1, demonstrating reproducibility over size, PDI, ζ potential, and encapsulation efficiency.
The samples were analyzed at the microscope using FLIM. Unfortunately, repeating the experiments on different batches, we observed very different values of fluorescence intensity and phasor position. For both types of samples, the phasors fell approximately along a line connecting the origin (the phasor position of noise) and a point shown in the top right graph of panels A-E of Figure 2 as a light gray cross; this line passes very close to the vertex of the triangle corresponding to the solvated phase of PGZ. Moreover, for both samples, the phasors in the cases of high enough fluorescence were close to the point cited above. Some examples of these measurements are reported in panels A-D of Figure 2, on top of each panel with the same color coding and linear intensity scale used for the top half of each panel in Figure 1. Being a dispersion, in the images reported on the left we can see a quite uniform field, with some possible aggregates (possibly moving during the measurements) visible especially in the experiments with the lowest fluorescence. These behaviors did not depend on the buffer in which the NPs were dispersed (we used PBS and RPMI, the latter for comparison with the experiments in cells reported below). In order to understand the origin of these fluorescence decay characteristics, we measured the lifetime characteristic of solid PLGA, discovering an unexpected fluorescence, whose characteristic phasor fell on the point cited above (see Figure 2E). In this case, the observed image reported on the left is not optimal, most probably because we used the standard setup, and in particular an objective with silicone oil as the immersion medium, to visualize a piece of mostly transparent solid placed on a cover glass in air; in any case, the fluorescence decay characteristics do not depend on the quality of the image. These experiments demonstrate the possibility of measuring PLGA nanoparticles with phasor-FLIM; however, the too high noise, the autofluorescence of the used PLGA most probably overwhelming that of the encapsulated PGZ, and the closeness of the PLGA phasor to the possible ones characterizing PGZ do not allow for understanding the state of PGZ within the PLGA. One could check whether different batches of PLGA present lower autofluorescence, and try to obtain more concentrated NP colloids, but this goes beyond the scope of this work. We wanted to explore all the possible positions for the phasors linked to fluorescence arising from our NPs and to understand whether there would be some fluorescence arising from them in the measurements presented in the following section.
In the images discussed above (top part of each panel A-E in Figure 2), all the phasors result in being blue/violet, even if they are in different positions. For the purpose of highlighting even more the potential of our algorithm in presenting phasor-FLIM data, we show in the bottom part of panels A-E of Figure 2 the same data, but using more and different "principal points" (in clockwise order: the origin; the average phasor measured in an RPMI solution, as shown in Figure 3A below; the point for solvated PGZ; and the average phasor positions for solid PLGA, for dissolved PGZ, and for dry crystalline PGZ). This representation allows for easily visualizing the different observed decay characteristics of the NP colloids. In particular, in panels A and B of Figure 2 we observe the same cyan color of PLGA, as shown in panel E.
In panel C, we see a phasor distribution elongated along the line connecting the origin (or the RPMI typical phasor) with the PLGA phasor, intermediate between these points, with a green color. In panel D, we observe a phasor distribution closer to the "noise zone", and the different red to yellowish-green colors are quite mixed in the image on the left, demonstrating that the distribution is mostly caused by noise in the measurements, with just some possible aggregates with a more yellowish-green color, more towards the PLGA/NP typical phasor.
Autofluorescence Characteristics Changes in INS-1E Cells in the Presence of PGZ-Loaded PLGA NPs
In the final experiments, we incubated INS-1E cells with the previously analyzed samples at 37 °C for 24 h and measured them in our two-photon FLIM microscope. First, we checked that the used RPMI medium was negligibly fluorescent, resulting in a phasor falling very close to the origin, the zone where noise falls (Figure 3A). As a control, 20 µL of 10 mg/mL PGZ was incubated in 1980 µL of RPMI at 37 °C (final nominal PGZ concentration of 0.1 mg/mL, or ~280 µM), both alone (Figure 3B) and in the presence of cells (Figure 3D). In both cases, we observed a precipitation of PGZ, as expected given the maximum reported PGZ solubility in water of around 0.07 mg/mL, or 200 µM, at 37 °C [63]. We observed microscopic solids characterized by a complex fluorescence decay, illustrated by a phasor distribution ranging within points very close to the ones characterizing the solvated crystal in DMF and the anhydrous crystal species (Figure 3B). The differences between this case and the one in DMF can be easily rationalized considering the different impact of included water (and not DMF) on the interactions between the PGZ molecules within the solid. The slightly curved line formed by the local maxima in the distribution points more towards the possible presence of a continuum of species with different quenching rather than just a mixture of two species [64].
The presence of precipitated crystals in Figure 3D ensures that the dissolved drug is at saturation, and therefore at its maximum possible concentration; we will show that this does not make it more difficult to understand the fluorescence decays observed from within the cells, thanks to the way we present the data. We also report the FLIM results on control cells without anything added to the medium (Figure 3C). The fluorescence of cells with the used channel characteristics (two-photon excitation at 740 nm, emission around 440 nm) arises mostly from NAD(P)H, and indeed the cell phasor positions are in agreement with previously observed ones [7]. Subsequently, the cells were incubated with PGZ-loaded nanoparticles and with empty nanoparticles (as a negative control), and the corresponding phasor-FLIM results are reported in Figure 3E,F with the same color coding. Again, the fluorescence from the cells is the only one noticeable, while no fluorescence arising from the NPs can be identified in the phasor plot. Note that, although the cell autofluorescence was not considered in deciding the color coding used in Figure 3 (the same used in Figure 1), the phasors corresponding to cell-containing pixels fall in a zone characterized by a violet color; therefore, they are clearly discernible from the PGZ precipitate in Figure 3D.
As hinted at in the introduction, free and bound NAD(P)H, more abundant when oxidative metabolism is slower and faster, respectively, have different fluorescence decay characteristics and therefore different positions in the phasor plot [7,29-31]. By looking carefully at the data reported in Figure 3C-F, even if the color encoding is not ideal for this purpose, it seems that the negative control (Figure 3E) and the cells alone (Figure 3C) present very similar phasor distributions, while in the case of cells in the presence of PGZ-loaded NPs, the phasor distribution is shifted from slightly to the left (Figure 3C,E) to slightly to the right (Figure 3F) of the top-left side of the drawn triangle (as is clearer in the top images in the panels, with the linear intensity scale). This could also be the case for the cells in the presence of free PGZ, but the analysis is hindered in such cases by the presence of the more fluorescent PGZ precipitate.
To facilitate grasping these changes in fluorescence decay characteristics (and to show the versatility and potential of our methods for FLIM data presentation), we changed the color encoding. We considered three principal points (Figure 4, vertices of the triangles): two based on the positions of the phasors for cells in RPMI only and in the presence of PGZ-loaded PLGA NPs (examples in Figure 3C,F), and a third one in the middle of the smear corresponding to the PGZ precipitate. It is again easy to discern the cells from the precipitated solids (Figure 4B); moreover, in this case, it is much easier to appreciate the average shift from a situation with the NAD(P)H phasor distribution characterizing more bound NADH (at a relatively longer lifetime, red zones in Figure 4) to one characterized by more free NADH (green in Figure 4), widely accepted to be linked to a decrease in oxidative metabolism (mitochondrial respiration) or to an increase in glycolytic metabolism [7,30,65]. Moreover, with this color coding, it is possible to check the possible heterogeneity among different cells (see, e.g., the redder ones, especially in Figure 4B, bottom left panel) or even the modulation of NAD(P)H status within single cells.
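A common way to quantify such a shift (a sketch, not the analysis pipeline used in this work) is to project a measured phasor onto the segment joining the free and bound NAD(P)H reference phasors; the reference lifetimes below (~0.4 ns free, ~3.2 ns bound) are typical literature values used purely as illustrative assumptions, and the resulting fraction is intensity-weighted rather than molar.

```python
import numpy as np

def mono_phasor(tau_ns: float, f_rep: float = 80e6) -> np.ndarray:
    wt = 2.0 * np.pi * f_rep * tau_ns * 1e-9
    return np.array([1 / (1 + wt**2), wt / (1 + wt**2)])

def bound_fraction(g, s, p_free=mono_phasor(0.4), p_bound=mono_phasor(3.2)):
    """Fraction of the 'bound-like' component by projection onto the
    free-bound segment, clipped to [0, 1]."""
    v = p_bound - p_free
    t = np.dot(np.array([g, s]) - p_free, v) / np.dot(v, v)
    return float(np.clip(t, 0.0, 1.0))

print(bound_fraction(0.6, 0.4))   # example phasor, arbitrary values
```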
The shift observed in the phasor related to free and bound NAD(P)H in response to PGZ can be attributed to its impact on cellular metabolism. PGZ, a member of the thiazolidinedione class of antidiabetic drugs, exerts its effects primarily by activating peroxisome proliferator-activated receptor gamma (PPARγ), a nuclear receptor involved in the regulation of glucose and lipid metabolism. Activation of PPARγ by PGZ leads to the transcriptional regulation of genes involved in insulin sensitivity, glucose uptake, and lipid metabolism. Furthermore, PGZ has been shown to enhance mitochondrial function and biogenesis, leading to increased oxidative phosphorylation and ATP production in insulin-sensitive cells, but to have an opposite effect on beta cells in pancreatic islets, from which INS-1E cells are derived and of which they are a model, with a decrease in aerobic metabolism and in insulin secretion [66-68]. This metabolic shift towards decreased aerobic metabolism in beta cells is reflected in changes to the cellular redox state and is therefore in agreement with our observed alterations in the free and bound NAD(P)H ratio.
Discussion and Conclusions
The applications of fluorescence-based techniques have transcended conventional limits, offering profound insights into the molecular complexities of various pharmaceutical agents. For instance, drug organization within a polymeric or biological matrix can be studied using phasor-FLIM approaches [11,15]; by leveraging the capabilities of FLIM, researchers can gain insights into drug release kinetics, intracellular trafficking, and metabolic responses, thereby facilitating the rational design of drug delivery systems with enhanced efficacy and safety profiles. However, the widespread adoption of FLIM in scientific, industrial, and clinical fields is hindered by the complexity of the needed instrumentation and of the data analysis and presentation. Some steps have been taken towards mitigating the first point [69], and this work represents another step towards a more intuitive presentation of the results of fluorescence lifetime microscopy, in cases where the impact of three or more species present in a sample needs to be easily recognized and mapped.
The MATLAB scripts and functions used in this manuscript (and made freely available) allow defining interactively the "principal" positions in the phasor plots, i.e., the ones considered important in the data presentation (e.g., the positions of pure species or of the same species in different environments), to which particular hue values and high saturations are assigned. Moreover, it is possible to define a "center" point where the saturation becomes zero (otherwise, it is taken to be in the middle of the principal points). It is also possible to keep these points constant and to apply the same color coding to a large number of files containing phasor-encoded images. Importantly, the correspondence between the phasor-plot position and the (hue, saturation) pair is in principle bijective, so that it can be reversed. In the phasor plot, the intensity follows a distribution in which each pixel in the image is weighted by its intensity, so that the phasor regions with the highest contributions in the observed field can be noticed at a glance without the need to define an arbitrary threshold (even if the user can impose low and high thresholds, also interactively, if needed). More details are contained in the comments at the beginning of each file.
We applied our algorithms to study different phases of PGZ, to visualize PLGA nanoparticles, and to observe cells and the metabolic shift within them. An interesting observation, which could be neglected in studies involving the effect of not perfectly soluble drugs on adherent cells, is that of precipitated 0.1 mg/mL PGZ in various solid phases. This precipitation would largely increase the local concentration of the drug in the proximity of the cells and can therefore lead to an overestimation of the drug effect at the nominal concentration (which is probably unreachable by the free form of the drug). Instead, we did not observe any precipitate with the empty or PGZ-loaded NPs, while we could reach the nominal concentration of 0.1 mg/mL of PGZ contained within them. Indeed, an important conclusion arising from our label-free fluorescence lifetime measurements on the NADH-linked metabolism of living cells is that the effect of PGZ-loaded NPs is greater than that of dissolved PGZ, even though in the latter case crystals of precipitated PGZ were very close to the adherent cells. This points towards a higher effectiveness of the NP-encapsulated drug versus the free-form one.
We could not observe a clear signal from PGZ phases within the NPs, probably because of an unexpectedly high autofluorescence of the used batch of PLGA or a too low final equivalent concentration of PGZ. However, we believe that the results presented here prove the usefulness of our method for presenting FLIM data from complicated samples, especially when autofluorescence (of drugs, cells, or other components) can be exploited. Indeed, any material can be studied with a setup similar to the one presented here if it is fluorescent, in our case upon two-photon excitation. Depending on the fluorescence properties of other biocompatible polymers (or batches) used for the synthesis of NPs, they could behave as shown here, could have a higher fluorescence intensity and therefore more reproducible phasors, or might be fluorescence-free and therefore make it easier to study the drug organization within them, as happens for liposomes [11,15]. Regarding the drug, if fluorescent, we expect its fluorescence decay to be different in crystalline or dissolved form in most if not all cases (because of the completely different environment, which should change at least the fluorescence quantum yield). However, it is difficult to predict whether there could be other phases with different fluorescence decays, as happens for the "solvated" PGZ crystals. In any case, the system presented here is potentially applicable to other drug-nanoparticle conjugates.
Finally, the results reported here represent a step forward in understanding the effect of PGZ on beta-type cells and in developing drug-embedded NPs with more controlled pharmacokinetics, and they can be useful to suggest different characterizations of autofluorescent materials, e.g., various drugs, and of cell responses to drugs and/or to changes in their environment.
Synthesis and Characterization of PGZ-Loaded PLGA Nanoparticles
PGZ-loaded PLGA NPs were synthesized using the nanoprecipitation method, with some modifications based on our previous publication [15]. In brief, 400 µL of PLGA (24-32 kDa) dissolved in acetone (10 mg/mL) and 50 µL of PGZ dissolved in DMF (10 mg/mL) were combined and added dropwise to 1600 µL of an aqueous phase comprising 1:1 MES buffer (0.1 M, pH 6.2) and PVA (4%, 18 kDa) under continuous stirring. The resulting suspension was stirred for a minimum of 3 h to facilitate solvent evaporation and nanoparticle formation. Subsequently, the nanoparticles were collected via dialysis (1 kDa cutoff) against 2 L of a 1:1 MES buffer (0.1 M, pH 6.2) and PVA (4%, 18 kDa) solution at 4 °C overnight. The physicochemical properties of the NPs (including hydrodynamic radius, polydispersity, and ζ-potential) were characterized using a Zetasizer Nano ZS (Malvern Instruments Ltd., Worcestershire, UK) at 25 °C, while the entrapment efficiency (EE%) was characterized using a Shimadzu Nexera UHPLC equipped with a Shimadzu SPD-M20A UV/visible detector, as already reported in our last publication [15]. Finally, the nanoparticles were stored in a 10 mg/mL trehalose solution until use.
FLIM Setup
Fluorescence lifetime imaging microscopy (FLIM) was conducted using an Olympus (now Evident, Tokyo, Japan) FVMPE-RS microscope coupled with a two-photon Ti:sapphire laser (MaiTai HP, Newport SpectraPhysics, Santa Clara, CA, USA) operating at a repetition rate of 80 MHz. The FLIM data were acquired with the FLIMbox system for lifetime acquisition (ISS, Champaign, IL, USA). Calibration of the ISS FLIMbox system was performed using the known mono-exponential lifetime decay of fluorescein at pH 11.0 (4.0 ns upon two-photon excitation at 740 nm, collection range: 400-570 nm). The calibration sample consisted of a stock solution of 100 mmol/L fluorescein in EtOH, prepared and diluted 1:1000 in a NaOH water solution at pH 11.0 for each calibration measurement. Excitation was performed at a wavelength λ = 740 nm, the fluorescence collection range was 400-570 nm, the laser power was 0.6%, and acquisition was made with a GaAsP photomultiplier tube (PMT) detector with the voltage at 850 V; image sampling was set at 512 × 512 pixels with 10 µs/pixel.
For FLIM measurements of PGZ and PGZ-loaded PLGA NPs, experimental conditions included λ = 740 nm and a specific filter for the NADH metabolism range (420-460 nm). The used laser power ranged from 2.0 to 10.0%, and the voltage on the GaAsP PMT was set at 850 V, with image sampling maintained at 512 × 512 pixels, a frame size of 106 × 106 µm², a pixel size of 0.207 × 0.207 µm², and scanning at 10 µs/pixel.
Phasor Visualization
The pixel colors in the microscopy image and in the phasor plot have a hue/saturation/value encoding, with hue and saturation depending on the position of the phasor in the phasor plot, and with the value given by the pixel intensity in the image, and by the sum of the intensities of the points within a 2D bin ("pixel") in the phasor plot. The intensities are always normalized to the maximum one (separately for the image and for the phasor plot) and are shown on a linear or on a logarithmic scale (actually, the scale is linear between 0 and x% of the maximum intensity, then it is logarithmic, with x = 1 unless specified otherwise). The phasor-plot pixels have standard dimensions of 0.02 × 0.02 but can be specified otherwise.
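A drastically simplified version of this encoding can be sketched as follows (Python/Matplotlib; the released MATLAB tool uses the more elaborate multi-point hue and saturation rules described below, so this is only an illustrative approximation): hue is taken from the angle around an assumed center point, saturation from the radial distance, and value from the pixel intensity.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def phasor_to_rgb(G, S, I, center=(0.5, 0.25), sat_radius=0.3):
    """Map per-pixel phasors (G, S) and intensities I to an RGB image.
    `center` and `sat_radius` are illustrative choices, not defaults of
    the authors' tool."""
    dg, ds = G - center[0], S - center[1]
    hue = (np.arctan2(ds, dg) / (2 * np.pi)) % 1.0      # angle -> hue
    sat = np.clip(np.hypot(dg, ds) / sat_radius, 0, 1)  # radius -> saturation
    val = I / I.max()                                   # intensity -> value
    return hsv_to_rgb(np.stack([hue, sat, val], axis=-1))

rgb = phasor_to_rgb(np.random.rand(64, 64), np.random.rand(64, 64) / 2,
                    np.random.rand(64, 64))
```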
Hue and saturation are calculated starting from a certain number of "principal" points (a minimum of two, better if at least three) in the phasor plot, ideally the positions in the phasor plot of the pure fluorescent species present in the sample, plus a "center" point. If these are not given when calling the function calculating hue and saturation, the user can choose them on a binned scatter plot of all the phasors in the image; ideally, most of the phasors should be included in the region having them as vertices, and this region should be (but does not have to be) convex. If the center point is not given, it is calculated as the average position of the principal points. The hue is calculated from the angles between the segment from the center point to the desired spot and the segments from the center point to the principal points, and it always increases going clockwise or anticlockwise (depending on the order of the given principal points, corrected if necessary so as to be always increasing in the chosen orientation when there are more than three). The principal points have equidistant hue values (with 0 and 1 both corresponding to the first principal point); the hue of a phasor with an angle included between those of two principal points is the average of the hues of these two points, weighted by the angular distance from the opposite one. The saturation is zero at the center point and increases radially. Considering the intercepts of the half line from the center point through the considered phasor with the line between two principal points and with the border of the universal semicircle, the saturation increases (with a function proportional to sin(x) for x from 0 to 1) between 0 and a certain first value up to the first encountered intersection point, then linearly between this value and a second one between the two intersection points, and then it tends exponentially to 1 for larger distances. The second value is fixed; the first one is corrected, starting from a given value, so that it is closer to the second one the closer the two intersection points are. In the plots shown in this work, these values were 0.95 and 0.85 (uncorrected value). The decay constant for the exponential function is the average of the distances from the center point to the two intersection points. The algorithm considers different increasing functions for nonstandard situations, possible, e.g., if the center point is chosen outside the universal semicircle and/or outside the region having the "principal points" as vertices.
Before calculating the phasor distribution, and for the representation in the images, the values of the intensities and of the G and S coordinates are smoothed based on their dependence on the pixel positions in the image. The intensities are treated with a 2D Gaussian filter with a given standard deviation (1 pixel in the cases shown here). The values of G and S are transformed in each pixel by considering a weighted average, with weights given by a Gaussian centered on the considered pixel (with the same standard deviation used for the intensities) multiplied by the intensity values of the corresponding pixels.
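The intensity-weighted smoothing of G and S just described can be written compactly with Gaussian filters, since a Gaussian-weighted average with intensity weights equals the ratio of two plain Gaussian filterings; the sketch below (SciPy, illustrative rather than the released code) assumes hypothetical per-pixel maps `G`, `S`, and `I`.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_phasor(G, S, I, sigma=1.0):
    """Gaussian-smooth the intensity; smooth G and S with weights given by
    a Gaussian kernel times the local intensities (sigma in pixels)."""
    I_smooth = gaussian_filter(I, sigma)
    denom = np.maximum(I_smooth, 1e-12)          # avoid division by zero
    G_smooth = gaussian_filter(G * I, sigma) / denom
    S_smooth = gaussian_filter(S * I, sigma) / denom
    return G_smooth, S_smooth, I_smooth
```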
OriginPro 8.0 was used for preliminary visualization of average phasor positions within the universal circle. MATLAB R2017B was employed for advanced mathematical modeling enabling the derivation of quantitative information on the phasors, for calculating the color-coding mapping, and for preparing all images shown in this work; Fiji was also used.
Figure 1. Characterization of free PGZ using FLIM. For each panel, the FLIM images are reported on the left and the phasor plots (with corresponding colors) on the right: (A) PGZ solid, (B) PGZ in DMF at 37 °C at time 0, (C) PGZ in DMF at 37 °C at 20 min, (D) at 30 min, (E) at 50 min, and (F) at 55 min in a different position; in the phasor plot, the vertices of the triangle correspond to the "principal" points, and the "center" point, where the saturation is 0 (see Sections 3 and 4.5), is shown in cyan. For all panels: on top, figures with a linear intensity scale (normalized to the maximum); on the bottom, the same figures with a logarithmic intensity scale starting from a 0.01 fraction of the maximum intensity in the image; on the right, these intensity scales are shown in corresponding positions for various values of hue and saturation. Note how the evaporation of DMF with time causes the formation of PGZ crystals, and that the used color coding, based on the phasor-plot position of each pixel, allows appreciating at a glance the different forms of PGZ and their positions within the images. The side of the square microscopy image is 106 µm here and in all the following figures.
Figure 2. Characterization of empty PLGA nanoparticles and PGZ-loaded PLGA NPs using FLIM. For each panel, the FLIM images are reported on the left and the phasor plots (with corresponding colors) on the right for exemplary cases for NPs (A-D) and for solid PLGA (E): (A) empty PLGA NPs in RPMI, (B) PGZ-loaded PLGA NPs in RPMI, (C) empty PLGA NPs in PBS, (D) PGZ-loaded PLGA NPs in PBS, (E) solid PLGA. A linear scale was used for the intensity, normalized to the maximum in each picture (as on top in Figure 1; color scale reported in panel (F)). Two color codes were used: one as in Figure 1 (top figures in each panel), and one with additional principal points (see main text), as shown in the bottom right figure of each panel (vertices of the polygon and light gray crosses), with a hue of 0/1 at (0,0) increasing clockwise, and a "center" point (cyan ×) inside but towards the bottom of the polygon.
Figure 3. FLIM of PGZ and of INS-1E cells in RPMI medium, also upon co-incubation and incubation with nanoparticles for 24 h at 37 °C. (A) RPMI, (B) PGZ in RPMI, (C) INS-1E in RPMI, (D) INS-1E in RPMI in the presence of 0.1 mg/mL PGZ, (E) INS-1E in RPMI in the presence of empty PLGA NPs, (F) INS-1E in RPMI in the presence of PGZ-loaded PLGA NPs. Each panel has the same composition, color coding, and intensity scales as in Figure 1 (and as in the top parts of the panels in Figure 2 for the linear intensity scale), in order to compare the results more easily and to show at a glance possible signals arising from the species observed in the previous cases.
Figure 4. FLIM characterization of INS-1E cells interacting with PGZ and with the nanoparticles considered in this work. The same data of Figure 3C-F are reported here with a different color encoding for the phasor position (principal points at the vertices of the triangle and center point in cyan in the right part of each panel), in order to better appreciate the position of the phasors characterizing fluorescence arising from within the cells. (A) INS-1E in RPMI, (B) INS-1E in RPMI in the presence of PGZ, (C) INS-1E in RPMI in the presence of empty PLGA NPs, (D) INS-1E in RPMI in the presence of PGZ-loaded PLGA NPs. For each panel: on the left, FLIM images; on the right, phasor plots with corresponding colors; on the top, a linear intensity scale is used (the leftmost color scale on the top of the figure), and on the bottom a logarithmic one starting from 1% (panels (A,C,D), center color scale on top of the figure) or from 0.5% (panel (B), rightmost color scale on the top of the figure) of the maximum intensity in each image. Note that, on average, the apparent intensities of cells in the bottom image of panel (B) are similar to the ones in the top images of the other panels.
Table 1. Size distributions measured using a Zetasizer Nano ZS (Malvern Instruments Ltd., Worcestershire, UK) and encapsulation efficiency (EE%) measured using an analytical RP-HPLC instrument (Shimadzu Nexera UHPLC, equipped with a Shimadzu SPD-M20A UV/visible detector). The hydrodynamic radius ± SEM (standard error of the mean), polydispersity PDI ± SEM, ζ potential ± SEM, and PGZ encapsulation efficiency % ± SEM are indicated. Results from n = 3 repetitions of measurements on each of the N = 3 independently manufactured nanoparticle batches (total of 9 measurements).
Advances in Molecular Diagnostics
People today face serious global healthcare challenges from emerging and reemerging diseases. Research in molecular diagnostics has provided us with a better understanding of the molecular processes affecting human health and disease, and the tools derived thereof are becoming the standard of care for the treatment of several diseases. The availability of new sequencing methods, microarrays, microfluidics, biosensors, and biomarker assays has driven a shift toward developing diagnostic platforms, which stimulates growth in the field by providing answers to questions regarding diagnosis, prognosis, and the best course of treatment, leading to improved outcomes and greater cost savings. It is equally important to identify and resolve the existing challenges that impede the effective translation of validated diagnostic biomarkers from the laboratory to clinical practice in order to see results from these efforts. This special issue contains nine articles: one review article focuses on microvesicles as a potential biomarker for ovarian cancer, and three papers are related to nucleic-acid-based detection of pathogens. Two papers focus on the development of detection methods, while another paper aims to evaluate the types of samples for diagnosis. Finally, two papers address the relationship between biomarkers and cancer. Thus, the papers in this special issue, representing a broad spectrum of experimental approaches and areas of investigation, demonstrate a wide array of molecular diagnostic research. This unique and informative collection of papers in "Advances in Molecular Diagnostics" showcases the identification and characterization of molecular biomarkers and the development of practical applications or methodologies for diagnosis and prognosis, including the evaluation of the efficacy of various diagnostic platforms in both laboratory and clinical settings. There are also papers dealing with criteria for optimal detection methods in clinical samples, leading to a guideline as a tool for clinical guidance.
In "Microvesicles as potential ovarian cancer biomarkers," I. Giusti et al. provided a perspective review that summarizes the potential use of microvesicles released from tumor cells, especially regarding their microRNA profiles, as a novel molecular biomarker for ovarian cancer.
In "Development of a broad-range 23S rDNA real-time PCR assay for the detection and quantification of pathogenic bacteria in human whole blood and plasma specimens," P. Gaibani et al. developed a new broad-range real-time PCR assay targeting the 23S rDNA gene that could detect the targeted bacterial 23S rDNA gene at as low as 10 plasmid copies per reaction. This assay could allow us to quantify total bacterial DNA in whole blood and plasma samples without the need for precise bacterial species identification.
In "A pentaplex PCR assay for the detection and differentiation of Shigella species," S. C. Ojha et al. developed a pentaplex PCR assay for the simultaneous detection and differentiation of the Shigella genus and the three Shigella species responsible for the majority of shigellosis cases. The average detection limit of this PCR assay was 5.4 × 10⁴ CFU/mL, which is within the common detection limit for Shigella.
In "Evaluation of multiplex PCR with enhanced spore germination for detection of Clostridium difficile from stool samples of the hospitalized patients," S. Chankhamhaengdecha et al. designed a new multiplex-PCR-based assay for the detection of C. difficile and evaluated the sample processing steps prior to multiplex PCR diagnosis from clinical stool samples. This enrichment multiplex PCR could be an alternative approach to enzyme immunoassays for rapid and cost-effective detection of C. difficile.
In "Microsphere suspension array assays for detection and differentiation of Hendra and Nipah viruses," A. J. Foord et al. presented microsphere suspension array assays to simultaneously identify multiple separate nucleotide targets in a single reaction. The main goal of this research was to incorporate the Hendra and Nipah virus microspheres as modules in multiplexed microsphere arrays. Their results were comparable to qPCR, indicating high analytical and diagnostic specificity and sensitivity.
In "Development of a generic microfluidic device for simultaneous detection of antibodies and nucleic acids in oral fluids," Z. Chen et al. demonstrated a portable processing system for disposable microfluidic chips suitable for point-of-care settings in the diagnosis, detection, and confirmation of infectious disease pathogens. HIV infection was used as a model to investigate the simultaneous detection of both human antibodies against the virus and viral RNA.
In "Urine cell-free DNA integrity as a marker for early prostate cancer diagnosis: a pilot study," V. Casadio et al. evaluated the potential use of urine cell-free DNA as a promising noninvasive marker for the early diagnosis of prostate cancer. The overall diagnostic accuracy was approximately 80%. The preliminary data in this paper could pave the way for confirmatory studies on larger case series.
In "Development of a novel system for mass spectrometric analysis of cancer-associated fucosylation in plasma α1-acid glycoprotein," T. Asao et al. evaluated fucosylated glycans as novel tumor markers that could be of clinical relevance in the diagnosis and assessment of cancer progression as well as patient prognosis. They also developed a novel software system for use in combination with a mass spectrometer to determine N-linked glycans in α1-acid glycoprotein that could be valuable for screening plasma samples to identify biomarkers of cancer progression based on fucosylated glycans.
In "The association between the VDR gene polymorphisms and susceptibility to hepatocellular carcinoma and the clinicopathological features in subjects infected with HBV," X. Yao et al. evaluated the possible association between vitamin D receptor (VDR) single-nucleotide polymorphisms (SNPs) and hepatocellular carcinoma (HCC) in patients with chronic hepatitis B virus (HBV) infection and found that the C > T polymorphism at the FokI position in the VDR gene served as a potential biomarker for the risk and the disease severity of HCC in those infected with HBV.
Finally, we would like to thank the authors for their contributions to this special issue and all the reviewers for their critical review of the manuscripts.
Tavan Janvilisri
Arun K. Bhunia
Joy Scaria
Prognostic Value of Peripheral Blood Lymphocyte/Monocyte Ratio in Lymphoma

Objective: The lymphocyte/monocyte ratio (LMR) has been considered a prognostic factor in patients with lymphoma, with most studies focusing on diffuse large B-cell lymphoma (DLBCL) and Hodgkin lymphoma (HL). Recently, many relevant clinical studies have been published with inconsistent results. To gain a more comprehensive view of the prognostic value of LMR, we conducted a meta-analysis on the significance of peripheral LMR in all subtypes of lymphoma. Methods: PubMed, PMC, Web of Science, Embase, and the Cochrane Library were searched for relevant articles to conduct a meta-analysis. Hazard ratios (HRs) and their 95% confidence intervals (CIs) for OS and PFS were extracted and pooled in Stata 12.1. Results: Forty studies were eligible for the meta-analysis, including a total of 10,446 patients. Low LMR was associated with inferior OS (HR=2.45, 95%CI 1.95-3.08) and PFS (HR=2.36, 95%CI 1.94-2.88). In the analysis of lymphoma subtypes, similar results were seen in HL, NHL, and its subtypes including DLBCL, NK/T cell lymphoma, and follicular lymphoma. In addition, low LMR was related to higher LDH (OR=2.26, 95%CI 1.66-3.09), advanced tumor staging (OR=0.41, 95%CI 0.36-0.46), and IPI score (OR=0.40, 95%CI 0.33-0.48), but not to bone marrow involvement (OR=1.24, 95%CI 0.85-1.81) or pathological subtype (OR=0.69, 95%CI 0.41-1.16). Conclusion: Low LMR in peripheral blood indicates poor prognosis in patients with lymphoma. As a simple clinical indicator, peripheral blood LMR combined with existing prognostic factors can improve the accuracy of lymphoma prognosis assessment.

Introduction

Lymphomas are a heterogeneous group of lymphoid malignancies classified into Hodgkin lymphoma (HL) and non-Hodgkin lymphoma (NHL). HL includes classic HL and nodular lymphocyte-predominant HL, and classic HL can be further divided into four subtypes. Compared with HL, NHL comprises a more complex spectrum of subtypes, 85-90% of which arise from B cells, such as diffuse large B-cell lymphoma (DLBCL), follicular lymphoma (FL), mantle cell lymphoma (MCL), and Burkitt's lymphoma (BL); the remainder derive from T or NK lymphocytes, such as NK/T cell lymphoma (NK/TL) and peripheral T cell lymphoma (PTCL) [1]. Advances in targeted therapy, adoptive cell therapy, and stem cell transplantation have improved the clinical outcomes of patients with lymphoma; however, relapsed or refractory disease remains a significant unmet need in lymphoma treatment. According to US data from 2019, about 19,970 of the 74,200 newly diagnosed NHL patients died of the disease [2]. Accurate prognostic stratification is essential for individualized or precision therapy in order to reduce mortality and improve quality of life in lymphoma patients. At present, there are many tools to assess the risk of lymphoma, including the international prognostic index (IPI), gene expression profiling (GEP), and positron emission tomography-computed tomography (PET-CT). Nevertheless, PET-CT is relatively expensive, GEP analysis is labor-intensive and time-consuming, and the IPI does not take into account the patient's immune status and tumor microenvironment (TME). Thus, simple and appropriate immune biomarkers have been explored to better predict the prognosis of lymphoma. Previous studies have shown that an increased number of tumor-associated macrophages (TAMs) before treatment was associated with poor overall survival in patients with lymphoma [3,4].
Since TAMs are derived from the monocytes in peripheral blood, the number of TAMs correlates well with that of monocytes. TAMs can secrete various cytokines to promote tumor growth as well as angiogenesis in the tumor microenvironment [5]. On the other hand, lymphocytes play an important role in immune surveillance. It was reported that the absolute lymphocyte count (ALC) was a surrogate marker of immune status, and low ALC was associated with poor prognosis [6,7]. Therefore, the peripheral blood lymphocyte-to-monocyte ratio (LMR) can readily reflect the crosstalk between the patient's immunity and the tumor microenvironment. The clinical outcomes of many lymphoma subtypes, including FL [8,9], DLBCL [10,11], and NK/TL [12], could be predicted by peripheral blood LMR. Due to the heterogeneity of the sample sizes and the diversity of treatments reported in previous studies, the consistency of the prognostic impact of LMR remains unknown. To clarify the prognostic role of LMR in lymphoma, we conducted a comprehensive meta-analysis to assess the prognostic value of LMR in lymphoma and its subtypes and to reveal the correlation between LMR and clinicopathological characteristics including LDH, pathology, staging, and IPI score.

Literature search

PubMed, PMC, Web of Science, Embase, and the Cochrane Library were searched for relevant studies published up to February 2020, and the language was restricted to English and Chinese. Search terms included "lymphocyte-to-monocyte ratio" or "lymphocyte monocyte ratio" or "LMR" and "lymphoma". Two researchers screened the search results according to the inclusion and exclusion criteria. When disagreements occurred, a third reviewer was consulted.

Inclusion and exclusion criteria

Inclusion criteria were as follows: 1) prospective or retrospective clinical studies; 2) patients diagnosed with lymphoma; 3) reported on the comparison of prognostic value between high-LMR and low-LMR groups; 4) OS (overall survival) or PFS (progression-free survival) should be included; 5) results should be provided in the form of a hazard ratio (HR) with its 95% confidence interval (CI). Studies were excluded based on the following criteria: 1) animal or cell line experiments; 2) duplicate studies, conference abstracts, or those without available full texts; 3) studies not related to the research topic or those without relevant results or needed data.

Data collection and literature quality assessment

The following data were extracted from the adopted articles and recorded in a form: first author, publication, country, disease subtypes, sample size, HR and 95% CI for OS and PFS, the use of rituximab or not, LMR cutoff value, and so on. Quality evaluation was conducted independently by two authors based on the Newcastle-Ottawa scale (NOS), according to its three components (selection, comparability, and outcome). Scores ranged from 0 to 9 points, and studies with a total score higher than 5 were regarded as high-quality studies.

Statistical analysis

All analyses were performed using STATA version 12.1 software (StataCorp, College Station, TX, USA). HRs and 95% CIs were pooled to assess the prognostic significance of LMR in lymphoma for OS and PFS, and the results were displayed by forest plots. Subgroup analyses were performed using the same analysis method. Correlations between LMR and clinicopathological parameters of lymphoma were evaluated by ORs with their 95% CIs.
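As a concrete illustration of the pooling step, the sketch below implements inverse-variance pooling of log hazard ratios with a between-study variance estimate, recovering standard errors from the reported 95% CIs. This is a minimal Python sketch with hypothetical inputs, not the authors' Stata code; it assumes the common DerSimonian-Laird random-effects estimator, which is also the default behind Stata's metan command.

    import numpy as np
    from scipy import stats

    def pool_random_effects(hr, ci_low, ci_high):
        """DerSimonian-Laird random-effects pooling of hazard ratios."""
        y = np.log(hr)                                        # per-study log HR
        se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from 95% CI
        w = 1.0 / se**2                                       # fixed-effect weights
        y_fe = np.sum(w * y) / np.sum(w)                      # fixed-effect mean
        q = np.sum(w * (y - y_fe) ** 2)                       # Cochran's Q
        df = len(y) - 1
        tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1.0 / (se**2 + tau2)                           # random-effects weights
        y_re = np.sum(w_re * y) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0  # I² statistic
        return {"HR": np.exp(y_re),
                "CI": (np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)),
                "I2": i2, "p_het": stats.chi2.sf(q, df)}

    # Hypothetical example: three studies reporting HR (95% CI) for OS
    print(pool_random_effects(np.array([2.1, 1.8, 3.0]),
                              np.array([1.4, 1.1, 1.9]),
                              np.array([3.2, 2.9, 4.7])))

The same routine applies to PFS, and subgroup estimates are obtained by simply calling it on each subset of studies.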
Heterogeneity was checked by the chi-squared test and the I² statistic (I² ≤ 50% and P > 0.1, acceptable level of heterogeneity; I² > 50% and P ≤ 0.1, obvious heterogeneity). Publication bias was assessed by Egger's and Begg's tests. We conducted a sensitivity analysis to estimate whether any single study affected the combined HRs. Statistical significance was set at a two-tailed P < 0.05.

Search results and study characteristics

A total of 1,180 articles were obtained, and 658 were left after 522 duplicates were excluded. After the initial screen, 575 articles were excluded, leaving 83 articles for detailed reading. Finally, 40 articles were eligible for this meta-analysis (Fig. 1), involving 10,446 lymphoma patients. Each study divided patients into high-LMR and low-LMR groups based on different LMR cut-off values, which were acquired from ROC curves, previous studies, or the median LMR. Five studies did not present the number of patients in the two groups. In the remaining 35 articles, 3,817 patients were assigned to the low-LMR group, while 5,082 were included in the high-LMR group. All the adopted articles were retrospective studies with NOS scores of 4 or higher. The characteristics of the included studies are shown in Table 1.

Publication Bias

35 articles were available for the analysis of publication bias with regard to the HR of OS. Both Begg's test and Egger's test demonstrated that there was publication bias regarding the HR of OS (Egger's test: P=0.002, Begg's test: P<0.001, Fig. 2). 26 studies reported on PFS. The results also indicated publication bias in PFS, but the bias was not as significant as that of OS (Egger's test: P=0.010, Begg's test: P=0.015, Fig. 3).

Overall Survival

35 studies provided relevant HRs for OS. The random-effects model was employed. The outcome demonstrated that low LMR was associated with an inferior overall survival rate, and the result was statistically significant (HR=2.45, 95%CI 1.95-3.08; I²=84.5%, P<0.001, Fig. 4). The sensitivity analysis (Fig. 5) revealed that the study by Zhong (2019) had an impact on the heterogeneity of OS, and the pooled HR was 2.17 (95%CI 1.88-2.50) after excluding this article. Heterogeneity decreased from 84.5% to 36.3%, while the negative correlation between LMR and OS still existed. Subgroup analyses according to sample size, country, publication year, median age, LMR cutoff, and rituximab showed that low LMR was associated with inferior OS in each subgroup (Table 2). Subgroup analysis based on the LMR cutoff value showed that the difference in prognostic significance between the two groups increased as the LMR cutoff value increased (Fig. 6).

Progression-free survival

PFS was reported in 26 articles. The result showed that PFS in the low-LMR group was significantly poorer (HR=2.36, 95%CI 1.94-2.88; I²=61.1%, P<0.001, Fig. 7). Sensitivity analysis (Fig. 8) demonstrated that the article by Simon (2016) influenced the heterogeneity of PFS. After excluding this article, the pooled HR became 2.18 (95% CI 1.83-2.58). Heterogeneity decreased from 61.1% to 47.2%, but the negative correlation between LMR and PFS persisted. In addition, subgroup analysis showed that, compared with the high-LMR group, PFS in the low-LMR group was poorer in each subgroup (Table 2). The results of the analyses by lymphoma subtype are summarized in Table 3. HL patients had the most obvious difference in OS between the two groups. As for PFS, 26 articles comprised this outcome, 6 analyzing HL and 20 analyzing NHL, of which 12 articles analyzed DLBCL. The results indicated that PFS was poor in the low-LMR group in these subtypes.
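The Egger's test used above for funnel-plot asymmetry is a weighted regression on the study-level effects; a minimal Python sketch (using statsmodels, on hypothetical log-HRs and standard errors rather than the study's extracted data) regresses the standardized effect on precision and examines the intercept:

    import numpy as np
    import statsmodels.api as sm

    def eggers_test(log_hr, se):
        """Egger's regression test: a non-zero intercept of
        (effect / SE) ~ (1 / SE) suggests small-study effects."""
        z = log_hr / se                  # standardized effects
        precision = 1.0 / se
        fit = sm.OLS(z, sm.add_constant(precision)).fit()
        return fit.params[0], fit.pvalues[0]   # intercept and its p-value

    # Hypothetical example with five studies
    intercept, p = eggers_test(np.log([2.4, 2.1, 1.6, 3.0, 1.4]),
                               np.array([0.30, 0.22, 0.18, 0.45, 0.15]))
    print(intercept, p)

Begg's rank-correlation test could be sketched analogously with scipy.stats.kendalltau on the effects and their variances.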
Association between LMR and clinicopathological characteristics of lymphoma

Relevant articles were included to analyze the association between LMR and six clinicopathological features of lymphoma (Table 4). The results indicated that bone marrow involvement and pathological type were not associated with LMR. 15 studies were chosen to assess the association between LMR and B symptoms, and the combined OR showed that the low-LMR group was prone to B symptoms (OR=2.13, 95%CI 1.61-2.82). The relationship between LMR and IPI score was analyzed in 13 studies. The result showed that patients with IPI scores higher than 3 were more likely to appear in the low-LMR group (OR=0.40, 95%CI 0.33-0.48).

Discussion

The peripheral blood lymphocyte count is considered an indicator of host immunity. Lymphocytes play an important role in immune surveillance and in the defense system against tumors. CD8+ T cells are able to recognize and eliminate tumor cells, mainly through the perforin and granzyme B pathways. CD4+ Th cells modulate the tumor microenvironment by secreting cytokines such as IFN-γ, TGF-β, IL-4, IL-5, and IL-6. Regulatory T cells suppress immune activation and autoimmunity. Carreras et al. found that the reduction of Treg cells is associated with tumor recurrence, transformation, and highly invasive histology [13], which remains controversial in other studies [14]. In general, lymphocytosis is associated with a favorable prognosis in patients with cancer. Monocytes are released from the bone marrow into the blood and then migrate into peripheral tissues, where they differentiate into macrophages. Activated macrophages are categorized into two types, i.e., M1 and M2 macrophages. M1 macrophages have anti-tumor functions, whereas TAMs, which resemble M2 macrophages, express high levels of anti-inflammatory cytokines, angiogenic factors, and metalloproteinases to promote cancer progression [15]. Steidl et al. analyzed the TME in 130 classic HLs and showed that an increased number of TAMs was significantly associated with poor OS, with a predictive power better than that of the conventional IPI score [3]. Li et al. found that AMC positively correlated with TAM in DLBCL patients treated with rituximab, and poor survival outcomes were observed in patients with high AMC and TAM [16]. Our meta-analysis included a total of 10,446 patients from 40 studies and explored the prognostic significance of LMR in lymphoma and its subtypes. A reduced LMR was found to adversely affect OS and PFS in patients with lymphoma. The prognostic significance did not diminish in further subgroup analyses according to LMR cut-off value, sample size, country, publication year, median age, and rituximab, suggesting that peripheral blood LMR is a reliable prognostic marker. Moreover, the analysis of LMR and clinicopathological characteristics revealed that low LMR was associated with higher LDH, IPI score, and tumor stage. LDH is indicative of lymphoma burden, and a high IPI score and advanced tumor stage correlate with poor prognosis. No significant association was found between LMR and bone marrow involvement or the histological subtypes of lymphoma. A recent meta-analysis included 8 studies with a total of 3,319 patients with HL and suggested that low LMR was associated with poor OS and PFS [17]. Xia et al. analyzed 12 studies with 5,021 DLBCL patients and, similarly, found that low LMR has a poor prognostic implication for DLBCL [18].
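The study-level ORs discussed above come from 2×2 tables cross-classifying LMR group against each clinicopathological feature. A minimal sketch of a single-study OR with a Woolf (log-based) 95% CI, using hypothetical counts (the cell labels a-d in the comments are illustrative assumptions, not the study's data):

    import numpy as np

    def odds_ratio_2x2(a, b, c, d):
        """OR and Woolf 95% CI for a 2x2 table:
        a = low-LMR with feature,  b = low-LMR without,
        c = high-LMR with feature, d = high-LMR without."""
        or_ = (a * d) / (b * c)
        se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
        lo, hi = np.exp(np.log(or_) + np.array([-1.0, 1.0]) * 1.96 * se_log)
        return or_, (lo, hi)

    print(odds_ratio_2x2(60, 40, 45, 80))   # hypothetical B-symptom counts

Study-level log-ORs can then be pooled exactly as the log-HRs were in the earlier pooling sketch.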
In this study, we updated the clinical data and analyzed the prognostic value of LMR in several lymphoma subtypes. Cutoff values of LMR were variable among the 40 included studies, and most LMRs ranged from 2.0 to 3.0. Subgroup analysis based on cutoff values demonstrated that the differences in OS and PFS between the low- and high-LMR groups were more significant when the cutoff value was higher than 2.5. LMR cut-off values were usually 1.1-2.9 in HL and 1.6-4.0 in DLBCL. The LMR cutoffs were calculated using ROC curves in most studies, while the median LMR or a previously reported value was selected in other studies. To better define the prognostic role of LMR, a standardized calculation of the LMR cutoff is required for different lymphoma subtypes. Although high TAM infiltration in the TME is often associated with poor prognosis in lymphoma, the use of rituximab may diminish this adverse effect. For example, Canioni et al. found that a low macrophage count was significantly associated with better event-free survival in FL patients treated with the CHVP-I (cyclophosphamide, doxorubicin, etoposide, prednisolone, and interferon) regimen but not in those receiving rituximab plus CHVP-I [19]. We asked whether rituximab would affect the prognostic significance of LMR in lymphoma. Interestingly, a meta-analysis [18] showed that LMR could well predict OS and PFS in patients with DLBCL treated with rituximab, whereas LMR did not affect PFS in DLBCL treated without rituximab. Since a small number of patients were included in the non-R-CHOP treatment group, these results need to be further verified. In our study, patients treated with and without rituximab were analyzed in 14 studies each. Subgroup analysis suggested that the low-LMR group had poor OS and PFS regardless of the use of rituximab, although the difference was more significant in the non-rituximab groups. Thus, our analysis demonstrated that the prognostic significance of LMR for B cell lymphoma is still valid in the rituximab era. Combinations of LMR with other prognostic assessment tools have been studied in patients with lymphoma. Simon et al. [20] suggested that, in HL, the prognosis of the high-LMR/PET-CT-negative group was significantly better than that of the low-LMR/PET-CT-positive group, and the combination of factors was more accurate than a single factor in prognostic assessment. Ji et al. reported that the LMR/LDH ratio had better predictive power than LMR alone in DLBCL [21]. Therefore, LMR can be used in conjunction with other prognostic tools such as PET-CT and IPI scores for better risk stratification, which could translate to individualized or precision treatment of lymphoma. Our study has some limitations. First, the meta-analysis is based on retrospective studies rather than prospective randomized controlled trials, which might lead to publication bias. Second, not all of the included HRs had been adjusted, because covariates were not always available. In addition, different LMR cut-off values were used in the included studies, which may lead to increased heterogeneity. In conclusion, low LMR is associated with poor survival outcomes in lymphoma patients. As a simple and reliable prognostic marker, LMR, alone or in combination with other parameters, will be helpful for prognosis assessment. Since our results are mainly based on retrospective clinical studies, the role of LMR warrants further investigation in prospective randomized trials.
High-Throughput Sequencing Identifies 3 Novel Susceptibility Genes for Hereditary Melanoma

Cutaneous melanoma is one of the most aggressive human cancers due to its high invasiveness. Germline mutations in high-risk melanoma susceptibility genes have been associated with the development of hereditary melanoma; however, most genetic culprits remain elusive. To unravel novel susceptibility genes for hereditary melanoma, we performed whole exome sequencing (WES) on eight patients with multiple primary melanomas (MPM) and a high number of nevi, who were negative for high- and intermediate-risk germline mutations. Thirteen new potentially pathogenic variants were identified after bioinformatics analysis and validation. CDH23, ARHGEF40, and BRD9 were identified as the most promising susceptibility genes in hereditary melanoma. In silico analysis of CDH23 and ARHGEF40 variants provided clues for altered protein structure and function associated with the identified mutations. We also evaluated the clinical value of CDH23, ARHGEF40, and BRD9 expression in sporadic melanoma by using the TCGA dataset (n = 461). No differences were observed in BRD9 expression between melanoma and normal skin samples, nor with melanoma stage, whereas ARHGEF40 was found overexpressed, and CDH23 was downregulated, with its loss associated with worse survival. Altogether, these results reveal three novel genes with clinical relevance in hereditary and sporadic melanoma.

Introduction

Skin cancer incidence has been rising alarmingly fast, becoming a concerning public health issue [1]. Of all skin cancers, melanoma stands out due to its high invasive capacity, being considered one of the most aggressive skin cancers and accounting for ~69% of deaths caused by cutaneous malignancies [2]. True features of hereditary melanoma, such as unilateral lineage, early onset of disease, and multiple primary lesions, are quite rare, even in melanoma patients who report a family history of this neoplasm [3]. Family history comprises approximately 5-15% of melanoma cases; however, this does not imply that a single genetic mutation is being transmitted [4]; shared sun exposure and other risk factors are more plausible causes of melanoma among families with susceptible skin types [3]. Patients diagnosed with a single primary melanoma are at an increased risk of developing subsequent primary melanomas, which most likely occur within two years after the first diagnosis [5]. In fact, this has been demonstrated for 70% of melanoma patients who developed a second primary melanoma, showing the importance of close skin surveillance [6]. In this work, we employed WES to identify novel susceptibility genes for hereditary melanoma in patients with MPM who were negative for germline mutations in the genes CDKN2A, CDK4, and MITF (p.E318K). The novel identified genes were further assessed for their clinical potential using The Cancer Genome Atlas (TCGA) database, which comprises 461 melanoma cases and was analysed in silico. To our knowledge, this is the first study using WES to identify novel MPM susceptibility genes.

Institutional Approval

This work involving melanoma patients as well as healthy control samples was approved by the Ethics Committee of Instituto Português de Oncologia Francisco Gentil de Lisboa (IPOLFG, UIC/829). Informed consent was obtained from all subjects (patients and healthy controls); all subjects were over 18 years of age.
The methods were performed in accordance with good clinical practice guidelines and Portuguese law. The indexes with familial melanoma and patients diagnosed with MPM were obtained by the Unidade de Investigação em Patobiologia Molecular (UIPM) from Instituto Português de Oncologia de Lisboa Francisco Gentil (IPOLFG). All patients were monitored by the Department of Dermatology and Familial Risk Clinic from IPOLFG and were subjected to genetic testing according to criteria used in Portugal [22]. The healthy control samples were cancer- and disease-free and were obtained from Biobanco-iMM, Lisbon Academic Medical Center, Lisbon, Portugal. All participants of this study were Portuguese and of Caucasian descent.

DNA Extraction

DNA of the family members was extracted from leukocytes using commercial kits (EZ1 DNA Blood kit) according to the manufacturer's instructions (Qiagen, Hilden, Germany). The DNA amount was quantified with a Qubit™ Fluorometer (Thermofisher, Grand Island, New York, USA).

Whole Exome Sequencing

Eight patients with MPM, of whom two were siblings, were selected for whole exome sequencing (WES) analysis performed by the SourceBioScience NGS Service (Nottingham, UK). These patients were selected because they were the most representative of our population, had a higher number of primary melanomas, and had been waiting the longest for a decision regarding their genetic condition. The patients had been diagnosed with MPM and previously screened for germline mutations in the melanoma susceptibility genes CDKN2A, CDK4, and MITF (p.E318K). Genomic DNA libraries were enriched for exomic sequencing using Agilent SureSelect Human All Exon V6. The captured exonic sequences were sequenced on the Illumina NextSeq500 V2 (Illumina, Inc., San Diego, CA, USA) using one high-output flow cell at 75 bp paired-end reads. All reads were aligned to the Human Genome version GRCh38 (May 2017 version). The SRA accession number for the sequencing data included in this study is PRJNA543971.

Variant Selection

Bioinformatics analysis performed by Bioinf2Bio (Porto, Portugal) was first carried out as follows: Criteria 1, including potentially causative and altered genes detected simultaneously in the samples from the two siblings and in at least one other patient; Criteria 2, including potentially causative and altered genes detected in at least two non-sibling samples. Only variants with high-quality genotype, alignment, and a defined genotype were selected. These high-quality variants were then annotated to all protein-coding transcripts in the human genome. Two parallel approaches for variant selection covering each criterion were undertaken: a "broad" approach, consisting of all high-quality exonic variants; and a "stringent" approach, comprising all high-quality exonic and intronic variants that featured at least one of the following aspects: high quality predicted by Ensembl; clinical significance according to ClinVar, examined by Ensembl; predicted damaging effects by SIFT, Polyphen, MetaSVM, or MetaLR (Figure 1A). Common polymorphisms (allele frequency ≥ 1%) were excluded by additional criteria (Figure 1B). A complete description is provided in the Supplementary Materials. WES data and patient clinicopathological data are aggregated in Table S5. Figure 1. (A) Criteria 1 included potentially altered variants detected in two siblings and at least one other patient; Criteria 2 included potentially altered variants detected in at least two patients, excluding siblings.
Only high-quality variants were included, and two approaches were used: a "broad approach" selecting only exonic variants and a "stringent approach" selecting exonic and intronic variants that had high quality, clinical significance, or predicted damaging effects. (B) Exclusion of variants with an allele frequency higher than 1% in the global and European populations, synonymous variants, and genes harbouring four or more variants, followed by selection of variants present in more than two patients and potentially pathogenic according to at least one database.

In Silico Analysis

In silico mutation analysis was performed for prediction of potentially deleterious effects of the validated variants, using several tools: SIFT, Polyphen-2, Provean, MutationTaster, and FATHMM. Splice site prediction was calculated using Human Splicing Finder. In order to understand the possible molecular consequences of the identified CDH23, ARHGEF40, and BRD9 mutations in the absence of structural data for each of the corresponding protein variants, structural models were generated using the SWISS-MODEL online server [23] (protocol detailed in the Supplementary Materials). The models were inspected in Pymol [24] and the respective amino acid substitutions were generated using the mutagenesis tool. CDH23, ARHGEF40, and BRD9 gene expression data for cutaneous melanoma (SKCM) were extracted from the Gene Expression Profiling Interactive Analysis (GEPIA) database (http://gepia.cancer-pku.cn) [27]. The GEPIA database comprises RNA sequencing data from TCGA [25,26] (SKCM n = 461, and normal n = 1) and normal GTEx (n = 557) samples. Boxplots were generated to compare the expression levels between the tumour and normal skin samples. Violin plots were created based on the patients' pathological stages. The expression data were transformed as log2(transcripts per kilobase million + 1) and one-way ANOVA was used for differential analysis. The prognostic value was assessed using survival analysis from the GEPIA program. Patients were divided into low and high expression groups based on the median expression cut-off for each gene. Overall survival and disease-free survival of SKCM patients were analysed using the Cox PH model.

Functional Enrichment Analysis

Genes co-expressed with CDH23, ARHGEF40, or BRD9 were extracted from the SKCM TCGA dataset [25,26]. Only genes with Spearman > 0.5 and q value < 0.01 were considered positively correlated. The co-expressed genes were then subjected to Gene Ontology (GO) and biological pathway enrichment analysis using PANTHER 14.0 (http://pantherdb.org) [28] against the Homo sapiens background reference (GO database released 2018.12.19).
The statistical over-representation was calculated using a binomial test, and the results were considered significant at p < 0.05 after Bonferroni correction.

Genomic Mutation Analyses

To determine the frequency of CDH23, ARHGEF40, BRD9, NRAS, and BRAF mutations in SKCM samples, data recently re-annotated from TCGA at GDC (https://portal.gdc.cancer.gov) and cBioPortal were employed. The prognostic value of the mutated genes was evaluated using the overall survival Kaplan-Meier tool from cBioPortal.

Variants Validation

The selected variants were validated by Sanger sequencing. First, polymerase chain reaction (PCR) was performed in a thermocycler (Biometra, Göttingen, Germany), utilizing 5 µL of AmpliTaq Gold™ 360 Master Mix (Applied Biosystems, Foster City, California, USA), 1-1.5 µL of forward and reverse primers (1 µM) (Invitrogen™), and 1-3 µL of DNA (20 ng/µL), comprising a final volume of 10 µL. The primer sequences used, along with their respective sequence lengths and optimal annealing temperatures, are listed in Table S4. To confirm amplification of the fragments of interest, agarose gel electrophoresis was performed using an agarose gel at 2% (w/v) in TBE 1X (TBE Buffer 10×) (National Diagnostics, Atlanta, Georgia, USA), to which ethidium bromide (0.5 µg/mL) (PanReac AppliChem, Darmstadt, Germany) was added. Electrophoresis was carried out on an ABI 3130 DNA analyzer (Applied Biosystems, Foster City, California, USA). Then, for degradation of unincorporated primers and non-amplified DNA, 2 µL of a mix containing FastAP Thermosensitive Alkaline Phosphatase enzyme (1 U/µL) (Thermo Scientific, Grand Island, New York, USA) and Exonuclease I enzyme (20 U/µL) (Thermo Scientific, Grand Island, New York, NY, USA), in a proportion of 2:1, respectively, was added to each PCR product. The enzyme digestion reaction was performed in the same thermocycler as aforementioned. The PCR sequencing reaction was performed using the BigDye Terminator v1.1 sequencing kit (Applied Biosystems, Foster City, California, USA) according to the manufacturer's instructions. Afterwards, samples were purified using column-based DNA purification with the AutoSeq™ G-50 Dye Terminator Removal Kit (illustra™, Brighton, UK). Finally, samples were added to a 96-well plate (96-well PCR Microplates, Axygen™) and sequenced using a 3130 Genetic Analyzer (Applied Biosystems, Foster City, California, USA).

Statistical Analysis

In order to evaluate cumulative effects of the multiple variants in a genomic region identified in our analysis (NTN4, MTCL1, FNDC1, CAND2, ITIH3, RPL32, and RNF213), we conducted the following region-based aggregation tests: the burden test, which indicates whether a region has a large proportion of causal variants with effects in the same direction; the sequence kernel association test (SKAT), which is more powerful in the presence of both risk-increasing and risk-decreasing variants or if there are many non-causal variants; and the omnibus test SKAT-O, which adaptively combines the SKAT and burden test statistics [29,30]. These analyses were conducted using the R package SKAT, and all p-values were two-sided.
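The burden component of these tests can be illustrated with a toy collapsing test: count carriers of at least one rare allele in the region and compare cases against controls. The following is a conceptual Python sketch on hypothetical genotype matrices, not the R SKAT package the authors actually used; SKAT and SKAT-O rely on variance-component score statistics rather than simple carrier counts.

    import numpy as np
    from scipy import stats

    def collapsing_burden_test(case_geno, control_geno):
        """Genotype matrices: individuals x variants, entries are
        alt-allele counts (0/1/2) for the rare variants in the region."""
        case_carriers = int(np.sum(case_geno.sum(axis=1) > 0))
        ctrl_carriers = int(np.sum(control_geno.sum(axis=1) > 0))
        table = [[case_carriers, case_geno.shape[0] - case_carriers],
                 [ctrl_carriers, control_geno.shape[0] - ctrl_carriers]]
        odds, p = stats.fisher_exact(table)   # two-sided by default
        return odds, p

    # Hypothetical example: 25 MPM cases, 400 controls, 7 region variants
    rng = np.random.default_rng(0)
    cases = rng.binomial(1, 0.04, size=(25, 7))
    controls = rng.binomial(1, 0.01, size=(400, 7))
    print(collapsing_burden_test(cases, controls))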
Identification of Rare High-Risk Variants for Hereditary Melanoma

As per the filtering steps detailed in Figure 1A, 14,048 high-quality variants remained from the WES data of the eight patients with multiple primary melanomas (MPM), who were negative for all the melanoma susceptibility genes (CDKN2A, CDK4, MITF, BAP1) and telomere maintenance complex genes such as Telomerase Reverse Transcriptase (TERT), Protection of Telomeres 1 (POT1), Shelterin Complex Subunit and Telomerase Recruitment Factor (ACD), and Telomeric repeat-binding factor 2-interacting protein 1 (TERF2IP). Additional criteria were applied (Figure 1B), leading to 19 rare high-quality variants, of which 13 were validated by Sanger sequencing (Table S1). To identify the most promising variants, we analysed their potential pathogenicity using several impact prediction servers (Table S2). In order to evaluate the pathogenicity of these 13 variants in MPM, we screened 18 additional MPM patients and 37 patients with criteria for familial melanoma; of these, 51 were negative and 4 positive for CDKN2A mutations (frequencies in Table 1). Interestingly, when we screened these variants in the 37 indexes with familial melanoma, a low variant frequency was observed, the variants being absent in nearly half the cases (Table 1). To confirm the rarity of the variants found and to exclude polymorphisms specific to the Portuguese population, we assessed healthy blood controls. As shown in Table 1, most of the variants were polymorphisms (10 of 13), presenting a frequency >1% and <5%, with the exception of the MAP2K3 gene variant, which was a common variant with a frequency of 92%. Furthermore, the BMX and CFAP47 variants were found in homozygosity in a healthy control and were thus excluded, as familial melanoma susceptibility is consistent with autosomal dominant inheritance. Importantly, we identified 3 rare variants in the CDH23, ARHGEF40, and BRD9 genes (0-0.7% frequency in healthy controls, Table 1) and confirmed their rarity using ExAC (0.007002, 0.001894, and 0.000464, respectively) and gnomAD (not found, 0.00186, and 0.000404, respectively). In order to evaluate whether these rare variants could synergize and increase melanoma susceptibility, we performed a region-based aggregation test analysis with all the variants identified. We found a statistically significant cumulative effect of the NTN4, MTCL1, FNDC1, CAND2, ITIH3, RPL32, and RNF213 variants in the MPM group compared to the healthy control group (Table 2). Interestingly, CAND2 and RPL32 are in the same locus (3p25.2), strengthening the hypothesis that they could co-segregate to the next generation and, consequently, contribute together to increased MPM susceptibility. We investigated in silico the possible molecular consequences of the identified CDH23, ARHGEF40, and BRD9 mutations by generating structural models, although no model could be obtained for BRD9. Cadherin 23 is composed of an ectodomain comprising 27 extracellular cadherin (EC) repeats anchored to the cell membrane through a transmembrane helix, and a C-terminal cytosolic domain coupling CDH23 to the cytoskeleton [31]. Ca²⁺ binding at the linker regions flanking each EC repeat is essential for the function of the tip-link, assembled by two cis homodimers of CDH23 and protocadherin-15 (PCDH15) connected tip-to-tip. Structural models of CDH23 EC4 (Table S3) were aligned in Pymol with the template yielding the highest score (PDB ID 5SZO), corresponding to EC repeats 1-4 of protocadherin γB7 (PCDHγB7; yellow ribbons in Figure S1, top and middle panels).
Zooming in on the Ala366Thr substitution (Figure S1, bottom panel), the mutated residue seems unlikely to affect protein folding, stability, or aggregation, being located at the protein surface and replacing an uncharged side chain with a polar one. Therefore, such a pathogenic substitution could affect either the CDH23 homodimer assembly [31] or its interaction with other proteins. PCDHγB7 contains several residues that are targets for post-translational modifications, particularly threonine mannosylation. Notably, the PCDHγB7 EC3 residue Thr230 is mannosylated in crystal form 2 (PDB ID: 5SZP); this residue is perfectly aligned with Thr366 from the CDH23 pathogenic variant herein studied (Figure S1). It is thus envisaged that the CDH23 p.Ala366Thr variant acquires an otherwise absent mannosylation site. ARHGEF40, or Solo [32], a Rho-guanine nucleotide exchange factor (Rho-GEF), has been posited to play roles in the maintenance of cell and tissue integrity against mechanical stresses [33]. Solo comprises a highly conserved N-terminal domain, a central region containing a CRAL/TRIO domain and spectrin repeats, and a C-terminal region containing a Dbl homology domain and a pleckstrin homology domain (DH-PH domain) [32]. The Arg834Cys substitution resulting from the herein identified pathogenic ARHGEF40 mutation is located in the central region containing the spectrin repeats. Accordingly, the best models were generated against repeats 14-16 of β2-spectrin (Table S3, Figure S2), revealing that the substituted Arg834 is located in a disordered link between two α-helices, with its side chain within H-bonding distance (2.6-3.4 Å) of the Glu858 (Figure S2; blue) or Gln851 (Figure S2; green) side chains. Substitution by a cysteine residue will likely result in the loss of these H-bonds, disturbing the conformation of this motif within the Solo central domain and eventually causing protein misfolding and/or aggregation.

The Impact of CDH23, ARHGEF40 and BRD9 in Sporadic Melanoma

We found that the newly identified variants and polymorphisms (Table 2) seemed to influence melanoma development. In particular, we found that the three rare variants in CDH23, ARHGEF40, and BRD9, respectively, could be pathogenic on their own. Since the impact of these three genes in the context of sporadic melanoma remains to be fully elucidated, we further investigated their mutation frequency, along with the frequent BRAF and NRAS mutations found in melanoma, in the TCGA database comprising a large cohort of melanoma patients (n = 448). As expected, BRAF and NRAS mutations were highly frequent (54% and 29%, respectively; Figure 2A). CDH23 was also highly mutated (17%), with missense mutations and truncations being the predominant types. Many missense CDH23 mutations were of unknown significance, demonstrating the importance of studying this gene and its alterations. ARHGEF40 and BRD9 were mutated in 18 of 448 melanoma patients (4%; Figure 2B). We also performed further mutation frequency analysis in 6 additional databases, which revealed similar frequencies (Figure S3). Since NRAS and BRAF mutations are mutually exclusive and CDH23 mutations were detected simultaneously with either NRAS or BRAF mutations, we evaluated the prognostic value of these mutations. BRAF-mutated samples had a significantly higher disease-specific survival (p = 0.036), as already described. Furthermore, data from patients with mutated CDH23 revealed a lower disease-specific survival (p = 0.047).
Contrarily, ARHGEF40 and BRD9 mutations did not reach a statistically significant prognostic value due to the small sample size of mutated cases (Figure 2C). Nevertheless, melanoma is characteristically a high mutational burden tumour type [34], and since gene length and mutation frequency are usually correlated, it is plausible that CDH23 mutations do not constitute a proper marker of prognostic value and that their high frequency derives from the large size of this gene (>419,000 bases). Notwithstanding, the expression of CDH23, ARHGEF40, and BRD9 may be associated with patient prognosis in sporadic melanoma. To understand the biological relevance of the expression of these genes, Gene Ontology (GO) analyses were performed. Interestingly, CDH23 revealed enrichment in pathways related to calcium/cation homeostasis, protein phosphorylation, inflammatory response, and positive regulation of the ERK and MAPK cascades (Figure 3A). Additionally, the ARHGEF40 gene showed significant enrichment in biological processes related to tissue development, mainly of the skin and epidermis (Figure 3B). The BRD9 gene was enriched for DNA replication, DNA repair, and cellular response to DNA damage stimuli (Figure 3C). Altogether, these results suggested that the CDH23, ARHGEF40, and BRD9 genes could also be important for sporadic melanoma and, consequently, a mutation in these genes could be pivotal in this disease. We then investigated each gene's expression and its prognostic value in melanoma (Figure 4), revealing significant CDH23 downregulation in cutaneous melanoma (SKCM) samples when compared with normal skin samples (Figure 4A). Nevertheless, no relationship was found between gene expression and melanoma stage (Figure 4B). Interestingly, low CDH23 expression significantly correlated with a worse overall (p = 0.0093; Figure 4C, left panel) and disease-free survival (p = 0.05; Figure 4C, right panel).
Contrarily, ARHGEF40 had a significantly higher expression in SKCM samples when compared to normal skin samples, although its expression did not appear to correlate with melanoma stage or with overall or disease-free survival (p = 0.26 and p = 0.34) (Figure 4D-F). No statistically significant associations were found between BRD9 expression and tumour stage or survival (Figure 4G-I). This indicates that although CDH23 is downregulated in melanoma and has prognostic value (Figure 4C), it either has no correlation with melanoma aggressiveness or it possibly plays a role in early melanomagenesis. Figure 4. Expression data are presented as log2(transcripts per kilobase million + 1). One-way ANOVA was used for differential analysis. Overall survival and disease-free survival plots from high and low gene-expressing tumour samples for (C) CDH23, (F) ARHGEF40, and (I) BRD9. Prognostic value was assessed using survival analysis from the GEPIA program. Patients were divided into low and high expression groups based on the median expression cut-off for each gene. Survival curves were analysed using the Cox PH model.

Discussion

In the present study, we aimed to identify novel rare genetic variants responsible for hereditary melanoma susceptibility. Eight patients with true features of hereditary melanoma, such as MPM and a high number of nevi, who were negative for germline mutations in the CDKN2A, CDK4, and MITF (p.E318K) genes, were selected for WES. Interestingly, additional MPM and familial melanoma patients positive for CDKN2A or MITF (p.E318K) mutations did not harbour any of the suggestive variants identified in this study, supporting their probable relevant impact on MPM development. Even though most variants identified in the MPM cases were polymorphisms, the variants in the CDH23, ARHGEF40, and BRD9 genes were rare in the databases employed, even among the healthy controls of the Portuguese population. One of the hardships of identifying hereditary melanoma is the fact that individuals with a history of familial melanoma may not actually have a genetic background in the family that makes them susceptible to melanoma. Rather, these families are subjected to similar environmental factors, such as profession and sun exposure, that lead to the development of melanoma.
It is much more plausible that patients with true features of hereditary melanoma, such as MPM and a high number of nevi, are carriers of a genetic background that culminates in melanoma susceptibility. Our promising variants were preferentially identified in patients with MPM. Therefore, it is possible that our patients with MPM have a hereditary background that makes them susceptible, whereas patients meeting criteria for familial melanoma may have a genetic and/or environmental background. Since CDH23, ARHGEF40, and BRD9 function is unclear, the impact of rare variants in these genes in the melanoma context is unknown. Throughout our study, neither mutations in CDH23, ARHGEF40, and BRD9 nor the expression of ARHGEF40 and BRD9 affected the overall survival of the patients. However, the literature indicates that CDKN2A, one of the most important genes for melanoma susceptibility, also does not correlate with patient survival, despite its important association with melanoma susceptibility [35][36][37]. Hence, the three variants identified in the present study may play an important role in melanoma susceptibility. For instance, BRD9 has been identified as a subunit of the mammalian SWI/SNF chromatin remodelling complex [38], which is involved in organismal development, gene regulation, and cell lineage specification and seems to be involved in tumour suppression [39]. BRD9 is found overexpressed in numerous cancers, and this overexpression seems to be associated with susceptibility to lung cancer, synovial sarcoma, and breast cancer [40][41][42], indicating that BRD9 has a potential oncogenic effect. Moreover, a recent study revealed that the binding of BRD9 to chromatin occurs at the enhancer level in a cell type-specific manner [43]. Additionally, BRD9 chromatin binding also regulates cancer cell proliferation and tumorigenicity in acute myeloid leukaemia, indicating its oncogenic role in transformed blood cells. This is in accordance with our Gene Ontology results, which reveal that BRD9 appears to be significantly associated with DNA replication and repair processes, particularly non-homologous end joining, implicated in cancer. However, despite these Gene Ontology data, no differential expression was observed between cutaneous melanoma and normal skin samples. Furthermore, no connection was found between BRD9 expression and either overall or disease-free survival. Although no statistically significant association between BRD9 expression and tumour stage or survival was found in our study, the importance of BRD9 must not be overlooked, since the available literature states that expression alterations in other important genes such as CDKN2A also do not correlate with patient prognosis in melanoma [35][36][37][44]; this could be the case for BRD9. Moreover, BRD9 inhibition has been shown to result in decreased cell proliferation, G1 arrest, and apoptosis in rhabdoid tumour cell lines [45] and synovial sarcoma [41]. Overall, the data available in the literature regarding BRD9 function, coupled with our Gene Ontology data, further highlight the importance of studying this gene in the context of melanoma. As previously stated, ARHGEF40, or Solo, belongs to the Rho-guanine nucleotide exchange factor (Rho-GEF) family [32], having a role in maintaining cell and tissue integrity under mechanical stress [32].
Solo misfolding/aggregation may compromise the organization of F-actin and keratin fibres and the localization of plakoglobin to cell-to-cell adhesion sites [46], the latter being particularly relevant in terms of tumorigenesis, since plakoglobin has been proposed to act as a tumour and metastasis suppressor. Several Rho-GEFs have been described as oncogenes, possibly due to deregulated activation of Rho GTPases [47]. Interestingly, according to our Gene Ontology data, ARHGEF40 is particularly associated with skin and epidermal development, indicating that an ARHGEF40 mutation could have a relevant impact on melanoma. Indeed, ARHGEF40 has a significantly higher expression in cutaneous melanoma samples as compared to normal skin samples, although it has no significant relation with overall or disease-free survival. GEFs have also been associated with the initiation and promotion of melanoma and basal cell carcinoma [48]. Therefore, ARHGEF40 could also be associated with melanoma initiation. Additionally, ARHGEF40 might regulate its protein activity through alternative splicing [32] and, according to our Human Splicing Finder results, the variant identified in this gene allowed the creation of an exonic splicing silencer (ESS) site that blocks exon recognition. In turn, this blockade favours the silencing of splicing and/or the alteration of an exonic splicing enhancer (ESE) site, which could disrupt splicing regulation [49]. Therefore, these splicing alterations could also have an important impact on ARHGEF40 function, consequently influencing the activation of Rho GTPases, which might result in numerous disorders, including cancer. Similar to ARHGEF40 and BRD9, the function of CDH23 has not been fully established hitherto [47], in spite of its being implicated in Usher syndrome type ID and non-syndromic hearing loss [50]. CDH23 belongs to the cadherin family, a family that mediates calcium-dependent cell adhesion [51]. Here, CDH23 was associated with biological processes such as the positive regulation of cytosolic calcium, the regulation of cytosolic ion concentration, and cellular calcium ion homeostasis. As adhesion molecules, cadherins are known to participate in cancer metastasis, for instance E-cadherin and N-cadherin, whose downregulation and upregulation, respectively, can result in epithelial-mesenchymal transition, a known marker for this event [52,53]. Furthermore, although mutations in CDH23 are commonly observed in hearing impairment conditions [51][52][53][54], they have also been associated with pituitary adenomas [55]. In fact, CDH23 has been shown to be associated with a positive inflammatory response, which plays a pivotal role in cancer. Besides, as CDH23 seems to regulate the ERK and MAPK cascades, known melanoma-signalling pathways, mutations in this gene might play a role in MPM development by modulating the activity of these molecules. In this study, we observed that cutaneous melanoma samples had a lower CDH23 expression when compared to normal skin samples, and this low expression was significantly correlated with worse overall and disease-free survival. This suggests that a mutation causing downregulation or loss of function in this gene might be implicated in melanoma. The particular mutation herein identified introduces a possible mannosylation site absent in wild-type CDH23, which is supported by the mounting evidence that aberrant patterns of cadherin glycosylation are linked to carcinogenesis and metastasis [56].
CDH1, also belonging to the cadherin family, encodes E-cadherin, which has been reported to have a tumour suppressor role [56,57], acting not only as an adhesive protein but also as a crucial player in growth, development, and carcinogenesis. Besides the role of E-cadherin in metastasis and invasion [58], CDH1 mutations were found in familial gastric cancer [59] and lobular breast cancer [60]. Additionally, its loss of expression has been reported in sporadic gastric cancer in distinct cohorts [61], and its downregulation was associated with poor outcome [62]. Overall, these data support the hypothesis of the pathogenicity of this rare CDH23 variant, as it could play a role similar to that of CDH1 in gastric cancer. Furthermore, our data reveal novel possibilities in the context of melanoma. Since BRAF and NRAS mutations are usually mutually exclusive [58] and clinical resistance to therapies targeting these genes is a common issue in melanoma, finding new targets for treatment is crucial [60,61]. Since CDH23 mutations may coexist with BRAF or NRAS mutations, in future studies it would be interesting to analyse whether the presence of CDH23 mutations alters overall survival in BRAF- or NRAS-mutated melanoma patients, as such mutations may lower overall survival and account for resistance to BRAF- and NRAS-directed therapy. In summary, we have identified three novel mutations in the CDH23, ARHGEF40, and BRD9 genes, which could confer cutaneous melanoma susceptibility. Despite our strong indications of the importance of these genes, further studies are required to strengthen these findings. Our future studies will include an expansion of our cohort with several more cases of both MPM and familial melanoma. Nevertheless, the polymorphic variants identified showed a statistically significant cumulative effect on melanoma, demonstrating that they could contribute to increased MPM susceptibility in a polygenic manner. Of the three identified genes, BRD9 seems to be the most promising in hereditary melanoma due to its involvement in oncogenic and DNA-repair mechanisms, which demonstrates the importance of studying this gene and its potential pathogenic variants in the future. Supplementary Materials: The following are available online at http://www.mdpi.com/2073-4425/11/4/403/s1, Figure S1: Structural models of the CDH23 p.Ala266Thr pathogenic variant, Figure S2: Structural models of the ARHGEF40 (Solo) p.Arg834Cys pathogenic variant, Figure S3: Analysis of the frequency of mutation in additional datasets, Table S1: Selected variants after bioinformatics analysis and subsequent Sanger validation, Table S2: Variant impact prediction by in silico tools, Table S3: Structural models of CDH23 and ARHGEF40 obtained by homology modelling using the SWISS-MODEL server, Table S4: Primer list employed for variant validation, Table S5: Patient clinicopathological data.
v3-fos-license
2019-03-11T13:13:00.655Z
2012-03-11T00:00:00.000
73178823
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.omicsonline.org/additive-solution-to-diastema-closure-by-a-combination-of-direct-and-indirect-techniques-2161-1122.1000125.pdf", "pdf_hash": "449072bb002579e77f47ab5721d050a20f1970b5", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42700", "s2fieldsofstudy": [ "Medicine" ], "sha1": "9d19b55ef7c896ba353462e465744a12026f7f93", "year": 2012 }
pes2o/s2orc
Additive Solution to Diastema Closure by a Combination of Direct and Indirect Techniques

Introduction

Smile design in restorative dentistry embraces facial esthetics, gingival esthetics, the relationship between soft and hard tissues, and dental esthetics [1,2]. Smile design requires assessment of the facial soft tissues and skeleton, followed by intraoral examination of the teeth and their relationship with the lips and soft tissues [3]. A facial analysis is performed to determine how the lips and soft tissues frame the smile in different positions of speech and laughter, and establishes the localization of the facial midline and its position with respect to the dental midline. Smile parameters include: gingival marginal levels; tooth size, including tooth-to-tooth proportion and tooth width-to-length ratio; tooth position; tooth shape; incisal embrasures; connector spaces, where adjacent teeth appear to touch; axial inclinations; and the shade of the maxillary teeth [2]. Studies on the attractiveness of smiles as a function of their variation from esthetic norms have shown the presence of a diastema to be a decisive negative factor and a strong candidate for treatment [4,5]. A diastema is a black space between adjacent teeth that are separated from each other, with no contact area. Possible origins of this defect include an excessively wide dental arch, anomalous tooth size, congenital tooth absence, and gingival frenum hypertrophy. First, it is important to differentiate diastemata from pathologic tooth migration, i.e., spaces that have developed over time and may indicate a lack of stability in tooth position [6]. A decision must then be taken whether to treat the patient with a multidisciplinary approach or simply to close the spaces by means of direct and/or indirect restorative therapy [7]. A spaced dentition can also be due to various causes, such as hypodontia, tooth size discrepancy, and impeded eruption. The dilemma for clinicians is whether to close, open, or redistribute the space [8]. Two principles govern the prosthodontic management of dentitions that are compromised by missing teeth or by size or shape alterations: the first principle is to obtain appropriate spaces for the optimal replacement of missing teeth with teeth of suitable size or shape; the second is for existing teeth to be optimally positioned within the dental arch [9].
The ideal space size for restoration in these cases is established by measuring the contralateral tooth; if this is present in correct proportions, Bolton tooth-size analysis (or similar) is performed, determining average width-to-height ratios of the teeth and preparing an orthodontic setup or diagnostic wax-up [9]. If the teeth are correctly positioned and anomalous tooth size is the origin, the clinician can choose between restoration with porcelain or composite resin, which have different properties and imply distinct clinical procedures [10]. A direct or indirect approach can be used with the latter, but the use of porcelain requires an indirect technique. The decision to use one or the other depends on the characteristics of each case. If the diastema results from tooth malposition, orthodontic treatment can avoid the need for a restorative approach [11]. When there is a congenital absence of teeth or the presence of gingival frenum hypertrophy, surgery may be required [12,13]. The aim of this case report is to show how to solve a problem of black spaces while preserving the greatest possible quantity of sound tooth structure by combining different materials, with excellent esthetic results.

Case Report

A healthy 27-year-old man presented to the clinic due to interdental spaces (Figure 1). He expressed satisfaction with the color of his teeth and a desire to avoid additional orthodontic treatment. His wide smile showed a high upper lip line. The incisal edges of the anterior teeth were well-positioned in relation to the gingival margins. The patient had no carious lesions or history of restorative treatment. He had a thin gingival biotype and fairly symmetrical gingival architecture. The temporomandibular joints were normal, with no history of dysfunction. Although the patient's visits to the dentist had been infrequent, clinical examination evidenced good periodontal health, with no probing depths >3 mm, and periapical radiography showed adequate bone levels. Microdontia of the upper incisors produced diastemata in the esthetic zone. Among the different possible approaches to this case, we selected the most conservative because of the good periodontal status of the patient and his desire to improve his appearance. Before the treatment, his medical and dental history was reviewed, preoperative radiographs and photographs were taken, and periodontal charting was done. Diagnostic impressions were made and study models were mounted on an articulator. Smile analysis was performed by digitally tracing over a photograph (1:2 proportion). A diagnostic wax-up with modeling wax (Eco-Cera Dental S.L., Monachil, Granada, Spain) of the proposed ideal was made for the creation of a mock-up, using a silicone key (Polysiloxane Lab-Putty, Coltene/Whaledent, Altstätten, Switzerland) (Figure 2) and filling it with Protemp 3 Garant A3 shade (3M ESPE Dental Products, St. Paul, MN, USA). This model also served to create a putty index to enable the preservation of as much of the dental structure as possible in the preparation (Figure 3). Tooth shade evaluation of his natural teeth indicated the use of Vitapan Classical B1 shade (VITA Zahnfabrik, Bad Säckingen, Germany) with white horizontal lines. In the preparation of the right lateral incisor, the silicone index was used to ensure adequate removal of tooth structure for the ceramic layering. The orientation of the teeth in the arch indicated a porcelain width of 1.2 mm at the facial aspect and 1.3 mm at the incisal edge.
Because the cervical margin was subgingival, Ultrapak #000 and #00 retraction cords (Ultradent Products, South Jordan, UT, USA) were packed to obtain a profile of gradual emergence that ended in a slight chamfer. A minimal vestibular reduction of 0.1 mm was needed to regularize the surface. No removal of dental tissue was necessary to produce a space in the incisal third of the tooth. The left lateral incisor required a minimal preparation of around 0.5 mm in the mesial area of the buccal aspect and 0.3 mm in the distal area of the buccal aspect. An incisal preparation of 0.5 mm was sufficient to create internal characteristics in the porcelain that mimicked the natural tooth. The interproximal preparation was extended in a lingual direction to prepare the interdental porcelain extension with a profile of progressive emergence in both lateral incisors (Figure 4). Free-hand composite resin (Enamel Plus HFO, Micerium, Avegno, Italy) was used to create direct temporary restorations. When the veneers were received from the laboratory, their fit was tested on the model, verifying the presence of adequate contact points with the adjacent teeth, and an in-mouth color test was performed with temporary Variolink Veneer Try-In glycerin-based cements (Ivoclar Vivadent, Schaan, Liechtenstein). The veneers were etched with 10% hydrofluoric acid (Ivoclar Vivadent, Schaan, Liechtenstein) for 90 seconds, and the residue was then removed by immersion in an ultrasonic cleaner with 95% alcohol. Next, two coats of silane (Ivoclar Vivadent, Schaan, Liechtenstein) were placed and then dried to evaporate the solvent. The veneers were kept in a black box during the dental preparation. The anterior area of the maxilla was isolated with an extra-thick (15 × 15 cm) Hygenic dental dam, separating neighboring teeth with 12 mm × 12 mm × 0.075 mm PTFE (Teflon) strips (Unimax International Limited, Cixi, Ningbo, P.R. China). The surface was etched with 37% orthophosphoric acid (Ultradent Products, South Jordan, UT, USA) for 30 seconds, and the teeth and veneers were prepared with Ena-Bond light-curing bonding agent (Micerium, Avegno, Italy). The final cementation step was resin cementation with the light-curing adhesive luting composite Variolink Veneer High Value +1 (Ivoclar Vivadent, Schaan, Liechtenstein) on both surfaces, removing remnant cement with a paintbrush and dental floss (Figure 5). All resin cement was removed from the subgingival area by using a scalpel blade. The occlusion was then re-tested, making any necessary corrections with a fine-grain diamond bur. Finally, the margins were polished with diamond discs. For the two central incisors, a diamond bur was used to create a rough surface, removing enamel prisms and enhancing the adhesion. The right central incisor was restored first, polishing the interproximal aspect, followed by restoration and polishing of the left central incisor. The same steps were followed for both: etching with 37% phosphoric acid for 30 seconds (Ultradent Products, South Jordan, UT, USA), rinsing clean, and then lightly air-drying to confirm a proper etch pattern. Ena-Bond light-curing bonding agent (Micerium, Avegno, Italy) was painted on the entire tooth surface, dried with an air syringe until no movement was observed, and then light-cured for 10 seconds. Composite layering followed, curing each layer for 10 seconds after its placement. A 0.3 mm palatal layer of Enamel Plus HFO shade GE3 (Micerium, Avegno, Italy) was applied on the silicone key to form the palatal aspect of the restoration.
A small amount of Enamel Plus dentin shade UD2 (Micerium, Avegno, Italy) was applied on the interproximal surfaces. An amber translucent shade was applied on the incisal third to create an internal effect of translucency. A Kolor + Plus Resin Color Modifier kit (Kerr Corporation, Orange, CA, USA) was used to apply white stains to reproduce the natural stains observed in the tooth, followed by light-curing for 10 seconds. A fine layer of Enamel microfill GE3 (Micerium, Avegno, Italy) was placed to create the gross shape of the central incisors. This final layer was then covered with glycerin gel to complete the polymerization. After analyzing the photographic studies, the patient was seen for modification of the form and shape of the direct veneers. Final polishing with Sof-Lex burs and discs (3M ESPE Dental Products, St. Paul, MN, USA), from larger to smaller grain, was done to obtain a satin-like finish on the facial composite surfaces. Secondary and tertiary anatomies were reproduced with a fine-grain conical diamond bur. Composite polishing pastes, Enamel Plus Shiny 3 and 1 micron diamond pastes and Enamel Plus Shiny C 1 micron aluminum oxide (Micerium, Avegno, Italy), were used on a Shiny FD 5 felt disc (Micerium, Avegno, Italy) to achieve a high gloss finish (Figure 6). After treatment, there was total integration of the restorations in the mouth of the patient. The new appearance of the patient's smile satisfied the patient's demands (Figure 7). The 18-month follow-up image depicts the harmonious integration of form and color achieved (Figure 8).

Discussion

The clinician primarily assessed this case from an esthetic viewpoint, achieving a natural outcome despite the use of different materials on adjacent teeth. Numerous authors have demonstrated that diagnostic efforts and the study of patients' individual characteristics are key to a successful esthetic outcome [14,15], and a diagnostic wax-up and corresponding template can be used to evaluate the result in the patient's mouth before beginning the treatment [16,17]. The patient's perception of cosmetic improvement is not significantly affected by the selection of the material (direct composite resin or porcelain) for maxillary anterior veneers, although the more conservative option of composite veneers may be preferred by patients given the choice after receiving adequate information [18]. In the present case, the clinician offered the patient an additive solution for his teeth, adding cosmetic materials and avoiding reduction of his dental structure. A silicone index of the additive wax-up was used as a reference for tooth reduction [19], allowing a minimal reduction and therefore the optimal outcome. In the present patient, the silicone index showed that more space was available for a restoration in the lateral incisors than in the central incisors. Pascal Magne [20] described the importance of an extensive reduction in order to close diastemata, with an interproximal preparation in lingual and subgingival directions to prepare an interdental extension of the porcelain with a profile of progressive emergence. This entails an aggressive reduction of the tooth. In the present patient, the lateral incisors could be readily restored without aggressive reduction due to the cone shape of the teeth, allowing a minimal reduction and restoration with porcelain veneers.
Hence, this was the best treatment option for the lateral incisors in our patient. Other authors [19,21] have reported a similar approach, resolving cases with porcelain materials and minimal tooth reduction, allowing the indirect restoration of lateral incisors. The main advantage of an indirect method is that it offers absolute control over the emergence profile and considerable control over the contact area. An anterior tooth can be restored with either direct or indirect adhesive techniques, which offer different advantages and disadvantages according to the specific clinical situation. An indirect technique involves a less conservative approach. Thus, if we had treated this patient with porcelain alone, the lateral incisors would have been prepared with a minimal reduction due to their small size, but the central incisors would have required extensive reduction. For this reason, the decision was taken to use resin composites in the central incisors, allowing restoration with minimal reduction of the enamel using a medium-grit diamond bur or aluminum oxide sandblasting. Furthermore, central incisors only require minimal preparation in the interproximal area. Excellent outcomes have been reported by numerous authors who used this procedure [10,22,23]. In the present case, the predictability of the direct technique was enhanced by producing a lingual-incisal silicone index to allow the creation of a stratified restoration in the mouth of the patient with the same form as the previous wax-up. This technique has been reported by numerous authors [24][25][26]. Different performances can be expected from the distinct materials employed in this case, which were all selected to provide long-term function and esthetics. Some studies have confirmed the long-term survival of porcelain restorations, with no change in color match or surface smoothness at 10 years [27]. Although bonded porcelain veneers deliver excellent esthetics, they have traditionally required more tooth preparation in comparison to direct composite restorations; they have shown the best overall survival [28], although their life-long survival has not been demonstrated [24]. Resin composite is less stable and may require more visits for maintenance. In direct composite restorations, selection of the appropriate composite materials and application techniques is critical [24]. However, composite resin can be successfully repaired, unlike porcelain [29,30]. It was difficult to maintain a correct proportion of the teeth in this patient solely using a restorative approach. Most authors have generally concluded that the width of a central incisor should be roughly 80% of its length [29,31]. Various methods have been proposed to determine the appropriate width of central incisors, e.g., by calculating one-seventh of the interpupillary distance [14], by multiplying the width of the lower central incisor by 1.62 [32], or by adding the width of a lower central incisor to one half of the width of the lower lateral incisor. Standard tooth measurements are also widely used to determine the width, ideally considered to be 8.0-8.5 mm for the maxillary central incisor [3,9]. The length can be determined by multiplying the width by 1.33 (maximum possible length) or 1.25 (minimum possible length). An important additional factor determining the length of a restoration is the maxillary incisor exposure at rest, reported to be a mean of 1.91 mm in males and 3.40 mm in females, with greater exposure in younger individuals and variations among ethnic groups [3,33].
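As a worked illustration of the sizing rules just listed, the short sketch below combines the interpupillary rule, the 1.62 multiplier, and the 1.25-1.33 length factors. The input measurements and the choice to average the two width estimates are invented for demonstration and are not this patient's data.

```python
# Worked example of the width/length rules above; inputs are illustrative.
def central_incisor_estimates(interpupillary_mm: float,
                              lower_central_mm: float) -> dict:
    width_a = interpupillary_mm / 7      # one-seventh interpupillary rule [14]
    width_b = lower_central_mm * 1.62    # golden-proportion multiplier [32]
    width = (width_a + width_b) / 2      # simple average of the two estimates
    return {
        "width_mm": round(width, 2),
        "length_min_mm": round(width * 1.25, 2),   # minimum possible length
        "length_max_mm": round(width * 1.33, 2),   # maximum possible length
    }

print(central_incisor_estimates(interpupillary_mm=60.0, lower_central_mm=5.3))
# -> width ~8.6 mm (close to the 8.0-8.5 mm norm), length ~10.7-11.4 mm,
#    i.e. a width-to-length ratio of roughly 75-80%
```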
When an ideal tooth proportion is not clinically feasible, attempts can be made to create the illusion of a more correct ratio by modifying the line angles, which are clearly defined by the light-reflective surface [34]. Changes in the orientation and arrangement of transition lines can increase or decrease the area of light reflection, thereby affecting the observer's perception of the size of the tooth. In the present case, this procedure was necessary due to the large width of the incisors [35,36].

Conclusion

The conservative restorative approach adopted in this case satisfied the patient's demands. Dentists should fully inform patients of all possible risks, benefits, and alternative options before initiating treatment. In the present patient, a substantial improvement was obtained despite the use of different materials. The natural tooth properties were replicated with minimum tooth preparation and maximum preservation of sound natural tissues.
v3-fos-license
2024-07-19T15:19:10.251Z
2024-07-17T00:00:00.000
271278110
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2226-4310/11/7/585/pdf?version=1721202520", "pdf_hash": "bf9ef3e5f19cff19cbb82a0c762785e4a71bafd9", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42703", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "sha1": "7a946ccd244e110aa73ed5f8640b6ba96e9ebc5b", "year": 2024 }
pes2o/s2orc
Cooling of 1 MW Electric Motors through Submerged Oil Impinging Jets for Aeronautical Applications: Electrification of aircraft is a very challenging task as the demand for energy and power is high. While the storage and generation of electrical energy are widely studied due to the limited specific energy and specific power of batteries and fuel cells, electric machines (power electronics and motors), which have years of experience in many industrial fields, must be improved when applied to aviation: they generally have a high efficiency, but the increase in power levels determines significant thermal loads which, unlike those of internal combustion engines (ICE), cannot be rejected with the exhaust. There is therefore a need for thermal management systems (TMSs) with the main objective of maintaining operating temperatures below the maximum level required by electric machines. Turboprop aircraft, such as the ATR 72 or the Dash 8-Q400, are commonly used for regional transport and are equipped with two gas turbine engines whose combined power is in the order of 4 MW. Electric and hybrid propulsion systems for these aircraft are being studied by several leading commercial aviation industries and start-ups, and the 1 MW motor size seems to be the main option as it could be used in different aircraft configurations, particularly those that exploit distributed electric propulsion. With reference to the topics mentioned above, the present work presents the design of a TMS for a high-power motor/generator whose electrical architecture is known. Once integrated with the electrical part, the TMS must allow a power-to-weight ratio of 14 kW/kg (or 20 kW/kg at peak power) while maintaining the temperature below the limit temperature with reasonable safety margins. Submerged oil jets are the cooling technique applied here, with a focus on diathermic oil. Parameters affecting cooling, like rotor speed and filling factor, are analysed with advanced CFD.

Introduction

The electrification of aircraft, as a necessary response to the impact of aviation on global warming, follows two parallel paths, the first of which is based on the replacement of non-propulsive subsystems now driven by mechanical, hydraulic, and pneumatic energy with more efficient electrical subsystems. The second path is intended to replace internal combustion engine (ICE) propulsion systems with hybrid and, hopefully, all-electric propulsion. According to [1], carbon dioxide (CO2) emissions produced by the mobility sector will increase by 80% by the end of 2050. About 20% of the foreseen increase should be due to aviation [2]. The Intergovernmental Panel on Climate Change (IPCC) issued the special report "Summary for Policymakers" on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways [3]. Subsequently, the European Commission (EC) issued the report "A Clean Planet for All" [4], underlining the need for decarbonisation by 2050. The European Parliament then declared the climate and environmental emergency [5] and "its commitment to urgently take the concrete action needed to fight and contain this threat before it is too late."
This commitment translated into the launch of the European Green Deal [6], which fixes the goal to "transform the EU into a fair and prosperous society, with a modern, resource-efficient and competitive economy where there are no net emissions of greenhouse gases in 2050 and where economic growth is decoupled from resource use", and of the Industrial Strategy for Europe [7], stating that "there should be a special focus on sustainable and smart mobility industries [...] to drive the twin transitions towards climate neutrality and digital leadership, to support Europe's industrial competitiveness and improve connectivity. This is notably the case for [...] aerospace [...] as well as for alternative fuels and smart and connected mobility". Aviation stakeholders committed to reducing global net aviation carbon emissions by 50% by the year 2050 compared to 2005 [8]. This target is narrowed into specific goals in terms of reducing emissions per passenger per kilometre by the long-term roadmap of the Advisory Council for Aviation Research and Innovation in Europe (ACARE), Flightpath 2050 [8], according to which a reduction of 75% for CO2 and 90% for nitrogen oxide (NOx) emissions is expected. These goals can be achieved only by introducing new aircraft configurations and disruptive technologies [9], changing the paradigm of the current aviation market, which is still based on the use of jet engines introduced in the 1950s. In this scenario, one of the most promising technologies under study is hybrid/electric propulsion for aircraft with powers greater than 250 kW, while for lower-class aircraft the approach with fuel cells, batteries, and supercapacitors is starting to make technical sense. The electrification of flight thus represents a promising avenue for achieving the goals mentioned above, even if it introduces new challenges, both environmental and technological. Several studies have addressed the problem of flight electrification, focusing in particular on propulsion systems that must cope with the low specific power and specific energy of electrochemical energy storage sources. In [10], a study is reported in terms of energy budget and weights for two types of missions for a General Aviation (GA) aircraft: a 15-minute touch-and-go training flight and a 1-hour cruise flight. A hybrid electric propulsion system of an aircraft with approximately 60 kW of maximum power is numerically analysed in [11] with a lumped-parameter model in which the complete system of converter, motor, batteries, and propeller is coupled with a traditional internal combustion engine. In [12,13], the management of multiple energy sources (fuel cells, lithium batteries, and supercapacitors, each with its own charging and discharging rate) allows a 60 kW two-seater aircraft to successfully fly a 1 h mission, ensuring acceptable weight and centre-of-gravity displacement.
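To give a feel for the energy budgets behind the mission studies just cited, the sketch below estimates the battery mass needed for a 1 h, 60 kW cruise of the kind analysed above. The specific energy and usable depth of discharge are assumed placeholder values, not figures taken from [10-13].

```python
# Illustrative energy-budget arithmetic for a GA-class electric mission.
P_CRUISE_KW = 60.0        # shaft power of the 60 kW class aircraft
T_CRUISE_H = 1.0          # cruise duration, h
E_SPEC_WH_KG = 200.0      # assumed pack-level specific energy, Wh/kg
DOD = 0.8                 # assumed usable depth of discharge

energy_kwh = P_CRUISE_KW * T_CRUISE_H                 # shaft energy demand
battery_kg = energy_kwh * 1000 / (E_SPEC_WH_KG * DOD)
print(f"~{energy_kwh:.0f} kWh -> battery mass ~ {battery_kg:.0f} kg")
# -> roughly 375 kg of batteries, before drivetrain losses and reserves,
#    which would increase the figure further
```

Even under these optimistic assumptions, the battery alone weighs several hundred kilograms, which is why specific energy is repeatedly identified as the bottleneck in what follows.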
A key component of the fully electric or hybrid propulsion system is the motor, whose power grows progressively with the size of the aircraft. As already seen, GA aircraft require moderate motor shaft power, and their fleet and use result in a low environmental impact. Motor power for propeller-driven commuters (<19 passengers) and regional aircraft (about 80 passengers) ranges from 1 MW to 5 MW. Within the overall commercial aviation sector, regional aircraft account for an estimated 3% of CO2 emissions. In this segment, the development of cleaner propulsion is important not so much for reducing emissions as for its application to increasingly larger aircraft, for which the specific energy and specific power of electrochemical sources represent the real bottleneck. High-power electric motors for aviation are currently under development, as the requirements are different from those of motors designed for land and maritime applications. A power output of 500 kW for a single motor would imply 6-8 motors in a regional aircraft, which would also have an effect on the overall configuration of the aircraft, including weight, power distribution, and aeroacoustics. The motor size being considered to equip regional aircraft is based on a power output of 1 MW [14]. From a technical point of view, the high thermal loads associated with such machines make traditional TMSs impractical. The present article focuses on this technical challenge: designing a new TMS for the thermal control of a MW-class permanent magnet (PM) electric motor. The TMS for these motors must, in addition to the obvious ability to dissipate thermal loads while maintaining a high overall volumetric density, rely on cooling solutions that are effective, reliable, and have a low impact on weight. From the examination of the most common cooling techniques presented in Section 2, oil jet cooling was considered to have the highest potential for the TMS design, also taking into account the characteristics of the electrical machine to be coupled. Therefore, the topic of oil jets is covered extensively in Section 3, with an in-depth look at diathermic oil. The electric machine itself is presented in Section 4, where the TMS design is fully developed considering the geometric and weight constraints at the system level. To allow safe operation, preliminary CFD analyses were performed using state-of-the-art tools to identify the thermal map of the proposed electrical machine and the areas where the temperature reaches its highest values. Based on the results of this preliminary analysis, a TMS has been developed, considering geometric and weight constraints at the system level, to allow safe operation of the proposed PM electrical machine. This study introduces an innovative configuration of submerged impinging oil jets for the direct cooling of critical components in high-power electric motors for aeronautical applications, significantly reducing the size and weight of the cooling system. The article demonstrates how the proposed system can maintain temperatures within material resistance limits even at maximum power, offering thermal optimisation and effective lubrication with a minimal increase in friction losses, opening new perspectives for thermal management in aeronautical electric motors.
The activities summarised here have been carried out within the European Union (EU) co-funded ORCHESTRA (Optimised Electric Network Architectures and Systems for More-Electric Aircraft) Project [15], which aims to design new technologies that allow a 10% efficiency increase and 25% weight reduction of Electric Power Systems (EPSs) compared to the state of the art.

Review of Cooling Technologies

Over the last decade, electric motors have taken on an increasingly significant and central role in the transport sector, from automotive to the first concepts of power applications for aviation. They can offer many advantages: high efficiency, high specific power, low weight, relatively small dimensions, and high ease of use. In addition to this, they are an excellent resource for reducing polluting emissions. However, despite the positive aspects, electric motors, as seen above, are affected by numerous power losses. This does not only concern motors and/or generators in the aeronautical sector but, in general, any electric motor. The most disadvantaged, however, are those used for the propulsion of aircraft, as these are required to have increasingly higher speeds and specific powers while remaining compact. As a result, motors have smaller dimensions and even higher temperatures, pushing them further and further to the limits of their capabilities. Consequently, to try to stem these problems, increasingly efficient cooling systems are required in motors [16]. Today, numerous strategies exist to remove heat and reduce losses, especially in the automotive industry, where improvements need to be made [17]. Most cooling systems are based on the physical principles of conduction and, particularly, convection [18,19]. Convective heat exchange is the first resource used for motor cooling; consequently, finding strategies based solely on the conductive process is not easy. The latter methodology is based on transferring molecular energy through the molecules' vibrational, rotational, and translational motions, but it does not exploit the macroscopic movement of matter in any way. This section briefly overviews TMS technologies, starting with systems with a higher Technology Readiness Level (TRL [20]) and moving on to lower ones [16].
• Liquid cooling systems: These systems use water or onboard coolants, such as engine oil and fuel, to remove heat from the equipment and cool it down. They are well known and widely used in several applications [17] (e.g., in the automotive field, Figure 1a), and their TRL is higher than 7.
• Forced air cooling systems: These systems adopt the same principle as liquid cooling systems [17], using air as a coolant to transfer heat from the source to the external ambient [21]. As for liquid cooling systems, air cooling systems (see Figure 1b) have been applied to existing piston aircraft [22], even if they are less effective during low-speed operations (i.e., ground operations, take-off, holding, and all the other high-power/low-speed operations). This type of cooling is commonly used in electric motors, where air is circulated over the surface of the motor to dissipate heat. Studies explore factors influencing cooling effectiveness, such as fan design, airflow, and heatsink configurations. While ensuring construction simplicity and reduced weights, this approach cannot easily guarantee heat dissipation in the reference volumes. TRL is higher than 7.
• Skin heat exchangers: This type of system uses ambient air as the cold side of the cooling system [22], while a fluid transporting waste heat from the heat source is the hot side of the system (see Figure 2). Such a system has a TRL higher than 7.
• Passive systems: Such cooling systems use a fluid moving in a closed case to cool down the equipment. There are three different typologies of passive systems [21]: heat pipes, in which a refrigeration fluid is heated by the heat source, changing phase (from liquid to vapour) and thus absorbing heat, after which the vapour moves from the hot to the cold zone, condensing and releasing heat outside; thermosyphons, which are similar to heat pipes but use gravity and natural convection; and vapour chambers, which are flat heat pipes that transfer heat in 3-D.
• Pump two-phase system: This is a hybrid cooling loop system consisting of an evaporator, a mechanical pump, a reservoir/condenser, and connecting pipes, as can be seen in the schematic of Figure 3a [23]. Such a system has already been implemented on existing aircraft, assuring a TRL higher than 7.
• Phase Change Materials (PCMs): PCMs are gaining importance as passive cooling solutions for electric motors. They absorb and release heat during phase transitions, maintaining a stable temperature within the motor [24]. Studies investigate suitable types of PCMs, their encapsulation methods, and their integration within motor systems. Figure 3b shows the crystalline configuration of a generic PCM in the heat absorption and release phases, with the temperature curves as a function of time [25]. PCMs are very effective for lowering the temperature locally, but mainly under transient conditions. Various problems [26,27] have been detected regarding the material to be used, assuring a melting temperature of 450 K, suitable for our applications, and, above all, the maximum amount of heat they can dissipate. The TRL of PCMs ranges between 4 and 6.
• Absorption refrigerator: Such systems, whose TRL ranges between 4 and 5, are driven using low-quality heat, as seen in [28]. Coolant is inserted in the evaporator, where it absorbs heat from the component to cool down and changes its phase into vapour (see Figure 4a). This vapour enters the absorber at low pressure, reacting with another fluid (e.g., water) to form a compound at higher pressure without any compressor. A pump directs this mixture to a generator, which is heated by a low-quality heat source (e.g., a hot system). This heat adduction induces the separation of water vapour from the coolant vapour. A proper filter separates these two vapours: water vapour is sent back to the absorber, while the coolant goes to the condenser, where it condenses, moving into the liquid phase.
• Vortex tube: This system, shown in Figure 4b and also known as the Ranque-Hilsch tube, is a mechanical device that simultaneously separates compressed gas of homogeneous temperature into a stream hotter (up to +200 °C) and a stream cooler (up to −50 °C) than the incoming flow [29]. The tube's geometry includes a control valve, an outlet, and a spin chamber. The TRL is lower than 4.
• Thermoelectric effects: Some materials show a coupled thermal/electric behaviour, enabling direct conversion between electrical and thermal energy. There are two types of thermoelectric effects (Figure 5a): the Peltier effect, in which the thermoelectric material heats up or cools down at an electrified junction [30], and the Seebeck effect, in which the thermoelectric material converts heat directly to electricity. Thermoelectric materials have poor power density and efficiency. Their TRL is between 3 and 4, although some thermoelectric generators are already used in the aerospace sector [31].
• Thermionic energy converter: This system comprises a heated surface and a collector separated by a vacuum (see Figure 5b). The heated surface emits electrons flowing towards the cold surface, producing an electromotive force that can be used to absorb heat [32]. The TRL of this TMS is lower than 3.
• Caloric materials: This type of material generates cooling effects under the influence of magnetic (magnetocaloric materials), electric (electrocaloric materials), or mechanical (mechanocaloric materials) forces, using a reversible transformation. The TRL is between 2 and 3. In Figure 6a, a schematic of the principle of the magnetocaloric effect is shown.
• Joule-Thomson effect: If a highly compressed gas suddenly expands, the pressure reduction rapidly (almost immediately) lowers the temperature of the gas, which can be used as a heat extractor from a heat source; a back-of-the-envelope estimate follows this list. TMSs based on such an effect are generally utilised in coolers to allow cryogenic performance. The system consists of a fluid isolated in a volume (see Figure 6b), cooled using a proper heat sink, before entering an isolated chamber through a valve and a nozzle that allow sudden gas expansion into the chamber itself. The expansion rapidly reduces the fluid temperature, creating a very cold volume, which cools down the hot source. The TRL ranges between 1 and 3.
• Cryo-cooling systems: As shown in Figure 7a, this type of system uses cryo-refrigerants to remove large amounts of heat, reaching cryogenic temperatures [33]. To remove this heat, these systems can be based on either boiling or sublimation phenomena at low temperatures (depending on the coolant used: liquid or solid). A standard system is the so-called Reverse Brayton Cycle Cryocooler (RBCC). The TRL is between 1 and 3.
• Thermoacoustic heat engines: These systems (see Figure 7b) are composed of a resonator filled with a working fluid and heat exchangers in a tube. Their geometry is designed to convert heat into small air vibrations (i.e., acoustic power). The TRL is between 1 and 3.
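As referenced in the Joule-Thomson item above, a back-of-the-envelope estimate of the throttling temperature drop can be made with ΔT ≈ μ_JT·ΔP. The coefficient below is a textbook order-of-magnitude value for air near room temperature, and the pressure drop is an arbitrary illustrative choice.

```python
# Order-of-magnitude Joule-Thomson temperature drop: dT ~ mu_JT * dP.
# mu_JT ~ 0.2 K/bar is an approximate value for air near 300 K; real
# coefficients vary with temperature and pressure.
MU_JT_AIR_K_PER_BAR = 0.2

def jt_temperature_drop(delta_p_bar: float,
                        mu_jt: float = MU_JT_AIR_K_PER_BAR) -> float:
    """Isenthalpic temperature change (K) for a throttling pressure drop."""
    return mu_jt * delta_p_bar

print(f"150 bar expansion -> ~{jt_temperature_drop(150.0):.0f} K drop")
```

Reaching truly cryogenic temperatures therefore requires pre-cooling and/or staged expansion, consistent with the low TRL noted above.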
Cooling Methodologies for High Power Electric Motor

This section describes the type of PM electric machine (either motor or generator) to be used for applications on regional aircraft of the 1 MW class, which guarantees high values of specific power and specific energy. The typical losses of an electric motor (EM) or electric generator (EG) are responsible for the increase in weight of the overall system or for the degradation of its performance. In the case of conventional propulsion systems, although the total heat loads are higher, this heat is eliminated mainly at the exhaust. In electric aircraft, this heat must be managed locally and eliminated with a TMS. The temperature limits of the materials that make up a permanent magnet motor, generally below 480 K, represent a challenge for the design of the TMS, which, in addition to being efficient in terms of overall power dissipated and avoiding hot spots in all operating conditions, must guarantee a reduced weight, such that the sum of the EM and the associated TMS gives a power-to-weight ratio greater than 10 kW/kg, the minimum for aeronautical applications.
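To put these power-to-weight figures in perspective, the arithmetic below combines the 10 kW/kg floor just stated with the 14 kW/kg (continuous) and 20 kW/kg (peak) targets from the abstract; it is a rough sketch, not ORCHESTRA design data.

```python
# Mass budget implied by the power-to-weight targets for a 1 MW machine.
P_CONT_KW = 1000.0                      # continuous rating, kW

mass_floor = P_CONT_KW / 10.0           # heaviest acceptable EM + TMS, kg
mass_target = P_CONT_KW / 14.0          # mass implied by the 14 kW/kg goal
peak_power = mass_target * 20.0         # peak rating consistent with 20 kW/kg

print(f"10 kW/kg floor  -> EM + TMS <= {mass_floor:.0f} kg")
print(f"14 kW/kg target -> EM + TMS ~  {mass_target:.1f} kg")
print(f"20 kW/kg peak   -> implied peak power ~ {peak_power:.0f} kW")
# -> about 71 kg for motor plus cooling, with a peak rating near 1.4 MW
```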
In the following, the primary cooling techniques for machines of this power level are reported, as well as the typical values of the convective heat transfer coefficients valid for the pre-design of the TMS on each subcomponent of the motor; finally, a focus on the cooling technique of submerged impinging jets of diathermic oil, which was conceived in this research work, is presented.

Description of Oil Cooling Techniques for the 1 MW PM Electrical Machine

Electric motors are critical components in various applications, particularly for transport vehicles, of which aircraft impose the most stringent efficiency and reliability requirements. This literature review summarises the primary research and advances in thermal management techniques for electric motors. Many studies focus on developing accurate thermal models to understand the mechanisms of heat generation and dissipation in electric motors. CFD simulations and analytical models are widely used to analyse the thermal behaviour of motors. These models help identify critical points and optimise the design of cooling systems. The present study aims at designing and optimising a heat management and disposal system for an ultra-compact electric machine of approximately 1 MW. Improving heat transfer within electric motors is another crucial area of research. Some studies explore the use of heat pipes, radiators, and microchannel heat sinks to improve heat dissipation efficiency. Advanced materials like graphene and carbon nanotubes are being studied for their high thermal conductivity properties. It is therefore necessary to assess those improving techniques which, despite being simple at a construction level, can guarantee control of the maximum temperature in all operating conditions. One of the first approaches examined was cooling via an external cooling jacket on the stator [34]. This reasonably simple solution was unsuitable for the thermal powers involved, even when using fluids other than water as the cooling fluid, unless excessively high flow rates were used and couplings with internal phase change materials were considered. The problem is that the hottest points are near the teeth and the end winding, far from the cooling jacket, and different heat passage paths follow, as shown in Figure 8.
In that study [34], different numbers of windings on the cooling jacket, from 4 to 10, are examined, from which it is possible to estimate the working convective heat exchange coefficient and derive maximum values of the exchanged thermal power. Table 1 schematically shows the correlations for the dimensionless heat transfer coefficient (Nusselt number, Nu) for the various cooling techniques mentioned. [Table 1, only partially recoverable from the extraction: for natural cooling of the cylindrical housing, a correlation of the form Nu = 0.525(Gr·Pr)^n.] Table 2 shows the fluid dynamic correlations for calculating the rotor and stator resistance coefficients for various conditions and for the different areas of the electric machine. From these values, it is then possible to estimate the thermal resistance of each macro-component of the motor and, finally, the overall one.
[Additional table fragments recoverable from the extraction: spray cooling, Nu = 0.785 Re^0.5 Pr^0.4; entries for the entrance of the air gap, the entrance of the rotor ducts, and for submerged jets.]

Oil Cooling Techniques

In general, the effectiveness of a cooling system greatly depends on the type of cooling liquid used. Coolants should have high thermal capacities to absorb heat without significant temperature changes. They should also have high thermal conductivity and high thermal stability (low freezing and high boiling points). Water is one of the most commonly used liquids because it has a high thermal capacity; however, a 50/50 mixture of ethylene glycol and water is often used [44]. The studies [45][46][47] investigate cooling enhancement using confined impinging jets, with and without water-based nanofluids. From them it is possible to derive Nusselt correlations as a function of the geometric parameters of the jets (z and D) and of the Reynolds number of the jets. Oil, especially engine oil or Automatic Transmission Fluid (ATF), is another coolant commonly used for TMSs due to its many advantages from both the thermodynamic and physical points of view. It has a thermal conductance similar to water and is an excellent electrical insulator with a low dielectric constant and high electrical resistivity. In addition, it is chemically stable, non-toxic, and non-flammable. These aspects make oil a valid alternative to water or various mixtures, especially if the coolant is to be placed in direct contact with internal motor parts such as stator heads or windings. In fact, because of oil's insulating capabilities, there is no concern about electrical or magnetic effects being created, nor about the many insulated circuits of which the electric motor is composed. In addition, a secondary but not insignificant benefit is that, by taking advantage of motor oil or ATF, there is no need for a second pumping system to circulate the liquid, since the one already present is used [48]. In recent years, by exploiting the properties of oil, more and more new solutions have been patented: oil jet, oil spray, and oil immersion cooling. Table 3 shows the thermophysical properties of the primary cooling fluids. In oil-immersion cooling, the motor is cooled by running oil through its internal parts along a set path. This method successfully cools the motor entirely and more efficiently, as there is direct contact with the various internal components. In addition, due to the properties of oil, the magnetic fields on which the operating principles of the electric motor are based are not disturbed or disrupted. An example of a motor that utilises such a cooling methodology is the YASA P400R [44].
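To show how the correlations and fluid properties above combine in practice, the sketch below estimates the film coefficient of a single submerged oil jet using the spray/jet correlation recoverable from Table 1 (Nu = 0.785 Re^0.5 Pr^0.4). The oil properties and jet geometry are plausible placeholder values, not the paper's data.

```python
# From a Nusselt correlation to a film coefficient h for a submerged oil jet.
RHO, MU, K, CP = 850.0, 1.0e-2, 0.13, 2000.0  # diathermic-oil-like values (SI)
D_JET, V_JET = 2.0e-3, 10.0                   # nozzle diameter (m), jet speed (m/s)

Re = RHO * V_JET * D_JET / MU                 # jet Reynolds number (~1700)
Pr = MU * CP / K                              # Prandtl number (~154)
Nu = 0.785 * Re**0.5 * Pr**0.4                # correlation recovered from Table 1
h = Nu * K / D_JET                            # film coefficient, W/(m^2 K)

print(f"Re = {Re:.0f}, Pr = {Pr:.0f}, Nu = {Nu:.0f}, h = {h:.0f} W/(m^2 K)")
# -> h on the order of 1.6e4 W/(m^2 K), near the top of the liquid-cooling
#    band of 50-20,000 W/(m^2 K) quoted in the next paragraph
```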
Cooling methods using oil jets and oil sprays are still under development; many experiments and fluid dynamic studies have been performed on them over the years, yet to date this type of TMS is still not widely used. This is because many physical parameters and phenomena must be taken into account and, up to now, most studies have tested the effectiveness of such solutions on simple test surfaces rather than on real stator windings. Both methods rely on the direct injection of oil onto the internal parts of the motor, particularly the stator heads. In this way, direct liquid-surface contact occurs, and more effective heat transfer is ensured than in the solutions described above. The oil then falls by gravity to the lower part of the motor, where it is collected through a dedicated outlet, and is finally cooled and re-circulated. This type of system is one of the few capable of operating variably: when loads are low or the use is short, the system delivers less oil or none at all; conversely, for high loads or prolonged use, it delivers a quantity proportional to the workload.

Depending on the fluid, and thus on its properties (see Tables 1 and 3), the convective heat transfer coefficient, and with it the heat transfer effectiveness, will vary. Convection can also be natural or forced: with natural convection in air, typical values of h are 5-10 W/m²K; with forced air, h reaches 10-300 W/m²K; liquids, on the other hand, can reach 50-20,000 W/m²K [48].

This study focuses on oil jets conveyed directly at the stator windings, which appears to be a highly efficient and flexible method of transferring heat: a liquid flow directed against a surface can absorb large amounts of heat very efficiently. Compared with conventional systems, where the flow is confined in a circuit and not in direct contact, this solution provides heat transfer coefficients up to three times higher for a given maximum flow velocity. This is because of the turbulence generated by shear stresses between the jet and the surrounding air, which is carried into the boundary layer of the surface. In addition, the flow rate required by an oil injection device can be up to two orders of magnitude less than that required by systems with cooling circuits. Unlike the spray case, the oil injection mechanism relies on nozzles operating at significantly lower pressure.
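To put these coefficient ranges in perspective, the sketch below evaluates the simple convection balance q = h·A·ΔT across the values quoted above; the exchange area and temperature difference are arbitrary illustrative numbers, not design data.

```python
# Sketch: heat removable by convection, q = h * A * dT, across the typical
# coefficient ranges quoted in the text. Area and temperature difference
# are arbitrary illustrative values.
A, dT = 0.5, 60.0  # exchange area (m^2), surface-to-coolant difference (K)
for regime, h in (("natural air", 10.0), ("forced air", 300.0), ("liquid", 20000.0)):
    print(f"{regime:12s} h = {h:7.0f} W/m^2K -> q = {h * A * dT / 1e3:7.1f} kW")
```

Only liquid cooling reaches the hundreds-of-kW scale relevant to a MW-class machine, which is why direct oil cooling is pursued here.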
Figure 9a shows a dense liquid column impinging on the solid surface at the nozzle's exit. Three distinct regions can be distinguished from experimental studies when a liquid jet hits a surface. The first is that of the free jet, which develops instantaneously at the nozzle outlet and persists throughout the injection process. This region can, in turn, be subdivided into further sub-regions in which the flow takes on changing, consequential characteristics (Figure 9b). After exiting the nozzle bore, the first section is characterised by velocity, temperature, and turbulence profiles that depend on the upstream flow and, therefore, on the shape of the nozzle [50,51]. Finally, the oil jet impinges on the opposite surface and is deflected outward, as illustrated in Figure 9c.
For example, the flow from a cylindrical nozzle will have a parabolic velocity profile with moderate turbulence, whereas a thin, flat nozzle will create a flow with a flat velocity profile and low turbulence. If the velocity profile presents spatial gradients, these give rise to shear stresses on fluid 'packets' at the lateral edges of the jet. These stresses transfer momentum outward from the jet (diffusion by viscosity), entraining additional fluid and increasing the mass flow of the jet. During this process the jet loses energy, and the velocity profile widens in spatial extent while its modulus decreases along the edges of the jet. The 'core' of the liquid column is generally not affected by momentum transfer and thus forms a central zone with a higher total pressure than the rest, with a velocity along the nozzle axis (U_m) almost equal to the exit velocity at the bore (U_n). The extreme points of this zone have velocities equal to 0.95·U_n, which allows the core to be distinguished from the rest of the liquid column. The shear stresses may also expand towards the 'core' before the jet reaches the surface; the decay of the core itself then begins. Typically, this occurs at distances from the exit bore ranging from four to eight times the nozzle's diameter (or width). If the jet decays, the velocity in the central part decreases and its profile becomes similar to a Gaussian curve: the velocity profile is then fully developed [52,53].

The second region is the impact region, where the interaction between the jet and the surface produces a strong flow deceleration. The fluid thus begins to flow in a direction parallel to the solid, forming a liquid layer that grows along the impact surface. This liquid 'film' represents the third region in Figure 9c [50,52].
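The centreline-velocity picture just described can be sketched numerically; the 6·D core length and the 1/z decay law below are generic textbook approximations used for illustration, not values taken from Refs. [52,53].

```python
# Sketch of the centreline velocity described in the text: U_m stays close
# to U_n inside the potential core, then decays downstream. The 6*D core
# length (within the quoted 4-8 D range) and the 1/z decay are generic
# approximations.
import numpy as np

U_n, D = 3.0, 0.004   # nozzle exit velocity (m/s), nozzle diameter (m)
L_c = 6.0 * D         # assumed potential-core length

def centreline_velocity(z):
    """Constant velocity in the core, ~1/z decay once the core has ended."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= L_c, U_n, U_n * L_c / z)

for z_over_D in (2, 6, 12, 24):
    u = centreline_velocity(z_over_D * D)
    print(f"z/D = {z_over_D:2d}: U_m/U_n = {float(u) / U_n:.2f}")
```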
When the liquid first impacts the surface, it stagnates in a small region of the impact zone. It remains in this condition for a given period and only then begins to expand along the surface. This time frame, called the 'residence time' (t*), can vary from a fraction of a second to a few minutes, depending on the experimental conditions. At times earlier than t*, the surface temperature decreases slowly and almost at a constant rate, although there is a sudden drop at the moment of impact. At times later than t*, the liquid begins to expand, wetting the surface and consequently decreasing the temperature faster. The stagnation zone typically extends about 1.2 times the diameter of the nozzle (in the case of circular jets) [50].
Droplets break away from the liquid layer when the turbulent oil jet impacts the solid wall. This phenomenon is called 'splattering' (Figure 10a) and reduces the efficiency of the heat transfer process through liquid loss. The intensity of this event depends on the Weber number of the jet, built on the surface tension of the liquid (σ):

We = ρ·u²·D/σ

If the regime is laminar, no splattering occurs [50].

The heat transfer of a jet striking a surface is expressed by the Nusselt number (Nu), which is a complex function of many parameters:

Nu = hD/λ = f(Re, Pr, z/D, x/D, nozzle shape)

where z/D is the dimensionless distance between the nozzle and the surface and x/D is the dimensionless distance from the stagnation point. Nozzle geometry, turbulence, and jet velocity also have significant effects. By studying this dependence, it is possible to derive the value of the convective heat transfer coefficient of the jets. Analytical studies carried out on laminar jets suggest that the Nusselt number should remain approximately constant in the 'core' and decrease downstream. Furthermore, observation of Nu shows that in the stagnation zone, along the jet axis and thus in the 'core', there is a point where the heat exchange is maximum; this point also coincides with the maximum turbulence intensity. Moving away from the 'core', the heat transfer rate decreases because of the ever-lower liquid velocities. For high turbulence levels, however, this decrease stops and an increase appears [53], which persists until the drop in velocity is compensated by the increase in turbulence.

Figure 10b shows the radial variation of the heat transfer coefficient obtained in [54] by measuring the Nusselt number of jets from a cylindrical nozzle. There is a local maximum at x/D = 0.5 for all injections with nozzle-surface distance 4 < z/D < 6, while a second, smaller maximum occurs for z/D ≤ 4 at x/D equal to 2. The first peak is due to an acceleration of the radial velocity in the stagnation zone. As for the second peak, the suggested explanation is that at 1 ≤ x/D ≤ 2 there is a transition from laminar to turbulent flow; indeed, at distances x/D = 2, toroidal vortices have been observed to hit the surface [55]. At greater distances x/D, the radial velocity decreases, lowering the efficiency of the heat exchange. The aforementioned vortices, however, are present only for distances z/D ≤ 4; above this value they tend to break down into smaller-scale vortices penetrating the 'core'. This is why nozzle-surface distances greater than four show a single maximum and a bell-shaped heat transfer coefficient distribution.
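The splattering criterion above can be screened with a direct evaluation of the Weber number; the oil properties below are generic illustrative values.

```python
# Sketch: jet Weber number We = rho * u^2 * D / sigma, used in the text as
# the parameter governing splattering intensity. Values are illustrative.
rho, sigma = 850.0, 0.03   # oil density (kg/m^3), surface tension (N/m)
u, D = 2.0, 0.002          # jet velocity (m/s), jet diameter (m)

We = rho * u**2 * D / sigma
print(f"We = {We:.0f}")    # turbulent, high-We jets splatter more strongly
```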
This picture cannot be applied in the case of fully developed jets or jets with large nozzle-surface distances, as these cause turbulent flow in the stagnation zone within the wall boundary layer. In [56], it was shown that jets with z/D = 50 exhibit important ring, helical, and double-helix vortical structures [53]. The angle of impact also plays an essential role in heat transfer. A more recent study shows how much the angle of the nozzle with respect to the solid surface matters [57]: it examined how the inclination angle of the nozzle, relative to the impact surface, affects heat transfer (Figure 11). Specifically, keeping the liquid flow rate fixed, the inclination angle was varied among 45°, 60°, and 90°.
It was found that the Nusselt number decreases when the inclination angle becomes less than 90°. At smaller angles, only part of the jet impinges on the surface, and the stagnation zone and wall flow development are very different from the classical 90° injection case [57]. The 90° injection angle yields the highest Nu, and the target surface isotherms are more uniform and symmetric.

All these data are typically obtained on flat geometries of limited dimensions; the purpose of the reported study is to understand, with the help of CFD, how these behaviours change when working on non-planar geometries in a confined environment, especially when rotation of the geometries is introduced.
Design and CFD Analysis of Motor Cooling System

Research shows that a temperature increase in the electric motor negatively affects the performance of an electric aircraft. In general, an increase in operating temperature of 30 °C leads to a reduction in torque of up to 50%. Additionally, increased failure rates and a shortened life cycle reduce the overall efficiency and performance of the electric propulsion system.

PM Electrical Machine Short Description

A complex hybrid propulsion system for a regional aircraft is being developed as part of the EU-funded ORCHESTRA project. Two electric machines are under development: a generator to convert the mechanical energy of a gas turbine into electrical energy, and a motor to drive the propellers for distributed propulsion. Since the two machines have different requirements, mainly high power for the generator and low speed with high torque for the motors, one of the strategies outlined is to proceed with two separate designs. A 1 MW generator was conceived by the University of Nottingham, the leading partner of the ORCHESTRA project, which provided the architecture illustrated in Figure 12 and calculated the thermal loads reported in Table 4.

The generator is a 900 kW, 20,000 rpm PM electric machine with 48 poles. It is made up of Recoma 33 (Samarium-Cobalt) for the permanent magnets, NKM slot liner insulation material for the slot liner that keeps the surface magnets connected, Vacoflux Cobalt-Iron for the stator, Litz-type copper for the windings, steel for the crankshaft, and aluminium for the external case.

TMS Design

In a preliminary thermal analysis of the electric machine, in which simple surface finning was used with the power density of Table 4, the maximum temperature was detected at the stator windings, rising above 800 K. The design of the TMS for the current application started with the collection of global data on the power and global heat exchange coefficients of different cooling technologies. From this, it was observed that the power involved and the limited space were incompatible with classic cooling systems such as cooling jackets. It was therefore decided to focus on direct cooling with thermal oil, which remains liquid even at temperatures above 180 °C. Even if thermal oil has a specific heat and a density slightly lower than water, thanks to the potential of direct cooling and the effect of the impinging jets [58], a more efficient heat exchange is obtained both locally and globally. The review of cooling techniques and the in-depth analysis of oil jet cooling have been discussed in Sections 2 and 3, respectively.

The design of the 1 MW generator TMS was conducted to guarantee a maximum temperature of 523 K, preserve the compactness of the electric machine and TMS group, and seek the maximum specific power for the latter.
With these objectives, a TMS was developed that exploits the cooling action of oil jets. In essence, as sketched in Figure 13, the oil jets are directed mainly towards the ends of the windings, with the oil subsequently collected by gravity at the bottom of the case and recirculated through a closed cooling circuit.
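A first-cut sizing of such a closed oil loop follows from a simple energy balance, ṁ = Q/(c_p·ΔT). In the sketch below the 30 kW loss figure is a placeholder standing in for the Table 4 thermal loads, and the oil properties are generic.

```python
# First-cut oil-loop sizing from an energy balance, m_dot = Q / (cp * dT).
# The 30 kW loss figure is an assumed placeholder, not the Table 4 value.
Q_loss = 30e3      # heat to be removed (W), assumed
cp_oil = 2000.0    # specific heat of thermal oil, J/(kg K), generic
rho_oil = 850.0    # oil density, kg/m^3, generic
dT_oil = 30.0      # allowed oil temperature rise across the machine, K

m_dot = Q_loss / (cp_oil * dT_oil)   # required mass flow, kg/s
Q_v = m_dot / rho_oil * 60e3         # volumetric flow, L/min
print(f"m_dot = {m_dot:.2f} kg/s  ({Q_v:.0f} L/min)")
```

For these assumptions the result is about 35 L/min, of the same order as the flow-rate range examined below.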
Specifically, based on the geometry provided, an external case with a net internal diameter of 285 mm was assumed, leaving a gap of 17.5 mm on the radius with the external part of the stator (see Figure 14). This cavity allows the oil to flow and provide additional direct cooling, similar to a cooling jacket. The oil is conveyed inside the case by means of two circular pipes of the same diameter connected by a duct placed above the external case, since the delivery pump is connected on one side only. The cooling oil then exits through the exit duct located at the centre of the bottom of the external case, as illustrated in Figure 15. The holes for oil impingement are located on a 16 mm diameter pipeline with a circular axis on both sides. They are positioned so that the oil jets are directed against the end windings; taking into account the 48 motor poles, there are in total 96 equally spaced holes, as in Figure 16a. Two hole sizes, 2 and 4 mm, were considered for the CFD analyses. To obtain an exit velocity of at least 0.5 m/s, a total flow rate of between 9 and 36 L/min through the 96 holes is required; lower velocity values would negatively affect heat transport. Furthermore, setting the distance between the orifice and the end-winding cusp to approximately 26 mm, as shown in Figure 16b, results in H/D_hole ratios of about 12 and 6 for the two hole sizes. These values are optimal for the liquid jets to maximise the heat exchange, as seen in Section 3.2. Finally, resin to electrically isolate the copper windings from the stator is considered as a design option.
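The flow-rate and exit-velocity figures above can be cross-checked with the continuity equation, v = Q_v/(N·π·d²/4); the sketch below recovers the stated minimum exit velocity for both hole sizes.

```python
# Consistency check between total oil flow and jet exit velocity for the
# 96 injection holes described in the text.
from math import pi

N = 96                                          # number of holes
for d, Q_lpm in ((0.002, 9.0), (0.004, 36.0)):  # hole diameter (m), flow (L/min)
    Q_v = Q_lpm / 60e3                          # volumetric flow, m^3/s
    v = Q_v / (N * pi * d**2 / 4.0)             # mean exit velocity, m/s
    print(f"d = {d * 1e3:.0f} mm, Q = {Q_lpm:4.1f} L/min -> v = {v:.2f} m/s")
```

Both combinations return approximately 0.5 m/s, consistent with the requirement stated above.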
Simulations Setup

Once the TMS was preliminarily designed, verification and analysis were performed using Ansys Fluent® as the CFD tool. The CFD simulations were conducted in steady-state conditions, with the properties of the solid materials and the oil kept constant with temperature to reduce the computational cost. In these calculations, the interface between fluid dynamics and structure was treated from a thermal point of view; the energy equation is activated to account, among other things, for the heating due to friction in the air gap, while the turbulence model used is k-ε [59-61] with standard wall functions. The CFD analyses do not consider the mechanical connections between the external case and the generator structure, assuming they would not interfere with cooling. Full three-dimensional analyses were performed, considering the effects of both gravity and the non-symmetry of the system (for example, the presence of a single exit duct at the bottom of the case, and rotation). Further details about the simulation setup are provided in the following subsections.

Governing Equations and Mesh Details

TMS analysis of the investigated electric machine requires the resolution of a conjugate conductive-convective problem.
Reynolds-Averaged Navier-Stokes (RANS) equations and turbulence models are applied to describe the fluid flow evolution using a coupled implicit approach, adopting the κ-ε two-equation model for the turbulence fluctuations. In fluid regions, in particular, the mean-flow and turbulence quantities are governed by transport equations in dimensional form.
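For reference, the κ-ε two-equation model named above solves, alongside the mean-flow RANS equations, turbulence transport equations of the following textbook form (the exact variant and constants used in the study may differ):

$$\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_i)}{\partial x_i} = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon$$

$$\frac{\partial(\rho\varepsilon)}{\partial t} + \frac{\partial(\rho\varepsilon u_i)}{\partial x_i} = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial\varepsilon}{\partial x_j}\right] + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}$$

where $P_k$ is the turbulence production term, $\mu_t = \rho C_\mu k^2/\varepsilon$ is the eddy viscosity, and the standard constants are $C_\mu = 0.09$, $C_{1\varepsilon} = 1.44$, $C_{2\varepsilon} = 1.92$, $\sigma_k = 1.0$, $\sigma_\varepsilon = 1.3$.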
The above equations are discretised using a finite volume formulation and solved by the FLUENT® COUPLED algorithm associated with a well-assessed algebraic multigrid model. In conjunction with this, second-order upwind schemes were adopted to discretise the spatial domain. All grids were generated considering the near-wall requirements of RANS calculations. In solid regions, FLUENT solves the corresponding energy (heat conduction) transport equation in dimensional form.

Regarding the grid generation, five prismatic layers were generated on each wall in the fluid domain. To improve the accuracy of the heat transfer calculation in the air gap, the mesh in the radial direction was built with ten layers, five on the rotor wall and five on the stator wall, plus two or three layers of tetrahedra. The solid domains, such as the windings and the parts in ferromagnetic materials, were discretised using tetrahedra, refining the mesh in the zones where a greater temperature gradient was forecast. Figure 17a,b shows views of the mesh at z = const. and at x = const., in which the clustering is noticeable near the air gap and the end windings, where higher temperature gradients are expected. Figures 18 and 19 show close-ups of the stator, teeth, windings, and end-winding.

Tables 5 and 6 summarise the boundary conditions and mesh characteristics. A mesh independence study was conducted by creating three levels of mesh refinement, based on 8, 15, and 30 million cells, respectively. The main interest of the CFD analysis was to identify the maximum temperature reached in the solid zone; it was observed that the difference in the maximum temperature computed with the different grid levels did not exceed 2%. The intermediate level was therefore considered a good compromise between computational effort and accuracy of the results.
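Since standard wall functions are used, the first cell centre should fall in the log-law region (y+ of roughly 30-300). The sketch below estimates the corresponding first-cell height from a flat-plate skin-friction correlation; all flow values are illustrative and are not the actual mesh settings of Tables 5 and 6.

```python
# Sketch: first-cell height for a target y+ compatible with standard wall
# functions, using the flat-plate estimate Cf = 0.026 / Re_x^(1/7).
rho, mu = 850.0, 0.010   # oil properties: density (kg/m^3), viscosity (Pa*s)
U, L = 2.0, 0.05         # reference velocity (m/s) and length (m), illustrative
y_plus = 30.0            # target y+ at the first cell centre

Re_x = rho * U * L / mu
Cf = 0.026 / Re_x**(1.0 / 7.0)    # skin-friction coefficient estimate
tau_w = 0.5 * Cf * rho * U**2     # wall shear stress, Pa
u_tau = (tau_w / rho)**0.5        # friction velocity, m/s
y1 = y_plus * mu / (rho * u_tau)  # first-cell height, m
print(f"Re_x = {Re_x:.0f}, u_tau = {u_tau:.3f} m/s, y1 = {y1 * 1e3:.2f} mm")
```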
The positioning of the various solid materials is schematically represented in Figure 20a; the details of the electrical machine with 48 slots, and the filling of these slots with copper wires, are shown in Figure 20b. Table 7 lists the thermophysical properties of the materials used in setting up the CFD model.

Table 8 shows the test matrix and the setup of the boundary conditions for the various configurations; in particular, the diameter of the holes, the rotation speed of the motor, the oil inlet velocity, and the influence of the epoxy resin, which either fills or does not fill the stator slots (isolating them from the windings), were varied. The resin-free configuration is to be understood as using a thin insulating layer on the windings while still allowing the presence of a cavity in the stator slot, which should positively influence cooling. This configuration is intended as an improvement, even though it requires more machining and more expensive resins.
CFD Results

Figures 21 and 22 show the temperature maps, velocity fields, wall shear stresses, and heat transfer coefficient over the computational domain for the Run 4 and Run 10 cases, respectively. These two cases are to be considered realistic configurations, differing in the delivered oil flow rate and in the diameter of the oil orifices. In both cases, the maximum temperature is significantly lower than the design limit (523 K), which also provides a safety margin on the maximum temperature, given some simplifications of the analyses, such as the neglected thermal power generated by the bearings.

The maximum temperature is reached on the windings: on the front areas there is maximum cooling efficiency, while the midsection shows the maximum operating temperature because the oil has difficulty reaching that area. Nonetheless, this value is comfortably below the limit. The same effect is seen on the rotor (Figure 21c) due to heat transport from the stator windings through the air gap. Figure 21e shows a front section taken at z/2 of the solid domain of the motor: a reduction in the temperature of the teeth near the air gap is visible, owing to the Taylor convective motions generated by the rotation of the rotor. The same trends can be noticed in Figures 21b and 22b.

Figure 22 summarises the results of the Run 10 case, to which the same considerations described previously also apply. The lack of symmetry stems from the fact that the fluid dynamic field is not symmetrical, owing to the presence of the exit hole and to the rotor's rotation speed.

Another important effect to note is the influence of the air gap on cooling and, in particular, the effect of the EM rotation on the fluid dynamic field induced in this restricted area. Figure 26a,b shows the maps of the HTC on the rotor and of the velocity in the air gap, respectively. In this area, heat is transferred by convection from the stator windings towards the rotor. Additional losses are generated by friction and, therefore, by the shear stresses in the cavity. Mechanical losses can be divided into friction losses and windage losses. Friction losses depend on speed and occur, for example, in bearings. Ventilation losses occur in electric motors with non-round rotors and also depend on speed; switched reluctance motors (SR motors) or separately excited synchronous motors, for example, have rotors that are not round.
Windage loss in the air-gap is related to the rotor movement, which creates tangential velocity components; the friction of the rotating air against the surfaces and between its own fluid layers creates significant heat dissipation. These losses are estimated with the following:

P_air-gap = k_r · C_M,air-gap · π · ρ_fluid · ω³ · r_rot⁴ · L_air-gap

where k_r is the roughness coefficient, r_rot is the rotor radius, L_air-gap is the length of the air-gap, and C_M,air-gap is the friction coefficient in the air-gap, a function of the air-gap thickness e and of the rotational Reynolds number relative to the air-gap,

Re_air-gap = ρ_oil · ω · r_rot · e / µ_oil

considering the rotor speed ω.

In Figure 26a, stripes of higher WSS alternate, as they do for the HTC. This effect is due to alternating vortices in the air-gap and to the Couette-Taylor motion induced by the rotation; it improves the heat exchange locally, especially in the central area, where the oil has more difficulty entering directly.
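A minimal sketch of the windage-loss relation above follows; the friction coefficient C_M is entered as a plain placeholder, since its correlation in terms of Re_air-gap and the gap geometry is not reproduced here, and the geometric values are illustrative rather than those of the actual machine.

```python
# Sketch: air-gap windage loss,
# P = k_r * C_M * pi * rho * omega^3 * r_rot^4 * L_air_gap.
from math import pi

k_r = 1.0                          # roughness coefficient (smooth rotor), assumed
C_M = 0.005                        # air-gap friction coefficient, placeholder
rho = 1.2                          # density of the gap fluid (air assumed), kg/m^3
omega = 20000.0 * 2.0 * pi / 60.0  # rotor speed: 20,000 rpm in rad/s
r_rot = 0.10                       # rotor radius (m), illustrative
L_gap = 0.15                       # air-gap axial length (m), illustrative

P_gap = k_r * C_M * pi * rho * omega**3 * r_rot**4 * L_gap
print(f"omega = {omega:.0f} rad/s -> windage loss = {P_gap:.0f} W")
```

The cubic dependence on ω is what makes these losses non-negligible at 20,000 rpm and limits the cooling benefit of rotation at the highest speeds, as noted in the conclusions.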
Conclusions and Remarks

In general, better cooling translates into greater efficiency, longer life, and lower associated costs of electrical components. The design of the cooling system must be considered as early as possible and should be given the same importance as structural and electromagnetic issues.

This research study presents the TMS designed by the Centro Italiano Ricerche Aerospaziali (CIRA) team, intended to equip the PM electric machine developed by the University of Nottingham (UNOTT) as part of the ORCHESTRA project. The description of the electric machine and the definition of the requirements were therefore the starting point for designing an effective TMS. In addition to maintaining the correct operating temperature, the TMS is designed to add as little weight as possible, so that the aircraft's overall performance is not penalised by the parasitic drag that would result from compensating for any additional weight.

Diathermic oil jets directed at the end windings of the electric machine were chosen as the cooling technique for a more compact and efficient TMS. A preliminary CFD analysis using state-of-the-art tools allowed the heat map of the electric machine to be obtained, which was useful for defining the baseline configuration of the TMS to be designed. The identified configuration introduces coolant on each side of the generator, with 48 orifices per side spraying oil directly onto the end windings, which represent the most critical surface to be cooled. Cooling is symmetrical along the motor axis thanks to the injection on both sides. The plane orthogonal to the motor axis at the centreline is a critical area for heat dissipation; still, the simulations showed that the maximum temperature in that area remains below the design limits.

Different oil injection solutions were analysed to verify the influence on cooling of the flow rate, the rotation speed, and the temperature of the inlet oil (9 L/min < Q_v < 36 L/min, 25 °C < T_oil < 70 °C, 0 < ω < 20,000 rpm). The results show a dependence of the global cooling performance on the flow rate, while increasing the oil inlet temperature increases the average temperature of the solid domain by the same amount. A significant effect, however, is due to the rotation speed: in our full three-dimensional (3D) RANS analyses, considering a Moving Reference Frame for the rotor, rotation generates a distortion of the flow field that is favourable for cooling. In fact, in Table 9, the maximum temperature difference between Run 1 and Run 2 (the conditions of the CFD runs are defined in Table 8) is a decrease of approximately 70 K. The influence of the rotation speed is significant up to 10,000 rpm and then decreases for higher speeds, as can be seen from Run 11 and Run 11_bis; this is because, at higher speeds, the increase in shear stresses generates significant frictional heat, which limits their effectiveness. The maximum operating temperature of the motor is below the imposed limit of 523 K in all the realistic conditions examined; theoretical conditions, such as those with a stationary rotor, were considered only to verify the influence of the various parameters. Using the designed cooling system, a power-to-weight ratio of approximately 14 kW/kg is obtained at the motor nominal power; at the peak power, which many catalogues adopt as a reference, it is possible to reach 20 kW/kg while still keeping the temperature below the limit with large safety margins.

Figure 1. Typical scheme of cooling: (a) liquid cooling system for an automotive engine; (b) air cooling system for a piston engine.

Passive systems: such cooling systems use a fluid moving in a closed case to cool down the equipment. There are three different typologies of passive systems [21]: heat pipes, in which the refrigeration fluid is heated by the heat source and changes phase (from liquid to vapour), absorbing heat, after which the vapour moves from the hot to the cold zone, condensing and releasing heat outside; thermosyphons, similar to heat pipes but relying on gravity and natural convection; and vapour chambers, flat heat pipes that transfer heat in 3-D.

Figure 8. Schematic example of heat paths inside an electric machine.
Figure 9. (a) Image of laboratory tests on oil jets (Reprinted/adapted with permission from Ref. [49]; Copyright © 2015 by ASME); (b) development of the fluid jet in the free jet region; (c) sliding region along the solid wall.

Figure 10. (a) Representation of the impact of an oil jet resulting in the splattering phenomenon [50]; (b) radial variation of the heat transfer coefficient; the curves are parameterised by the z/D ratio [51].

Figure 11. Representation of different jet angles impinging on a heated surface.

Figure 12. Electric machine design: (a) electrical machine dimensions under investigation; (b) PM motor segment components.

Figure 13. Oil cooling architecture schematic: (a) configuration selected for the electric machine; (b) oil cooling by multiple orifices; (c) detail of the circular ring needed for the oil jets.

Figure 15. CAD of the EM's outer case: (a) isometric view; (b) bottom view with exit hole; (c) top view.

Figure 16. CAD inside the EM: (a) end-winding with oil jet hole positions; (b) top view of the stator with axial dimensions and distance to the oil holes.

Figure 17. Details of the mesh, in the case with resin between the windings and d = 2 mm: (a) section at z = const.; (b) section at x = const.
Figure 20. CFD domain and materials association: (a) detail of a stator slot; (b) representation of slot shape and conductor.

Figure 21. Contour maps of the CFD domain, EM-Run 4.

Figure 22. Contour maps of the CFD domain, EM-Run 10: (a) temperature and streamlines at x = const.; (b) velocity magnitude; (c) temperature on windings; (d) temperature on rotor; (e) temperature at section z = const.; (f) WSS on rotor; (g) HTC on complete stator; (h) velocity magnitude at section z = const.

Figures 23 and 24 show the trends, on the curvilinear abscissa, of the convective heat exchange coefficient along the coil profile of a winding. In particular, two symmetrical coils are considered (indicated as upper and lower), one located near the oil outlet hole, Y_min (lower), and the other symmetrically above it, Y_max (upper).

Figure 23. Trend of the HTC on two stator windings, at the top and bottom of the oil outlet, for all test cases analysed.
Figure 24. Trend of the HTC on two stator windings, above and below the oil outlet, for the test cases with resin-filled stator slots and active rotation.

Figure 25. Maximum temperature value for the various test cases.

Figure 26. Detailed contour maps, Run 10: (a) WSS on the rotor, with alternating striations due to the Couette-Taylor motion in the air-gap; (b) velocity field on a section along the axis, air-gap detail.

Table 1. Heat transfer coefficient vs. cooling system.

Table 2. Flow resistance coefficients from the literature.

Table 3. Typical cooling fluids used in electric motors.

Table 4. Thermal loads of the current electric machines.

Table 5. Reviewed boundary conditions of the CFD cases.

Table 6. Number of cells for the various test cases.

Table 7. Thermophysical properties of the solid materials of the electric machine.

Table 8. Test matrix of the CFD analyses.
Insight into the Surface Properties of Wood Fiber-Polymer Composites

The surface properties of wood fiber (WF) filled polymer composites depend on the filler loading and are closely related to the filler distribution and orientation in the polymer matrix. In this study, wood fibers (WF) were incorporated into thermoplastic composites based on non-recycled polypropylene (PP) and recycled polypropylene (R-PP) by melt compounding and injection moulding. ATR-FTIR (attenuated total reflection Fourier transform infrared spectroscopy) measurements clearly showed that WF functional groups appear at the surface layer of the WF-PP/WF-R-PP composites only at higher WF loadings, from about 30% upwards. Optical microscopy and nanoindentation confirmed that the skin layer of the WF-PP/WF-R-PP composites becomes thinner with increasing WF addition; the thickness of the skin layer was mainly influenced by the WF loading. The effect of WF addition on modulus and hardness, at least at 30 and 40 wt.%, differs between the PP and R-PP matrices. Surface zeta potential measurements, in turn, show increased hydrophilicity with increasing amounts of WF. Moreover, the WF in the PP/R-PP matrix is also responsible for the antioxidant properties of these composites, as measured by the DPPH (2,2′-diphenyl-1-picrylhydrazyl) assay.

Introduction

The invention of the first plastics took place in the 19th century, and the development of new plastics continues. Plastics came to dominate material consumption over the last century and are now present at every level of our daily lives. To date, various types of plastics have been developed, divided into thermoplastics, thermosets, polyurethanes and elastomers [1,2]. The production of plastics increased from 2 million tons in 1950 to 380 million tons in 2015 and continues to grow worldwide [3-5]. The vast majority of these plastics are made from fossil hydrocarbons, are not biodegradable, and accumulate in the environment over time [4], with negative consequences for the entire planet. Between 5 and 13 million tons of plastics, or 1.5 to 4% of global plastic production, end up in the oceans every year. This environmentally damaging accumulation of plastics harms plants, animals, and industries such as tourism, fishing, and shipping. The production of plastics and the accumulation of plastic waste cause about 400 million tons of CO2 emissions per year worldwide [1]. Accordingly, there is a strong trend toward developing more biodegradable plastics. Plastic composites with added biodegradable components are the most common approach and are now in vogue. It is important to add components that retain the useful properties of the plastic, such as its mechanical properties, while increasing the level of biodegradability. Natural fibers as reinforcing materials in polymer composites have attracted much attention due to their renewability, low cost and low density.

Our study confirms interesting findings from previous work that untreated raw WF acts as a nucleating agent that provides a new crystalline β-phase of PP, resulting in an increased degree of crystallinity of the WF-PP composites, which may lead to improved specific final surface properties of the material. Moreover, the surfaces of the WF-PP and WF-R-PP composites depend on the WF loading, develop a skin-core structure, and show enhanced antioxidant properties as well as mechanical reinforcement of the WF-PP composite.
At higher WF concentrations, a smaller pure-PP interface is found at the surface, i.e., the diminishing skin-core effect shows up as higher relative modulus and hardness. It has also been shown that the anionic nature of the composite (at the plateau level in the alkaline range) decreases with increasing amount of added WF, which translates into a more hydrophilic character that directly affects the antioxidant activity of the surface.

PP and Wood Fiber

As reported in our previous work [27], two different matrix materials were used for the preparation of the wood fiber-reinforced PP composites: (i) polypropylene (PP) from Braskem and (ii) recycled polypropylene (R-PP) obtained from MEPOL. Both matrix materials were reinforced with wood fibers obtained from the wood processing company MLINAR d.o.o. as a side product of plywood grinding. The wood fibers consisted of spruce and pine wood (roughly 20-80%).

Wood-PP Composite Material Preparation

Wood PP/R-PP composites were prepared by melt compounding WF and PP/R-PP without adding processing additives. Prior to extrusion, the WF and polypropylene were dried at 90 °C for 2 h in a ventilated oven. WF and PP/R-PP were mixed and fed to the feed unit of the twin screw extruder (PolyLab HAAKE Rheomex PTW 16, Thermo Haake). To ensure a uniform distribution of the fibers in the polymer matrix, all materials were extruded four times; after each extrusion, the material was pelletized and fed to the feeder again. The WF-PP and WF-R-PP composites were prepared with different mass fractions of WF, ranging from the pure polymer (matrix) material to a maximum wood content of 40 wt%. In total, six materials with PP and six materials with recycled PP were produced. An overview of the materials produced, with the names given to the individual blends, is shown in Table 1.

A melt temperature of approximately 190 °C was used for all materials and all four extrusion cycles, which required adjusting the temperature along the length of the extruder barrel depending on the material extruded. In all cases, the screw speed was set to 80-90 rpm and the feed stage to 70-80. The dumbbell-shaped samples were prepared following the ISO 3167:2014 standard (sample type 1B). Prior to injection moulding, the granulated material was dried at 80 °C. Dumbbell samples were prepared using an injection moulding machine (Arburg, Germany) with 15 t of closing force. For the injection phase, a pressure between 700 and 900 bar and a melt temperature of 190 °C were used, while the temperature of the mould was set to 20 °C. The injection step was followed by backpressure steps of 500 bar and 100 bar.
The last step was cooling in the mould for 12 s. The whole production cycle for one sample lasted 18 s. Figure 1 shows the injection moulded samples of each material.

As specified, the surface tests were made on the dumbbell-shaped samples seen in Figure 2, in an area closer to the injection gate, indicated with a dotted line in Figure 2. The samples of the WF-PP/WF-R-PP composites used for the antiradical assay were prepared using a custom-made spherical cylinder with a radius of 12 mm and a height of 2.5 mm, processed under the same conditions as the dumbbell-shaped samples on the same injection moulding machine (Arburg, Germany, 15 t clamping force; a pressure between 700 and 900 bar and a melt temperature of 190 °C, with the mould temperature set at 20 °C; the injection step was followed by backpressure steps of 500 bar and 100 bar, and the last step was cooling in the mould for 12 s).
Particle Size of Wood Fibers

The volume size and size distribution of the WF were characterized in our previous work [27] by laser diffraction using a Particle Size Analyzer (PSA 1190, Anton Paar, Austria). To measure the WF properties, a WF-water mixture was first prepared and sonicated for 30 s to break up any agglomerates. The mixture was then measured for 10 s. Seven replicates were performed, and the average size distribution was determined.

X-ray Diffraction Analysis of the Dumbbell-Shaped Samples

X-ray diffraction (XRD) measurements with a PANalytical PRO MPD diffractometer, using a Cu Kα radiation source at 40 kV, were used to determine the crystalline structure of the WF-PP/WF-R-PP composites. The X-ray diffraction patterns were recorded over the 2θ range from 10° to 40° at a scan rate of 10°/min (λ = 0.154 nm).

ATR-FTIR Spectroscopy

To provide detailed information on the surface chemical structure of the WF-PP and WF-R-PP composites produced by the extrusion and injection moulding process, the ATR-FTIR spectra were recorded on a Perkin Elmer Spectrum GX NIR FT-Raman instrument, with the analyzed sample surface in contact with a diamond crystal. Each spectrum was acquired as an average of 32 scans at a resolution of 4 cm−1 (with a separate background measurement), in the wavenumber range from 400 to 4000 cm−1 at room temperature. All spectra were baseline corrected and smoothed after the measurements. To estimate the changes in the C-O groups of the WF-PP/WF-R-PP composites, the individual spectra (from 5 to 40% WF addition) were baseline corrected and the peak area was calculated by fitting appropriate mathematical models (Gaussian terms).

Microstructure Characterization

The microstructure of the injection moulded samples was examined on a transversal cross-section with an optical microscope (Nikon Epiphot 300, Tokyo, Japan) equipped with a system for digital quantitative image analysis (Olympus DB12 and the software program Analysis). Before metallographic preparation, the samples were positioned using metal clamps and carefully cold mounted in epoxy resin. Grinding was performed using SiC paper P400, followed by polishing with 9- and 3-micron diamond suspensions and, in the final step, 0.05-micron colloidal alumina. In all polishing steps a micro-cloth was used, which was wetted prior to applying the polishing agent for additional lubrication. The samples were prepared on an automated grinder/polisher using clockwise rotation at 150/40 rpm and a force of 27 N.

Zeta Potential Measurements

For the surface zeta potential analysis with the SurPASS-3, two sample pieces of approx. 10 mm × 9 mm were cut for the streaming potential measurement, as indicated by the dotted lines in Figure 2. The sample pieces were fixed on the sample holders of the Adjustable Gap Cell (with a cross-section of 20 mm × 10 mm) using double-sided adhesive tape and were aligned opposite each other, such that the maximum surface area of the samples overlapped. The distance between the sample pieces was adjusted to 105 ± 5 µm. Streaming potential measurements were performed using an aqueous 0.001 mol/L KCl solution as the background electrolyte. The pH dependence of the surface zeta potential was determined by adjusting an initial pH of 10 using 0.05 mol/L KOH and reducing the pH automatically by dosing 0.05 mol/L HCl.
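The conversion from the measured streaming potential data to a surface zeta potential follows the Helmholtz-Smoluchowski relation. The sketch below illustrates this step under stated assumptions: the pressure ramp and voltage readings are hypothetical, and textbook values are assumed for the viscosity, permittivity, and conductivity of the dilute KCl electrolyte; the SurPASS-3 instrument performs this calculation internally.

```python
# Sketch: zeta potential from streaming potential data (Helmholtz-Smoluchowski).
# All numeric inputs are illustrative assumptions, not data from this study.
import numpy as np

def zeta_from_streaming(dU_dp, viscosity, permittivity, conductivity):
    """Helmholtz-Smoluchowski: zeta = (dU_str/d(dp)) * eta * kappa / eps."""
    return dU_dp * viscosity * conductivity / permittivity

# Hypothetical pressure ramp and streaming potential readings:
dp = np.array([100, 200, 300, 400, 500]) * 1e2                # mbar -> Pa
U_str = np.array([-8.1, -16.3, -24.2, -32.5, -40.4]) * 1e-3   # V

# Linear fit gives the coupling coefficient dU_str/d(dp) in V/Pa.
coupling, _ = np.polyfit(dp, U_str, 1)

eta = 0.89e-3            # Pa*s, water at 25 C
eps = 78.4 * 8.854e-12   # F/m, relative * vacuum permittivity
kappa = 0.0147           # S/m, approx. conductivity of 0.001 mol/L KCl

zeta = zeta_from_streaming(coupling, eta, eps, kappa)
print(f"zeta = {zeta * 1e3:.1f} mV")
```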
Nanoindentation

Nanoindentation measurements of elastic modulus and hardness were performed using a G200 Nanoindenter (Agilent, Santa Clara, CA, USA) equipped with an XP head and a Berkovich indenter tip geometry. The Continuous Stiffness Measurement (CSM) methodology was used to obtain properties throughout the selected indentation depth [19]. A harmonically oscillating tip with a frequency of 45 Hz and an amplitude of 2 nm was pushed into the material. Based on the dynamic model of the whole measuring system, the sample properties were calculated continuously up to a penetration depth of 3900 nm. Measurements were performed on the sample surface (Figure 2) with 36 (6 × 6) indents per sample and a distance of 150 µm between indents. For the analysis, all measurements were averaged.

The DPPH• Assay

The antioxidant activity of the WF-PP and WF-R-PP composites was measured using DPPH• (2,2′-diphenyl-1-picrylhydrazyl) (Sigma Aldrich, France). The method is based on the reduction of the DPPH• radical, which is analyzed spectrophotometrically at a wavelength of 515 nm (Agilent Cary 60 UV-Vis spectrophotometer) (Figure 3). The DPPH• organic radical is reduced in the presence of an antioxidant (AO), with a consequent decolorization from purple to yellow. The antioxidant capacity can thus be determined from the decrease in absorption at 515 nm. The DPPH solution was prepared in methanol (8.1 × 10−5 mol/L) [32,33]. The WF-PP/WF-R-PP composite sample disks were immersed directly into 3 mL of the methanolic DPPH• solution. The scavenging capability was determined immediately and after 100 min, 200 min and 300 min. The percentage of radical scavenging activity at 515 nm was calculated using Equation (1):

AA (%) = ((A_Control − A_Sample)/A_Control) × 100 (1)

where A_Control is the absorbance measured at the starting concentration of DPPH• and A_Sample is the absorbance of the remaining concentration of DPPH• in the presence of the WF-PP or WF-R-PP composite polymer.
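To make Equation (1) concrete, the following minimal sketch computes the radical scavenging activity from absorbance readings; all absorbance values are hypothetical placeholders, not data from this study.

```python
# Sketch: DPPH radical scavenging activity per Equation (1).
# Absorbance values below are hypothetical placeholders.

def radical_scavenging_activity(a_control: float, a_sample: float) -> float:
    """AA (%) = (A_Control - A_Sample) / A_Control * 100."""
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.820  # absorbance at 515 nm of the starting DPPH solution (assumed)
# Hypothetical absorbances after 100, 200 and 300 min of immersion:
for t_min, a_sample in [(100, 0.71), (200, 0.62), (300, 0.55)]:
    aa = radical_scavenging_activity(a_control, a_sample)
    print(f"AA_{t_min} = {aa:.1f} %")
```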
Wood Fibres

In our previous work [27] it was reported that the average size of the WF fillers was about 100 µm. A few WF were smaller than 10 µm while, on the other hand, the diameter of some of the largest particles was about 500 µm.

Surface Properties of Injection Moulded Samples of Wood Fibre-Polypropylene Composites

The X-ray diffraction patterns show the crystallization of PP/R-PP in the presence of WF (Figure 4a,b). The diffraction peaks of the R-PP-WF composites known for the α-crystalline phase were determined at 2θ angles of 13.9° (110), 16.8° (004), 18.5° (130), 21.4° (111) and 28.6° (200), respectively. The results for the PP- and R-PP-matrix WF composites show that there is no difference in the diffraction peaks and no evidence of a phase transformation. In our previous study [27], XRD measurements revealed a changed crystallographic structure of PP/R-PP; it was clarified that the presence of WF, with its nucleating ability, leads to the formation of different β-form and γ-form phases in PP. In contrast, in the present study all WF-PP/WF-R-PP composite samples preferentially followed the α-monoclinic PP formation.
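As an aid for interpreting the reported diffraction angles, the sketch below converts the 2θ positions of the α-phase reflections into lattice d-spacings via Bragg's law, using the Cu Kα wavelength given in the methods; the peak list simply restates the values from the text.

```python
# Sketch: d-spacings of the reported alpha-PP reflections via Bragg's law,
# n*lambda = 2*d*sin(theta), with n = 1 and the Cu K-alpha wavelength above.
import math

WAVELENGTH_NM = 0.154  # Cu K-alpha, as stated in the XRD methods

def d_spacing(two_theta_deg: float, wavelength_nm: float = WAVELENGTH_NM) -> float:
    """First-order Bragg d-spacing in nm for a given 2-theta in degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta))

# 2-theta positions reported for the alpha-crystalline phase of PP:
for hkl, two_theta in [("110", 13.9), ("004", 16.8), ("130", 18.5),
                       ("111", 21.4), ("200", 28.6)]:
    print(f"({hkl}) 2theta = {two_theta:5.1f} deg -> d = {d_spacing(two_theta):.3f} nm")
```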
The surface chemical composition of the composites was monitored by ATR-FTIR analysis. The spectra of the injection moulded samples with different additions of WF (from 5%, WF-PP-5, to 40%, WF-PP-40), as well as of pure PP and WF, are shown in Figure 5. As can be seen from Figure 5, the most prominent characteristic fingerprint band vibration at 1030 cm−1 (associated with the stretching vibrations of different groups in carbohydrates) appeared in the sample with the highest proportion of WF, i.e., 40%. Figure 5 shows the ATR-FTIR spectra of unrecycled PP (PP), wood fibre (WF) and the wood fibre-polypropylene composites. The characteristic bands of PP and R-PP at 2919 cm−1 correspond to the asymmetric CH2 stretching vibration, the band at 2956 cm−1 to the asymmetric CH3 stretching vibration and that at 2875 cm−1 to the symmetric CH3 stretching vibration. The characteristic peaks at 1452 cm−1 and 1374 cm−1 were assigned to -CH3 and -CH2- bending and rocking vibrations. In the "fingerprint" region of PP and R-PP, the spectra contain several bands reflecting PP tacticity, which includes the isotactic, syndiotactic and atactic forms; typical characteristic peaks for isotactic PP are found in the region below 1000 cm−1 [34,35]. The characteristic vibration bands of WF (red curve) are also shown in Figure 5. WF are composed of cellulose, hemicellulose and lignin [36,37]. Since the wood structure is very complex, the ATR-FTIR spectra were divided into two regions. The first region, from 3800 cm−1 to 2700 cm−1, includes the OH and CH stretching vibrations. A strong broad peak is seen at 3345 cm−1, assigned to various O-H stretching vibrations, while the characteristic peak at 2919 cm−1 is related to the asymmetric and symmetric methyl and methylene stretching vibrations of the groups present in wood. The second region, from 1800 cm−1 to 800 cm−1, is known as the "fingerprint" region for the various functional groups of the wood structure and consists of several bands related to it. A characteristic peak associated with the C=O stretching vibration in lignin and hemicellulose was observed at 1734 cm−1. The characteristic bands at 1508 cm−1 and 1263 cm−1 were determined to be C=C and C-O stretching or bending vibrations of the groups in lignin. The bands at 1452 cm−1, 1421 cm−1 and 1371 cm−1 were assigned to C-H and C-O deformation, bending or stretching vibrations of lignin groups and carbohydrates. The bands at 1023-1051 cm−1 were assigned to C-O deformation in cellulose, symmetric C-O-C stretching of dialkyl ethers and aromatic C-H deformation in lignin [38,39], generally for the carbohydrates from WF. We see that there are no significant differences between the pure PP material and the composites up to a concentration of 30% by weight of added WF. Since we measure the properties at the surface, this indicates that a layer of (only) PP has formed at the surface of the sample.
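The gradual emergence of the WF bands can be followed quantitatively by fitting the 1030 cm−1 band, as done with Gaussian terms in the analysis that follows. A minimal sketch of such a peak-area fit is given below; the spectrum is synthetic and stands in for the measured, baseline-corrected ATR-FTIR data.

```python
# Sketch: Gaussian fit of the 1030 cm^-1 carbohydrate band to estimate its
# area, mirroring the peak-area analysis in the text. The "spectrum" here is
# synthetic; real baseline-corrected ATR-FTIR data would be used instead.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma):
    return amplitude * np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

# Synthetic absorbance around the 1030 cm^-1 band (assumed shape + noise).
wavenumber = np.linspace(950, 1110, 200)
rng = np.random.default_rng(0)
spectrum = gaussian(wavenumber, 0.25, 1030.0, 12.0) \
    + rng.normal(0, 0.005, wavenumber.size)

popt, _ = curve_fit(gaussian, wavenumber, spectrum, p0=[0.2, 1030.0, 10.0])
amplitude, center, sigma = popt
area = amplitude * abs(sigma) * np.sqrt(2.0 * np.pi)  # analytic Gaussian area
print(f"center = {center:.1f} cm^-1, area A = {area:.3f} (a.u.)")
```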
At WF contents below 30%, the spectra of the WF-R-PP composites likewise showed only the R-PP characteristic bands, and so it could be suggested that only R-PP is oriented in the upper layer of the material. Only the injection moulded samples of the WF-R-PP composites with the highest WF contents (i.e., 30 and 40%) showed characteristic band vibrations at 1030 cm−1, indicating C=O, C-H, C-O-C and C-O deformation or stretching vibrations of the different functional groups in the carbohydrates of WF origin.

We also analyzed the ATR-FTIR data critically by fitting mathematical models (Gaussian terms) to the 1030 cm−1 band to follow the presence of the WF functional groups on the surface of the WF-PP and WF-R-PP composites. Figure 7 shows the dependence of the area of the peak at 1030 cm−1, representing the stretching vibration of the C-O and C-O-C groups, on the WF loading of the WF-PP and WF-R-PP composites. The spectral analysis showed that significant spectral differences occur at the surface of WF-PP and WF-R-PP: the characteristic peak at 1030 cm−1 increased with increasing WF loading in the WF-PP/WF-R-PP composites. The results for A (area) were plotted as a function of the WF loading. Based on this fit, there appears to be a broadening of this peak up to 30% addition of WF to PP or R-PP, indicating the increasing presence of WF functional groups on the surface of the WF-PP and WF-R-PP composites. We also detected the signals of these band contributions, as indicated in Figures 5 and 6, much more strongly in WF-PP-30, WF-PP-40, WF-R-PP-30 and WF-R-PP-40 than in PP or R-PP.

Comparison of the ATR-FTIR spectra of the WF-PP and WF-R-PP composite samples at the surface layer indicates the same PP orientation behavior in both composites: PP was oriented by the injection moulding process, preferentially at the surface. This process is known from the literature and is defined as the skin-core orientation of the PP polymer [28,29,40-45]. In the literature, the relationship between the thickness of the polymer surface skin layer and the injection speed and temperature has been clearly studied. At a lower speed, the thickness of the skin is higher because the melt front of the material has more time to relax and is more prone to cooling. At lower mould or melt temperatures the skin layer likewise increases.
At the highest injection rate, the mould and melt temperatures have a negligible effect on the skin thickness [46]. In the skin-core process, the material enters with a parabolic flow profile, accompanied by stretching, deposition on the wall and the formation of an immobile frozen layer. The top layer of molten material flows within this shell, and the thin region of the extended layer is called the skin. The core region, on the other hand, solidifies last and relaxes as it cools. The hierarchical structure is therefore commonly referred to as the skin-core structure and, in thermoplastic composites, involves a phase behavior for the dispersed phase and a crystalline or oriented structure hierarchy for the matrix. Only at 30% and 40% WF addition are some of the fibers clearly oriented and present on the composite surface, as evidenced by the identification of the functional groups present in wood (cellulose and hemicellulose as carbohydrates, and lignin). The relationship between the WF distribution in PP/R-PP and the skin phenomenon was further confirmed by the light microscopy images (Figures 8 and 9).
The main objective of this part of the study was to determine the microstructure of the WF-PP/WF-R-PP composites in the skin-core region and to investigate the evolution of the structure from a practical point of view. The skin layer is known for its strong orientation with the flow, as shown in the micrographs of the interfaces of the PP/R-PP and WF-PP/WF-R-PP composites. As can be seen, for the same injection pressure and temperature for PP/R-PP and all WF-PP/WF-R-PP specimens, the thickness of the boundary layer is mainly affected by the WF loading. The thickness of the skin layer shown in Figure 9 varied from 18.5 µm for the WF-PP-5 composite to 2.9 µm for WF-PP-40. There was not much difference in the thickness of the skin layer, about 7.8 µm, between the composites with WF loadings from 10 to 20%. The recycled WF-R-PP composites showed the same trend in skin layer thickness, from 20 µm for the WF-R-PP-5 composite to 2.3 µm for WF-R-PP-40. For neat PP/R-PP, by comparison, the layer is thicker, varying from 28.3 to 34.7 µm. The reason for the decreasing skin thickness is probably the orientation of the WF in the PP matrix, which is preferentially oriented in the core layer with the PP matrix at low WF loading. Since this has been clearly studied in the case of PP-PET microfibril composites [45], we can predict the same conclusions here as well. When the WF content in PP is low, the PP melt pushes the WF into the mould such that the WF are preferentially oriented in the core. When the WF loading in the PP matrix increases, the WF also become oriented within the thin skin layer. The results are in agreement with literature data: with increasing WF loading, the skin layer of the polymer (PP, R-PP) becomes thinner, which is more pronounced at 30 and 40% added WF. The latter is in accordance with the ATR-FTIR results.

Among the surface parameters, the surface charge is a key parameter for enhancing or suppressing the interaction between dissolved compounds in an aqueous solution and solid material surfaces [47]. The zeta potential is representative of the surface charge at the solid-water interface and a valuable parameter for the comparison of material surfaces before and after surface modification. The zeta potential is also applicable to characterize the effect of blending the PP matrix with softwood fibers, provided that the bulk composition of the WF-PP composites is reflected at the surface. Zeta potential results were thus obtained for unrecycled polypropylene (PP), recycled polypropylene (R-PP), and PP and R-PP with embedded softwood fibers (5-40 wt%). Figure 10 compares the raw measuring data, i.e., the dependence of the streaming potential on the pressure difference, for pure PP and the WF-PP-40 composite (pure PP with 40 wt% WF) at pH 6.1.
We see a clear difference in the (negative) slope of the linear dependence of the streaming potential on the pressure difference, i.e., the streaming potential coupling coefficient dU_str/dΔp, which is then used to calculate the surface zeta potential [48]. At pH 6.1 we obtain ζ = −55.1 mV for PP and ζ = −39.5 mV for WF-PP-40. Figure 11 shows the zeta potential of PP with different additions of softwood fibers (5-40 wt%) as a function of pH in the interval pH 5-10. Extending the pH dependence of the zeta potential towards lower pH approaches the isoelectric point (IEP) in the range of pH 3-4 (data not shown). The sensitivity of the IEPs of pristine and scarcely functionalized polymer surfaces towards traces of impurities on the polymer surface and towards sample pre-treatment protocols makes this parameter less applicable for a distinction between pristine PP and WF-reinforced composites. Instead, we focus on the higher pH range, where the zeta potential assumes a steady value due to the saturation of the polymer surface with hydroxide ions [49] or the complete de-protonation of acidic groups of WF exposed at the composite-water interface. For a series of alike material surfaces with a varying number of surface functional groups, the zeta potential correlates with surface hydrophilicity, which is usually represented by the water contact angle [50]. Such a correlation is best observed when using zeta potential results obtained at higher pH, e.g., pH 8-9. From Figure 11 it can be seen that the contribution of the softwood fiber surface to the overall zeta potential of the WF-PP composites shifts the negative zeta potential at high pH towards more positive values. As a representative indicator of the effect of the bulk WF fraction on the surface and interfacial charge, the negative zeta potential obtained at pH 8 is given in Table 2 for the series of WF-PP composites.
It becomes evident that the zeta potential at pH 8 does not describe a continuous trend but reflects the discrepancy between the bulk and surface compositions of the WF-PP composites. When comparing the experimental results for the zeta potential of the WF-PP composites at pH 8 with the zeta potential predicted by the weighted average of the zeta potentials for unrecycled PP and spruce, we conclude that, except for WF-PP-40, the zeta potential of the WF-PP composites suggests a higher surface concentration of softwood fibers than in their bulk composition. In general, it may be seen that with increasing added WF the ζ-potential at pH 8 became less negative. With the addition of WF to both the PP and recycled PP composites (Figures 11 and 12), polar groups (mainly OH and COOH) are introduced onto the composite surfaces, which increase the surface hydrophilicity and thus further decrease the magnitude of the zeta potential at high pH. Figures 11 and 12 show the corresponding zeta potential results for recycled PP and for R-PP with 5-40 wt% softwood fibers, again in the range of pH 5-10.

Table 2. ζ-potential for the PP- and R-PP-softwood fiber composites and the deviation between measured and predicted results (assuming a weighted average of the zeta potentials for PP and R-PP and spruce, respectively).

Changes in the zeta potential are already evident with small additions of WF, which is probably also due to the analysis being performed in the wet state, where, in addition to the chemical nature, faster wettability and the hydrophilic or hydrophobic character of the surfaces are reflected, also as a result of morphological changes.
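A minimal sketch of the weighted-average prediction underlying Table 2 is given below; the zeta potential values assumed for PP and spruce at pH 8 are illustrative placeholders, since the measured values are reported only in Table 2 itself.

```python
# Sketch: predicted zeta potential of a WF-PP composite as the mass-weighted
# average of matrix and fiber zeta potentials, as assumed for Table 2.
# All numeric values are hypothetical placeholders, not data from the paper.

def predicted_zeta(zeta_matrix_mV: float, zeta_fiber_mV: float,
                   wf_fraction: float) -> float:
    """Weighted average: zeta_pred = (1 - w) * zeta_matrix + w * zeta_fiber."""
    return (1.0 - wf_fraction) * zeta_matrix_mV + wf_fraction * zeta_fiber_mV

zeta_pp = -50.0      # mV at pH 8 (assumed)
zeta_spruce = -20.0  # mV at pH 8 (assumed)

for w in (0.05, 0.10, 0.20, 0.30, 0.40):
    pred = predicted_zeta(zeta_pp, zeta_spruce, w)
    print(f"WF-PP-{int(w * 100):02d}: predicted zeta = {pred:.1f} mV")
# A measured value less negative than this prediction would indicate a higher
# surface concentration of softwood fibers than the bulk composition suggests.
```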
Composite Materials

A comparison of the zeta potentials for the series of WF-PP and WF-R-PP composites (Figure 13) generally shows that recycling of PP shifts the zeta potential at high pH to slightly less negative values. The small difference in surface hydrophilicity between the unrecycled and recycled PP resins of the PP-wood particle composite samples is evident, which may be related to the contribution of polyethylene (PE) to R-PP. For the WF-PP composites, the difference in hydrophilicity between the neat polymers PP and R-PP is superimposed on their surface properties and thus reflected in the composites' zeta potential. By taking a closer look at the deviation of the measured zeta potential for the WF-R-PP composites from the predicted zeta potential derived from the zeta potentials of the unrecycled materials R-PP and spruce, we realize that the zeta potential results for WF-R-PP resemble the expected trend much better than the zeta potential for WF-PP. We conclude that the softwood fibers show a stronger affinity toward recycled PP, while they are less accepted by the more hydrophobic pristine PP resin. In the latter composite, the softwood fibers experience a stronger repulsion in the bulk and tend to accumulate in the proximity of the surface. An important conclusion of the zeta potential measurements is that in all composites the anionic charge dominates over almost the whole pH range. This clearly indicates that these plastics have a high affinity for the adhesion/adsorption of cationic substances and repel anionic substances. In this way, some adsorption affinities and electrostatic interactions of these composites may be predicted.

The further investigation focused on nanoindentation measurements. As already mentioned, many such materials act at the interphase, so the first mechanical loads occur on their surface; it is therefore highly recommended to monitor the mechanical properties of the surface layer of the material. The composite materials were analyzed by nanoindentation to a maximal depth of 3900 nm. This range was subdivided into intervals of 500 nm, starting from 500-1000 nm and going to a final interval of 3500-3900 nm. Within each interval, the nanoindentation modulus and hardness were averaged, as illustrated in the sketch below. The results of this analysis are shown in Figure 14. The red dotted lines in Figure 14 indicate the two indentation depths that are analyzed in detail below.
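The depth-interval averaging of the CSM data can be sketched as follows; the depth-modulus profile is a synthetic stand-in for the measured indentation curves.

```python
# Sketch: averaging CSM nanoindentation data over 500 nm depth intervals,
# as described for Figure 14. The depth/modulus arrays are synthetic stand-ins.
import numpy as np

def interval_averages(depth_nm, values, edges_nm):
    """Mean of `values` within each [edges[i], edges[i+1]) depth interval."""
    means = []
    for lo, hi in zip(edges_nm[:-1], edges_nm[1:]):
        mask = (depth_nm >= lo) & (depth_nm < hi)
        means.append(values[mask].mean())
    return np.array(means)

# Synthetic CSM profile: modulus rising with depth, plateauing ~3000-3500 nm.
depth = np.linspace(500, 3899, 500)
modulus = 1.2 + 0.6 * (1 - np.exp(-depth / 1500.0))  # GPa (illustrative shape)

edges = [500, 1000, 1500, 2000, 2500, 3000, 3500, 3900]
means = interval_averages(depth, modulus, edges)
for (lo, hi), e_mean in zip(zip(edges[:-1], edges[1:]), means):
    print(f"{lo:4d}-{hi:4d} nm: E = {e_mean:.2f} GPa")
```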
The modulus of both materials, regardless of the wood concentration, increased with indentation depth. The rate of change decreased with increasing wood concentration and plateaued at around 3000-3500 nm depth. The increasing modulus with indentation depth could result from a higher concentration of WF through the sample depth, a different polymer matrix crystalline structure caused by the injection moulding, or a substrate effect, where the bottom layers affect the properties of the top layer. The results suggest that at the surface layers (up to 1000 nm), where we see minor differences between the materials and smaller modulus values, primarily the pure polymer matrix material is present. From Figure 14a, the influence of the WF concentration on the modulus can be seen for the unrecycled material; similar conclusions can be drawn for the recycled material (Figure 14b). The materials with 5% and 10% wood concentration (WF-PP-5 and WF-PP-10) follow the behavior of the pure polymer material (PP).
For clarity, we did not show the error bars on these diagrams. However, the error bars of the materials with 0, 5, and 10% WF were overlapping, indicating that there was practically no difference between these three materials. The skin-core effect can explain the similar mechanical properties: as observed by optical microscopy and confirmed by ATR-FTIR, the materials form a thicker PP layer at lower WF concentrations. The measurements show that in the surface layers (up to 1000 nm), where we see marginal differences between the three materials, primarily the pure polymer matrix material is present. From 20 wt% WF onwards, higher modulus values compared with the previous three concentrations can be observed, at all indentation depths. The increase of the modulus can be attributed to the decreasing layer of pure PP. Optical microscopy showed that at 20 wt% the PP layer is 6900 nm thick for the PP-based composites and 11,000 nm thick for the R-PP-based composites; this layer decreases to 2900 nm (PP composites) and 2300 nm (R-PP composites). As the layer decreases, the substrate effect becomes more pronounced, and in the case of the 30 and 40 wt% materials, the PP layer is thin compared with the indentation depth. The comparison of the modulus values of the unrecycled (Figure 14a) and recycled (Figure 14b) materials shows that the recycled materials have lower modulus values regardless of the WF concentration. Comparing the pure polymer matrix materials (PP and R-PP) reveals an inherent difference between the two: the recycled material (R-PP) has modulus values lower by about 0.25-0.30 GPa throughout the probed depth. The lower modulus of the pure matrix material may result from the recycling process since, during this process, the polymeric material is exposed to harsh conditions that can lead to material degradation. Another reason for the lower modulus could be the presence of polyethylene-origin material that DSC detected in the R-PP material in our previous work [27]. Generally, PE materials have lower modulus values than PP materials; therefore, a combination of PP and PE could result in a lower modulus compared with PP alone. The lower modulus of the R-PP material also influences the mechanical properties of the filled materials, as the modulus of all recycled filled materials is lower than that of the unrecycled materials. The results for hardness (Figure 14c,d) showed the same trends as for the modulus: hardness increased with indentation depth. Here, too, the increasing hardness can be attributed to changes in the polymer matrix structure through the depth and, in the case of the filled materials, to the thickness of the PP layer and the different crystalline structures of the PP-based materials. In the case of the unrecycled materials with 0, 5, and 10 wt%, it might be concluded that low WF concentrations did not affect the material's hardness, since the hardness of the materials with 5 and 10% added WF was similar to that of the pure PP material and stayed the same throughout the indentation depth. Comparing the hardness of the unrecycled and recycled materials shows that the surface of the unrecycled material is harder. As stated for the modulus, the reason for the lower hardness of the recycled material could be the effect of the recycling process and material degradation, as well as the presence of (softer) PE material inside the R-PP.
Also, DSC results [27] showed that materials with a PP matrix tend to form a structure with higher crystallinity than R-PP, resulting in the higher hardness of the PP materials. A further investigation of the surface mechanical properties was performed in the two areas marked in Figure 14 with red dashed lines, specifically between 500-1000 nm and 3000-3500 nm. The first area was selected as the uppermost area of the sample, while the second was selected as the area where the properties start to become constant with depth. The results at the selected depths are shown in Figure 15: (a) the relative modulus and (b) the relative hardness. The values were normalized to those of the pure matrix polymer material (PP and R-PP). By normalizing the values of modulus and hardness to the values of the neat polymer matrix, we obtained relative values of both physical properties, thus eliminating the inherent differences between the two materials. The relative modulus at the upper sample layers (500-1000 nm), marked with triangles (Figure 15a), is nearly constant up to 10% WF concentration, as it changes by less than 10%. From 20% onwards, the modulus starts to increase gradually. At 40 wt%, the modulus of both the recycled and unrecycled material is about 25% higher than that of the pure polymeric matrix material. There is no significant difference between the behavior of the recycled and unrecycled material in this layer (500-1000 nm). This can be explained by the comparable thickness of the PP skin layer, as shown by optical microscopy, and indicates the material's homogeneous structure. From the relative change of hardness (Figure 15b), the hardness of the unrecycled material changed by about 15% and stayed constant regardless of the WF concentration. In contrast, the recycled PP, after an initial drop at 5 and 10 wt%, did not change significantly compared with the pure R-PP material. It appears that the addition of WF makes the surface of the unrecycled material harder, while it has only a marginal effect in the case of the recycled material. Deeper in the sample, at 3000-3500 nm (rectangles), we may again separate the behavior at low WF concentrations (up to 10%), where no significant changes of modulus or hardness could be observed, from that at higher WF concentrations, where a substantial change of modulus and hardness can be seen compared with the neat matrix material's properties. A more significant change of both properties is seen in the case of the recycled material, where the modulus changes by about 50% and the hardness by 30% at the highest concentrations, compared with a 35% change of modulus and 18% of hardness in the case of the unrecycled material. Overall, we may conclude that up to 10% WF concentration there is no significant change in the measured mechanical properties; the PP skin layer dominates the mechanical behavior of the materials. The observed increase of the mechanical properties at lower WF concentrations could be related to the gradual decrease of the PP skin layer, which amplifies the effect of the bottom layers (substrate effect). When the PP skin layer is thinner at higher WF concentrations, the presence of wood fibers reinforces the matrix material, as seen by increased modulus and hardness.
The effect of added WF on modulus and hardness, at least at 30 and 40 wt%, is not the same for the unrecycled and recycled matrices. The higher relative modulus and hardness of the recycled material show that the WF have a stronger reinforcing effect on its overall mechanical properties than on the unrecycled material. WF-PP and WF-R-PP composites can thus be used as multifunctional materials that exploit many properties related to the unique surface skin-core orientation, the surface zeta potential and the increased mechanical strength. In addition, the antioxidant properties of these WF-PP/WF-R-PP composites make them value-added plastic products. Antioxidant activity is defined as the ability to inhibit the oxidation process driven by a free radical. For many applications, the antioxidant properties of filled polymers such as the WF-PP/WF-R-PP composite materials obviously depend on the amount of filler and are very much related to its distribution and orientation in the polymer matrix [51,52]. The antioxidant activity was evaluated by the DPPH• radical scavenging test and the reducing power test [31,33]. An antioxidant can be broadly defined as any substance that delays or inhibits oxidative damage to a target molecule. The main characteristic of an antioxidant material is its ability to scavenge free radicals and thereby directly exhibit antioxidant activity.
Specifically, antioxidant active packaging aims to prevent or slow down the oxidation of certain food components, such as lipids and proteins, which results in a deterioration of the sensory properties (such as taste and color) of these foods. This active-material approach requires the intentional incorporation of antioxidants into the packaging materials and their subsequent migration into the food products. On the other hand, an antioxidant surface also slows the aging of plastics and is therefore very welcome.

The antioxidant properties of the WF-PP/WF-R-PP composites can be seen from the color change of the DPPH• radical, which shifts from purple to yellow. Figure 16 represents the evolution of the UV-vis spectra of the WF-PP (Figure 16a) and WF-R-PP (Figure 16b) composites immersed in DPPH• solution, measured at 515 nm and evaluated from 0 to 300 min. As can be seen from Figure 16, the WF itself has the highest antioxidant activity, which increases with time. PP itself shows the lowest antioxidant activity, with no changes over the time interval. Moreover, with increasing WF addition, the antioxidant activity also increases. In all cases (5%, 10%, 20%, 30% and 40%), the antioxidant activity is time dependent and increases from 0 to 300 min. However, a plateau value is not observed, and it is assumed that when the time exceeds 300 min, the activity increases even further. The percentage antioxidant activities (AA100, AA200, AA300) at time points 100, 200 and 300 min are shown in Table 3. The results show that the antioxidant activity of WF-PP and WF-R-PP composites increases with increasing WF loading, which is expected since wood is an antioxidant component. Again, the best antioxidant activity is obtained at 40% wood loading, where we also showed the maximum surface availability. These results indicate the ability of the WF-PP/WF-R-PP composites to act as antioxidants: as the WF load increases, the antioxidant ability of these materials increases. Therefore, we assume that the antioxidant ability of WF-PP composites depends on the active WF groups. It is believed that when a spherical disk of WF-PP is immersed in DPPH• solution, the antioxidant active groups of WF-PP/WF-R-PP deprotonate and donate electrons to the DPPH•, finally quenching the free radical [52]. This method, as a wet method, correlates quite well with another wet method, the zeta potential method: there is a correlation between negative zeta potential, as an indication of the hydrophilic character of the surface, and antioxidant activity (Figure 17). It can be concluded that the protonation and wettability occurring in water on contact with the material are the driving force for the antioxidant activity. In general, the antioxidant activity of WF-PP and WF-R-PP composites is not preferentially related to WF surface distribution and orientation.
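For reference, the DPPH• scavenging percentage is conventionally computed from the drop in absorbance at 515 nm relative to a sample-free control. A minimal sketch follows; the absorbance values are hypothetical and the helper name is illustrative, not taken from the study.

```python
def antioxidant_activity(a_control, a_sample):
    """DPPH radical scavenging activity (%): relative decrease in the
    absorbance of the DPPH solution at 515 nm after contact with the
    sample, compared with a sample-free control."""
    return 100.0 * (a_control - a_sample) / a_control

a_control = 0.95                                  # DPPH solution alone
a_after = {100: 0.80, 200: 0.70, 300: 0.62}       # after 100/200/300 min

for t, a in a_after.items():
    print(f"AA_{t} = {antioxidant_activity(a_control, a):.1f}%")
```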
Conclusions

The plastics industry expects much from advanced materials, but the relatively few that are commercially available cannot meet all applications and expectations. In this context, hybrid thermoplastic materials with renewable fillers can potentially provide all the benefits of traditional filled plastic composites while avoiding their environmental drawbacks, such as non-biodegradability and pollution. The influence of WF loading and recycling on the surface properties of wood-polymer composites was investigated. In the diffraction patterns of all WF-PP/WF-R-PP composite samples, α-monoclinic PP formations were preferentially present. In general, the relationship between WF loading and the well-known phenomenon of skin layer formation in WF-PP/WF-R-PP composites was highlighted. The ATR-FTIR measurements showed that only above 30% addition of WF to PP did the surface functional groups present in the wood appear on the composite surface. Correspondingly, increased loading of WF in PP reduced the thickness of the upper PP skin layer of the WF-PP/WF-R-PP polymer composites, as also observed by optical microscopy. WF concentration also caused changes in mechanical properties (nanoindentation modulus and hardness). Two distinct ranges were observed: at low concentrations (up to 10%), no significant changes in properties were observed, while at higher concentrations (20-40%), significant changes were observed. This behavior can be associated with the WF density and the formation of a particle network as the WF concentration is increased, as well as with the formation of a different crystalline structure in the case of non-recycled PP. Furthermore, we have shown that the properties in the top surface layers of the samples were similar to the properties of the pure matrix material at low concentrations and increased slightly at high concentrations. This indicates that a layer of pure polymer matrix material formed on the upper surface of the sample; with increasing WF concentration, however, this layer appears to decrease. At sufficiently high WF concentration, the fibers prevented the formation of a pure polymer layer on the surface, which was also confirmed by ATR measurements, where the characteristic peaks for WF functional groups were found in the surface layers of the materials with 30% and 40% WF concentration. The zeta potential measurements showed increased surface hydrophilicity, with the introduction of more polar groups of WF on the surfaces of the WF composites directly responsible for the increased antioxidant activity, which increases with the added amount of WF. The nanoindentation modulus and hardness also increase with the added amount of WF, the increase being significant at 30% and 40% WF. As mentioned above, many such materials act at an interface (packaging materials, containers, bins, etc.), so the first mechanical stresses occur on the surface, and it is therefore extremely important to improve it. Moreover, with the introduced hydrophilicity and antioxidant activity, packaging applications can be envisaged where anti-fog properties are essential to prevent condensation and antioxidant activity prevents oxidation processes in the packaging system. All these properties reduce the perishability of packaged goods and prolong their shelf life. In addition, the antioxidant activity slows the aging of the plastics themselves. It was shown that important surface properties could be improved by the addition of wood fillers without additional adhesion chemicals.
Zeta potential measurements also show that all the materials retain an anionic character, indicating an adsorption affinity for cationic substances through electrostatic interactions. The latter is extremely important for the surface functionalization of these composites, for example with antimicrobial agents, most of which have a cationic character.
Bioaccumulation of Macronutrients in Edible Mushrooms in Various Habitat Conditions of NW Poland—Role in the Human Diet

Recently, the interest in mushroom consumption has been growing, since their taste and low calorific value are appreciated, but also due to their nutritional value. In determining the usefulness of mushrooms in the human diet, it is important to consider the conditions of their occurrence when assessing the bioaccumulation of minerals. The aims of the study were: (a) to determine the content of selected macronutrients (P, K, Ca, Mg, Na) in fruiting bodies of Boletus edulis, Imleria badia and Leccinum scabrum and in the soils, (b) to determine their bioaccumulation potential taking into account the habitat conditions, and (c) to estimate their role in covering the human organism's requirement for macronutrients. The research material was obtained in NW Poland: Uznam and Wolin, the Drawsko Plain and the Ińsko Lakeland. In the soil, we determined the content of organic matter, pH, salinity and the content of available and total forms of macronutrients. The content of macronutrients in mushrooms was also determined. Chemical analyses were performed using generally accepted test methods. The study showed that in NW Poland, B. edulis grew on the acidic soils of Arenosols, and I. badia and L. scabrum grew on Podzols. The uptake of K, Mg and Ca by the tested mushrooms was positively, and that of P and Na negatively, correlated with the content of these elements in the soil. The acidity of the soil affected the uptake of K and Mg by mushrooms. No effect of the amount of organic matter in the soil on the content of macronutrients (except sodium) in mushrooms was noticed. Among the studied macronutrients, none of the mushrooms accumulated Ca, while P and K were generally accumulated in the highest amounts, regardless of the species. Each of the other elements was usually accumulated at a similar level in the fruiting bodies of the species we studied. The exception was I. badia, which accumulated higher amounts of Mg compared to B. edulis and L. scabrum. Mushrooms can enrich the diet with some macronutrients, especially P and K.

Introduction

Wild edible mushrooms are a frequent element of the human diet in many countries, especially in central and eastern Europe in autumn, when they appear more often. Recently, the interest in their consumption has been increasing, not only because of the taste, but also due to their nutritional value [1][2][3]. The main mass of mushroom fruiting bodies is water.

Study Area

The mushrooms were collected in three physiographic regions of NW Poland: Uznam and Wolin, the Drawsko Plain and the Ińsko Lakeland [28]. Uznam and Wolin are the islands separating the Szczecin Lagoon from the Pomeranian Bay. Mushrooms for the research were collected in the north-eastern part of Wolin island. Low ridges of shore dunes covered by dry pine forests create the dominant topography of that region. The Drawsko Plain is an extensive outwash plain drained by the Drawa river and its tributaries. The rather monotonous landscape of this region is diversified by numerous lakes, mid-forest pools and peat bogs. The Drawsko Plain is covered mainly by pine forests. The Ińsko Lakeland is a hilly morainic upland intersected by numerous glacial troughs. Apart from the moraine hills and ravines, the region has numerous post-glacial lakes and pools, and vast water-logged areas.
Fungal and Soil Materials

Three species of the most commonly picked wild-growing edible mushrooms in Poland were selected for the study: Boletus edulis, Imleria badia and Leccinum scabrum. The material collection locations were at a distance from communication routes, and there were no other sources of environmental pollution in these areas. During sampling, attention was paid to ensuring that the fruiting bodies of the same species were collected in different places, were fully developed and were not attacked by insects, snails or molds. From each of the three studied regions, 3-7 pooled samples of each species were collected, with each pooled sample consisting of 5 fruiting bodies. The mushrooms were cleaned of sand and bedding and, after being transported to the laboratory, were dried in an electric dryer at 40 °C for 48 h. After drying, the whole fruiting bodies were milled to a powder in a mortar. The average dry weight of the pooled samples of Boletus edulis from Uznam and Wolin, the Drawsko Plain and the Ińsko Lakeland was 10.63 g, 13.50 g and 8.60 g, respectively; of I. badia 7.25 g, 7.61 g and 7.55 g, respectively; and of L. scabrum 7.96 g, 7.59 g and 7.78 g, respectively. The taxonomic identification of the mushrooms was made according to Knudsen and Vesterholt [29], using standard methods of macroscopic mushroom testing, and the names of the species were given according to the Index Fungorum database (http://www.indexfungorum.org/ (accessed on 15 June 2021)).

At each fruiting-body picking location, the soil substrate was also collected from a depth of 0-20 cm for testing. Within the sample, the surface organic level (0-5(10) cm, about 0.5 kg; decomposed forest litter) and the mineral level below (5(10)-20 cm, about 0.5 kg) were collected. In the soil material, the following determinations were made: loss on ignition (organic matter, OM) was determined by burning soil samples in a muffle furnace at 550 °C; pH in 1 mol·dm⁻³ KCl was determined potentiometrically; salinity was determined conductometrically. The content of available P, Mg, K and Ca was determined by extraction in 0.5 mol·dm⁻³ HCl; the content of total forms of soil macroelements was determined after mineralization in HNO₃ and HClO₄ at a ratio of 1:1. The content of K, Na and Ca was measured by atomic emission spectrometry, whereas the Mg content was determined by flame atomic absorption spectroscopy using an iCE 3000 Series spectrometer. The content of available and total P was determined by the spectrophotometric molybdenum blue method (690 nm wavelength) using a Marcel MEDIA™ spectrophotometer [30]. The limits of detection were (mg·kg⁻¹): Ca 0.004; Mg 0.002; K 0.001 and Na 0.004. The accuracy and precision of the analytical methods and procedures used were assessed using certified reference material: CRM036-050 Loamy Sand 4 (CRM 036-050, produced by Resource Technology Corporation, USA and UK). The effectiveness of the process was validated with 90-95% efficiency. The results shown are the average of three measurements; working standards were made from Merck standards with a concentration of 1000 mg·dm⁻³.

The content of elements in mushrooms was determined after mineralization of 1 g of mushroom dry weight: Mg, P, K, Na and Ca were measured after wet mineralization in H₂SO₄ and HClO₄. The content of K, Na and Ca was measured by atomic emission spectrometry, and the Mg content by flame atomic absorption spectroscopy. P was assessed by the colorimetric method.
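The loss-on-ignition determination reduces to a simple mass balance. A minimal sketch is given below, with hypothetical masses chosen only to illustrate the calculation; the helper name is not from the study.

```python
def organic_matter_loi(mass_dry_g, mass_ignited_g):
    """Organic matter content (%) by loss on ignition at 550 degrees C:
    the mass fraction burned off relative to the dry sample mass."""
    return 100.0 * (mass_dry_g - mass_ignited_g) / mass_dry_g

# Hypothetical masses for a single organic-level sample
print(organic_matter_loi(10.00, 3.80))  # -> 62.0, cf. the Arenosols organic level
```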
The efficiency of the process was validated with 90-95% success using certified reference materials, namely tea leaves (INCT-TL-1) and a mixture of Polish herbs (INCT-MPH-2), both produced by the Institute of Nuclear Chemistry and Technology, Warsaw, Poland. All tests were performed in three replications. The coefficient of bioconcentration (BCF) of macronutrients was calculated using the relation BCF = Cm/Cs, where Cm is the concentration of the macronutrient in the mushroom and Cs is the concentration of the macronutrient in the mushroom substrate (soil).

Statistical Analysis

Statistical analysis of the obtained results of soil chemical properties was performed using Statistica 12.5 (StatSoft Polska, Cracow, Poland). Statistical significance of differences between means was determined by testing the normality of distribution in each group and the homogeneity of variance in all groups, followed by ANOVA with Tukey's post hoc test. Significance was set at p < 0.05. The multidimensional analysis was carried out using principal component analysis (PCA). The data were automatically scaled during pre-processing. The obtained results were subjected to agglomerative cluster analysis and classified into groups in a hierarchical arrangement by Ward's method.
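A minimal sketch of the BCF relation defined above follows; the concentration values are hypothetical placeholders used only to illustrate the computation.

```python
def bioconcentration_factor(c_mushroom, c_soil):
    """BCF = Cm / Cs: macronutrient concentration in the fruiting body
    divided by its concentration in the substrate (same units for both,
    e.g. g/kg DM)."""
    return c_mushroom / c_soil

# Hypothetical concentrations; BCF > 1 indicates bioconcentration
print(bioconcentration_factor(7.70, 0.50))   # e.g. a P-like case, BCF = 15.4
print(bioconcentration_factor(0.15, 1.20))   # e.g. a Ca-like case, BCF = 0.125
```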
Soil Properties and Macronutrient Concentrations

B. edulis was found on Arenosols, whereas I. badia and L. scabrum were found on Podzols [31]. The Arenosols were characterized by a well-developed organic level with an average organic matter content of 62%, deposited on a humus level made of clayey sand with an average organic matter content of 9.74%. The Podzols had a thick organic level with a clear division into the raw-material, butvin and epihumus sub-levels, with an average organic matter content of 71.18% under bay boletes and 64.50% under birch boletes. The humus levels located below the organic levels were rich in organic matter (from 21.37% under bay boletes to 35.22% under birch boletes). The studied Podzols and Arenosols were characterized by the typical features of these soil types [32][33][34][35][36]. The organic and humus levels in both the Arenosols and the Podzols were strongly acidic, which is typical for these types of soil. The highest pH values were found in the Arenosols under B. edulis, and the lowest in the Podzols under I. badia (Table 1). These soils were characterized by a low electrolytic conductivity, from 109.89 µS/cm under B. edulis to 159.81 µS/cm under L. scabrum. Mleczek [37] found that the salinity of soils under B. edulis, I. badia, L. scabrum and other mushrooms stays in the range of 22 to 144 mS/m. The organic level of the Arenosols under B. edulis was moderately rich in available K and Mg and very rich in available P [38]. Significant differences were found in the content of K and Mg at the individual research points (from low to very high content of available K and from medium to high content of available Mg). The Podzols organic levels were poor in available K and Mg, and very rich in available P [38]. The organic level of the Arenosols under B. edulis was the most abundant in available K, Mg and P. There were no significant differences in the available Ca content, which accounted for about 80% of its total forms. Within the total content of K and P, a high proportion of available forms was found (K: 69-82%; P: 51-69%), together with a lower share of available Mg (28-39%). In contrast to the organic levels, the sandy mineral levels of the Arenosols and Podzols are very poor in nutrients. The studied soils under B. edulis were richer in Ca and Mg but poorer in Na and K (Table 2) than the soils under B. edulis from other regions of Poland [39]. On the other hand, the content of macroelements in the soils under I. badia was within the ranges given by Malinowska et al. [40].

Macronutrient Concentrations in Mushrooms

Mushrooms are responsible for the decomposition of cellulose, chitin and all dead organic matter. They absorb macro- and microelements from the decomposing organic matter and, in certain amounts, pass them on to plants (to the tree root system) in exchange for sugars [24,25]. The mycelium supplies plants with poorly available N and P anions in exchange for sugars, which the mycelium cannot produce itself due to the lack of photosynthesis in its cells [26].

Phosphorus

Our research showed that B. edulis, I. badia and L. scabrum did not differ significantly in P content, containing on average 8.92 g/kg DM, 8.19 g/kg DM and 7.70 g/kg DM, respectively (Table 3). Vogt and Edmonds [41], Nikkarinen and Mertanen [18], and Rudawska and Leski [42] also found no differences in the content of this element. In various regions of Poland, its average concentration in the examined species of mushrooms ranged from 1.4 to 10.0 g/kg DM [42][43][44][45]. In mushrooms growing in Germany, its typical content is 5-10 g/kg DM [4], while in Finland the average content is 4-6.3 g/kg DM [18]; a much higher concentration was recorded in Japan, from 43-69 g/kg DM [46], whereas low levels were noted in Turkish L. scabrum, with a mean value of 3.22 g/kg DM [47]. The mushrooms we analyzed contained amounts of phosphorus that were within the range found in the cultivated mushrooms Agaricus bisporus and Pleurotus ostreatus [48,49]. Perez-Moreno and Read [24], Entry et al. [50], and Andersson et al. [51] showed that mycelium obtains large amounts of phosphorus from the substrate and supplies it to plants, as well as accumulating it itself in large amounts. The uptake of P by the mycelium is 10 to 50 times higher than the levels of this element accumulated in the substrate [42]. In the fungi, we also observed bioconcentration of P, although to a much lesser extent, and the bioconcentration factor (BCF) was similar regardless of the species (Table 4). Bioconcentration of phosphorus in mushrooms was found by Kojta et al. [44], Chudzynski et al. [52], Falandysz et al. [53], Chudzyński and Falandysz [15] and Bučinová et al. [54]. Bučinová et al. [54] found a significantly higher bioconcentration factor for phosphorus in fungi relative to soil mineral levels than to organic levels. A similar relationship was found in our own research. The ability of mushrooms to obtain phosphorus from organic compounds results from the presence of phosphatase produced in their cells [55,56]. Phosphorus is one of the main elements in mushrooms, and it is generally present in lower amounts than potassium [16,18,42,44]. However, in the mushroom fruiting bodies we examined, phosphorus was found in a higher concentration than the other macroelements (Table 3). Due to the significant potential of phosphorus accumulation, mushrooms can be an important source of this element in the human diet. The recommended daily allowance (RDA) of P according to the Institute of Nutrition and Food [57] ranges from 700 to 1250 mg.
Assuming that this element is easily absorbed from mushrooms by humans, the consumption of approximately 85-150 g of dried or approximately 850-1500 g of fresh fruiting bodies of these three species collected in NW Poland would cover the total demand for this element. Likewise, the entire daily requirement would be covered by 80-140 g of dried B. edulis (about 800-1400 g of fresh ones) and, in the case of L. scabrum, by about 90-160 g of dried mushrooms (900-1600 g of fresh ones). In the human diet, the consumption of 100 g of dried mushrooms (e.g., B. edulis) covers 71-100% of the daily requirement for P. The bioavailability of phosphorus is unknown [4,16].

Potassium

Potassium is the major element in mushrooms, along with N and P [9,18,42]. Different species of mushrooms take up similar amounts of potassium [41,42]. Likewise, our research did not show any significant differences in its content in B. edulis, I. badia and L. scabrum; the average concentrations were 7.79 g/kg DM, 4.59 g/kg DM and 6.91 g/kg DM, respectively (Table 3). The mushrooms we tested were poorer in K compared to samples collected in other regions of Poland. The potassium content in L. scabrum in Poland ranged on average from 21.3 to 52 g/kg DM [42,43,58], in I. badia on average from 22.5 to 35.1 g/kg DM [40,42] and in B. edulis on average from 25-51 g/kg DM [19,39,44,45,59,60]. Higher concentrations in various species of mushrooms were also recorded in other regions of the world, e.g., in Finland from 23.5-26.7 g/kg DM [18], in Japan from 24.9-51.5 g/kg DM [46], in Turkey from 12.6 to 51.0 g/kg DM (in L. scabrum, 21.1 g/kg DM) [47,61] and in Germany from 20-40 g/kg DM [4]. The wild mushrooms we analyzed contained less potassium than the cultivated mushrooms A. bisporus and P. ostreatus [48,49]. The tested mushrooms showed bioconcentration of K, with the highest BCF in L. scabrum (12.12 relative to the organic level and 20.32 relative to the mineral level) (Table 4). This element is bioconcentrated in mushrooms with BCF values ranging from 2.2 to 93.8 [4,39,40,42,43,53,59]. Bučinová et al. [54], in turn, found a clear influence of the nature of the substrate (organic, mineral) on the value of the bioconcentration factor. Mushrooms contain similar amounts of K to plants; the potassium content in plants, depending on the species, ranges from 2 to 18 g/kg DM [62]. The adequate intake (AI) of K according to the WHO [63], EFSA [64] and the Institute of Nutrition and Food [57] ranges from 2400 to 3500 mg/day. The daily requirement for K would be provided by about 370-540 g of dried, or 3700-5400 g of fresh, mushrooms from NW Poland, assuming that the accumulated potassium is fully absorbed by the human body. Of the three tested species, boletuses are the best source of potassium, capable of covering the daily requirement with 308-449 g of dried, or 3080-4490 g of fresh, fruiting bodies. In the human diet, the consumption of 100 g of dried mushrooms (e.g., B. edulis) covers 22-33% of the daily requirement for K. The bioavailability of K is unknown [4,16]. Mushrooms can enrich a human diet with potassium, but are unlikely to satisfy the full daily requirement.
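The coverage estimates above follow from a one-line conversion. The sketch below reproduces the arithmetic for P in B. edulis, assuming full absorption and the text's convention that fresh mass is roughly ten times the dried mass; the helper name is illustrative.

```python
def mass_to_cover_requirement(requirement_mg, conc_g_per_kg_dm):
    """Dried and fresh mushroom mass (g) needed to cover a daily
    requirement, assuming full absorption and taking fresh mass as
    roughly ten times the dried mass."""
    dried_g = requirement_mg / conc_g_per_kg_dm   # g/kg DM == mg/g DM
    return dried_g, 10.0 * dried_g

# P in B. edulis: 8.92 g/kg DM; RDA 700-1250 mg (values from the text)
print(mass_to_cover_requirement(700, 8.92))    # ~ (78 g dried, 785 g fresh)
print(mass_to_cover_requirement(1250, 8.92))   # ~ (140 g dried, 1400 g fresh)
```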
Magnesium

The tested mushrooms contained significantly less Mg than P and K. A similar distribution of these elements in mushrooms was found by Mattila et al. [16] and Nikkarinen and Mertanen [18]. The mushroom fruiting bodies we collected generally accumulated Mg to a similar degree. Nevertheless, B. edulis contained on average significantly more of this element than I. badia and the birch boletes: 1.37 g/kg DM, 0.91 g/kg DM and 0.89 g/kg DM, respectively (Table 3). The reason for the differences was the greater abundance of available Mg in the substrate on which the fruiting bodies grew on the Drawsko Plain (Table 1). According to Vogt and Edmonds [41] and Rudawska and Leski [42], individual species of mushrooms do not differ significantly in their Mg content. In other regions of Poland, the concentration of Mg in L. scabrum, I. badia and B. edulis ranged from 0.162 g/kg to 1.20 g/kg DM [19,39,40,[42][43][44][45][58][59][60]. In France the levels found in these mushrooms were from 0.449 to 1.150 g/kg [65], in Finland from 0.696-1.053 g/kg DM [18], in Greece 0.782 g/kg DM [9], in Turkey from 0.850 to 4.54 g/kg DM [47,61] and in Germany from 0.8 to 1.8 g/kg DM [4]. In Japan, the content of Mg in various species of mushrooms ranged from 0.682 to 1.400 g/kg DM [46]. The mushrooms we analyzed contained amounts of magnesium that were within the range found in the cultivated mushrooms A. bisporus and P. ostreatus [48,49]. Among the mushrooms we tested, a clear accumulation of Mg was found only in the bay boletes, while in the case of B. edulis and L. scabrum the bioconcentration factors (BCF) were > 1 only in relation to the mineral level (Table 4). Magnesium is generally bioconcentrated in mushrooms in the BCF range from 1.5 to 7.2 [39,40,42,43,53,59]. In a few cases, however, no bioconcentration of this element is observed under certain environmental conditions [4,40]. Bučinová et al. [54] found bioconcentration of Mg relative to mineral horizons, but not relative to organic horizons of the soil. The content of magnesium in plants ranges from 3 to 10 g/kg [62,64], which is more than in the tested mushrooms (Table 3). The recommended daily allowance (RDA) of Mg according to the Institute of Nutrition and Food [57] ranges from 240 to 420 mg per day. Although mushrooms are not a better source of this macroelement than plants, they can also affect the level of Mg in the human body to some extent. The daily requirement for Mg is provided by about 300 g of the tested dried mushrooms or 3000 g of fresh mushrooms. In the human diet, the consumption of 100 g of dried mushrooms (e.g., B. edulis) covers 33-57% of the daily requirement for Mg. The bioavailability of Mg is unknown [4,16].

Calcium

Among the elements tested in the mushrooms, calcium was present only in insignificant amounts, showing no significant differences in levels between species (Table 3). B. edulis contained on average 0.12 g/kg, I. badia 0.14 g/kg and L. scabrum 0.16 g/kg DM (Table 3). Vogt and Edmonds [41] and Rudawska and Leski [42] draw attention to the lack of differentiation in the Ca content of mushrooms, and its small amounts were noticed, among others, by Mattila et al. [16], Nikkarinen and Mertanen [18], Falandysz et al. [19,58], Frankowska et al. [39], Malinowska et al. [40], Zhang et al. [59], and Kojta and Falandysz [60]. However, there is a lot of calcium (more than K, P, Mg and Na) in mushrooms from India [1]. The mushrooms we analyzed contained amounts of calcium that were within the range found in the cultivated mushrooms A. bisporus and P. ostreatus [48,49]. The content of calcium in plants ranges from 3 g/kg to 18 g/kg DM [45]; therefore, mushrooms contain much smaller amounts of Ca than plants (Table 3).
The recommended daily allowance (RDA) of Ca according to the Institute of Nutrition and Food [57] ranges from 1000 to 1300 mg. Covering the daily requirement for Ca, even if this element were fully available to the human body from mushrooms, would require as much as 9 kg of dried mushrooms or 90 kg of fresh fruiting bodies. In the human diet, the consumption of 100 g of dried mushrooms (e.g., B. edulis) covers 0.9-1.2% of the daily requirement for Ca. This means that mushrooms are definitely not a good source of calcium.

Sodium

Sodium was the fourth element in terms of the concentrations of macronutrients found in the tested mushrooms (Table 3). Similar results for B. edulis were obtained by Zhang et al. [59] and Nikkarinen and Mertanen [18]. The B. edulis, I. badia and L. scabrum we tested did not differ significantly in Na content, containing on average 0.68 g/kg DM, 0.57 g/kg DM and 0.53 g/kg DM of this element, respectively (Table 3). In Poland, the Na content in these species ranged on average from 0.010 to 0.773 g/kg DM [19,37,39,43,45,[58][59][60]. In Japan, the Na content in various species of mushrooms ranged from 0.167 to 1.782 g/kg [46], in Turkey from 0.03 to 4.85 g/kg [47,61], in Finland from 0.065-0.519 g/kg DM [18], and in Germany from 0.1 to 0.8 g/kg [4]. The wild mushrooms we tested had similar amounts of sodium to the cultivated mushroom A. bisporus, but more than double that of P. ostreatus [48]. The results of our research confirm that mushrooms can bioconcentrate Na, with the bioconcentration coefficient being the highest in B. edulis and the lowest in L. scabrum (Table 4). The ability of mushrooms to bioconcentrate Na in fruiting bodies was previously found by Chudzyński and Falandysz [15], Malinowska et al. [40], Kowalewska et al. [43] and Falandysz et al. [53]; however, this was not confirmed by Kalač [4], Frankowska et al. [39] or Zhang et al. [59]. The content of sodium in plants ranges from 0.3 g/kg to 1.0 g/kg DM [62]. The adequate intake (AI) of Na according to the Institute of Nutrition and Food [57] ranges from 1300 to 1500 mg/day. The daily requirement for Na would be provided by about 2 kg of dried mushrooms or 20 kg of fresh mushrooms. In the human diet, the consumption of 100 g of dried mushrooms (e.g., B. edulis) covers 4-5% of the daily requirement for Na. Thus, in terms of sodium content, mushrooms can only slightly supplement the diet.

Principal Component Analysis (PCA) of Soil and Mushroom Chemical Composition and Ward's Cluster Analysis of Macronutrient Content in Soils and Mushrooms

A higher proportion of organic matter did not significantly affect the content of available and total forms of phosphorus, potassium and magnesium in the soil; on the other hand, the content of available and total forms of calcium and sodium decreased. The lack of correlation between organic matter and the content of macroelements in soils results from its varying degree of decomposition and its negative correlation with pH values. In contrast, a positive correlation was found between organic matter and soil salinity (Figure 1).
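The pairwise relationships summarized in this section rest on simple correlation coefficients. A minimal sketch follows, with hypothetical paired observations chosen only to illustrate the computation.

```python
import numpy as np

# Hypothetical paired observations across sampling points: soil organic
# matter (%) and P concentration in fruiting bodies (g/kg DM)
om = np.array([55.0, 60.0, 62.0, 65.0, 70.0, 72.0, 74.0, 78.0])
p_mushroom = np.array([9.5, 9.1, 9.0, 8.6, 8.1, 7.9, 7.6, 7.2])

# Pearson correlation coefficient underlying the relationships read off
# the PCA biplot (Figure 1)
r = np.corrcoef(om, p_mushroom)[0, 1]
print(f"r = {r:.2f}")  # strongly negative, as described for P vs. OM
```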
The content of phosphorus in the fungi was significantly negatively correlated with organic matter, and to a lesser extent negatively correlated with the amount of available and total phosphorus; in contrast, soil pH had no effect on the content of this element in mushrooms. The amount of phosphorus in mushrooms was positively correlated with the amount of calcium, and negatively correlated with the content of magnesium, potassium and sodium (Figure 1).

The fungi take up very high amounts of potassium, generally higher than other elements. Potassium uptake depended positively on the content of exchangeable and total forms of this element in the soil and on soil pH. The amount of organic matter in the soil did not affect potassium uptake by the fungi. The content of potassium was positively correlated with the content of magnesium in the fungi, and negatively correlated with the content of phosphorus and calcium (Figure 1).

The amount of magnesium in the fungi was positively correlated with the content of available and total forms of magnesium in the soil, to a lesser extent with soil pH, and negatively correlated with the amount of organic matter and salinity. In mushrooms, the magnesium content was positively correlated with the potassium content, and negatively correlated with the phosphorus and calcium content (Figure 1).

The amounts of sodium in the fungi were negatively correlated with the amount of sodium in the soil and with soil pH, and positively correlated with the amount of organic matter. Salinity had no effect on the uptake of this element. In the fungi, sodium did not show a strong relationship with magnesium or potassium, but was significantly negatively correlated with calcium and phosphorus (Figure 1).

The relationship between the concentration of elements in mushrooms and their content in soil was already indicated by Garcia et al. [17] and Nikkarinen and Mertanen [18]. However, Chudzyński and Falandysz [15] and Malinowska et al. [40] did not find such a dependence. It was previously pointed out [66,67] that soil parameters (including pH and organic matter) have only a little effect on the content of certain elements in mushrooms. In the tested fruiting bodies, a great uptake of P and Na was found, although the content of these elements in the substrate was relatively low. In sandy soils, phosphorus is present in small amounts and, together with nitrogen, is a deficient element.
In such conditions, mycorrhizal mushrooms intentionally accumulate large amounts of this element in order to exchange it for the products of photosynthesis with trees [24,50,51]. The cluster analysis (Figure 2) identified two groups of substrates under the tested mushrooms that differed in chemical composition. The first group included the substrates found under the boletuses, and the second group contained the substrates under I. badia and L. scabrum. Similar groups were found in terms of the macronutrient content in the mushrooms (1: B. edulis; 2: I. badia and L. scabrum). Our results indicate that B. edulis grew on soils with different chemical properties than I. badia and L. scabrum, which translates into their different chemical composition (Figure 3).
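The multivariate workflow described in the methods (auto-scaling, PCA, and Ward's agglomerative clustering) can be sketched as follows; the data matrix is randomly generated for illustration, and the scikit-learn/SciPy calls stand in for the Statistica procedures actually used.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA

# Hypothetical data matrix: rows = pooled samples, columns = macronutrient
# concentrations in fruiting bodies (P, K, Mg, Ca, Na; g/kg DM)
rng = np.random.default_rng(0)
X = rng.normal(loc=[8.0, 6.5, 1.0, 0.14, 0.6], scale=0.3, size=(9, 5))

# Auto-scaling (standardization) during pre-processing, as in the methods
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal component analysis
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)
print("explained variance ratios:", pca.explained_variance_ratio_)

# Agglomerative clustering by Ward's method, cut into two groups
groups = fcluster(linkage(Xs, method="ward"), t=2, criterion="maxclust")
print("cluster assignments:", groups)
```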
Conclusions

In NW Poland, the mushrooms grew on strongly acidic soils, Arenosols and Podzols, which showed properties characteristic of these soil types. The soil under B. edulis differed in macronutrient content from the soil under I. badia and L. scabrum. The substrates under the Boletus edulis we tested were richer in Ca and Mg, but poorer in Na and K, compared to the soils under B. edulis in other regions of Poland. B. edulis, I. badia and L. scabrum growing in NW Poland did not differ significantly in the content of P, K, Ca and Na; however, the levels of Mg were significantly higher in Boletus edulis growing on the Drawsko Plain. The uptake of K, Mg and Ca by the tested mushrooms was positively, and that of P and Na negatively, correlated with the content of these elements in the soil. Only the content of K and Mg in the mushrooms was positively related to soil pH. No effect of the amount of organic matter in the soil on the content of macronutrients (except sodium) in mushrooms was noticed. Our results indicate that B. edulis grows on soils with different chemical properties than I. badia and L. scabrum, which is reflected in their different chemical composition. The tested fungi bioaccumulated P, K, Mg and Na, while they did not bioaccumulate Ca. P and K were bioaccumulated in the greatest amounts, regardless of species. Although bioaccumulation of K occurred, its concentration in the fruiting bodies was lower than in other regions of Poland, which was caused by its lower content in the soils. Each of the remaining elements was usually bioaccumulated at a similar level by the fruiting bodies of the species we studied. The exception was I. badia, which bioaccumulated higher amounts of Mg compared to B. edulis and L. scabrum. The contents of P, K, Ca and Na were not significantly different between B. edulis, I. badia and L. scabrum; however, Boletus edulis was significantly more abundant in Mg. Mushrooms can enrich the human diet with some macronutrients, especially P, Mg and K: 100 g of dried mushrooms provides almost 100% of the daily requirement for P, about 40% for Mg and about 25% for K. However, the content of Ca and Na in mushrooms is very low and has no significance in the diet.
Alligators employ intermetatarsal reconfiguration to modulate plantigrade ground contact

Feet must mediate substrate interactions across an animal's entire range of limb poses used in life. Metatarsals, the 'bones of the sole', are the dominant pedal skeletal elements for most tetrapods. In plantigrade species that walk on the entirety of their sole, such as living crocodylians, intermetatarsal mobility offers the potential for a continuum of reconfiguration within the foot itself. Alligator hindlimbs are capable of postural extremes from a belly sprawl to a high walk to sharp turns – how does the foot morphology dynamically accommodate these diverse demands? We implemented a hybrid combination of marker-based and markerless X-ray reconstruction of moving morphology (XROMM) to measure 3D metatarsal kinematics in three juvenile American alligators (Alligator mississippiensis) across their locomotor and maneuvering repertoire on a motorized treadmill and a flat-surfaced arena. We found that alligators adaptively conformed their metatarsals to the ground, maintaining plantigrade contact throughout a spectrum of limb placements with non-planar feet. Deformation of the metatarsus as a whole occurred through variable abduction (twofold range of spread) and differential metatarsal pitching (45 deg arc of skew). Internally, metatarsals also underwent up to 65 deg of long-axis rotation. Such reorientation, which correlated with skew, was constrained by the overlapping arrangement of the obliquely expanded metatarsal bases. A proximally overlapping metatarsal morphology is shared by fossil archosaurs and archosaur relatives. In these extinct taxa, we suggest that intermetatarsal mobility likely played a significant role in maintaining ground contact across plantigrade postural extremes.

INTRODUCTION

An animal's foot must effectively mediate substrate interactions across the entire range of limb poses used in life. The foot is relatively underexamined in studies of postural and locomotor evolution, which tend to focus on the hip joint (Romer, 1923; Jenkins, 1971; Charig, 1972; Jenkins and Camazine, 1977; Hutchinson and Gatesy, 2000). Even when the distal limb is analyzed functionally, the foot and ankle are often simplified and treated as a 'black box': an anatomically complex set of visually obscured components that are difficult to measure or simulate. Metatarsals, the 'bones of the sole', are the dominant skeletal elements of the pes. In plantigrade animals, the heel, metatarsus and phalanges contact the ground during terrestrial locomotion (Hildebrand, 1985; Gebo, 1992; Carrier and Cunningham, 2017). In this foot posture, the entire length of the metatarsus is engaged with the substrate, and individual metatarsals are approximately parallel to the ground surface. Individual metatarsals articulate through a complex network of soft tissues (Schaeffer, 1941; Brinkman, 1980a,b; Cong et al., 1998; Schachner et al., 2011; Suzuki et al., 2011; Hattori and Tsuihiji, 2020), and have been suggested to move independently in some saurians (Brinkman, 1980a; Sullivan, 2007). Such intermetatarsal mobility offers the potential for a continuum of active and passive reconfiguration within the foot itself, a largely unexplored but potentially important contributor to the range of plantigrade limb placements available for an animal to employ. Extant crocodylians are plantigrade, and have long been recognized to locomote and maneuver using a broad range of hindlimb postures (Cott, 1960; Zug, 1974).
From a relatively more erect 'high walk', with the feet held beneath the body, to a sprawling 'low walk', with the feet held laterally to the side, this crown group is able to modulate limb pose and foot placement across a postural continuum (Gatesy, 1991; Blob and Biewener, 1999). Because of this diverse repertoire of terrestrial locomotion, living crocodylians have been of interest to paleontologists studying locomotor evolution in Archosauria (Gatesy, 1991; Reilly and Elias, 1998; Hutchinson, 2006; Sullivan, 2015). As such, crocodylian locomotion, particularly the high walk, has been well studied (Schaeffer, 1941; Brinkman, 1980a; Gatesy, 1991; Reilly and Elias, 1998; Blob and Biewener, 1999; Reilly and Blob, 2003; Willey et al., 2004; Reilly et al., 2005; Sullivan, 2007; Baier and Gatesy, 2013; Baier et al., 2018; Tsai et al., 2020). Terrestrial maneuvers (yaws, turns, backing up, striking, etc.) have received substantially less attention. These more disparate behaviors, however, are important to consider in an analysis of locomotor kinematics and functional morphology, as they involve extreme limb and foot poses not typically found in steady forward locomotion.

The crocodylian foot has four dominant, weight-bearing metatarsals, which overlap at their mediolaterally expanded proximal ends. Numerous extinct archosaurs and their relatives also share this overlapping morphology, which appears in basal, croc-line and bird-line taxa in the early Mesozoic, when these lineages underwent other morphological transitions in the ankle, knee and hip (Tarsitano, 1983; Parrish, 1987; Novas, 1989; Hutchinson, 2006; Nesbitt, 2011; Padian, 2017). Inferred reconstructions of specific taxa and transitions in hindlimb posture remain contentious (Sullivan, 2015). However, an in-depth study of metatarsal motion may elucidate the presence and magnitude of intermetatarsal reconfiguration, as well as the role of overlapping metatarsals in the hindlimb complex.

Measuring individual metatarsal kinematics is challenging. Although studies have been able to infer internal foot kinematics using optical motion capture of external markers in humans (e.g. Simon et al., 2006; Leardini et al., 2007; Jenkyn et al., 2009; McDonald et al., 2016; Welte et al., 2018; Holowka et al., 2021), artifacts of soft tissue movement (Kessler et al., 2019) and an inability to track separate bones hamper resolution. The combination of single-plane X-ray videography with single-plane light videography (Brinkman, 1980a,b; Sullivan, 2007, 2015) led to major advances in our understanding of crocodylian and lizard metatarsal kinematics. However, these studies were limited by low frame rate, an absence of markers, an inability to reconstruct long-axis rotation (LAR), and no treatment of variability in metatarsal motion. Advances in biplanar X-ray videography have enabled high-resolution 3D skeletal movement to be seen and measured inside the foot of avian (Falkingham and Gatesy, 2014; Turner et al., 2020) and human (Kessler et al., 2019; Maharaj et al., 2020) bipeds. Implanting the markers necessary for marker-based analysis is challenging in animals with mobile metatarsals, as the bones are surrounded by many small muscles, nerves and vessels that require careful surgical planning to avoid. Additionally, the biplanar X-ray field of view limits the overall animal size, as the X-rays must often penetrate through the body to capture the distal limb, requiring large markers in the relatively narrow metatarsal shafts to maintain contrast in the X-rays.
This study reports results of in vivo 3D metatarsal kinematics in the American alligator, Alligator mississippiensis (Daudin 1802), with particular emphasis on plantigrade foot poses across the locomotor and maneuvering repertoire on flat surfaces. The position and orientation of all four weight-bearing metatarsals (metatarsals I-IV) were reconstructed using hybrid X-ray reconstruction of moving morphology (XROMM), a method that combines marker-based XROMM (Brainerd et al., 2010) and scientific rotoscoping (Gatesy et al., 2010). Animated bone models allowed high-resolution measurement of skeletal kinematics using anatomically derived coordinate systems. The data obtained are used to address three fundamental questions. (1) Does the metatarsus as a whole undergo significant reconfiguration throughout the range of plantigrade postures? (2) If so, what degrees of freedom do metatarsals employ to conform to the ground? (3) What might the dynamic interactions among the metatarsals in alligators reveal about pedal evolution? The crocodylian ability to employ a spectrum of locomotory postures and maneuvers provides an opportunity to look inside the black box and test the role of intermetatarsal mobility in maintaining plantigrade ground contact. Using this new functional perspective, we examine overlapping metatarsal anatomy in the archosaurian fossil record and infer foot function in extinct members of this posturally diverse clade.

MATERIALS AND METHODS

Animals and surgery

Biplanar X-ray data were collected from three female juvenile American alligators, Alligator mississippiensis (5.15, 6.25 and 7.09 kg). These animals were initially acquired from the Rockefeller Wildlife Refuge (Grand Chenier, LA, USA) as embryos, captive raised in the alligator colony at California State University, San Bernardino (San Bernardino, CA, USA), then housed in the Brown University Center for Animal Resources and Education. All live animal experiments were conducted in accordance with protocols approved by the Institutional Animal Care and Use Committee of Brown University. For marker implantation, animals were induced and maintained on inhaled isoflurane anesthesia within a sterile surgical environment. Radiopaque markers were inserted into metatarsals on both sides of each animal. Implants consisted of either conical markers (0.8 mm diameter and 2-3 mm long) fashioned from carbide steel rods (Kambic et al., 2014) and introduced manually with a pin vise, or 1 mm solid tantalum beads (Baltec, Los Angeles, CA, USA) press-fitted into a hand-drilled 1.0 mm hole. Metatarsals I and IV were each implanted with two markers, spaced as far apart proximodistally as possible (Fig. 1A) to maximize animation accuracy. Given the anatomical complexity of the soft tissue, we limited our access to the medial (for metatarsal I) and lateral (for metatarsal IV) margins of the pes, navigating between the extensor and abductor muscle groups on either side to minimize invasive dissection. A single 0.8 mm solid tantalum bead (Baltec) was additionally inserted subcutaneously, on the side opposite the implants, near the distal condyle of metatarsals I and IV via a hypodermic needle (18 gauge, 3.5 cm long), driven down with a steel rod plunger. Additional bone and soft-tissue implants were also made in the pelvis and hindlimb for other studies. Skin incisions were closed with 4.0 Vicryl suture (Ethicon Inc.).
The alligators ate and behaved normally the day after surgery, and were allowed to recover for at least 1 week before data collection. No signs of locomotor impairment were observed.

Recording and experimental setup

Walking and maneuvering alligators were recorded at 80 frames s⁻¹ (1000 μs shutter speed, up to 1800 frames per trial) by two standard light cameras and two X-ray cameras in the W. M. Keck Foundation XROMM Facility at Brown University. This system uses X-ray image chains (Imaging Systems and Service, Painesville, OH, USA) comprising Varian model G-1086 X-ray tubes (80-90 kV, 200 mA, magnification level 0, 2 ms pulsed beam) suspended by ceiling-mounted telescoping cranes and 16 in diameter mobile-arm-mounted Dunlee model TH9447QXH590 image intensifiers (126-140 cm source-to-image distance). Image intensifiers were backed with Phantom v10 high-speed cameras (Vision Research, Wayne, NJ, USA), recording at 1760×1760 pixel resolution and 150 extreme dynamic range. Light video was captured at 1600×1200 resolution using Phantom v9.1 cameras; all cameras were synchronized to within ±4 μs. Images for camera calibration (Knörlein et al., 2016) and undistortion (Brainerd et al., 2010) were recorded before and after each session. To capture the diversity of alligator motion, the animals were recorded in two experimental settings: a 35 cm-wide×148 cm-long×48 cm-high acrylic-enclosed motorized treadmill (model DC5, JOG A DOG, Ottawa Lake, MI, USA), and a quadrilateral, acrylic-enclosed arena (wall lengths, 121.9×48.3×31.4×30.5 cm; floor area, 3345.8 cm²; height, 38.1 cm) with a 5 cm thick floor of EPS foam (Owens Corning Foamular 150). The tread and foam substrates provided firm, flat, low-slip and level surfaces. The orientations of the two X-ray beams (5 deg and 85 deg from vertical, crossing in a plane nearly transverse to the tread) were chosen specifically to reduce pedal marker occlusion and improve marker contrast from the ground in both recording environments.

After marker implantation, each animal was CT scanned. A 0.100 mm Cu filter was placed between the X-ray source and the sample to compensate for beam hardening and reduce artifacts from the metal markers. CT data were reconstructed in Amira 6.0 (Thermo Fisher Scientific) and thresholded surfaces saved as .obj format polygonal models. Models of each element were isolated in Geomagic Wrap 2017 (3D Systems, Morrisville, NC, USA) and cleaned of marker artifacts and internal structure. Geometric primitives were fitted to polygonal patches (Fig. 1B) manually selected from the proximal and distal articular surfaces of the metatarsal models in Geomagic. A cylinder was fitted to the roller-like distal metatarsal condyles and a plane to the proximal metatarsals (Fig. 1C). The centroids of these fitted primitives and the cylinder axes (Fig. 1D) were used to form anatomical coordinate systems (ACSs) for each individual metatarsal, from which more derived measures of bone-bone and bone-ground motion were calculated. Asymmetric ACSs were employed to maintain comparable rotations among right and left feet (following Kambic et al., 2014). Details on coordinate system construction are provided in the Appendix.

The long axes of metatarsals I (purple) and IV (gold) are anatomically static 3D line segments between the proximal and distal primitive centroids of the same metatarsals (Fig. 1E). Metatarsal pitch is the angle each axis makes with its projection onto the ground; we maintain rotational continuity to enable pitch to pass 90 deg.
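One possible formulation of this pitch measure, not necessarily the authors' exact implementation, is sketched below; the coordinates and the forward direction are hypothetical, and the signed horizontal component is what lets pitch pass continuously through 90 deg.

```python
import numpy as np

def metatarsal_pitch(prox, dist, forward):
    """Pitch (deg) of a metatarsal long axis relative to level ground
    (z = up): 0 deg = parallel to the ground, negative = distal end
    higher (heel-first pose), values beyond 90 deg = axis tipped past
    vertical. prox/dist are the 3D centroids of the fitted proximal and
    distal primitives; forward is a unit vector in the direction the
    foot points."""
    v = np.asarray(prox, float) - np.asarray(dist, float)  # distal -> proximal
    rise = v[2]                    # vertical component of the long axis
    run = -np.dot(v, forward)      # signed horizontal component: positive
                                   # while the proximal end lies posterior,
                                   # so pitch continues smoothly past 90 deg
    return np.degrees(np.arctan2(rise, run))

forward = np.array([1.0, 0.0, 0.0])   # hypothetical anterior direction
print(metatarsal_pitch([0.0, 0.0, 0.5], [2.0, 0.0, 0.0], forward))  # ~14 deg
```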
Proximal (solid) and distal (dashed) transverse axes are anatomically dynamic 3D line segments between the primitive centroids of metatarsals I and IV (Fig. 1F). Metatarsal spread is calculated as the width of the distal transverse axis divided by the width of the proximal transverse axis, expressed as a percentage. The midpoints (Fig. 1F,G,H, open circles) of the two transverse segments serve as endpoints of a dynamic middle axis, representing the metatarsus as a whole. This virtual middle axis is used to create two parallel planes (Fig. 1I, gray lines), onto which other previously described 3D axes are projected and measured in 2D. These proximal and distal projection planes include their respective proximal and distal midpoints and are both perpendicular to the middle axis. Skew is calculated as the 2D rotation of the projected proximal transverse axis with respect to the projected distal transverse axis, when viewed distal to proximal along the middle axis (Fig. 1G). The two metatarsal long axes and two transverse axes form the four sides of a dynamic quadrilateral (Fig. 1H). The cylinder axis of each metatarsal condyle (black cone) is likewise projected on the proximal and distal perpendicular planes (Fig. 1I). Projected cylinder axes of metatarsal I (purple cone) and metatarsal IV (gold cone) on these planes permit a simplified 2D view of the quadrangle along the middle axis (Fig. 1J). Metatarsal LAR can then be measured in 2D (Fig. 1J) as the angles of the projected condylar axes with respect to the proximal and distal transverse axes.
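A geometric sketch of the spread and skew measures defined above is given below; the centroid coordinates are hypothetical, and the sign convention for skew is illustrative rather than the authors' published implementation.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

def spread_and_skew(prox_I, prox_IV, dist_I, dist_IV):
    """Whole-metatarsus measures from the four primitive centroids.
    spread: distal transverse width / proximal transverse width, in %.
    skew: signed angle (deg) between the proximal and distal transverse
    axes after projecting both onto planes perpendicular to the middle
    axis."""
    prox_axis = np.asarray(prox_IV, float) - np.asarray(prox_I, float)
    dist_axis = np.asarray(dist_IV, float) - np.asarray(dist_I, float)
    spread = 100.0 * np.linalg.norm(dist_axis) / np.linalg.norm(prox_axis)

    # Middle axis: between the midpoints of the two transverse segments
    mid = unit((np.add(dist_I, dist_IV) / 2.0) - (np.add(prox_I, prox_IV) / 2.0))
    p = unit(prox_axis - np.dot(prox_axis, mid) * mid)  # project onto plane
    d = unit(dist_axis - np.dot(dist_axis, mid) * mid)
    skew = np.degrees(np.arctan2(np.dot(np.cross(p, d), mid), np.dot(p, d)))
    return spread, skew

# Hypothetical centroid positions (arbitrary units)
print(spread_and_skew([0, 0, 0], [0, 1, 0], [3, -0.4, 0.1], [3, 1.4, -0.1]))
```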
Animation CT-based metatarsal bone models were animated using a hybrid XROMM method, which combined marker-based XROMM (Brainerd et al., 2010) and scientific rotoscoping (markerless XROMM; Gatesy et al., 2010). As the metatarsals could not be implanted with the minimum of three markers required for the marker-based method, they were instead animated using fewer, strategically placed markers. These markers act as 3D-world coordinate anchors that drive anatomically informed animation constraints, producing a 'base animation' that can be further refined through scientific rotoscoping. Here, we followed a three-step process to animate the four dominant alligator metatarsals using hybrid XROMM. First, unfiltered 3D coordinates of the six metatarsal markers (four implanted, two injected) per foot were extracted in XMALab (Knörlein et al., 2016) and animated in Maya 2020 (Autodesk Inc., San Rafael, CA, USA). These marker coordinates controlled an initial metatarsal I and IV base animation, in which models were positioned based on the implanted markers and given a preliminary orientation based on the injected distal condyle marker. Second, camera calibrations and undistorted X-ray video exported from XMALab were used to create virtual cameras matching the relative positions and orientations of the real-world X-ray sources in Maya. By viewing undistorted video through these virtual X-ray cameras, we refined the initial orientations of the marked metatarsals by aligning each bone model to its X-ray shadow in the two views. Finally, the unmarked middle metatarsals (II and III) were given preliminary translations and rotations based on weighted averages of their animated neighbors, followed by rotoscopic refinement. The marked, outer metatarsals (I and IV) were animated using a two-point rotoscoping method. In each bone, the two implanted markers constrained the base animation such that only one degree of freedom (rotation about an axis between the markers) was left to be controlled. The injected subcutaneous marker near the distal end of each metatarsal was used to guide the initial rotation animation about this axis. As this injected marker was in mobile soft tissue, this constraint was only used to roughly orient the bone model. Rotation about the implanted marker axis was further refined by rotoscoping. As the implanted markers were on the margin of the bone (medial aspect, metatarsal I; lateral aspect, metatarsal IV), rotation about the axis between these markers made discrepancies in bone-shadow matching apparent. In cases in which an implanted metatarsal marker fell out of the bone into the surrounding tissue and only one rigidly embedded marker remained, a one-point rotoscoping method was used. In this case, the fixed marker was used for model translations, and initial rotational animation was guided by the other two markers; all rotations were further refined by rotoscoping. The unmarked metatarsals II and III were animated using a constrained application of the metatarsal I and IV animation in addition to rotoscoping. Positionally, metatarsals II and III were constrained to the proximal transverse metatarsus axis, such that the proximal ends of all four metatarsals were aligned and equally spaced. The orientations of metatarsals I and IV were likewise used to animate preliminary rotations of metatarsals II and III about their proximal centroids, weighted by proximity: metatarsal I had twice the influence of metatarsal IV on metatarsal II rotations, and metatarsal IV had twice the influence of metatarsal I on metatarsal III rotations. Translations and rotations were further refined by rotoscoping.
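The proximity-weighted preliminary rotations for the unmarked metatarsals can be expressed as quaternion interpolation between the two animated neighbors. The sketch below is a hypothetical scipy analogue of the weighting described above, not the original Maya setup (it assumes scipy >= 1.8 for Rotation.concatenate):

import numpy as np
from scipy.spatial.transform import Rotation as R, Slerp

def preliminary_inner_rotations(r_mt1, r_mt4):
    # Blend neighbor orientations along a slerp path from MT I to MT IV.
    path = Slerp([0.0, 1.0], R.concatenate([r_mt1, r_mt4]))
    r_mt2 = path(1.0 / 3.0)   # metatarsal I has twice the influence of IV
    r_mt3 = path(2.0 / 3.0)   # metatarsal IV has twice the influence of I
    return r_mt2, r_mt3

# Example: neighbors differing by a 30 deg long-axis rotation.
r1 = R.from_euler('x', 0, degrees=True)
r4 = R.from_euler('x', 30, degrees=True)
r2, r3 = preliminary_inner_rotations(r1, r4)
print(r2.as_euler('xyz', degrees=True))   # approximately [10, 0, 0]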
Determination of plantigrade ground contact and data presented in this study The first and last frames of ground contact of each stance phase were identified from video. For treadmill data, the initial contact of the subsequent stance phase was also identified to constitute a full stride. As light video often captured more of the limbs than the X-ray volume, complete stride frame ranges captured by any camera were used to normalize a partially animated stride. Duty factor was calculated as the fraction of stride duration that a hindlimb was in contact with the ground. As transitions in foot-ground contact were gradual, metatarsal pitch was used to establish a plantigrade threshold. Plantigrady was defined as any stance phase pose with either metatarsal I or IV pitched 15 deg or less. These threshold angles were consistent with transitions in soft-tissue foot contact observed from light and X-ray video. We analyzed 24 treadmill high walk strides from all three individuals (7, 14 and 3 strides, respectively), resulting in 3017 frames of data. Given marker placement, pitch and spread measurements were relatively insensitive to metatarsal LAR. Such insensitivity permitted the inclusion of these measurements from additional trials beyond those that received rotoscopic refinement, as only small changes in LAR occur when refining the base animation. All 24 high walk strides are presented in Figs 2 and 3. A subset of 13 high walk strides, along with 13 maneuvers, was refined for LAR of the metatarsals, representing the left and right feet of two individuals and resulting in 3924 frames of 6 degree-of-freedom plantigrade data. All figures show right feet; left feet were mirrored and noted. Graphs were created in R (https://www.R-project.org/), and rendered images and videos were produced in Maya 2020. Figures were compiled in Adobe Illustrator version 24.3.
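The plantigrade threshold and duty factor defined above reduce to a few lines of code; a minimal sketch (hypothetical helper names; per-frame pitch arrays in degrees and an 80 frames/s recording rate assumed):

import numpy as np

def plantigrade_mask(pitch_mt1, pitch_mt4, threshold=15.0):
    # Plantigrady: either metatarsal I or IV pitched 15 deg or less.
    return (np.asarray(pitch_mt1) <= threshold) | (np.asarray(pitch_mt4) <= threshold)

def duty_factor(contact_frames, stride_frames):
    # Fraction of stride duration with hindlimb-ground contact.
    return contact_frames / stride_frames

def plantigrade_duration(mask, fps=80.0):
    # Seconds of plantigrade contact within one stance phase.
    return np.count_nonzero(mask) / fps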
High walk and maneuvers Alligators walked on the treadmill for minutes at a time, typically holding the body far off the ground in the well-described 'semi-erect' high walk posture (Zug, 1974;Gatesy, 1991;Reilly and Elias, 1998; see Movie 1). However, the high walk was not always maintained; animals occasionally would drop hip height to a medium or low walk before either raising back up again or descending to a stop. (Fig. 2 caption fragment: (B) Metatarsal I and (C) metatarsal IV pitch colored by stance (medium purple/gold) and swing (light purple/gold). (D) Graph of metatarsal spread colored by stance (medium gray) and swing (light gray). Graphs include data from 24 high walk strides of three individuals. The representative stride (dark purple/gold/gray) is circled at the corresponding foot poses in A. A 15 deg pitch threshold (dashed lines) was used for determining plantigrade contact. Vertical bar at mean duty factor (stance/swing transition) drawn at a width of two standard deviations.) The feet were placed nearly under the hips pointing anteriorly and, once in contact with the ground, did not slide. Occasionally, the pes would briefly step on the ipsilateral manus, or a digit would get trapped curled under the pes. High walk strides chosen for analysis excluded significant within-stride hip height variability or toe curl. The 24 strictly high walk strides analyzed had a mean duty factor of 0.82±0.03. Stance-phase duration averaged 1.62±0.31 s, of which about half entailed plantigrade contact (mean, 0.81±0.25 s). Alligators executed a range of maneuvers including tight (pivot) and wide turns while moving forward, sidestep and non-sidestep yaws in place, backing up, low walking with body contact and climbing with the forelimbs on the wall. These maneuvers often occurred in combination and were blended rather than discrete. The feet occasionally slid in place against the ground and reoriented. The limbs were not always the only points of contact: often the body, tail and head touched the floor and walls of the enclosure. Plantigrade contact during maneuvers ranged widely; the longest period recorded was 8.9 s (outside foot of a sidestep yaw, Fig. 4) and the shortest was 0.8 s (inside foot of a pivot turn, Fig. 5). Metatarsal pitch during the high walk stride cycle As expected, the metatarsals largely pitched as a single unit during the high walk stride cycle (Fig. 2A; Movie 1); however, the pitch of metatarsal I (Fig. 2B, purple) was almost always higher than that of metatarsal IV (Fig. 2C, gold) during stance. Foot pose at initial contact (0-2% stride) varied from heel first (<0 deg pitch) to toe first (>15 deg pitch) before quickly flattening into full-foot contact. Metatarsal pitch remained under the plantigrade threshold of 15 deg for approximately the first half of stance. During this plantigrade phase, metatarsal IV was often parallel (0 deg pitch) to the ground, whereas metatarsal I remained above 5 deg pitch, crossing the threshold earlier in the stride cycle (mean, 36.5±7.6%) than metatarsal IV (mean, 45.7±5.5%). As the foot transitioned through digitigrade and unguligrade contact, metatarsal pitch increased, passed vertical (90 deg) and peaked (∼120 deg) at the end of stance. Metatarsal spread Metatarsals spread and collapsed cyclically throughout the high walk stride cycle (Fig. 2D), being most compressed (as low as 124% of the proximal transverse axis width) when in swing, and most spread (∼200%) when plantigrade. The metatarsals were often highly spread at the start of the stance phase, although spreading was slightly delayed in steps with heel-first contact. Metatarsal spread during the plantigrade phases of treadmill locomotion provides context for the variation found across maneuvers: the ~90% spread range during complete high walk cycles is the same (albeit shifted) as that seen during only the plantigrade phases of maneuvering. Whereas high walk spread was relatively constant when plantigrade (Fig. 3, dark gray), the metatarsals abducted and adducted much more dynamically in maneuvers (Fig. 3, light gray), reaching a maximum spread of 256%. Metatarsal skew when plantigrade Metatarsal I and metatarsal IV rarely had the same pitch, such that the proximal transverse axis was non-parallel, or 'skewed', with respect to the horizontally grounded distal transverse axis. Disparities in metatarsal pitch when plantigrade were far greater during maneuvers than during the high walk (Fig. 2B,C). Fig. 4 shows pitch and skew data and three sample video frames for a left yaw (see also Movie 2). The sequence begins after the alligator had shifted its body to the right until metatarsal I approached the midline (t1). With its right foot still planted, the left limb stepped to the left and turned the body away from the right foot (t2). The animal continued moving to the left, extending the right leg laterally as the foot became abducted relative to the body and remained fully engaged with the ground (t3). Throughout the sequence, the metatarsals of the right foot remained on the ground distally, but raised and lowered proximally while the pitch of metatarsal I or IV remained under the plantigrade threshold (Fig. 4B). At t1, differential pitching of metatarsal I (purple, 9.5 deg) and metatarsal IV (gold, 0.2 deg) led to a −16.7 deg skewing of the transverse axes (Fig. 4C-E). At t2, the metatarsals briefly shared the same pitch as the transverse axes passed through a skew of 0 deg. At t3, the pitch of metatarsal I was 2.1 deg and that of metatarsal IV was 11.5 deg, flipping the skew to 18.7 deg. Thus, alligators maintained plantigrade contact in three different metatarsal configurations. Only rarely were the metatarsals coplanar (0 deg skew). Far more common was for the base of metatarsal I to be elevated above that of metatarsal IV (negative skew, medial raised) or the reverse (positive skew, lateral raised). The diversity of maneuvers analyzed reveals that transitions in skew occurred smoothly from negative to positive (e.g. Fig. 4) and from positive to negative (e.g. Fig. 5) across the total 47 deg range of plantigrade skew measured (−28.0 to 19.1 deg). Metatarsal LAR and condylar axis orientation when plantigrade Skewing of the metatarsus was accompanied by LAR of the individual metatarsals. These relationships are shown by measuring each metatarsal's changing orientation relative to the proximal and distal transverse axes in a pivot to the right (Fig. 5A; Movie 3). The pivot sequence began with the animal having taken a right step backwards, limb extended anterolaterally (t1). With its right foot planted, the head and body turned right, and the right manus was placed to the outside of the foot (t2). The animal continued to pivot the body about the right foot, which slipped slightly as the foot reached a highly adducted pose relative to the body (t3).
As in the previous sequence (Fig. 4), the metatarsals pitched and skewed while maintaining plantigrade contact (Fig. 5B-E). As the foot transitioned from 10.7 to 0.3 to −16.5 deg skew (Fig. 5E), the metatarsals were observed to differentially long-axis rotate. Projecting the first and fourth condylar axes onto the two transverse planes revealed distinct modes of metatarsal reorientation. Metatarsal I maintained a relatively consistent orientation (−27.1, −30.9, −27.3 deg at the three sampled times) relative to the proximal transverse axis across the sequence (Fig. 5F,G, purple). By contrast, metatarsal IV underwent significant LAR (−14.2, −26.8, −40.8 deg) with respect to the proximal transverse axis (Fig. 5F,G, gold). The opposite relationships were found when the condylar axes were projected on the distal transverse plane (Fig. 5F,H). These asymmetrical patterns of metatarsal LAR with skew were found among plantigrade data (N=3924 poses) from both maneuvers (light color) and high walks (medium color) (Fig. 5I,J, Table 1). The nearly horizontal slopes of the condylar axis angle of metatarsal I (purple, slope −0.10) proximally and metatarsal IV (gold, slope −0.02) distally reveal weak relationships between LAR and skew at these two corners of the metatarsus quadrilateral. On the opposing two corners, the much steeper slopes of the condylar axis angle of metatarsal IV (gold, slope 0.98) proximally and metatarsal I (purple, slope −1.27) distally reveal strong inverse relationships between LAR and skew. Maneuvers typically spanned large ranges of skew in a single period of plantigrade contact. The three timepoints from the right-pivot maneuver in Fig. 5I,J (circled) fall near the middle of the pose clouds. Negative skews were more commonly sampled; however, some extreme positively skewed poses were analyzed (e.g. pose t3 in Fig. 4, shown as asterisks in Fig. 5I,J). Mobility of overlapping proximal metatarsals The overlapping proximal metatarsals exhibited a substantial amount of mobility throughout maneuvers involving changes in skew. As metatarsal I maintained a relatively constant relationship with the proximal transverse axis (Fig. 5I, purple), the motion of the other three weight-bearing metatarsals can be visualized relative to a stable first metatarsal (Fig. 6). The same three times sampled from the right pivot featured in Fig. 5 are shown in Fig. 6. At positive skews (e.g. t1), the proximal bases were most tightly packed as the lateral metatarsals externally rotated and brought the expanded overlapping facets into near contact. At negative skews (e.g. t3), internal LAR of metatarsals III and IV increased spacing substantially. Patterns of spacing and proximal reconfiguration seen in the featured maneuver are representative of those observed across the entire spectrum of skew.
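The LAR-skew slopes quoted above can be recovered from the pose data by ordinary least squares; a minimal sketch (hypothetical helper, assuming per-pose arrays of skew and projected condylar axis angle, both in degrees):

import numpy as np

def lar_vs_skew(skew_deg, condyle_angle_deg):
    # Least-squares slope and intercept of projected condylar axis angle
    # against skew across the plantigrade poses (cf. Table 1 slopes such
    # as ~0.98 for metatarsal IV proximally).
    slope, intercept = np.polyfit(skew_deg, condyle_angle_deg, 1)
    return slope, intercept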
DISCUSSION This study specifically targeted in vivo intermetatarsal mobility of a plantigrade quadruped. Through a hybrid XROMM analysis combining marker-based and markerless XROMM, we were able to visualize, reconstruct and measure the position and orientation of all four dominant, weight-bearing metatarsals in Alligator. The ground imposes unique constraints on the metatarsus not typically experienced by more proximal limb segments and, as such, is an important reference in all analyses performed here. The spectrum of locomotor behaviors sampled affords a quantitative view of previously unseen movements such as spreading and differential pitching, as well as of coordination between metatarsal skewing and LAR during plantigrade contact. These kinematic patterns reveal how the long bones of the foot continuously reconfigure to conform with the ground across a diversity of high walk and maneuvering postures. The modulation of the alligator metatarsus under highly varied foot placements and limb postures has potential application in bio-inspired legged robotics, where a machine's ability to span multiple terrain types and locomotor modes (Raibert, 1986;Fukuda et al., 2009) may be aided by novel foot designs beyond the typical shapes used [flat, cylindrical and spherical (Ding et al., 2013)]. Such data on Alligator also provide a dynamic context for interpreting the evolutionary history of metatarsals in the fossil record. The importance of sampling limb pose diversity in studies of locomotor kinematics and functional morphology Across all measured variables, metatarsal mobility was greater in maneuvers than in the high walk. This is not surprising, as the cyclical movements of the foot and body during the high walk remained relatively aligned with the direction of the tread. By contrast, foot orientation and placement relative to the body underwent dynamic extremes during maneuvers. In the open arena, the direction of travel was often substantially different from that of the foot, which deviated from the body more variably. A notable difference between these two locomotor modes was the degree of foot slipping on the ground. Once planted, the feet did not slip as the body passed over them during the high walk. In maneuvers, particularly yaws and turns, the feet intermittently slid against the ground. The diversity of foot poses throughout the disparate behaviors presented in this study underscores the importance of sampling noncyclical behavior in studies of locomotion. As an animal's foot must effectively mediate animal-substrate interactions across its entire repertoire, the morphology and mobility of the metatarsals must function under all possible postural extremes, not just in the locomotor mode or gait most commonly used. This disparity in foot function between steady forward locomotion and maneuvering is seen in the Alligator data here. Our plantigrade high walk treadmill data sampled only a fraction of the range of spread (Fig. 3) and skew (Fig. 5I,J) found across the sampled maneuvers. Subtle patterns of metatarsus deformation and internal reconfiguration can be discerned during the high walk (Fig. 5I,J, medium purple and gold), but are magnified and clarified by the greater range of foot pose extremes (Fig. 5I,J, light purple and gold). XROMM sampling of highly variable locomotor behaviors has revealed previously unseen patterns of hindlimb function (e.g. Kambic et al., 2015;Turner et al., 2020) critical to the interpretation of avian functional morphology. A much larger range of foot placement extremes is found in plantigrade quadrupedal taxa, and these extremes are thus critical to incorporate into the study of the locomotor system. Intermetatarsal abduction In plantigrade poses, the metatarsus deformed in two primary dimensions: intermetatarsal abduction (spread) and differential pitching (skew). With skew, the metatarsals internally reconfigured through differential LAR. Intermetatarsal abduction was measured as the spreading of the distal metatarsals with respect to a relatively constant proximal metatarsal width. Dynamic spreading occurred throughout the high walk stride cycle, reaching a maximum in the first half of stance and a minimum when in swing (Fig. 2D).
Intermetatarsal abduction was relatively constant during the plantigrade high walk, and rarely exceeded ∼200% of the proximal width. By contrast, dynamic spreading occurred during plantigrade maneuvers, with the maximum (256%) greatly exceeding the high walk plantigrade range (Fig. 3). The substantially greater spread achieved during maneuvers reveals that the potential limiting factors (bone, cartilage, muscle, ligament, integument and neural recruitment of abductor muscles) do not restrict spread to ∼200%. Thus, the apparent limit on maximum spread during the high walk is instead likely to be due to active forces, such as increased metatarsal adductor muscle activity, or to decreased load on the metatarsals during a more cyclical gait. The texture of the tread possibly provided greater frictional resistance to spreading; however, four of the maneuvers recorded here occurred on the treadmill, and all were found to have a maximum spread between 225% and 250%. Future force plate, electromyography and substrate studies designed to explore the interplay of these factors will be useful in elucidating the functional constraints on metatarsal spread during locomotion. The plantigrade foot is not flat: metatarsus skewing as a result of differential metatarsal pitch and LAR Contrary to simplistic depictions of a flat, immobile metatarsus, the four weight-bearing metatarsals have the ability to break from a planar configuration while maintaining plantigrade contact. Such deviations are often described as inversion and eversion in humans and other taxa (McDonald and Tavener, 1999;Klenerman and Wood, 2006). However, without considering the crus, we cannot quantify inversion and eversion from the metatarsals alone, and here we focus only on the relationship between the metatarsus and the ground. Differences in metatarsal I and IV pitch skewed the metatarsus in both positive (greater metatarsal IV pitch) and negative (greater metatarsal I pitch) directions. Metatarsals of the alligator rarely shared the same pitch, only briefly passing through a 'flat' co-planar configuration when transitioning between positively and negatively skewed poses (e.g. t2 in Fig. 4E, Fig. 5E). Owing to such prevalent skewing, the quadrilateral of the metatarsus was most often in a three-point contact formation: both distal metatarsals and either proximal metatarsal I or IV closely engaged with the ground. Although the foot was typically visible from only the medial or lateral side in a given stance phase, plantar soft tissue appeared to maintain ground contact on both medial and lateral sides throughout the range of skew found under the plantigrade threshold. Brinkman (1980a) identified differential pitching in the spectacled caiman as the source of what he termed 'metatarsal rotation' (skew in the present study) (Fig. 7A). Our observations from alligator support the reported caiman pattern of positive skewing (elevated proximal metatarsal IV) in more sprawling poses. However, both Brinkman (1980a) and Schaeffer (1941) interpreted the metatarsals of the spectacled caiman and alligator, respectively, as being entirely flat (thus, 0 deg skew) during the high walk. Whereas the soft tissue of the alligator metatarsus visibly appears flat on the ground, we instead found that, internally, the skeleton was consistently negatively skewed (a more elevated proximal metatarsal I) during treadmill high walk (Fig. 5I,J, medium purple and gold).
Given the highly conserved metatarsal morphology among crocodylians, we hypothesize that the metatarsus is negatively skewed during high walk throughout extant members of this clade. Additionally, future work contextualizing skew with ankle kinematics and limb posture may reveal important insights into how foot contact is achieved across a spectrum of sprawling to erect postures. Condylar axes, and thus metatarsal LARs, do not maintain a constant relationship during skewing deformations. Contrary to Brinkman's illustrations of parallel condyle axes (Fig. 7A), our alligator data show differential LAR throughout the spectrum of skew (Fig. 7B). Metatarsal I and IV condylar axes were only parallel at an approximate skew of −7 deg, and became increasingly divergent in both extreme positive and negative skew poses. As the condyles were constantly engaged with the ground throughout plantigrade poses, the distal transverse axis can be used as a proxy for horizontal. Given this, the condylar axes were almost always internally rotated (negative LAR) with respect to the ground, and the animal was almost always walking on the more medial sides of each metatarsal. Only the metatarsal I condylar axis ever reached parallel with the ground during plantigrade postures, occurring during maneuvers reaching extreme (approximately −25 deg) skew (Fig. 5J). In contrast, the metatarsal IV condylar axis was held at a near-constant angle of approximately −25 deg with respect to the ground. Because of the ground constraints on the distal metatarsus, any differential pitching of the metatarsals impacted the relative skew of the proximal transverse axis. Given this geometry, the LAR of individual metatarsals relative to the proximal transverse axis shared an inverse relationship with that relative to the distal counterpart. As shown in Fig. 5I, metatarsal I maintained a near-constant (approximately −30 deg) LAR relative to the more proximal transverse axis, and thus the tarsus. By contrast, metatarsal IV rotated over 50 deg of LAR across the same range of skew. This graded LAR of the metatarsals was reflected in the reorientation of their proximal overlapping morphology. More medial metatarsal facets are more horizontally oriented with respect to the ground at negative skews (e.g. foot beneath the body), and more vertically oriented at positive skews (e.g. during a more sprawling behavior). As the stacked metatarsal arrangement likely precludes underlying lateral metatarsals from pitching above their overlying medial neighbors, vertically reorienting the proximal ends (Fig. 4B, t3; Fig. 5B, t1) would appear to take advantage of the axis of freedom they do have: medial metatarsals dorsiflex and lateral metatarsals plantarflex. In extreme negative skews, the expanded metatarsal heads stack horizontally (Fig. 4B, t1; Fig. 5B, t3) relative to the ground and may offer stability within the foot when beneath the body. The combination of differential metatarsal pitch and differential metatarsal LAR permits extremes in foot-ground contact to be accommodated within the metatarsus. Metatarsal I LAR is relatively static to the proximal metatarsus transverse axis (Fig. 5I, purple) and metatarsal IV LAR relatively static to the distal transverse axis (Fig. 5J, gold). As such, metatarsal I primarily moves with the tarsus and rolls against the ground, whereas metatarsal IV registers with the ground and rolls against the ankle. (Fig. 7 caption: comparison between Brinkman (1980a) and this study. With metatarsal I stabilized in anterior view, (A) spectacled caiman foot poses redrawn from Brinkman (1980a), and (B) alligator foot poses at key timepoints (t1, t2, t3) in the featured maneuver sequence from Fig. 4. Overlain distal (dashed) transverse axes along with projected condyle axes (inferred from the drawing for the caiman) of metatarsal I (purple) and IV (gold) reveal differences in reconstructed metatarsal long-axis rotation between the two studies.) The strong correlation between differential LAR and skewing suggests that these movements are mechanically linked with more proximal elements within the foot. Extremes in spacing among the proximal metatarsals support the involvement of other anatomical structures (articular cartilage, joint capsules, ligaments, muscles, tendons), as the bones are not simply sliding past one another. Overlapping metatarsals in extinct reptiles The presence of overlapping metatarsals is among the first major shifts from a basal amniote 'mosaic' ankle (Schaeffer, 1941;Thulborn, 1980;Sumida, 1989;Laurin and Reisz, 1995) to an 'integrated' diapsid pes (Gauthier et al., 1988). Several significant changes in foot and ankle structure evolved throughout diapsid clades [e.g. hooked fifth metatarsal (Lee, 1997;Sullivan, 2010;Borsuk-Bialynicka, 2018), reduction of distal tarsals (Joyce et al., 2013), 'rotary' proximal tarsal joint (Sereno and Arcucci, 1990)]. Croc-line archosaurs retain this primitive metatarsal condition (Fig. 8) and all share a similar ankle structure (Parrish, 1987;Farlow et al., 2014). Our findings of metatarsal function in Alligator are most appropriately considered within this clade and provide dynamic context for interpreting fossil morphology. With such potential for continuous internal reconfiguration, the morphology of the overlapping metatarsals in croc-line archosaurs does not directly correspond with any one skeletal arrangement with respect to the ground, but rather speaks to the overall mobility of the metatarsus as a whole. Although this metatarsal mobility is likely to be an advantage for the semi-aquatic Alligator, the patterns of metatarsal movement under the constraint of ground contact provide a valuable reference for terrestrial biomechanical reconstructions of croc-line archosaurs. Given the mobility and kinematic relationships of the metatarsus revealed in Alligator, we propose several functional constraints when reconstructing or mounting fossil croc-line archosaur hindlimbs. (1) Extinct plantigrade taxa need not have held their metatarsal condyles parallel to the ground. Metatarsals were likely almost always internally rotated when engaged with relatively flat terrain. (2) Proximal metatarsal heads need not be maximally congruent: open gaps were likely present among the lateral metatarsals depending on the metatarsus skew. (3) Metatarsal alignment can go beyond planar, even if raising the base of metatarsal I above the lateral metatarsals appears non-intuitive. (4) Metatarsal spread should be greater in stance than in swing, but can expand further during maneuvers. We propose spreading the distal metatarsals to 200% of the proximal width as a starting hypothesis for plantigrade poses during the high walk of all croc-line archosaurs. Adducted, fused, appressed or compact metatarsal configurations are found to have convergently evolved in several erect/cursorial tetrapod lineages.
Such feet, with at least the proximal half of metatarsals II-IV contacting each other, appear in pterosaurs and some crocodylomorphs, as well as in most ornithischian, sauropodomorph and theropod taxa (character 382 in Nesbitt, 2011). The likely reduction of metatarsal mobility associated with such intermetatarsal contact suggests a reduction or complete loss of metatarsus-based ground conformation within these groups, and thus taxa with adducted metatarsals were not likely to be plantigrade. Indeed, pterosaurian (Padian, 1983) and dinosaurian (Holtz, 1995;Jannel et al., 2019) lineages, along with a few croc-line taxa [e.g. Poposaurus gracilis (Farlow et al., 2014;Schachner et al., 2020)], are suggested to be digitigrade and to rely on the toes to maximize ground contact. As many taxa throughout Archosauria exhibit varying degrees of adduction or fusion along the length of the metatarsus, we suggest that this character is likely to be an indicator of the degree of metatarsus conformation possible within the pes. Continued work on the mechanical relationship of intermetatarsal adduction/abduction, foot posture and limb posture may provide a novel perspective on the evolution of erect posture in Archosauria, as foot and limb posture are suggested to be correlated (Coombs, 1978;Hutchinson, 2006). Although metatarsal morphology and kinematics are one piece of the locomotor system as a whole, the results here show that intermetatarsal mobility likely plays a significant role in maintaining ground contact in plantigrade archosaur species with complex rotary ankles and greater postural extremes. Considering the variability and unevenness of natural terrain, the ability of the foot to deform is likely an advantage for conforming to and pushing off from a variety of surfaces. It is likely that this ability to deform shares a significant relationship with foot placement and limb posture, and may be a key functional trait contributing to the locomotor diversity recorded in the archosaurian fossil record. APPENDIX Details on coordinate system construction This appendix details the construction of the dynamically calculated anatomical coordinate systems (ACSs) (Fig. A1) in Autodesk Maya 2020. Language specific to this software is used herein. All coordinate systems were constructed using the centroids of the fitted geometric primitives and the cylinder axes (Fig. A1A), as outlined in the Materials and Methods (Fig. 1B-D). All coordinate systems were designed to isolate and measure specific movements in 2D. Asymmetries in right and left systems were employed to maintain comparable rotations among all feet (following Kambic et al., 2014). Right-foot coordinate system construction is detailed here; differences in left-foot coordinate system construction are noted in each respective section. Pitch: MTgroundACS (Y positive down). The z-axes of both MTpitchACS and MTgroundACS remained parallel to the ground surface (resulting in Z positive medially) and were perpendicular to the long axis of the metatarsal, permitting a 2D dynamic measurement of pitch, regardless of foot orientation in space. Identical construction was followed for the left foot, resulting in X positive distally, Y positive ventrally and Z positive laterally. Skew (Fig. A1C): The midpoint of the metatarsal I and IV plane centroids formed the origin of proximalProjectedTransverseACS.
proximalProjectedTransverseACS_X was aim constrained at the midpoint of the metatarsal I and IV cylinder centroids (X positive distally), with a Z up vector to the metatarsal IV plane centroid (Z positive laterally). The midpoint of the metatarsal I and IV cylinder centroids formed the origin of distalProjectedTransverseACS. distalProjectedTransverseACS_X was aim constrained at the midpoint of the metatarsal I and IV plane centroids (X positive distally), with a Z up vector to the metatarsal IV cylinder centroid (Z positive laterally). The x-axes of both proximalProjectedTransverseACS and distalProjectedTransverseACS represent the midline of the metatarsus (see also Fig. A1D,E), and permit a dynamic 2D measurement of skew, regardless of metatarsal configuration. Construction for left feet differed only in the sign of both aim constraints and up vectors, such that the resulting axes were X positive proximally, Y positive dorsally and Z positive laterally. Projected condylar axes (Fig. A1D-F): Four projected condylar axes (a proximal and a distal axis for each of metatarsals I and IV) were constructed in two steps. First, condylar axis aims were created; then, projected condylar axes were constructed. Condylar axis aim objects were created for metatarsals I and IV. MT1condylarAim was point constrained to the metatarsal I cylinder centroid and orient constrained to the metatarsal I cylinder axis, such that MT1condylarAim_Z was in line with the cylinder axis. Both point and orient constraints were then deleted. MT1condylarAim was parent constrained to the cylinder axis, then translated along the z-axis several centimeters off to the lateral side of the foot. The same was repeated for metatarsal IV. A pair of proximal and distal projected condylar ACSs was created for each metatarsal. We detail the construction of the metatarsal I distal projected condylar ACS here. MT1distalProjectedCondylarACS was point and orient constrained to distalProjectedTransverseACS. These constraints were then deleted. MT1distalProjectedCondylarACS was parented under distalProjectedTransverseACS. MT1distalProjectedCondylarACS translate_X, translate_Y and rotate_Y, rotate_Z were locked. MT1distalProjectedCondylarACS translate_Z was point constrained to the metatarsal I cylinder centroid. MT1distalProjectedCondylarACS_Z was aim constrained at MT1condylarAim (Z positive laterally), constraining the axes to rotate only about X, with no up vector. All four projected condylar axes were constructed in the same manner, using the respective centroids and metatarsal condylar axis aim objects. The y- and z-axes of all projected condylar axes remained within the planes of the projected transverse axes. Rotation about the x-axis permitted a dynamic 2D measurement of condylar axis angle, regardless of metatarsal configuration. Identical construction was followed for the left foot, resulting in axes X positive proximally, Y positive dorsally and Z positive laterally.
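Outside Maya, the same aim-plus-up-vector construction can be reproduced with plain linear algebra. The sketch below is a hypothetical numpy analogue of the constraint setup described above, not the original Maya scene: it builds a right-handed ACS whose X axis aims at a target centroid while Z is steered toward an up object, as an aim constraint with an object up vector would do.

import numpy as np

def aim_acs(origin, aim_target, up_target):
    # X aims at the target; Z lies toward the up object; Y completes
    # the right-handed frame. Returns a 4x4 transform whose columns
    # are X, Y, Z and the origin.
    x = aim_target - origin
    x = x / np.linalg.norm(x)
    z = up_target - origin
    z = z - (z @ x) * x            # make Z perpendicular to X
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)             # y = z x x gives a right-handed triad
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2], m[:3, 3] = x, y, z, origin
    return m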
v3-fos-license
2020-07-23T09:09:08.450Z
2020-07-14T00:00:00.000
220922746
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-8994/12/7/1167/pdf", "pdf_hash": "42abe395c457f92fe7a33f696eadb13cd4bf2c41", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42709", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "sha1": "9ef1582b0f75e053404f9d10e3d7eac14a7aade7", "year": 2020 }
pes2o/s2orc
28 GHz Single-Chip Transmit RF Front-End MMIC for Multichannel 5G Wireless Communications : Millimeter-wave wireless networks of the new fifth generation (5G) have become a primary focus in the development of the information and telecommunication industries. It is expected that 5G wireless networks will increase the data rates and reduce network latencies by an order of magnitude, which will create new telecommunication services for all sectors of the economy. New electronic components such as 28 GHz (27.5 to 28.35 GHz) single-chip transmit radio frequency (RF) front-end monolithic microwave integrated circuits (MMICs) will be required to meet the performance and power consumption targets of millimeter-wave (mm-wave) 5G communication systems. This component includes a 6-bit digital phase shifter, a driver amplifier and a power amplifier. The output power P3dB and power-added efficiency (PAE) are 29 dBm and 19.2% at 28 GHz. The phase shifter root-mean-square (RMS) phase and gain errors are 3° and 0.6 dB at 28 GHz. The chip dimensions are 4.35 × 4.40 mm. Introduction The population of our planet is gradually increasing and has already exceeded 7 billion people. The information needs of the population are growing; at the same time, new technologies such as the Internet of Things (IoT), intelligent transport systems (such as vehicular ad hoc networks (VANETs)) and virtual and augmented reality are actively developing [1]. The growth of data traffic and device connections will require data rates to increase by more than an order of magnitude [2]. Responding to these demands, the International Telecommunication Union (ITU) decided to develop a new generation of 5G wireless communications with high transmission speeds (>10 Gbit/s) and ultralow response times (<1 ms). However, an increase in the transmission data rate is mainly possible through the expansion of the frequency band used. The requirements for 5G networks can only be implemented in the millimeter-wave frequency range [3]. The advantages of millimeter waves (mm-waves) when used for radio communications have been well known for many years [4]. The advantageous features of millimeter-wave radio waves are responsible for their widespread use in radar systems, remote sensing, navigation and communications. Interest in millimeter waves has increased due to the need to expand the radio frequency spectrum for commercial applications. Compared with previous generations, mm-wave 5G wireless communication systems have higher data rates and data transfer density, millisecond latency and enhanced spectral energy. The International Telecommunication Union (ITU) has specified a number of mm-wave frequency bands for 5G wireless networks [5], but the 27.5-28.35 GHz band was proposed for wide usage in many countries and was licensed by the Federal Communications Commission (FCC) [6]. Power consumption is one of the most significant technical barriers for practical mm-wave 5G wireless communications because multiple devices connected at the same time increase the power consumption of the base stations and data centers. Photonic technologies can be utilized in order to solve this problem. In data centers, chip-to-chip optical or electro-optical interconnects enable an increase in the bandwidth and capacity of those systems as well as a reduction in power consumption [7,8]. In a conventional 4G communication system, one or more passive antennas are used.
Wireless 5G networks are based on active massive-element antenna systems that improve the capacity, efficiency and coverage of RF streams [9] (Figure 1). To improve the bandwidth and data rate, the multiple-input multiple-output (MIMO) transceivers based on phased beamforming arrays are used [10-13]. Usually, active antenna systems consist of massive antenna arrays with integrated MIMO transceiver RF front-ends. Figure 2 shows the design of a multichannel transmit RF front-end (RFFE) module for 5G MIMO transceivers. There is a symmetrical antenna array where each channel consists of a phase shifter, a driver amplifier and a power amplifier. The input splitter divides the RF signal into a number of channels. The symmetry of the RFFE module architecture allows for a balance between input and output losses and power consumption.
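To illustrate why such beamforming arrays depend on per-channel phase control, the following Python sketch computes the array factor of a uniform linear array whose element phases are quantized to the phase-shifter step. This is illustrative only: the element count, half-wavelength spacing and linear geometry are assumptions, not parameters of the module in Figure 2.

import numpy as np

def array_factor_db(n_elem, steer_deg, phase_step_deg=5.625, spacing_wl=0.5):
    # Observation angles and electrical spacing.
    theta = np.radians(np.linspace(-90, 90, 721))
    k_d = 2 * np.pi * spacing_wl
    # Ideal progressive phases for the steering angle, then quantization
    # to the shifter step (5.625 deg for a 6-bit phase shifter).
    ideal = -k_d * np.arange(n_elem) * np.sin(np.radians(steer_deg))
    step = np.radians(phase_step_deg)
    quant = np.round(ideal / step) * step
    # Normalized array factor in dB over all observation angles.
    af = np.exp(1j * (k_d * np.outer(np.sin(theta), np.arange(n_elem)) + quant))
    return np.degrees(theta), 20 * np.log10(np.abs(af.sum(axis=1)) / n_elem)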
The performance and power consumption of millimeter-wave 5G communication systems are mainly dependent on the electrical parameters of the RF electronic components based on semiconductor monolithic integrated circuits, which are the key elements for mm-wave RF transmit/receive modules. The development of such elements is a difficult challenge. Design Approach A previous study presented the results of the development of a 28 GHz phase adjustable power amplifier monolithic microwave integrated circuit (MMIC) for 5G front-ends [14]. It consisted of a 4-bit digitally controlled phase shifter and a power amplifier. The MMIC was designed by Plextek RFI and fabricated by Win Semiconductors using a 0.15 µm GaAs pHEMT process. The main disadvantage of this MMIC is a low phase shift resolution of 22.5°, which results in reduced beamforming opportunities and low antenna gain. In this study, the design approach for a 28 GHz single-chip transmit RFFE MMIC with high phase shift resolution (5.625°) for multichannel 5G wireless communications is presented, along with its electrical performance. The integrated circuit (IC) consists of a 6-bit digital phase shifter, a driver amplifier and a power amplifier and was designed using a 0.25 µm GaAs pHEMT process of JSC Micran (Tomsk, Russian Federation) for low-cost volume production. Figure 3 shows a photo of a fabricated one-channel single-chip transmit RF front-end MMIC.
The chip dimensions are 4.35 mm × 4.40 mm. The single-chip IC consists of a 6-bit phase shifter with a transistor-transistor logic (TTL) driver, a driver amplifier and a power amplifier (PA) and was designed using a 0.25 µm GaAs pHEMT process for low-cost volume production. The phase shift level of the single-chip transmit RFFE MMIC is controlled by an integrated TTL driver. The 6-bit digital driver can precisely adjust the phase from 0 to 360° with a step of 5.625°. Table A1 in Appendix A shows the phase shifter state table. There are two control levels (0 and 1) for all bits. Low (0) and high (1) TTL control levels are 0 and +5 V. Applying different control levels to all 6 bits of the digital phase shifter changes the phase shift across the full 360° range. The high- and low-pass (HP-LP) RF filters were used to design the 6-bit digital phase shifter [15]. The applied TTL control voltages across all bits allowed the switching of states between HP-LP filters to form the required output phase shift level. This solution exhibits good return losses and good phase shift performance in terms of RMS phase and gain errors. The electrical schemes and layout plots of the 180, 90, 45, 22.5, 11.25 and 5.625° bits are presented in Figures 4-9. For the 180 and 90° bits, a circuit with switchable filters using an inactive arm in the filter was selected. The classical solution of using switchable filters does not provide a sufficient level of decoupling of the active and inactive arms, which leads to an increase in the initial bit losses. The 45 and 22.5° bits were designed according to the scheme with switched elements in the filter. This is the only possible solution for the 27.5-28.35 GHz band. For the 11.25 and 5.625° bits, a circuit with a serial connection of filters was designed because of the very small required inductance. In this circuit, the inductance is shunted, because realizing the required phase shift directly would demand a very long microstrip line.
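The bit arithmetic implied by this 6-bit control scheme can be sketched as follows. This is a hypothetical helper for illustration; the actual bit-to-control-pad assignment is given by the state table in Table A1.

def phase_to_bits(phase_deg):
    # Map a requested phase shift to the six TTL control bits
    # (bit weights 180, 90, 45, 22.5, 11.25 and 5.625 deg) and
    # return the realized (quantized) phase.
    weights = [180.0, 90.0, 45.0, 22.5, 11.25, 5.625]
    code = int(round((phase_deg % 360.0) / 5.625)) % 64
    bits = {w: (code >> (5 - i)) & 1 for i, w in enumerate(weights)}
    return bits, code * 5.625

print(phase_to_bits(100.0))  # nearest realizable state is 101.25 deg (code 18)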
The choice of the order of the phase shifter bits was carried out strictly on the principle of minimal influence of the reflection coefficient between the bits. To do this, the sections with the lowest similarity of the reflection coefficient are placed between the sections with the highest similarity of the reflection coefficient. The most stable bits are the 180 and 90° bits, and the least stable bits are the 11.25 and 5.625° bits. Therefore, the optimal bit ordering is selected as follows: 22.5 to 11.25 to 90 to 5.625 to 180 to 45°. This ordering allows the RMS phase shift error to be reduced. The power amplifier (PA) of the single-chip transmit front-end MMIC consists of three power stages and matching networks (Figure 10a). The base active element of the PA is a GaAs pHEMT with a 0.25 µm gate length. To achieve a balance between output power capability and cutoff frequency, the transistor gate width is 100 µm (Figure 11). An increase in gate width can improve the output power of the transistor, but the higher parasitic capacitance will reduce the cutoff frequency.
The peripheries of the transistors are 1600 µm (16 × 100 µm) for the first stage (Q1), 3200 µm (32 × 100 µm) for the second stage (Q2) and 3200 µm (32 × 100 µm) for the third stage (Q3). The supply voltage for all power stages is Vd = 6 V. The matching networks of the power amplifier consist of thin-film NiCr-based resistors, metal-insulator-metal (MIM) capacitors based on silicon nitride, Au-based transmission lines and Lange quadrature couplers (LQCs). The proposed LQCs (Figure 10b) are used in the input and output matching networks to improve the bandwidth and achieve a compact PA size. The symmetrical design of the PA layout and the electromagnetic simulation were completed in AWR Microwave Office [16]. Electrical Performance Figure 12 shows the dependences of the simulated phase shift performance on frequency for the 64 states of the one-chip RF front-end MMIC. Figure 13 shows the dependence of the RMS phase shift error (PSE) of the single-chip RF front-end MMIC on frequency in the range of 26 to 30 GHz. The measured RMS PSE was calculated according to Equation (1): $\mathrm{RMS\,PSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\varepsilon_{\varphi,i}-\langle\varepsilon_{\varphi}\rangle\right)^{2}}$, (1) where N is the number of phase states, $\varepsilon_{\varphi,i}$ is the measured phase shift in degrees and $\langle\varepsilon_{\varphi}\rangle$ is the average phase shift over the states of the 6-bit phase shifter. According to the results presented in Figure 13, the RMS phase shift error is about 3° across the entire 27.5-28.35 GHz range.
Figure 14 shows the dependence of the small signal gain in all phase states of the single-chip RF front-end MMIC on frequency in the 26 to 30 GHz range. The nominal small signal gain is over 20 dB in the frequency range of 27.5 to 28.35 GHz. The maximum gain of 22.7 dB is achieved at 29 GHz. Figure 15 shows the dependence of the RMS gain error (GE) of the single-chip RF front-end MMIC on frequency. The measured RMS GE was calculated according to Equation (2): $\mathrm{RMS\,GE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(|S_{21}|_{i}-\langle|S_{21}|\rangle\right)^{2}}$, (2) where N is the number of phase states, $|S_{21}|_{i}$ is the measured gain in dB and $\langle|S_{21}|\rangle$ is the average gain over the states of the 6-bit phase shifter. According to the results presented in Figure 15, the RMS GE is 0.6 dB across the entire 27.5-28.35 GHz range. Figure 16 shows the dependence of the output power capability at 3 dB gain compression (P3dB) for the 0° phase state of the single-chip RF front-end MMIC on frequency. The output power P3dB is 29 dBm across the full 27.5 to 28.35 GHz band. The maximum P3dB of about 29.5 dBm is achieved at 30 GHz.
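Both error metrics are the RMS deviation from the average across the 64 states, per Equations (1) and (2) as reconstructed above. A minimal check in Python (hypothetical helper; measured per-state data assumed available):

import numpy as np

def rms_error(values):
    # RMS deviation from the mean across the 64 phase states: pass phase
    # shifts in degrees for RMS PSE, or |S21| in dB for RMS GE.
    v = np.asarray(values, dtype=float)
    return np.sqrt(np.mean((v - v.mean()) ** 2))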
Table 1 shows the electrical performance of the developed single-chip transmit RFFE MMIC in comparison with state-of-the-art single-chip transmit RFFE MMICs [14]. The fabricated MMIC has comparable performance and a higher phase shift resolution. The improved output power and PAE of the RF front-end presented in [14] can be attributed to the use of the 0.15 µm GaAs pHEMT process of Win Semiconductors, whose shorter gate length results in better high-frequency performance.

Conclusions

Millimeter-wave wireless networks have attracted great interest for new-generation (5G) communication systems. The 27.5 to 28.35 GHz band was licensed for 5G wireless networks by the FCC. The performance and power consumption of millimeter-wave 5G communication systems mainly depend on the electrical parameters of the RF components used inside RF transmit/receive modules. The design approach for a 28 GHz single-chip transmit RF front-end MMIC is presented in this paper, along with its electrical performance. The IC includes a 6-bit digital phase shifter, a driver amplifier and a power amplifier. It was designed using a 0.25 µm GaAs pHEMT process suitable for low-cost volume production. The output power P3dB and PAE are 29 dBm and 19.2% at 28 GHz. The phase shifter RMS phase and gain errors are 3° and 0.6 dB at 28 GHz. The fabricated single-chip RF front-end MMIC can be used in multichannel transmit 5G front-end modules based on phased antenna arrays.

Funding: The work was carried out with financial support from the Ministry of Science and Higher Education of the Russian Federation (Project name: Theoretical and experimental studies of ultra-wideband optoelectronic devices of fiber-optic information systems and radiophotonics based on photonic integrated circuits of own development, Agreement No. 075-03-2020-237/1 from 05.03.2020, project number: FEWM-2020-0040) and Project No. AAAA-A19-119110690036-9. Experimental results were obtained by the team of the Integrated Optics and Radiophotonics Laboratory of the Tomsk State University of Control Systems and Radioelectronics using equipment of the "Impulse" center of collective usage (registration number 200568).
v3-fos-license
2023-08-30T15:19:23.914Z
2023-08-25T00:00:00.000
261559578
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2071-1050/15/17/12902/pdf?version=1692975253", "pdf_hash": "b29bc1a7f7969f0cfbbf1454fa3b348bcf5ee684", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42710", "s2fieldsofstudy": [ "Engineering" ], "sha1": "f905e81a4fc97dfe6704b3dc7e11c0d9a01d074e", "year": 2023 }
pes2o/s2orc
A Simplified Finite Element Model for Seismic Analysis of a Single-Layer Reticulated Dome Coupled with an Air Duct System

Damage to nonstructural components, such as air ducts, which are more fragile than single-layer reticulated domes, has a significant impact on the sustainability of a building's functionality during earthquakes. To study the coupling effect and failure mode of a single-layer reticulated dome with an air duct system, simplified finite element models of air ducts and flange bolt joints were established and validated against a solid element model. Moreover, simplified finite element models of support hangers were also built and validated against an existing experiment. Three support hanger layout schemes were studied to analyze the dynamic characteristics and seismic responses of a single-layer reticulated dome with an air duct system under earthquakes of different intensities. The results showed that the simplified finite element model can effectively simulate the coupling effect and failure mode of the single-layer reticulated dome with an air duct system. The coupling effect of the air duct system reduces the natural vibration frequency of the dome and increases the number of dome members damaged by strong earthquakes. The rate of falling air ducts is highest when all hangers are seismic support hangers, compared to the two other support hanger layout schemes.

Introduction

Nonstructural components are ancillary components in a building and can be divided into water pipes, air ducts, electrical equipment, etc. In the past, research on long-span buildings paid more attention to the seismic performance of the main structure. However, seismic damage data from around the world show that the damage to the main structure of long-span buildings by earthquakes is often relatively light, whereas the damage to the nonstructural components is more serious. For example, in the 2010 Chilean earthquake with a magnitude of 8.3, there was no significant damage to the main structure of Santiago Airport, yet its internal air conditioning equipment and firefighting pipes were severely damaged, causing the airport to stop operations and causing tens of millions of dollars in economic losses [1]. The seismic dynamic response of nonstructural components depends not only on the ground motion characteristics but also on the dynamic characteristics of the main structure. Long-span spatial structures have many degrees of freedom, densely distributed natural vibration frequencies, and complex, coupled vibration modes. Moreover, vertical vibration is often the dominant first mode. Considering the resonance and coupling effects between the main structure and nonstructural components, the dynamic response and failure of nonstructural components can be significantly amplified, resulting in severe earthquake losses and a reduction in overall sustainability. Many practical engineering cases show that single-layer reticulated dome structures often contain numerous air ducts to increase air circulation in the dome's interior, while other pipelines, such as water supply and drainage pipes, are arranged less often. As shown in Figure 1, air ducts are connected by flange bolt joints. They are located under the radial rods of a single-layer reticulated dome and are connected to the dome by support hangers.
An air duct system is a typical large nonstructural component with important functions, including ventilation, air supply, exhaust, and dust removal. Similar to other pipeline systems, there are three main types of damage to an air duct system in an earthquake: joint damage between pipelines, connection damage between pipelines and structures, and coupling damage between pipelines and other nonstructural components [2]. Regarding air duct joints, some scholars have studied the mechanical characteristics and seismic performance of flange bolted joints. Kim et al. [3] studied the mechanical behavior of bolt flange joints under tensile load and concluded that the maximum increase in bolt force was 40% of the preload. According to Wu et al. [4], the level of bolt preload and the friction coefficient of the interface have little influence on the axial tensile stiffness of the bolt flange, and the bolt preload mainly improves the resistance to flange separation. Couchaux et al. [5] derived expressions for the initial rotational stiffness and the bending moment of a circular pipe connected by bolts and flanges under combined bending moment and axial action. Luan et al. [6] studied the forces in a circular pipe connected with flange bolts under a bending moment and proposed a simplified model composed of beam elements and spring elements. Chen et al. [7] designed a new flange bolt joint; testing showed that the joint was semi-rigid, capable of transferring part of the bending moment, and offered good stiffness, bearing capacity, and energy dissipation capacity. Wang et al. [8] studied the mechanical properties of bolt flange joints under tensile, bending moment, and torsional loads, showing that more bolts, larger bolt diameters, and greater pre-tightening forces all increase the bending stiffness of the flange cylinder. However, the mechanical properties and hysteretic characteristics of flange bolt joints in air ducts have been studied less.

In another aspect, the connection between the air duct and the main structure, many scholars have studied the seismic support and suspension of air ducts. Goodwin et al. [9] conducted shaking table tests on hospital pipeline systems with or without seismic support hangers to determine their deformation capacity and failure mode. The results showed that seismic support hangers reduced the displacement response of pipeline systems but could not reduce the acceleration response.
Hoehler et al. [10] studied the seismic performance of seismic support hangers under different seismic excitations and found that the load on load-bearing support hangers under earthquake action was much greater than the anchorage force of the seismic support hangers. To determine the seismic performance of pipeline systems with different forms of support, Tian [11] conducted dynamic tests on three groups of pipeline systems with different forms of support; the results showed that specimens with seismic support hangers suffered only minor damage, whereas the suspension screws, ceiling, spray joints, and pipeline joints of the pipeline system without seismic supports and hangers were damaged. Wood et al. [12] studied the force-displacement hysteretic relationship of two types of commonly used longitudinal seismic support hangers and found that the behavior of the hangers under monotonic and cyclic loading had a great influence on their mechanical properties. Zhu et al. [13] proposed a coupled structure-seismic support hanger model and found that, compared to the equivalent lateral force method, the time-history analysis method can more accurately calculate the seismic action on seismic support hangers in high-rise building structures. Qu et al. [14] analyzed the seismic response of a coupled system of a frame structure and large air duct equipment under rare earthquakes; the results showed that, in both linear elastic and elastoplastic analyses, the structure-equipment interaction has a significant impact on the maximum response of the main structure. In summary, current research focuses on water pipelines and their coupling effects with the main structure under earthquakes; air ducts and their seismic coupling effects with other large-span spatial structures, such as single-layer reticulated domes, have been studied less frequently.

As mentioned previously, this paper focuses on the coupling effect and failure mode of a single-layer reticulated dome coupled with an air duct system, since the coupling effect of the air ducts is complex. In this study, the finite element (FE) software ABAQUS [15] was used to establish simplified FE models of the dome structure, air ducts, and support hangers with B31 elements, and a model of the flange bolt joint with connector elements. The simplified FE models of the flange bolt joints were validated against a solid element model, and the simplified FE models of the support hangers were validated against an existing experiment. Then, the seismic responses and failure modes of the coupled FE models for three different support hanger layout cases were analyzed.

General

The FE program ABAQUS [15] was used to establish a nonlinear FE model to simulate the behavior of a single-layer reticulated dome coupled with an air duct system. Flange bolts are mainly used to connect air ducts. The main earthquake damage to the air ducts occurs when the flange bolts crack under the complicated action of large shear and tension forces, leading to the air ducts falling. To reduce the calculation time of the finite element model, the BEAM 31 element with a box beam section was used to simulate the air ducts in ABAQUS [15], and the connector element was used to simulate the connection of the flange bolt joints, as shown in Figure 2.
The connector element types selected were Cartesian and Align; that is, the connector has translational degrees of freedom in the U1, U2, and U3 directions and restrains the rotational degrees of freedom in the UR1, UR2, and UR3 directions. The connector element can define a complex mechanical relationship between two nodes. In this paper, the friction, damping, elasticity, and failure of the connector element are defined according to the hysteretic performance of the flange bolt joint between the air ducts.
Simplified FE Model of Flange Bolt Joints

Using the solid element C3D8R to simulate the mechanical behavior of bolted joints is a technique that has been adopted by many scholars [16-18]. C3D8R was used to establish a refined finite element model for the numerical analysis of the hysteresis properties of the air duct flange bolt joint in ABAQUS, as shown in Figure 3. The refined finite element model mainly includes the two sides of the air ducts, the flange, and the bolts, where the unit length of the air duct is 1 m, the section height of the air duct is 400 mm, the width of the air duct is 800 mm, and the duct thickness is 4 mm. The flange thickness is 4 mm, and the bolts are arranged as two bolts on each of the left and right sides and three bolts on each of the upper and lower sides. The bolts have a diameter of 8 mm and a length of 33 mm. The numbering and arrangement of the bolts are shown in Figure 3b.

The coupling point is located at the center of the flange face on one side of the duct. A vertical shear force F is applied at the coupling point to analyze the hysteretic curve and skeleton curve of the bolt group under seismic shear. The two ends of the model are coupled to central reference points, and the boundary conditions of the coupling points are set as symmetric constraints to simulate the limiting effect of the seismic support hangers on the displacement of the duct in each direction. Contact in ABAQUS is defined on the surfaces between the bolts and the flange. There are two analysis steps in the static analysis: the first step applies gravity and the bolt preload, and the second step applies the cyclic shear load F. To improve the computational efficiency and accuracy of the simulation, the mesh of the flange and bolt contact regions was finely divided, and the mesh density of the duct was gradually reduced from the flange end to the other end.

The air duct is made of Q235 steel, with an elastic modulus of 2.1 × 10⁵ MPa, a Poisson ratio of 0.3, and a mass density of 7850 kg/m³ [19]. An elastic-plastic metal constitutive model is adopted for the duct material, and the specific material parameters are shown in Table 1. The bolts are made of 40Cr alloy steel with an elastic modulus of 2.06 × 10⁵ MPa, a Poisson ratio of 0.29, and a mass density of 7820 kg/m³. An elastic-plastic metal constitutive model was also adopted for the bolts, and ductile damage and damage evolution were defined. The damage evolution takes the form of displacement, and the failure displacement is 1 mm. The specific elastic-plastic parameters of the bolts are shown in Table 2.

To test the precision of the constitutive model for the bolt material, numerical simulations of tensile and shear failure of individual bolts were carried out in ABAQUS. The numerical simulation was carried out with reference to the experiments of Guzas et al. [22].
Figure 4 shows the fracture pattern and the force-displacement curve for the tensile failure of a single bolt. It can be seen from Figure 4 that the failure location is concentrated in the bolt screw, which exhibits significant necking during the tensile process. The maximum tensile capacity of a single bolt is 55 kN, and the bolt screw is eventually broken by the tensile force. In general, the numerical simulation results were close to the experimental results. Therefore, the selected bolt material constitutive model can effectively simulate the peak value and the descending branch of the bolt tensile bearing capacity.

In this study, a numerical simulation of the bolt double shear test was carried out with reference to Guo et al. [23]. The shear failure mode and the force-displacement curve of a single bolt under the action of two steel flanges are shown in Figure 5. It can be seen from Figure 5 that the failure location was in the single shear plane of the bolt screw and the maximum shear bearing capacity of the single bolt was 30 kN. Compared to the experimental result by Guo et al., the curve obtained in this paper is similar to the experimental one, which also confirms the accuracy of the constitutive model of the bolt material. The numerical simulation results also showed that the end plates were partially damaged when the holes in the end plates were compressed, which indicates that the damage to the flange plates around the bolts must be considered when a bolt is completely cut.
The numerical simulation of bolt failure above verifies the accuracy of the bolt material model. Therefore, quasi-static numerical simulation was further used to carry out cyclic loading of the refined finite element model of the rectangular air duct flange joint. The loading curve is shown in Figure 6a: the time step is 40 s and the cyclic displacement gradually increases from 0 to 10 mm. In addition, a preload of 25 kN was applied to each bolt to ensure the bolts clamp the air duct. The specific loading scheme and hysteresis curve are shown in Figure 6. It can be seen from the hysteresis curve in Figure 6b that the hysteresis curve of the flange bolt joint in the air duct has an inverted S-shape, which means it has a strong pinching effect and a long slip segment. The results show that the joint is affected by considerable bolt slippage under vertical shear force, and its ductility and energy dissipation capacity are weak.
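As a rough illustration of the quasi-static loading protocol described above, the Python sketch below builds a displacement history whose cycle amplitude grows linearly from 0 to 10 mm over the 40 s time step; the number of cycles and the sinusoidal shape are assumptions, since the text only specifies the duration and the peak amplitude.

```python
import numpy as np

def cyclic_displacement(total_time=40.0, peak=10.0, n_cycles=10, n_pts=4001):
    """Displacement history (mm) whose cycle amplitude grows linearly
    from 0 to `peak` over `total_time` seconds. The cycle count and
    waveform shape are illustrative assumptions."""
    t = np.linspace(0.0, total_time, n_pts)
    envelope = peak * t / total_time          # linearly growing amplitude
    return t, envelope * np.sin(2.0 * np.pi * n_cycles * t / total_time)

t, u = cyclic_displacement()
print(f"max |u| = {np.abs(u).max():.2f} mm")  # approaches the 10 mm peak
```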
Through the hysteretic simulation in ABAQUS, the final failure mode of the flange bolt joint under cyclic shear force is shown in Figure 7, where the loading displacement is 7 mm. According to the analysis in Figure 7, the stress in the air duct and the flange is lower, and the stress in the bolts is higher; the damage occurred mainly in the bolts. The bolts on the left and right sides of the flange had large shear and tensile deformations, and failure occurred under the combined action of tension and shear. The bolts on the upper and lower sides of the flange exhibited a certain degree of tensile deformation in the hysteresis simulation, without tensile or shear failure, indicating that their stress was relatively lower than that of the bolts on both sides.

According to the hysteresis simulation results for the air duct joint, it is feasible to simplify the air duct because the force in the air duct is much lower than in the bolts and no damage occurs in the ducts. At the same time, because of the failure of the bolts, the connector element was used to simulate the tensile-shear failure of the air duct flange bolt joint. The connector element is a versatile connection element that can simulate a wide range of connection behaviors in ABAQUS, including elasticity, plasticity, and damage. The spring element is another connection element in ABAQUS; however, the connector element can define a variety of behavioral attributes through the connector section, whereas the spring element only allows a linear or nonlinear spring stiffness to be defined. Therefore, the connector element was chosen to model the mechanical behavior of the flange bolt joints in this study. The connection type of the connector element was Cartesian, which means that the connector provides a connection between the two air duct nodes and allows independent behavior in each of the three local Cartesian directions that follow the coordinate system at the node on one side of the air duct. In the connector section, the friction, damping, elasticity, and failure of the connector element were defined. The elastic modulus in each direction of the connector element was derived from the initial slope of the force-displacement curves obtained from the tension and shear simulations of a single bolt. By defining the elastic-plastic behavior of the connector element in the axial direction, the force-displacement curve of the bolt was simulated. By defining the nonlinear elastic behavior of the connector element in the shear direction, the shear slip effect of the bolt was simulated.
The fracture failure of the bolt was simulated by defining the tensile limit displacement U_max of the connector element. The simplified mechanical model of the air duct flange bolt joint is shown in Figure 8a, and a comparison of the hysteresis curves obtained from the two models, using the connector element and the solid C3D8R element, is shown in Figure 8b. As shown in Figure 8a, the elasticity of the connector in the tangential direction was set to nonlinear in the connector section manager, and its stiffness decreases when the displacement reaches the plastic onset, thereby simulating the mechanical behavior of the flange bolt joint in the tangential direction. The results show that the hysteresis curve of the simplified model exhibits a more pronounced pinching phenomenon, which is a conservative result compared to the solid element model. When the vertical displacement was loaded to 7 mm, the bearing reaction force on the left side of the simplified model was 375 kN, while that of the solid element model was 397 kN; the value of the simplified model was about 5.5% lower than that of the solid element model, a very small error. With the same hardware and software, the solid element model took 34 min to compute, while the connector element model took 21 min, demonstrating the higher computational efficiency of the connector element model. In summary, the simplified model can accurately simulate the hysteresis characteristics of the flange bolt joint in the air duct and is faster than the solid element model in calculation, meaning that it can be applied in the subsequent research.
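To make the simplified joint law concrete, the following standalone Python sketch mirrors the behavior assigned to the connector: an elastic-plastic axial branch capped at the single-bolt tensile capacity (55 kN) with fracture at U_max = 1 mm, and a nonlinear-elastic shear branch with a reduced post-slip stiffness capped at the 30 kN shear capacity. The initial stiffnesses, the slip-onset displacement, and the post-slip stiffness ratio are illustrative assumptions rather than values from the paper, and the sketch is plain Python, not ABAQUS scripting.

```python
import numpy as np

# Peak loads come from the single-bolt simulations above (55 kN tension,
# 30 kN shear); the stiffnesses and slip displacement below are assumed
# values for illustration only.
K_AXIAL = 200.0     # kN/mm, assumed initial axial stiffness
F_TENSION = 55.0    # kN, peak tensile capacity of a single bolt
U_MAX = 1.0         # mm, tensile failure displacement of the connector
K_SHEAR = 60.0      # kN/mm, assumed initial shear stiffness
F_SHEAR = 30.0      # kN, shear capacity reached after slip

def axial_force(u):
    """Elastic-perfectly-plastic axial law with fracture at U_MAX.
    Compression is treated symmetrically for simplicity."""
    if u > U_MAX:
        return 0.0  # bolt fractured: the connector carries no force
    return float(np.sign(u)) * min(K_AXIAL * abs(u), F_TENSION)

def shear_force(v, v_slip=0.3):
    """Nonlinear-elastic shear law: a stiff branch up to the assumed
    slip onset, then a reduced-stiffness branch capped at F_SHEAR."""
    if abs(v) <= v_slip:
        return K_SHEAR * v
    f = K_SHEAR * v_slip + 0.1 * K_SHEAR * (abs(v) - v_slip)
    return float(np.sign(v)) * min(f, F_SHEAR)

print(axial_force(0.2), shear_force(1.0))  # sample evaluations (kN)
```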
Simplified FE Model of Support Hangers

Support hangers are used in buildings to bear the self-weight of nonstructural components, such as pipes and ducts, and are firmly connected to the main structure. Support hangers are generally divided into ordinary support hangers and seismic support hangers. Compared to ordinary support hangers, seismic support hangers add seismic bracings in the two horizontal directions, which can effectively bear horizontal forces and limit the horizontal deformation of pipes and air ducts during an earthquake, control the vibration of nonstructural components and, finally, protect the nonstructural components. The seismic support hangers are equipped with two longitudinal bracings and one lateral bracing with the same section and the same angle (45°); the only difference between the longitudinal and lateral bracings is the direction of air duct movement they restrain. In this paper, the BEAM 31 element was used for the finite element modeling of ordinary support hangers and seismic support hangers, and the established finite element models are shown in Figure 9. The bracings, vertical hanging rods, and horizontal clamping bars are all channel-section members with a height and width of 41 mm and a thickness of 2 mm, while the vertical clamping bars adopt a circular section with a radius of 6 mm to fix the left and right sides of the air duct. The connections between the members of the support hanger system and the connections between the support hanger and the main structure are all hinged. To simulate the buckling failure process, the beam element of each member was meshed into four elements. Tie constraints were used to simulate the connection between the air ducts and the support hanger members because the air ducts are firmly bound to the support hangers by bolts.

To verify the accuracy of the simplified model of the seismic support hangers, a numerical simulation of the static experiment in reference [24] was carried out using the simplified model. The material of the bracket members was Q235 steel. Hinged constraints were established at the ends of the members and braces, and the same gradually increasing horizontal force was applied at both ends of the horizontal channel steel. The skeleton curve obtained from the test and the force-displacement curve simulated by the simplified model are shown in Figure 10b; in the stress figure, a redder color indicates higher stress and a bluer color indicates lower stress. The test results reported by Song et al. [24] suggest that when the lateral displacement of the loading point is 50 mm, the test load is 13.05 kN, while the numerical simulation load is 14.2 kN, which is 8.8% larger than the test value. When the test load reaches 17.7 kN, the brace buckles, while the numerical simulation failure load is 17.74 kN, which is only 0.2% higher than the test value. Therefore, the simplified finite element model of the seismic support hangers established in this paper can effectively simulate the actual response of the seismic support hangers under seismic horizontal shear and can be used in the subsequent research.
FE Model of the Single-Layer Reticulated Dome

The Kiewitt-8 single-layer spherical reticulated dome was selected as the research object in this paper, as shown in Figure 11. The span of the dome is 40 m, and the rise-to-span ratio is 1/4. The boundary conditions are fixed hinge supports, and the dome material is Q235 steel, with a yield strength of 235 MPa, a density of 7850 kg/m³, and an elastic modulus of 2.06 × 10⁵ MPa. There are six annular rings, and the dead load of the roof is 1.0 kN/m². The different pipe sections of the single-layer reticulated dome are shown in Table 3. The finite element software ABAQUS was also used to establish the dome model. The bars of the dome are rigidly connected, and a mass element was set at each joint. The element type of the dome bars is the BEAM 31 element, and each bar was meshed into six elements. The constitutive model of the steel material is the ductile metal elastic-plastic damage model, and the constitutive parameters were derived from reference [18].
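As a geometric aid, the short Python sketch below generates node coordinates for a spherical cap with the stated span (40 m), rise-to-span ratio (1/4, i.e., a 10 m rise), and six rings. Placing 8i nodes on ring i follows a typical Kiewitt-8 layout, but the exact node and member arrangement of the case-study dome is an assumption here, and member connectivity is omitted.

```python
import numpy as np

# Spherical-cap geometry of the case-study dome: span 40 m, rise 10 m.
SPAN, RISE, N_RINGS = 40.0, 10.0, 6
R = (RISE**2 + (SPAN / 2) ** 2) / (2 * RISE)      # sphere radius = 25 m
theta_max = np.arcsin((SPAN / 2) / R)             # cap half-angle

def dome_nodes():
    nodes = [(0.0, 0.0, RISE)]                    # apex of the dome
    for i in range(1, N_RINGS + 1):
        theta = theta_max * i / N_RINGS
        for j in range(8 * i):                    # 8i nodes on ring i (assumed)
            phi = 2 * np.pi * j / (8 * i)
            nodes.append((R * np.sin(theta) * np.cos(phi),
                          R * np.sin(theta) * np.sin(phi),
                          R * np.cos(theta) - (R - RISE)))
    return np.array(nodes)

pts = dome_nodes()
print(len(pts), "nodes; edge ring z =", round(float(pts[-1][2]), 3))  # z ~ 0
```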
FE Model of Air Duct System

According to the practical case of the Qionghai City Stadium in China, the specific arrangement of the air duct system is shown in Figure 12. The air duct system has a total of eight air ducts, which are arranged under the radial bars of the dome. Each air duct is divided into two parts: the main air duct is arranged in the radial direction along the dome, and the branch air duct is arranged in the annular direction along the dome. The bottom of the main air duct is the air inlet, and the end of the branch air duct is the air outlet. The total length of each main air duct is 14.9 m, and the length of each branch air duct is 3.1 m. The rectangular section parameters of the main and branch air ducts are consistent with those in Section 2.2.

To inspect the influence of different support hanger layouts on the response of the dome structure and air ducts, three bracket layout schemes were selected as the research objects in this paper, as shown in Figure 13. In case 1, all the brackets of the main air duct are ordinary brackets without bracings. In case 2, all the brackets of the main air duct are seismic support hangers with bracings. In case 3, a staggered arrangement of ordinary brackets and seismic support hangers is used; the distance between the seismic support hangers is 8.1 m, which meets Table 8.2.3 in the Chinese code GB 50981-2014 [25], whereby the maximum distance between lateral seismic support hangers for air ducts of ordinary rigid materials in new construction projects is 9 m, and the maximum distance in the longitudinal direction is 18 m. Due to the short length of the branch air ducts, ordinary brackets are used on them.
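A trivial check of the case 3 layout against the code limits quoted above can be written as follows (Python; the longitudinal spacing value used in the example is illustrative, since the text only states the 8.1 m lateral spacing):

```python
def hanger_spacing_ok(lateral_m, longitudinal_m,
                      lateral_max=9.0, longitudinal_max=18.0):
    """Check seismic support-hanger spacing against the GB 50981-2014
    limits quoted in the text (9 m lateral / 18 m longitudinal for
    ordinary rigid air ducts in new construction)."""
    return lateral_m <= lateral_max and longitudinal_m <= longitudinal_max

# Case 3: seismic hangers every 8.1 m; 16.2 m longitudinal is assumed.
print(hanger_spacing_ok(lateral_m=8.1, longitudinal_m=16.2))  # True
```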
Finally, the simplified finite element models of the reticulated dome, air ducts, and brackets are assembled together. The connector element established in this paper was used to simulate the flange bolt joints between the air ducts. The connection between the air ducts and the brackets is bound by tie constraints in ABAQUS, assuming that the rectangular air duct is restrained on the right, left, upper, and lower sides by the bracket members and bolts. The connection between the brackets and the dome is hinged. In accordance with Section 8.2.2 in the Chinese code GB 50011-2010 [26], the damping ratio in the elastic time-history analysis of the whole FE model is 0.02, and that in the elastic-plastic analysis is 0.05, which is input into the material manager in ABAQUS.

Results of Dynamic Characteristics Analysis

Modal analysis is a common method for studying the dynamic characteristics of a structure. Modes are the inherent vibration characteristics of a structure, and each mode has a specific natural vibration frequency and deformation. In this paper, modal analyses were carried out for the single-layer reticulated dome without an air duct system, referred to as case 0, and for the dome coupled with an air duct system in the three support hanger layouts, referred to as case 1, case 2, and case 3; the first three modes and frequencies of the four cases were obtained, as shown in Table 4. The results in Table 4 show that the natural vibration frequency of case 0 was the highest in the first three modes, which means that the stiffness of the single-layer reticulated dome without any ducts is the maximum and the overall stiffness of the single-layer reticulated dome coupled with an air duct system becomes smaller. The results also show that, comparing the three cases with coupled air duct systems, the natural vibration frequency of case 1 was the lowest, that of case 2 was the highest, and that of case 3 lay between case 1 and case 2.
This illustrates that the overall stiffness of a dome fully arranged with ordinary support hangers without bracings is lower, the overall stiffness of a dome fully arranged with seismic support hangers with bracings is higher, and the overall stiffness of a dome with a staggered arrangement of ordinary and seismic support hangers lies between the two preceding cases.

Results of the Seismic Time-History Response Analysis

According to the Chinese code GB 55002-2021, Table 2.1.2 [27], seismic hazards are defined as frequent earthquakes, moderate earthquakes, and rare earthquakes: the probability of a frequent earthquake occurring is 63% in 50 years, that of a moderate earthquake is 10% in 50 years, and that of a rare earthquake is 3% in 50 years. Taking the peak ground acceleration (PGA) as the index of ground motion intensity, 3D earthquake records, including the El Centro, Taft, Loma Prieta, and Tianjin earthquake waves, were adopted to conduct the elastic-plastic time-history analysis. The PGA of the seismic waves was adjusted to 110 cm/s², 300 cm/s², and 510 cm/s². The PGA ratio in the X, Y, and Z directions was 1:0.85:0.65. ABAQUS was used to simulate the dynamic response of the four cases, and the maximum displacements of the dome nodes in the three directions were selected as the dynamic response index.

The maximum vertical displacements of the dome nodes in the four cases under the different earthquake waves are shown in Figure 14. The results in Figure 14 show that the maximum vertical displacements of the four cases differ. When the PGA was 110 gal, the maximum vertical displacements of the four cases were 6.42 mm, 11.13 mm, 5.88 mm, and 6.38 mm, respectively, which illustrates that case 1 increased by 73.36% compared to case 0, while case 2 and case 3 hardly increased. When the PGA was 300 gal, the maximum vertical displacements of the four cases were 17.91 mm, 20.09 mm, 16.83 mm, and 18.39 mm, respectively, which indicates that case 1 increased by 12.17% compared to case 0, whereas case 2 and case 3 hardly increased, showing the same pattern as above. When the PGA was 510 gal, the maximum vertical displacements of the four cases were 30.97 mm, 31.60 mm, 30.08 mm, and 32.45 mm, respectively, which means the air duct system has little influence on the vertical displacements of the dome structure under rare earthquakes. This is because the vertical stiffness of the air duct system is very small compared to that of the dome structure, so the air duct system will be destroyed first. To sum up, considering the coupling effect of the air duct system, a layout of all ordinary support hangers has a great impact on the vertical displacement of the dome structure under frequent earthquakes, a certain impact under moderate earthquakes, and little impact under rare earthquakes. In contrast, there is little effect on the vertical displacement of the dome structure under frequent, moderate, or rare earthquakes when seismic support hangers are adopted in whole or in part.
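The amplitude scaling used for these analyses can be reproduced with a small helper that rescales each component of a three-component record so the peaks follow the 1:0.85:0.65 X:Y:Z ratio at a target PGA; the waveform in the example is synthetic, standing in for the El Centro, Taft, Loma Prieta, and Tianjin records.

```python
import numpy as np

def scale_3d_record(acc_xyz, target_pga, ratio=(1.0, 0.85, 0.65)):
    """Scale a 3-component ground-motion record so the principal (X)
    component peaks at `target_pga` (cm/s^2) and the X:Y:Z peaks
    follow the 1:0.85:0.65 ratio used in the analyses."""
    scaled = []
    for comp, r in zip(acc_xyz, ratio):
        comp = np.asarray(comp, dtype=float)
        scaled.append(comp * (target_pga * r / np.abs(comp).max()))
    return scaled

# Illustrative use with a synthetic record and the three intensity levels.
rng = np.random.default_rng(1)
record = [rng.normal(size=2000) for _ in range(3)]   # placeholder waveforms
for pga in (110.0, 300.0, 510.0):                    # frequent/moderate/rare
    x, y, z = scale_3d_record(record, pga)
    print(pga, [round(float(np.abs(c).max()), 1) for c in (x, y, z)])
```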
Since only a small number of the support hangers in the four cases showed plastic deformation when the PGA was equal to 510 gal, and neither the dome members nor the air ducts were damaged, the IDA time-history analysis method was adopted in this paper to increase the amplitude of the PGA of the earthquake waves to 1300 gal and investigate the failure mode of the coupled system of the single-layer reticulated dome, air ducts, and support hangers under strong earthquake excitations. The positions of the damaged members of the dome and the falling distribution modes of the air ducts are shown in Figure 15. By counting the buckled members in each dome case, the numbers of buckled dome members in case 0, case 1, case 2, and case 3 were obtained as 0, 7, 3, and 2, respectively. The results show that the dome without a coupled air duct system suffered no member damage under strong earthquakes, and the number of damaged members was the largest for the dome with coupled air ducts on all ordinary support hangers.
The damaged bars in case 1 were mainly distributed in the second outermost annular bars and the diagonal bars connected to the fixed hinged supports. The damaged bars in cases 2 and 3 were mainly distributed in the diagonal bars connected to the supports. Therefore, the coupling effect of the air ducts increases the number of damaged members in the dome under strong earthquakes, and different support hanger layouts also affect the number of damaged members in the dome.

By defining the ratio of the length of the fallen ducts to the total length of the whole air duct system as the falling rate, the falling rates of the air ducts in case 1, case 2, and case 3 were 13.6%, 25.6%, and 17%, respectively. The results show that the ducts are destroyed before the dome structure under a strong earthquake. For the dome with only ordinary support hangers, the falling rate of the air ducts under a strong earthquake is the lowest, and for the dome with only seismic support hangers, it is the highest. This is because the horizontal stiffness of the seismic support hangers is larger than that of the ordinary support hangers: the greater the horizontal forces transmitted by the support hangers to the air ducts under earthquake action, the more air ducts fall. Therefore, the falling rate of the air ducts under a strong earthquake also depends strongly on the layout scheme of the support hangers.
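For clarity, the falling-rate definition amounts to the following computation (Python; the fallen-segment lengths are hypothetical numbers chosen only to reproduce the case 1 rate, not data from the paper):

```python
def falling_rate(fallen_lengths_m, total_length_m):
    """Falling rate = total length of fallen duct segments divided by
    the total duct length, as defined in the text."""
    return sum(fallen_lengths_m) / total_length_m

# Total duct length: 8 ducts x (14.9 m main + 3.1 m branch) = 144 m.
total = 8 * (14.9 + 3.1)
# Hypothetical fallen-segment lengths reproducing the case 1 rate (~13.6%).
print(f"{falling_rate([9.8, 9.8], total):.1%}")
```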
Conclusions

In this study, simplified finite element models of the air ducts and flange bolt joints were established and verified against the refined solid element model. Three support hanger arrangement schemes were considered, and coupled models of a single-layer reticulated dome with an air duct system were established. Natural vibration frequency analyses and seismic time-history analyses of the four cases were carried out, and the damage mode of the dome and the falling rate of the air ducts were analyzed. The main conclusions are summarized as follows:

(1) The hysteresis curve of the flange bolt joint in the air ducts simulated by the refined solid element model was inverted S-shaped, with a strong pinching effect and a relatively long sliding segment, and the energy dissipation capacity and ductility of the joint were relatively weak. The simplified mechanical model of the flange bolt joint established with the connector element can effectively simulate the pinching effect and hysteresis curve of the joint and is faster than the solid element model in numerical calculation.

(2) The air duct system has an obvious influence on the dynamic characteristics and natural vibration frequency of the single-layer reticulated dome. Compared to the single-layer reticulated dome model without an air duct system, the maximum vertical displacement of the dome nodes with all ordinary air duct support hangers increased by 73.36% under frequent earthquakes, 12.17% under moderate earthquakes, and 2.03% under rare earthquakes. The maximum vertical displacement of the dome nodes was less affected by the air duct system with full or partial seismic support hangers, illustrating that the full or partial use of seismic support hangers can effectively limit the displacement increase caused by the air ducts. Therefore, the coupling effect of nonstructural components, such as air ducts and support hangers, on the main structure should be considered in the response of a single-layer reticulated dome under earthquake excitation.

(3) Under strong earthquakes, when the PGA was equal to 1300 gal, the dome model without the air duct system showed no member damage, yet considering the coupling effect of the air duct system, some members of the dome models in the three cases were damaged and some air ducts fell, indicating that the coupling effect of the air duct system causes damage to the dome in advance and affects the sustainability of the building. Among the three hanger arrangements, the falling rate of the air ducts was the highest with all seismic hangers under strong earthquakes, which shows that a single-layer reticulated dome with a staggered arrangement of seismic and ordinary support hangers is the best hanger arrangement, owing to the minimal damage to the dome and the lower falling rate of the air ducts. For this reason, when arranging seismic hangers, it is necessary to consider the characteristics of the air ducts to improve the sustainability of the dome and air duct system.

Author Contributions: T.Z.: Conceptualization, methodology, supervision, and writing-original draft preparation; Y.Z.: Software, validation, and writing-review and editing. All authors have read and agreed to the published version of the manuscript.

Figure 1. Single-layer reticulated dome with an air duct system.
Figure 2. Simplified FE model of air ducts and flange bolt joints.
Figure 3. Solid finite element model of the rectangular air duct flange bolt joint: (a) overall model; (b) air duct flange model; (c) bolt model.
Figure 4. Numerical simulation of tensile failure of a single bolt: (a) static tension test; (b) force-displacement curves.
Figure 5. Numerical simulation of shear failure of a single bolt: (a) static shear test; (b) force-displacement curves.
Figure 7. Hysteresis simulation result of the air duct flange bolt joint: (a) stress distribution; (b) failure mode of bolt no. 2; (c) failure mode of bolt no. 3; (d) failure mode of bolt no. 4.
Figure 8. Force-displacement curves of the air duct flange bolt joint: (a) simplified mechanical model; (b) connector element model.
Figure 9. Simplified finite element model of duct support hangers: (a) ordinary support hangers without bracings; (b) seismic support hangers.
Figure 11. Case model of the single-layer spherical reticulated dome.
Figure 12. Simplified FE model of air ducts: (a) single air duct model; (b) all air duct models.
Figure 13. Support hanger layout schemes: (a) case 1: ordinary support hangers without bracings; (b) case 2: seismic support hangers with bracings; (c) case 3: staggered arrangement of ordinary support hangers and seismic support hangers.
Table 3. Pipe sections of the single-layer reticulated dome.
Table 4. The natural vibration frequency for the four cases.
Author Contributions: T.Z.: conceptualization, methodology, supervision, and writing-original draft preparation; Y.Z.: software, validation, and writing-review and editing. All authors have read and agreed to the published version of the manuscript.

Figure 1. Single-layer reticulated dome with an air duct system.
Figure 2. Simplified FE model of air ducts and flange bolt joints.
Figure 3. Solid finite element model of rectangular air duct flange bolt joint: (a) overall model; (b) air duct flange model; (c) bolt model. The coupling point is located at the center of the flange face of one side of the duct; a vertical shear F is added at the coupling point to analyze the hysteretic and skeleton curves of the bolt group under seismic shear. The two ends of the model are coupled to central reference points, and the boundary conditions of the coupling points are set as symmetric constraints to simulate the limiting effect of the seismic supports.
Figure 4. Numerical simulation of tensile failure by a single bolt: (a) static tension test; (b) force-displacement curves.
Figure 5. Numerical simulation of shear failure by a single bolt: (a) static shear test; (b) force-displacement curves.
Figure 7. Hysteresis simulation result of the air duct flange bolt joint: (a) stress distribution; (b) failure mode of bolt no. 2; (c) failure mode of bolt no. 3; (d) failure mode of bolt no. 4. A redder color indicates higher stress and a bluer color indicates lower stress.
Figure 8. Force-displacement curves of the air duct flange bolt joint: (a) simplified mechanical model; (b) connector element model.
Figure 9. Simplified finite element model of duct support hangers: (a) ordinary support hangers without bracings; (b) seismic support hangers.
Figure 11. Case model of the single-layer spherical reticulated dome.
Figure 12. Simplified FE model of air ducts: (a) single air duct model; (b) all air duct models.
Figure 13. Support hanger layout schemes: (a) case 1: ordinary support hangers without bracings; (b) case 2: seismic support hangers with bracings; (c) case 3: staggered arrangement of ordinary support hangers and seismic support hangers.
Table 3. Pipe sections of the single-layer reticulated dome.
Table 4. The natural vibration frequency for the four cases.

Results of the Seismic Time-History Response Analysis

According to the Chinese code GB 55002-2021, Table 2.1.2 [27].
Morphologic characteristics and length-weight relationships of Sciaena umbra (Linnaeus, 1758) on the Black Sea coast

In this study, the morphological characteristics and length-weight relationships of Sciaena umbra (Linnaeus, 1758), a member of the Sciaenidae family, which is represented by five species in the Mediterranean basin and two species in the Black Sea, were investigated. Sampling was carried out in the Black Sea Region (Samsun, Ordu, Giresun, Trabzon) between March 2019 and February 2020. A total of 54 individuals were sampled, and 15 different metric measurements were taken on each sample to determine their morphological characteristics. The mean total length and weight were estimated as 357.8 mm (117-580) and 845.3 g (16.4-2485.1), respectively. The morphometric characters were compared with total length: the lowest ratio was found for eye diameter (4.3%) and the highest for pre-anal distance (59.9%). Among the relationships between total length and the morphological characters, the highest correlation was observed for pre-dorsal distance (r2 = 0.993) and the lowest for the depth of the anal fin (r2 = 0.938). A strong correlation (r2 = 0.993) was found for the length-weight relationship, and growth was positive allometric (b > 3). This paper reports the first documented morphometric characteristics of the species.

Introduction

The brown meagre, Sciaena umbra Linnaeus, 1758, is one of the five species of the Sciaenidae (croakers or drums) family present in the Mediterranean Sea (Fischer et al., 1987). It is a demersal species with a wide distribution from the East Atlantic Ocean to the Mediterranean, Aegean, Black Sea, and Azov Sea (Artüz, 2006; La Mesa et al., 2008; Chao, 2015). This species, which mostly lives on rocky and hard substrata, can grow up to a maximum length of 70 cm, but individuals are mostly found around 30 cm (Bauchot, 1987). The brown meagre is distributed along all the coasts of Turkey. The species is social and lives in small groups (20-150 individuals) (Artüz, 2006). It is a sedentary and gregarious species living in shelters on rocky bottoms close to caves or large crevices in which it can hide, or hidden within Posidonia and Zostera beds (Harmelin, 1991; Keskin, 2007). It is a nocturnal fish, but it can sometimes be found during the day (Frimodt, 1995). The brown meagre occurs in shallow coastal waters, but when the water temperature drops it prefers deeper waters and may be found down to 200 m depth (Chauvet, 1991; Artüz, 2006). In the North Mediterranean Region, it has been reported that the species' stocks have decreased significantly due to factors such as its life history, behavioral characteristics, habitat degradation, and the pressures of small-scale professional and amateur fishing (Harmelin, 1991). In addition, spearfishing has had a negative impact on its stocks (Harmelin-Vivien et al., 2015). There are 289 different species belonging to the Sciaenidae family (Chao, 1986; Chao, 2015; Parenti, 2020). The family is represented by two species (Sciaena umbra and Umbrina cirrosa) in the Black Sea (Fischer et al., 1987; Chao, 2015). There are some studies on the growth, reproduction, and feeding habits of the species (Chakroun and Ktari, 1981; Fabi et al., 1998; Froglia and Gramitto, 1998; Chakroun-Marzouk and Ktari, 2003; Fabi et al., 2006; Derbal and Kara, 2007; Engin and Seyhan, 2009). However, there is no detailed study on the morphometric characters of the species.
Identification of morphometric characters is very important for fish fauna studies in marine ecosystems and for the determination of intra-species variation (Çoban et al., 2013). In addition, length-weight relationships allow morphological comparisons between different fish species or between fish populations from different habitats and regions (Gonçalves et al., 1997; Oscoz et al., 2005; Gül et al., 2017). The aim of this study was to provide data on the length, weight, and morphometric characters of S. umbra in the Black Sea.

Material and Methods

A total of 54 individuals were collected monthly and transported to the laboratory, where measurements were made during the day. Fifteen metric measurements were taken from each S. umbra (Figure 1). All individuals were measured for total length (TL, mm) to the nearest 0.1 and weighed (W, g) to the nearest 0.01. A digital caliper with 0.1 cm sensitivity was used for the morphometric measurements; lengths that could not be measured with the caliper were measured with a ruler. Thirteen morphometric characters were evaluated as percentages of TL. Regression analyses of the different body parts against the TL of the fish were performed by the least-squares method. The dependent and independent variables, TL and the morphometric measurements, were transformed using log10. The length-weight relationship was estimated using the equation W = aL^b (W: weight (g); L: total length (cm)), where "a" is the coefficient and "b" is an exponent indicating isometric growth when equal to 3. The "b" value was tested by Student's t-test to verify whether it was significantly different from isometric growth (Ricker, 1975; Pauly, 1984).

Length and Weight Relationships

A total of 54 S. umbra of different sizes (36 females, 18 males) were sampled, with the smallest individual measuring 117 mm and the largest 580 mm. The length-weight relationship of S. umbra is shown in Figure 2. A strong correlation between length and weight (r2 = 0.993) was calculated. The value of b = 3.190 is different from 3 (p > 0.05), and growth was determined to be positive allometric (b > 3). The length-weight relationship parameters for Sciaena umbra are given regardless of sex (Table 1).

Morphologic characteristics

S. umbra has a double dorsal fin. The second dorsal fin is longer than the first, and the two are located very close to each other. Juvenile individuals have a high first dorsal fin; as individuals grow, the heights of the first and second dorsal fins become almost similar. The pectoral fin is positioned ahead of the first dorsal and pelvic fins, and the pectoral fin does not extend to the end of the pelvic fin. Although its appearance can change in different habitats, in general the dorsal part is dark brownish and purplish in color, and the part below the lateral line has a lighter bronze metallic color. The dorsal fins are bronze metallic light brown; the first rays of the pelvic fin are white, while the other parts are dark black like the anal fin. The anal fin also has a white and very thick bony structure. The ends of the caudal and dorsal fins are bordered by a black band, and the caudal fin has a single-lobed structure. S. umbra has a single continuous lateral line extending to the hind margin of the caudal fin. Ctenoid (comb-edged) scales cover the entire body, except the tip of the snout; the head is covered with cycloid scales. The head length is about 25.9% of the total length (Table 3). The eye is relatively large in relation to the head.
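Returning to the fitting procedure described in the Material and Methods, the following sketch performs the log10 least-squares fit of W = aL^b and the Student's t-test of b against 3 (Ricker, 1975; Pauly, 1984). The tl_cm and w_g arrays are hypothetical values for illustration only, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical total length (cm) and weight (g) pairs; the study's raw
# data are not reproduced here.
tl_cm = np.array([11.7, 18.4, 25.0, 31.2, 38.5, 44.3, 50.1, 58.0])
w_g   = np.array([16.4, 70.0, 190.0, 390.0, 760.0, 1150.0, 1700.0, 2485.1])

# W = a * L^b becomes linear after a log10 transform:
# log10(W) = log10(a) + b * log10(L)
res = stats.linregress(np.log10(tl_cm), np.log10(w_g))
a, b = 10 ** res.intercept, res.slope

# Student's t-test of the slope b against 3 (isometric growth)
t_stat = (b - 3.0) / res.stderr
p_two_sided = 2 * stats.t.sf(abs(t_stat), df=len(tl_cm) - 2)
print(f"a = {a:.5f}, b = {b:.3f}, r^2 = {res.rvalue**2:.3f}, "
      f"p(b != 3) = {p_two_sided:.3f}")
```

With b significantly above 3 the population is called positively allometric (weight grows faster than the cube of length), and below 3 negatively allometric.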
Although some species of the Sciaenidae have barbels, this species does not. The swim bladder is located between the viscera and the backbone, and the organ has a carrot-shaped form (Figure 3). The inflated swim bladder was 15 cm long with a diameter of 5 cm for a fish of 44.3 cm length. S. umbra has 3-4 rows of villiform teeth in both jaws and also has dense pharyngeal teeth. The first gill arch bears 14-15 short, blunt gill rakers (Figure 4). Six meristic characters were examined; the meristic characters used for the analysis of S. umbra are presented in Table 2. The first dorsal fin has 10 spiny rays, and the second dorsal fin has one spiny ray and 23 soft rays. The anal fin has two spiny rays and 7 soft rays; the second spine is almost 7 times the length of the first.

Morphometric characters

The mean total length and weight of the sampled individuals were 357.8 mm (117-580) and 845.3 g (16.4-2485.1), respectively. The means, standard errors, and minimum and maximum values of the morphometric properties of all samples are given in Table 3. In addition, the morphometric properties of S. umbra were expressed as proportions of the total length: the smallest ratio was eye size (4.3%) and the highest was the pre-anal distance (59.9%). The maximum body depth of the species is 28.1% of the total length. The relationships between the morphometric characteristics and total length were analyzed with regression equations, and the correlation coefficients of the morphometric length-total length relationships are given in Table 4. The closest relationship was found between total length (TL) and pre-dorsal distance (PDD) according to the linear regression values (r2 = 0.993), and the weakest relationship was with the depth of the anal fin (DAF) (r2 = 0.938).

Discussion

In this study, a total of 54 individuals were sampled (36 females and 18 males). The lengths of all individuals ranged from 117 to 580 mm, with an average of 357.8 mm. Karakulak et al. (2006) reported the maximum length as 29.8 cm, and Karachle and Stergiou (2008) also reported small individuals. Considering these results, it can be said that the Black Sea population has larger individuals than the Aegean Sea population. The "a" and "b" coefficients obtained from the length-weight relationship may differ for reasons such as environmental factors, food abundance, and reproductive activity (Mommsen, 1998). In this study, the "b" value was calculated as 3.1909, and growth was determined to be positive allometric (b > 3). A few studies on the species reported negative allometric growth (Karachle and Stergiou, 2008; Maci et al., 2009; Crec'hriou et al., 2013), while most studies reported positive allometric growth (Morey et al., 2003; Karakulak et al., 2006; La Mesa et al., 2008; Engin and Seyhan, 2009; Grau et al., 2009; Bilge et al., 2014; Chater et al., 2018). The "b" value may differ from one population to another of the same species. The fluctuation can be attributed to factors such as food availability, feeding rates, whether sampling was done during the spawning season, differences in the number of specimens sampled, and the period of sampling (Bagenal and Tesch, 1978; Moutopoulos and Stergiou, 2002; Mahé et al., 2018). Karachle and Stergiou (2008) and Maci et al. (2009) used very small individuals in their studies; therefore, they may have estimated "b" values of less than 3. Crec'hriou et al. (2013) reported a "b" value of 2.91, which may be attributed to the use of few individuals (n = 16). It was determined that the species has a highly developed swim bladder.
Similarly, Picciulin et al. (2016) stated that the species has a highly developed swim bladder (Figure 3); it can make sounds using the muscles in its lower parts and can thereby establish social relationships with other individuals around it. Some changes may occur in the morphometric characters of a fish species after adaptation to different environmental conditions (Blackith and Albrecht, 1959; Avşar, 1995). Morphometric measurements are used to determine similarities or differences between one stock and another; in addition, they are widely used for taxonomic categorization in fisheries biology (Dwivedi and Dubey, 2013). Although S. umbra is distributed across the East Atlantic Ocean, the Mediterranean, Aegean, Marmara, Black Sea, and the Sea of Azov (Artüz, 2006; Chao, 2015), there are very few biological studies on the species (Engin, 2003; Engin and Seyhan, 2009), and no data are available about its morphometric characters. Recent recreational fishing activity, particularly spearfishing, has had a negative impact on its stocks in the Mediterranean Sea and Black Sea (Harmelin-Vivien et al., 2015). On the other hand, in recent years a large part of the coastal area of the Southern Black Sea has been filled for highway and airport construction as well as land acquisition. It is thought that this development has had a positive impact on the S. umbra population; lately, a noticeable increase has been observed in such species (Aydın and Sözer, 2016). Since the rocky areas of the Black Sea are limited, the natural habitat does not provide shelter for small individuals, and it is thought that the filled coastal areas provide suitable habitats for the juveniles of these species.

Conclusion

Scientists have rarely obtained samples of this species for research because of its high economic value, its restriction to limited areas, and the special skills its fishing requires. Consequently, there are few studies on the species, and this is the first documentation of its morphometric characteristics. This paper is therefore considered to contribute to fisheries biology and the international scientific literature.
Clinicopathological Characteristics and Influencing Factors of Renal Vascular Lesions in Anti-neutrophil Cytoplasmic Autoantibody-Related Renal Vasculitis

The purpose of this study was to evaluate the clinicopathological features of different degrees of extraglomerular renal vascular lesions (RVLs) in patients with anti-neutrophil cytoplasmic antibody (ANCA)-associated renal vasculitis and to explore their clinical determinants. This is a retrospective study of 186 patients with ANCA-associated renal vasculitis diagnosed at the First Affiliated Hospital of Zhengzhou University from January 2014 to April 2019. The patients who met the inclusion criteria were divided into groups without RVLs and with mild, moderate, and severe RVLs. It was found that there were significant differences among the groups in serum creatinine (SCR), estimated glomerular filtration rate (eGFR), erythrocyte sedimentation rate (ESR), high-density lipoprotein (HDL), systolic blood pressure (SBP), the prevalence of hypertension, the proportion of normal glomeruli, the proportion of sclerotic glomeruli, and the interstitial fibrosis score. SCR and ESR were independent risk factors for RVLs. The participants were followed up for 1 year, with progression to end-stage renal disease (ESRD) or death defined as the endpoint event. We found that the survival rate of patients without RVLs was significantly higher than that of patients with RVLs and that RVLs were an independent risk factor for ESRD or death. Early intervention in the progression of RVLs may improve the prognosis.

As in most kidney diseases, the vascular component plays a secondary, though significant, role in the disease process (5). However, AAV involves systemic small vessels, and it is also important to assess the damage from extraglomerular vascular lesions. In this retrospective observational study, we graded the severity of vascular lesions using a semi-quantitative scoring system (6) and further assessed their associations with clinical and pathological indexes and their influencing factors in patients with ANCA-associated renal vasculitis, providing a reference for delaying progression and improving prognosis.

Study Participants and Data Collection

This study retrospectively enrolled 186 patients with ANCA-associated renal vasculitis admitted to the First Affiliated Hospital of Zhengzhou University between January 2014 and April 2019. The inclusion criteria were: (1) the presence of hematuria (>10/mm3) and/or proteinuria (>300 mg/day); (2) the detection of a positive ANCA by an antigen-specific immunoassay and/or indirect immunofluorescence; (3) the observation of more than 10 glomeruli in pathological sections; and (4) confirmation by renal biopsy of the presence of pauci-immune glomerulonephritis. The exclusion criteria were the following: (1) systemic diseases involving the kidneys, such as hepatitis B virus-associated glomerulonephritis, diabetic nephropathy, and systemic lupus erythematosus, and other primary glomerular diseases, such as membranous nephropathy and IgA nephropathy; (2) positive anti-glomerular basement membrane antibodies or renal pathological immunofluorescence showing a linear deposition of immunoglobulin or immunoglobulin deposition > 2+. The study was approved by the Committee of the First Affiliated Hospital of Zhengzhou University (Henan, China, No. 2019-KY-015).
Histological Examination of Renal Biopsy Specimens

The presence of fibrous, cellular, and cellular-fibrous crescents, fibrinoid necrosis, destruction of Bowman's capsule, and global glomerulosclerosis in each glomerulus was recorded and calculated as a percentage of the total number of glomeruli. The presence of microangiopathic lesions was calculated as a percentage of the number of renal specimens in each of the four groups. Interstitial fibrosis and tubular atrophy were evaluated according to the extent of interstitial fibrosis with tubular atrophy in the cortex: 0 = no or trivial interstitial fibrosis (<5% of unscarred parenchyma); 1 = 6-25% interstitial fibrosis; 2 = 26-50% interstitial fibrosis; and 3 = more than 50% interstitial fibrosis. Interstitial inflammation was evaluated according to the extent of inflammatory cells in the cortex: 0 = no or trivial interstitial inflammation (<10% of unscarred parenchyma); 1 = 10-25% of the parenchyma inflamed; 2 = 26-50% inflamed; and 3 = more than 50% inflamed (10).

(1) Normal glomeruli are defined as those with slight changes caused by ischemia or a small amount of inflammatory cell infiltration (fewer than four monocytes, lymphocytes, or neutrophils) without vasculitis or glomerulosclerosis. (2) Global glomerulosclerosis means that more than 50% of the glomerular tuft forms a scar in the sclerotic area. (3) A cellular crescent means that more than 50% of the crescent is occupied by cells. (4) A fibrous crescent means that more than 90% of the crescent is occupied by extracellular matrix. (5) A cellular-fibrous crescent means that <90% of the crescent is occupied by extracellular matrix and <50% is occupied by cells. (6) Interstitial inflammatory cell infiltration means excessive inflammatory cells in the interstitium of the renal cortex, excluding the subcapsular area and the area surrounding global glomerulosclerosis. (7) Interstitial fibrosis is defined as increased extracellular matrix separating tubules in the cortical area, excluding the subcapsular area (10). (8) Tubular atrophy is defined by a thick, irregular tubular basement membrane with a decrease in the diameter of the tubules; it is scored according to the percentage of the cortical area involved, excluding the subcapsular area (10). (9) Microangiopathic lesions include subintimal edema, endothelial cell swelling, small artery thrombosis, and/or fibrinoid necrosis, which are considered acute; arterial onion-skin lesions refer to thickening of the fibrous intima, which appears as concentric circles and is considered chronic (11).

Scores of Vascular Lesions

Kidney biopsy specimens were examined by light microscopy, electron microscopy, and immunofluorescence. The specimens were then reviewed and scored by two independent and experienced pathologists who were blinded to the clinical indicators of the patients; if any disagreements arose during the evaluation, a third pathologist was consulted and a final consensus was reached. RVLs in this study referred to extraglomerular vascular lesions, including arterial fibrotic intimal thickening and arteriolar hyalinosis, whose scores were evaluated based on the definitions described in the Oxford classification, with some modifications (12).
The rules for scoring RVLs were as follows: (1) arteriolar hyaline lesions: the presence of hyaline lesions on the wall of an artery or arteriole was scored as 1, and their absence as 0; (2) arterial fibrotic intimal thickening: fibrotic intimal thickening was scored by comparing the thickness of the intima with that of the media in the same segment of the vessel: 0 = normal; 1 = less than the thickness of the media; 2 = exceeding the thickness of the media. The arterial fibrotic intimal thickening score for a specimen was based on the highest arterial score. Adding the two scores together, the patients were divided into four groups: 0 = without RVLs; 1 = mild RVLs; 2 = moderate RVLs; and 3 = severe RVLs. The score quantitatively evaluates only extraglomerular vascular lesions, even though the two lesion types have different pathogenetic mechanisms.

Statistical Analyses

Continuous variables that conformed to a normal distribution were expressed as mean ± SD, and those that were not normally distributed were expressed as median with interquartile range (IQR). Categorical variables were expressed as frequency and percentage. Comparisons of continuous variables among the four groups were made by one-way ANOVA or the Kruskal-Wallis H test. A χ2 test was performed to compare categorical variables between the groups. A binary logistic regression model was used to analyze the factors influencing RVLs in patients with ANCA-associated renal vasculitis, with results expressed as odds ratios (OR) with 95% confidence intervals (CI). Survival curves were plotted using the Kaplan-Meier (K-M) method, and differences between survival curves were compared by a log-rank test. Univariate and multivariate Cox regression models were used to identify predictors of end-stage renal disease (ESRD) or death, with results expressed as hazard ratios (HR) with 95% CI. p < 0.05 was considered statistically significant. SPSS statistical software version 26.0 (SPSS 26.0) was used for the statistical analysis.

Correlations Between Vascular Lesions and Clinical Data

One hundred eighty-six patients with ANCA-associated renal vasculitis were enrolled in this study and, according to the degree of vascular lesions, were divided into four groups (Figure 1). Of the 186 patients, 95 were men and 91 were women (16-82 years old). Eighty-four (45.16%) had hypertension, 16 (8.6%) had DM, and 23 (12.37%) had cardiovascular diseases. The prevalence of hypertension was significantly higher in patients with severe vascular lesions than in patients without RVLs or with mild RVLs (P < 0.05). The SBP of patients with severe lesions was significantly higher than that of patients with mild and moderate RVLs and without RVLs (P < 0.05). The SCR and ESR of the patients with severe, moderate, and mild lesions were significantly higher than those of the patients without RVLs (P < 0.05), and eGFR was lower (P < 0.05).

Correlations Between Vascular Lesions and Pathological Data

The proportion of normal glomeruli in patients with severe RVLs was significantly lower than in patients with mild or moderate RVLs and without RVLs (P < 0.05). The proportion of global glomerulosclerosis in patients with severe RVLs was significantly higher than in patients with mild RVLs and without RVLs (P < 0.05) (Table 3). The renal interstitial fibrosis score of patients with severe RVLs was significantly higher than that of patients with mild RVLs and without RVLs (P < 0.05) (Table 4).
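Returning to the scoring rules above, the following sketch restates the composite score in code. It is illustrative only, with hypothetical variable names; in the study the scoring was performed by pathologists, not software.

```python
def rvl_score(hyaline_present: bool, intima_vs_media: str) -> int:
    """Composite extraglomerular RVL score (0-3), per the rules above.

    hyaline_present: any hyaline lesion on an arterial or arteriolar
        wall (contributes 0 or 1).
    intima_vs_media: the worst arterial fibrotic intimal thickening in
        the specimen: 'normal' (0), 'less_than_media' (1), or
        'exceeds_media' (2).
    """
    intima_score = {"normal": 0,
                    "less_than_media": 1,
                    "exceeds_media": 2}[intima_vs_media]
    return int(hyaline_present) + intima_score

GROUPS = {0: "without RVLs", 1: "mild RVLs",
          2: "moderate RVLs", 3: "severe RVLs"}

# Example: hyaline lesion present plus intimal thickening exceeding the media
score = rvl_score(True, "exceeds_media")
print(score, GROUPS[score])   # -> 3 severe RVLs
```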
In addition, there were two cases (8.7%) with microangiopathic lesions in the group without RVLs, presenting with fibrinoid necrosis and arteriolar thrombosis, respectively. The four cases (5.56%) in the mild RVL group presented with fibrinoid necrosis. There were 10 cases (15.38%) in the moderate RVL group, namely eight cases of fibrinoid necrosis and two of arteriolar thrombosis, and five cases (19.23%) in the severe RVL group, comprising four cases of fibrinoid necrosis and one of arteriolar thrombosis. No significant difference was found among the four groups.

Influencing Factors of RVLs in Patients With AAV-Related Renal Vasculitis

The patients were divided into two groups according to the presence or absence of RVLs. Univariate analysis showed that SCR, eGFR, SBP, ESR, the tubular atrophy score, the interstitial inflammatory cell infiltration score, and the interstitial fibrosis score were all influencing factors for RVLs (Table 5). Based on the results of the univariate analysis and clinical practice, age, HB, SCR, HDL, 24 h-TP, CRP, ESR, BVAS score, the proportion of normal glomeruli, the tubular atrophy score, the interstitial inflammatory cell infiltration score, and the interstitial fibrosis score were included in the multivariate binary logistic regression. Among the 186 patients with AAV, 23 (12.37%) had no RVLs and 163 (87.63%) had RVLs. The likelihood ratio test (Wald χ2 = 33.307, P = 0.001) and the goodness-of-fit test (Pearson χ2 = 3.644, P = 0.888) suggested that the binary logistic regression model was suitable for this study. The results showed that SCR and ESR were significantly and positively associated with RVLs in patients with AAV-related renal vasculitis (OR = 1.006, 95% CI: 1.001-1.011, P = 0.028; OR = 1.021, 95% CI: 1.005-1.038, P = 0.012) (Table 6).

During the 1-year follow-up, 54 patients were in complete remission, 9 in partial remission, 10 relapsed after remission, 3 showed no remission, 42 were lost to follow-up, 39 underwent dialysis, and 29 died (Figure 2). We defined progression to ESRD or death as the endpoint event, and the K-M survival curve showed that RVLs were a predictor of poor prognosis in patients with AAV-related renal vasculitis (Figure 3). Univariate Cox proportional hazards analysis was also performed.

Discussion

Anti-neutrophil cytoplasmic antibody-associated vasculitis affects small vessels throughout the body and is associated with the presence of ANCA in serum. In addition to the traditional mechanisms of phagocytosis, attack, and killing, neutrophil extracellular traps (NETs) are an important means of defense against pathogen invasion (13). Persistent and prolonged exposure to NETs and their contents disrupts tolerance to specific self-antigens, particularly myeloperoxidase and proteinase 3. These antigens are presented to CD4+ T cells via dendritic cells, resulting in ANCA production. ANCA induces the overactivation of neutrophils, leading to the production of abnormal cytokines, accompanied by the release of reactive oxygen species and lytic enzymes and the further formation of NETs, which damage vascular endothelial cells. NETs are thus involved not only in ANCA-mediated vascular injury but also in the production of ANCA itself; therefore, a vicious cycle of NET formation and ANCA production is thought to be involved in the pathogenesis of AAV (14). Moreover, in this study, the patients with RVLs were found to have a worse prognosis than those without RVLs after the 1-year follow-up.
Therefore, the assessment of vascular lesions is an important part of the assessment of AAV. The SCR of patients with severe, moderate, and mild RVLs was significantly higher than that of patients without RVLs, and the eGFR was lower. The regression analysis showed that SCR was an independent risk factor for RVLs. Probably, AAV disease activity, rapid progression, prolonged disease duration, autoimmune abnormalities, inflammation, oxidative stress, hemodynamic changes, mechanical stretch, and other factors lead to endothelial dysfunction, resulting in arterial intimal thickening, arteriolar hyalinosis, and thrombosis. These processes interact with each other, exacerbating the development of renal dysfunction, and loss of renal function is accompanied by intimal hyperplasia of the renal arterioles (15).

This study found that the higher the RVL score, the faster the ESR, and that ESR was an independent risk factor for the occurrence of RVLs. It is well known that ESR reflects disease activity in many autoimmune diseases. A possible mechanism is that AAV disease activity activates the complement system, increases the inflammatory response, and simultaneously increases fibrinogen and other inflammatory biomarkers, all of which contribute to the development of AAV-related renal vasculitis. With worsening renal function, the metabolism and elimination of fibrinogen decrease gradually, leading to an acceleration of the ESR (16). On the other hand, renal dysfunction might directly increase inflammatory mediators through increased oxidative stress, which can lead to the accumulation of advanced glycation end products. Levels of these oxidation products increase as the glomerular filtration rate declines, which might lead to arterial intimal thickening and directly increased vascular permeability (17). In addition, inflammation also affects lipid levels, which has attracted increasing attention in atherosclerosis and other immune-mediated diseases (18,19). Therefore, ESR can indirectly reflect the state of RVLs to a certain extent; controlling inflammation as early as possible in the course of the disease may delay the progression of vascular lesions and reduce the probability of death or progression to dialysis.

As the main protective lipoprotein in serum, HDL is pivotal in the prevention of atherosclerosis and is also considered a good indicator of cardiovascular disease risk and of AAV (18,20). This study showed that the lower the HDL, the higher the occurrence and the greater the severity of RVLs; accordingly, HDL was a protective factor against RVLs. It has been shown that MPO selectively targets and oxidizes HDL-bound Apo-A1, which makes HDL lose its abilities of cholesterol efflux control, acyltransferase activation, anti-inflammation, and anti-apoptosis. A decrease in the serum HDL level leads to decreased regulation of complement pathway activation and, at the same time, oxidation by MPO causes the complete or partial loss of the anti-inflammatory ability of HDL, which may be the underlying mechanism of the decreased HDL level in the pathogenesis of AAV (18,21).
Recent findings have indicated a harmful interaction between autoantibodies targeting HDL lipoproteins and their components and lipid profiles in rheumatoid arthritis and systemic lupus erythematosus, closely related to disease activity, anti-HDL antibodies, lipoprotein functionality, lipid profiles, and antioxidant features (22-24). There might therefore be a detrimental interaction between autoantibodies targeting HDL lipoproteins and their components and lipid profiles in patients with ANCA-related renal vasculitis. This interaction is related to HDL antioxidant dysfunction, lipid profiles, lipoprotein functional impairment, and ultimately vascular endothelial cell dysfunction. The HDL dysfunction associated with anti-HDL antibodies may not be a phenomenon specific to a single disease but rather a common feature of immune-mediated diseases (25).

Hypertension has long been known to play an important role in the development of kidney damage and, because of its prevalence in the general population, remains the second leading cause of end-stage renal disease after diabetes (26). The prevalence of hypertension was markedly higher in patients with severe RVLs than in patients without RVLs or with mild RVLs. The SBP of patients with severe RVLs was markedly higher than that of patients with mild and moderate RVLs and of patients without RVLs, and it was a risk factor for RVLs. ANCA attacks the kidney, resulting in glomerular capillary loop necrosis and crescent formation and leaving the glomerulus in a state of high hemodynamic load. Elevated glomerular blood pressure leads to stretching of the glomerular capillaries, endothelial injury and dysfunction (even endothelial disintegration), and increased glomerular protein filtration, leading to glomerular collapse, segmental necrosis, and glomerulosclerosis. Simultaneously, the high hemodynamic load can significantly dilate the glomeruli and stretch mesangial cells, increase the synthesis of collagen and fibronectin, buffer glomerular hypertrophy to some extent, reduce glomerular pressure, and increase glomerular compliance. The interaction of these factors leads to renal arteriosclerosis and renal parenchymal ischemia, aggravates the pathological changes of the renal parenchyma and the damage to renal function, and further affects the renal interstitium. Adaptive structural changes in response to hypertension, such as increased medial wall thickness and reduced lumen diameter, lead to increased vascular wall stress and, ultimately, hypoxic-ischemic injury to the glomerular and tubulointerstitial structures due to decreased tissue perfusion. Hypertension and kidney damage interact with each other, forming a vicious circle (27). Therefore, controlling blood pressure, especially SBP, can delay the progression of renal vascular lesions to a certain extent and thereby improve the prognosis of patients.

Kidney disease is common in AAV, and the typical renal presentation is rapidly progressive glomerulonephritis. Histologically, pauci-immune necrotizing crescentic glomerulonephritis is the most typical pathological finding, and histologic findings remain the gold standard for diagnosing patients with ANCA-associated renal vasculitis (4). Our results showed that the more severe the RVLs, the lower the proportion of normal glomeruli, the higher the proportion of global glomerulosclerosis, and the higher the interstitial fibrosis scores.
Meanwhile, the tubular atrophy score, the interstitial inflammatory cell infiltration score, and the interstitial fibrosis score were risk factors for RVLs. The possible reasons are as follows: in the progression of glomerular injury, inflammatory cells and mediators act on the glomeruli and tubulointerstitial tissue, leading to the infiltration of interstitial inflammatory cells and the release of cytokines, thus eventually causing renal tubular atrophy, interstitial fibrosis, and vascular lesions. The tubulointerstitial lesions lead to ischemia and hypoxia in the renal tissue, further aggravate the infiltration of interstitial inflammatory cells, release inflammatory cytokines and mediators, and promote fibroblast proliferation. All of these can cause the activation and apoptosis of renal tubular epithelial cells and tubular atrophy, and accelerate the progression of renal interstitial fibrosis. Therefore, early control of vascular lesions may delay the progression of ANCA-associated renal vasculitis to a certain degree.

To the best of our knowledge, there are only a few investigations of RVLs in patients with AAV. However, as this is a retrospective study, some limitations should be addressed. On the one hand, this is a single-center study; because of the low incidence of AAV, its poor prognosis, and the difficulty of clinical diagnosis, the sample size of this study is relatively small. On the other hand, we focused on the clinicopathological features and influencing factors of RVLs in AAV without taking into consideration the effect of drug therapy on the patients. Therefore, multi-center studies with large samples are needed to further establish the correlations and evaluate the prognosis, and basic experiments are needed to explore the specific mechanisms of the disease.

CONCLUSION

In conclusion, the severity of RVLs was reflected in clinical indexes such as SCR, eGFR, HDL, SBP, and ESR, and, in renal pathology, in the proportion of normal glomeruli, the proportion of global glomerulosclerosis, and the interstitial fibrosis score. The probability of death or progression to dialysis within 1 year in patients with RVLs was significantly higher than in patients without RVLs, and RVLs were an independent risk factor. Moreover, SCR, eGFR, SBP, ESR, the tubular atrophy score, the interstitial inflammatory cell infiltration score, and the interstitial fibrosis score were risk factors for the occurrence of RVLs; HDL was a protective factor, and SCR and ESR were independent risk factors for RVLs. Therefore, it is necessary to control blood pressure and inflammation, protect renal function, and regulate blood lipids, which could delay the progression of vascular lesions to a certain extent and improve the prognosis of patients.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of the First Affiliated Hospital of Zhengzhou University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS

RW conceived the study. RW and YW initially drafted the manuscript, performed the statistical analyses, and gave critical input to the discussion and interpretation of the data. DA, NG, and YG collected and interpreted the data.
XZ and JW were responsible for reviewing and scoring the specimens. RW and LT prepared, reviewed, and revised the manuscript, and LT supervised the study. All authors contributed to and approved the final manuscript.

FUNDING

Open access funding was provided by the 2016 Key Science and Technology Plan project of Henan Province (162102310198).
The Evolution of Minimally Invasive Spine Tumor Resection and Stabilization: From K-Wires to Navigated One-Step Screws

Minimization of the surgical approaches for the resection and stabilization of spinal extradural metastases was advocated by the 2012 Oncological Guidelines for Spinal Metastases Management. Minimally invasive approaches to spine oncology surgery (MISS) are continually advancing. This paper describes the evolution of minimally invasive surgical techniques for the resection of metastatic spinal lesions and stabilization at a single institute. A retrospective analysis of patients who underwent minimally invasive extradural spinal metastases resection during the years 2013-2019 by a single surgeon was performed. Medical records, imaging studies, operative reports, rates of screw misplacement, operative time, and estimated blood loss were reviewed. A detailed description of the surgical technique is provided. Of 138 patients operated on for extradural spinal tumors during the study years, 19 were treated with a minimally invasive approach and met the inclusion criteria for this study. The mortality rate improved significantly over the years, in keeping with selection criteria that progressively favored patients with a better prognosis. The surgical technique evolved over the study years from fluoroscopy to intraoperative 3D imaging and navigation guidance, and from a K-wire screw insertion technique to one-step screws. Minimally invasive spinal tumor surgery is an evolving technique. The adoption of assistive devices such as intraoperative 3D imaging and one-step screw insertion systems was safe and efficient. Oncologic patients may particularly benefit from the minimization of surgical decompression and fusion in light of the frailty of this population and the mitigated postoperative outcomes associated with MIS oncological procedures.

Introduction

Over the past decades, earlier diagnosis of cancer and improved treatment modalities have resulted in a continuing rise in the life expectancy of oncologic patients [1,2]. Consequently, the prevalence of spine metastases is increasing. Due to the frailty of this patient population and its increased risk for post-operative complications, the 2012 Oncological Guidelines for Spinal Metastases Management [3] advocated the use of minimally invasive surgery (MIS) techniques for the treatment of spinal metastases. MIS techniques have been associated with reduced blood loss, lower infection rates, reduced overall complication rates, improved post-operative pain control, faster recovery, and shorter hospital stays [4-6]. Our group has previously published its experience with MIS decompression and fusion for spinal metastases, assisted by fluoroscopy and K-wire-guided screws [7]. Since then, the use of intra-operative imaging and navigation devices has become prevalent in a plethora of surgical techniques, allowing for a more efficient surgical course. Recently, one-step screw systems were introduced, allowing for simpler and faster percutaneous screw introduction. This paper describes the evolution of the techniques for the excision of spinal metastases and instrumentation that were implemented in our department.

Methods

A retrospective analysis of 19 patients who underwent minimally invasive extradural spinal tumor resection at a single institution by the senior author (RH) was performed. Patients operated on with an open approach were excluded (n = 119).
Patients were operated on using the described MIS technique if they had a tumor confined to the vertebral body and pedicle, with or without canal involvement. Only focal disease confined to one or two vertebral bodies was included, and only if the surgeon considered a unilateral approach adequate for tumor resection. Tumors involving the canal bilaterally, the lamina, the spinous process, or the posterior soft tissue were considered for open surgery. All cases were presented for discussion at a multi-disciplinary tumor board, and the surgical approach was agreed upon. Following the approval of the Institutional Review Board, the authors evaluated the patient records and imaging studies of patients operated on between November 2013 and March 2019. In addition to patient demographics, the tumor location, number of spinal levels involved, and tumor pathology were gathered. Operation-related data were collected, including estimated blood loss (EBL), duration of operation, navigation use, and screw insertion method. The primary outcomes were perioperative and postoperative complications, discharge status, duration of hospitalization, recurrence rate, and mortality. Adjuvant radiation therapy was recorded. Statistical analysis was performed with SPSS software version 22 (IBM Corp, Armonk, NY, USA). For parametric variables, data are expressed as mean and range, and for nonparametric variables, data are expressed as frequency and percentage. Univariate analysis was done with the chi-square test to assess statistical significance (p < 0.05).

Surgical Technique

The surgical technique for the minimally invasive resection of spinal extradural metastases used in the initial cases was described by the authors in 2015 [7]. To summarize briefly, the patient was positioned prone; following preparation and draping, and utilizing fluoroscopic guidance, percutaneous K-wires were inserted into the level above and below the index vertebra, as well as into the index vertebra contralateral to the decompression side. A minimally invasive expandable tubular retractor (X-tube, Metrix, Medtronic, MN, USA) was positioned under fluoroscopic guidance over the facet, lamina, and transverse process on the decompression side (Figure 1A). Using a high-speed drill and a transpedicular approach, the thecal sac was exposed and decompressed, and a partial corpectomy was performed (Figure 1B, Video S1). The retractor was then removed, followed by serial dilatation over the previously inserted K-wires and placement of percutaneous screws (Sextant FNS or Longitude FNS screws, Medtronic, MN, USA). Under fluoroscopic guidance, PMMA was injected through the screws at the index level and at the levels above and below. Finally, percutaneous rod insertion and locking took place. The wounds were irrigated and closed after fluoroscopic verification of the final hardware position.

The introduction of intraoperative 3D imaging modalities combined with intraoperative navigation allowed for modifications and improvement of the surgical technique. After positioning, the O-arm was introduced, and the scanning and parking positions were determined and saved. A reference frame was attached to the iliac wing with a designated iliac pin (Medtronic, MN, USA). Following patient draping, an O-arm 3D scan was performed and transferred to the navigation station (S7, Medtronic, MN, USA). A navigated Jamshidi needle was used to insert K-wires into the levels above, below, and contralateral to the decompressed index side.
A navigated dilator was used to position the X-tube retractor over the lamina and facet of the index level. Navigation-assisted drilling was performed to decompress the canal and remove extensive parts of the vertebral body. Once decompression was finalized, the retractor was removed, and navigated screws were inserted over the K-wires. A 3D scan was repeated to confirm the hardware position. PMMA was injected into the instrumented levels under lateral 2D fluoroscopy using the O-arm device. The screws were connected with rods, and the rods were locked in place. Two-dimensional images confirmed the rod position.

Figure 1. (C) The one-step screws are fitted with reference frames and calibrated. Following the skin incision, the screw is advanced to the starting point with navigation guidance; the K-wire is advanced 2-3 mm, and a mallet is used to anchor the screw. The screw handle is then used to advance the screw with navigation guidance.

The emergence of one-step screws enabled further progression of the described technique. Following the initial scan, the retractor was positioned, and decompression was performed. A second scan was acquired and transferred to the navigation system. One-step screws (Viper-prime FNS, J&J, NJ, USA) were fitted with universal reference frames (Sure-trak, Medtronic, MN, USA) and calibrated accordingly (Figure 1C,D). The internal K-wire was advanced to be 3-4 mm proud relative to the screw tip. The skin incision was made according to the navigation-proposed entry point, and the screw was advanced through the muscles to the transverse process. Once navigation confirmed the screw to be at an acceptable starting point, the internal sharp K-wire was hammered into the starting point while being gradually advanced. Once docked in the starting position, the handle was used to drive the self-drilling screw into the pedicle and vertebra. A third 3D scan verified the screw positions, followed by PMMA injection and rod insertion as previously described.

Results

Overall, 19 patients were included in this study. Table 1 summarizes the patients' demographics, neurological status, and pathological diagnoses. Nine operations (47%) took place in 2013-2014, six (31.6%) in 2015-2016, and four (21%) in 2017-2019. Apart from one patient, all patients were treated for single-level pathology. The most common pathological diagnosis was RCC (n = 5). All patients presented with spinal cord compression or nerve root impingement secondary to metastatic lesions. The intraoperative imaging techniques included C-arm fluoroscopy (68.4%) and O-arm (31.6%) (Table 2). Sixteen patients underwent screw fixation, of whom 13 had short-construct instrumentation using Longitude FNS screws (Medtronic, MN, USA), one had percutaneous pedicle screw-rod fixation using the Sextant system (Medtronic, MN, USA), and two had single-screw insertion with the Viper Prime system (Johnson & Johnson, NJ, USA). The mean EBL was 368 mL (range: 0-1800). The mean operative time was 140.3 min. There were no intraoperative complications apart from one patient with excessive bleeding. The length of stay ranged between 1 and 14 days (mean 4.2 days). Eighteen patients were discharged to their homes and one to a rehabilitation facility. Recurrence was documented in six patients during a mean follow-up period of 14.3 months. The mortality rate during the follow-up period was 74% (n = 14), and the 6-month mortality rate was 37% (n = 7). The postoperative complication rate was 16% (n = 3; respiratory infection, deep wound infection, and deep vein thrombosis).
Four patients had neurological improvement following surgery, and one deteriorated neurologically. The adjuvant and neoadjuvant radiation treatments are presented in Table 2.

Discussion

Bone is the third most common site of metastatic spread, following the pulmonary and hepatic systems [8]. The prevalence of bone metastases is expected to rise further with improvements in systemic control and with technological advances that allow for earlier diagnosis and increased survival of the oncological population [9]. About 90% of cancer patients with spinal metastases report bone and/or axial back pain, accompanied by radicular pain. Half of these patients have sensory and motor deficits, and more than 50% have bladder and intestinal dysfunction [10]. Treatment options for symptomatic patients with metastatic epidural spinal cord compression (MESCC) include a combination of corticosteroids [11], radiotherapy [12], or decompression of the neural elements through resection of compressive tumors in selected patients, with or without instrumentation [1,8,13]. Although most patients may benefit from non-surgical treatment options, patients with an unstable spinal column or severe spinal cord compression are likely to improve their quality of life following surgery [14]. Patchell et al. [13] demonstrated that patients with spinal metastases who had undergone surgery and radiotherapy remained ambulatory for longer periods than patients treated with radiotherapy alone. Even so, uncertainties still exist regarding the selection criteria for surgery. In part, this may be attributed to the general medical status of oncologic patients, in whom neoplasm-related co-morbidities such as anemia, an impaired immune system, and malnourishment are common, so that open surgery is deemed high risk [14,15]. Open spine surgery may be complicated by significant blood loss, lengthy hospital stays, high rates of post-operative infection, severe back muscle injury, and the need for intensive pain control [7]. The oncological guidelines for the management of malignant extradural spinal cord compression [3] state that, since surgery is associated with significant morbidity, the patient's prognosis should be considered in the decision-making process, so that patients with a favorable prognosis who may be operated on safely should be referred to surgery. Of note, the guidelines emphasized that every effort to minimize the surgical extent should be made in light of the advantages of the minimally invasive technique. In recent years, technological innovation has enabled a rising number of surgeons to routinely opt for minimally invasive spinal surgery (MISS) for the treatment of various neoplastic lesions. Using tubular retractors and microscopic visualization, MISS enables decompression of the spinal cord and nerves via small incisions while stabilizing the spine with percutaneous screws [10,16,17]. MISS techniques reduce the risks of open spinal surgery, as they have been associated with decreased intraoperative blood loss, improved wound healing, and shorter postoperative hospitalization [5,6]. Therefore, the MIS approach has the potential to minimize the morbidity related to open surgery. Consequently, minimization of the surgical approach may allow frail patients who were not considered operable in the past to undergo surgery in a safe and efficient manner [17].
A pivotal drawback of surgery for metastatic patients is the discontinuation of chemotherapy and radiation therapy to avoid wound dehiscence and infection [18]. MISS for spinal metastases has been associated with earlier post-operative radiation therapy [19] and chemotherapy [6]. A plethora of surgical techniques were developed to achieve the primary goals of surgery, namely decompression of the neural elements and stabilization of the spinal column. Until recently, the open surgical approach was the mainstay of treatment. The mini-open approach was introduced over the past decade, allowing tumor resection to be performed through a familiar midline approach augmented by percutaneous screws [20]. Saadeh et al. [21] compared the mini-open approach to open surgery for spinal metastases and concluded that the mini-open approach reduced postoperative pain and length of recovery. Minimally invasive tumor resection has been described over the last decade with the use of either tubular retractors or expandable retractors [22,23]. The introduction of spinal radiosurgery in recent years has raised the need for separation surgery with adjuvant SRS [2]; these procedures are well served by MIS epidural decompression. In 2015, Harel et al. [7] described their experience with an MIS expandable tubular retractor approach instrumented with short-segment fenestrated screws and PMMA augmentation. Since the description of this technique, two main technological advances have become available: intra-operative imaging and navigation, and one-step screw insertion systems.

MIS surgery compromises the surgeon's ability to comprehend the anatomical structures by direct sight and relies significantly on either 2D fluoroscopy or 3D navigation. 3D imaging combined with intraoperative navigation provides better visualization of the anatomy and therefore improves the surgeon's orientation during surgery and ameliorates screw localization and placement [24-26]. In the current series, the use of fluoroscopy was abandoned in favor of intraoperative 3D imaging and navigation. Intraoperative navigation for spine tumor resection was first described by Kalfas in 2001 [27], emphasizing the importance of the surgeon's validation of the navigation system's accuracy before relying on it. Fujibayashi et al. [28] described the use of navigation for osteotomies during en-bloc tumor resection. Intra-operative navigation has been utilized for osteoid osteoma curettage, enabling the surgeon direct access to the tumor and thus limiting the extent of the exposure without compromising spinal stability [29,30]. As earlier systems relied on pre-operative CT scans and registration was cumbersome and inaccurate, intra-operative navigation was rarely used. The integration of intra-operative 3D imaging allowed for more accurate registration and increased the popularity of this technology among surgeons. A multicenter study described the results of 50 patients with spinal tumors who had undergone open surgery with intra-operative imaging and navigation [31]; the authors concluded that tumor resection can be targeted while minimizing the dissection. In addition, utilizing intra-operative imaging and navigation for MISS tumor resection reduces the surgical team's radiation exposure [31-33]. Intraoperative navigation has repeatedly been shown to increase the screw accuracy rate in multiple studies [26,34-36].
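Screw accuracy in such studies is commonly reported with the Gertzbein-Robbins grading, in which grades A and B are usually counted as clinically acceptable. The cited papers do not necessarily all use this scheme, so the sketch below is an illustrative assumption rather than a reproduction of their methods.

```python
def gertzbein_robbins(breach_mm: float) -> str:
    """Grade a pedicle screw by its cortical breach distance (mm):
    A = fully intrapedicular, B < 2 mm, C = 2-4 mm, D = 4-6 mm, E > 6 mm."""
    if breach_mm <= 0:
        return "A"
    if breach_mm < 2:
        return "B"
    if breach_mm < 4:
        return "C"
    if breach_mm < 6:
        return "D"
    return "E"

def accuracy_rate(breaches_mm) -> float:
    """Share of screws graded A or B, the usual 'accurate' cutoff."""
    grades = [gertzbein_robbins(b) for b in breaches_mm]
    return sum(g in ("A", "B") for g in grades) / len(grades)

# Hypothetical cohort of 8 screws (breach distances in mm)
print(accuracy_rate([0, 0, 1.2, 0, 3.1, 0, 0, 1.8]))  # -> 0.875
```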
None of the patients in the current study experienced screw mispositioning with either fluoroscopic guidance or 3D imaging guidance. Metastatic cancer patients commonly suffer from poor bone quality, which is further exacerbated by the combination of radiation and chemotherapy, leading to lower rates of bone healing [37] and placing patients at high risk for screw pullout. Moussazadeh et al. [38] reported on 44 patients undergoing short-segment instrumentation with cement-augmented screws following pathological vertebral fractures and described their practice of injecting cement into the vertebral body prior to screw placement. In contrast, fenestrated screws allow cement injection through the screw after placement, directly into the surrounding vertebral body, thus increasing pull-out strength while reducing the number of tools inserted into the vertebral pedicle [7,39,40]. We opted to insert fenestrated screws augmented with cement in all instrumented patients to obtain maximal purchase in the levels above and below the index vertebra. In addition, the side contralateral to the decompression in the involved vertebra was instrumented with a fenestrated screw, allowing augmentation of the residual vertebral body with cement to enable a three-point fixation construct. None of the constructs in this series failed during the follow-up period. The recent development of one-step screws has allowed the described technique to advance further by simplifying screw insertion and rendering numerous operative steps redundant [41,42]. While the standard MIS screw insertion technique involves the insertion and removal of cannulated instruments over a K-wire, one-step insertion screws consist of a single navigated screwdriver with an integrated K-wire that corresponds to the navigation system. The integrated wire eliminates the possible complications involved with wire techniques. The principles of converting open surgery to MISS include maintaining the goals and outcomes of surgery while reducing complication rates and expediting recovery. In the current study, one-step screws have been used since 2017. Screw malposition was not observed with either screw type, and no intraoperative complications involving screw insertion occurred, thus fulfilling the role of MISS: achieving the operative goals while avoiding complications. A methodological patient selection process is a key element in the treatment of oncologic patients. In light of the lower complication rates associated with minimally invasive approaches, patients with a lower functional status and a shorter life expectancy may be referred to surgery [43]. In our institution, efforts are made to implement this in the multidisciplinary decision-making process. Wright et al. [44] collected data from 22 centers on 2001 patients who underwent surgery for symptomatic spinal metastases between 1991 and 2016. The data analysis revealed that long-term survival improved significantly over the time course of the study. The authors concluded that this change is due to earlier diagnosis, better adjuvant therapy, and an improved understanding of spinal metastatic disease, which results in selecting operable patients with better potential for long-term survival. This was further supported by another paper showing improved survival for patients who underwent surgery while in a relatively good pre-operative physical condition [45].
Over the study years, the patient selection criteria have leaned towards patients with a favorable prognosis. This trend may act as a double-edged sword, reducing the number of patients eligible for oncological spine surgery on the one hand, but significantly improving 6-month survival rates on the other. Minimally invasive surgery for spinal metastases may broaden the spectrum of patients eligible for surgery compared with the open approach. Conclusions MISS technology is constantly evolving, allowing for improvement in surgical techniques and possibly better outcomes. MISS for spinal tumors has multiple advantages, and wider adoption of new technology may benefit these patients by reducing complications and allowing faster post-operative initiation of adjuvant therapy. Limitations This is a retrospective study of a small cohort examining a new technology. The small numbers do not allow definite conclusions. The scope of this paper is limited to the description of the benefits of a new method. As the patient selection process for the MISS procedure is at the surgeon's discretion, selection bias is a main limitation of this study. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm12020536/s1, Video S1: An L2 RCC metastasis involving the left L2 vertebral body and pedicle, invading the spinal canal and foramina and compressing the thecal sac and left L2 nerve root. The video demonstrates lamina and facet exposure; drilling of the lamina and facet; removal of the facet; and tumor removal from the pedicle and vertebral body. A navigation wand is introduced to assess resection borders. The lateral border of the dura is exposed and decompressed. (Video speed is doubled.) Data Availability Statement: The data presented in this study are available on request from the corresponding author.
COVID-19 Pandemic and Eating Disorders among University Students An online cross-sectional study was conducted in May 2021 to identify factors, such as changes in food choices, lifestyle, risk and protective behavior, mental health, and social demographics, associated with eating disorders (ED) among students of a French university. Students were invited to fill out an online questionnaire. ED were identified using the French version of the five-item "Sick, Control, One stone, Fat, Food" (SCOFF) questionnaire. The Expali™-validated algorithmic tool, combining SCOFF and body mass index, was used to screen ED into four diagnostic categories: bulimic ED, hyperphagic ED, restrictive ED, and other ED. A total of 3508 students filled out the online questionnaire; 67.3% were female, and the mean age was 20.7 years (SD = 2.3). The prevalence of ED was 51.6% in women and 31.9% in men (p < 0.0001). Lower food security scores were associated with a higher risk for all ED categories. Depression and academic stress due to COVID-19 were associated with ED regardless of category. Regarding health behaviors, high adherence to the national nutrition recommendations was a protective factor against the risk of bulimic ED, hyperphagic ED, and restrictive ED. A lower frequency of moderate and vigorous physical activity was associated with a higher risk of hyperphagic ED. Our study has shown a high rate of positive ED screening among the students of a French university fourteen months after the beginning of the COVID-19 pandemic. By disrupting academic learning, jobs, and social life, the COVID-19 pandemic could have exacerbated existing ED or contributed to the onset of new ED. Introduction University students may constitute a particularly vulnerable population for mental health problems, given the transition to adulthood, frequent economic difficulties, and academic burden [1]. University years coincide with the typical age of onset of eating disorders (ED), a significant concern among university students that is worsened by academic stress [2]. To this context was suddenly added the Coronavirus Disease 2019 (COVID-19) pandemic, declared on 11 March 2020, which led to multiple lockdowns and curfews in France [3]. University students were affected by the lockdown periods, with disruptions in teaching and a switch to online learning, leading to worries about adapting to new methods [4]. The implementation of public health measures (including gym closures and prohibited group sports) caused a decrease in physical activity [5] and a modification of eating habits among university students [6,7]. A recent review of the literature concluded that quarantine measures and the COVID-19 pandemic could have negative psychological effects, including stress, anxiety, and depression [8,9], higher among university students than non-students [10]. A variety of stressors contributed to increased levels of anxiety, stress, and depressive thoughts as students lived through the COVID-19 pandemic [11]. In addition, reduced socialization, as well as students' living conditions, may also have contributed to lockdown stress [12]. According to Wang et al., the biggest contributor was stress associated with academics, followed by general uncertainty regarding the pandemic, health concerns, and concerns related to finances (loss of a job) [13]. Salazar-Fernandez et al. described emotional distress as a key mechanism explaining coping behaviors, such as eating comfort food, adopted as a consequence of COVID-19-related stressors [14].
University students may have modified or stopped their paid activity [15], which could have led to food insecurity [16]. In the United States, studies showed that university students with food insecurity were more likely to screen positive for an ED [17], especially during the COVID-19 pandemic [18]. The COVID-19 pandemic has also led to anxiety and depression symptoms among university students [19,20], which are well known to be associated with ED [21,22]. Lin et al. [23] highlighted how the COVID-19 pandemic has hampered the ability to promptly identify and treat young people with eating disorders. Recently, we highlighted a sharp increase in ED in 2021 compared with the previous 10 years [24]. However, few studies have evaluated the risk of onset or worsening of ED and associated factors in the student population [25,26]. This study aimed to investigate factors, such as changes in food choices, lifestyle, risk and protective behavior, mental health, and social demographics, associated with four ED categories (bulimic ED, hyperphagic ED, restrictive ED, and other ED) among students of a French university during the COVID-19 pandemic. Methods The recruitment of university students followed a convenience sampling method. In May 2021, university-wide email distribution lists were used to invite students to participate in a study on the impact of the COVID-19 pandemic. Students interested in participating were asked to follow a link to the survey website. This online, anonymous, cross-sectional observational study was approved by the local Institutional Review Board (E2020-22). Students were eligible for inclusion if they were currently enrolled at a higher education institution, were aged 18 years or more, and agreed to answer the study questionnaire. No duplicates were found (based on age, gender, academic course, and year of study). Students aged over 30 years were secondarily excluded. There were no missing data because all fields were mandatory. The study covered the pandemic period (the month before May 2021) and, for eating habits, food security, and physical activity, also the pre-pandemic period, i.e., the month before the COVID-19 measures (before March 2020); the same questions were asked for both periods. Socio-Demographic Characteristics Data were collected on age; gender; type of academic course (law and economics, social sciences, health sciences, and technology); and year of the academic course, further categorized as year 1, years 2 and 3, years 4 and 5, and year 6 and above. Eating Habits The PNNS-GS2 (Programme National Nutrition Santé-Guidelines Score 2) is a predefined food-based dietary index designed to reflect adherence to healthy food group recommendations [27]. The PNNS-GS2 covers six recommended food groups: fruit and vegetables; nuts; legumes; whole-grain food; milk and dairy products; and fish and seafood. A score reflecting adherence to the 2017 French nutritional guidelines was used, with weighting according to the level of evidence for the association between food groups and health: a weight of 3 for fruit and vegetables, 2 for whole-grain food and for fish and seafood, and 1 for nuts, legumes, and milk and dairy products [28]. PNNS-GS2 components and scoring (0 to 14) are presented in Annex 1.
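To make the weighting scheme concrete, the sketch below (Python) encodes the weights just described. It assumes, purely for illustration, that each of the six food groups receives a binary adherence flag before weighting; the exact component scoring that yields the full 0-14 range is defined in Annex 1, so this is a simplified approximation rather than the validated score.

```python
# Hedged sketch of a PNNS-GS2-style weighted adherence score.
# Assumption: each food group is scored 1 (meets recommendation) or 0;
# the published score uses finer component scoring (range 0-14, Annex 1).

PNNS_GS2_WEIGHTS = {
    "fruit_and_vegetables": 3,
    "whole_grain_food": 2,
    "fish_and_seafood": 2,
    "nuts": 1,
    "legumes": 1,
    "milk_and_dairy": 1,
}

def pnns_gs2_score(adherence: dict) -> int:
    """Weighted sum of binary adherence flags (1 = meets recommendation)."""
    return sum(weight * int(bool(adherence.get(group, 0)))
               for group, weight in PNNS_GS2_WEIGHTS.items())

# Example: a student meeting only the fruit/vegetable, legume,
# and dairy recommendations scores 3 + 1 + 1 = 5.
student = {"fruit_and_vegetables": 1, "legumes": 1, "milk_and_dairy": 1}
print(pnns_gs2_score(student))  # 5
```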
Food Security Food security was assessed using the six-item Food Security Scale [29], with a score of 0-1 indicating high food security, a score of 2-4 indicating low food security, and a score of 5-6 indicating very low food security. Physical Activity Students reported their frequency of moderate physical activity, including cycling or walking for at least 30 min, and vigorous physical activity, including lifting heavy weights, running, aerobics, or fast cycling for at least 30 min [30]. Frequency was categorized as (almost) never; less than once a week or once a week (categorized as occasional); and more than once a week or (almost) daily (categorized as regular) [5]. Eating Disorders Students filled in the French version of the five-item "Sick, Control, One stone, Fat, Food" (SCOFF) questionnaire. The diagnostic threshold was fixed at two positive responses, with a sensitivity of 0.88 and a specificity of 0.93 using interviews as the diagnostic reference; data obtained with the SCOFF therefore gave a proxy of actual ED. The Expali™-validated algorithmic tool, combining SCOFF and body mass index, was used to screen ED into four diagnostic categories: bulimic ED, hyperphagic ED, restrictive ED, and other ED (purging disorder, night eating syndrome, and any other ED) [31]. Mental Health Depression was assessed using the eight items of the CESD-8 (Center for Epidemiologic Studies-Depression) scale, which has shown adequate psychometric properties (Cronbach's alpha of 0.82) [32]. Response values were scored on a 4-point Likert scale (range 0 to 3), giving a CESD-8 score from 0 to 24, with higher scores indicating a higher frequency of depressive complaints. Academic stress was assessed using a Likert scale (0 = totally disagree to 4 = totally agree) for increased academic workload, concern about not being able to validate the academic year, stress from changes in teaching methods, and difficulty keeping up with e-learning courses (insufficient equipment, weariness); academic stress was thus scored from 0 to 16. The academic stress and academic satisfaction scales have sufficiently high internal consistency (Cronbach's alphas greater than 0.6) [33]. Students' concern about COVID-19 was assessed on a Likert scale from 0 to 10 based on the following items: worry about becoming severely ill from a COVID-19 infection, and worry about a relative becoming severely ill from a COVID-19 infection. Students were asked if they had visited a healthcare professional for support during the year 2021 and, if so, which category of healthcare professional: general practitioner, psychologist, psychiatrist, or nutritionist. Statistical Analysis Qualitative variables were summarized as percentages and compared using the chi-squared test; continuous variables were summarized as means with standard deviations (SD) and compared using Student's t-test. Cross-sectional associations were estimated via multivariable polytomous logistic regression (no ED = reference category), providing adjusted odds ratios (ORs) and 95% confidence intervals (CIs). The principal outcome (dependent) variable was the four-category ED measure (restrictive, bulimic, hyperphagic, and other ED). Variables with a p value < 0.20 were included in the logistic regression. A p value below 0.05 was considered significant. The analysis was conducted using XLSTAT (Addinsoft, Paris, France; version of 1 March 2020).
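For readers who wish to reproduce this type of analysis outside XLSTAT, the following sketch shows an equivalent multivariable polytomous (multinomial) logistic regression in Python with statsmodels, on synthetic data with hypothetical variable names; the "no ED" category is coded 0 so that it serves as the reference, and adjusted ORs are obtained by exponentiating the coefficients.

```python
# Hedged sketch: multinomial logistic regression on synthetic data
# (variable names are hypothetical, not the study's actual dataset).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    # 0 = no ED (reference), 1 = restrictive, 2 = bulimic, 3 = hyperphagic
    "ed_category": rng.integers(0, 4, size=n),
    "female": rng.integers(0, 2, size=n),          # 1 = female
    "food_security": rng.integers(0, 7, size=n),   # six-item scale, 0-6
    "cesd8": rng.integers(0, 25, size=n),          # depression score, 0-24
})

# Polytomous regression: the lowest-coded category (0, no ED) is the reference.
X = sm.add_constant(df[["female", "food_security", "cesd8"]])
fit = sm.MNLogit(df["ed_category"], X).fit(disp=False)

print(np.exp(fit.params))  # adjusted odds ratios per non-reference category
print(fit.summary())       # coefficients, p-values, and 95% CIs
```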
Results A total of 3508 students (response rate of 12%) filled out the online questionnaire; 67.3% were female, and the mean age was 20.7 years (SD = 2.3). Among this sample of students, 10.7% were underweight, 13.5% were overweight, and 5.4% were obese. The characteristics of the students are presented in Table 1. Eating Habits The PNNS-GS2 score during the COVID-19 pandemic was lower in students with ED (4.4, SD = 2.4) than in students with no ED (4.9, SD = 2.3). The score by category of ED is displayed in Table 1. The PNNS-GS2 score decreased for bulimic ED, hyperphagic ED, and restrictive ED between the pre-pandemic and pandemic periods (Figure 1). Food Security During the COVID-19 pandemic, 11.2% of students with no ED and 26.3% of students with ED had low or very low food security (p < 0.0001) (Table 1), with an increase in food insecurity between the pre-pandemic and pandemic periods (Figure 2). Physical Activity Moderate and vigorous physical activity (PA) was lower in students with ED than in students without ED, with the lowest PA among students with hyperphagic ED (Table 1). The frequency of moderate and vigorous physical activity decreased both in students with ED and in students with no ED (Figure 3A,B). Eating Disorders ED screening was positive in 51.6% of women and 31.9% of men (p < 0.0001); half of the ED were bulimic ED. The distribution by category of ED according to gender is presented in Figure 4. Mental Health The CESD-8 score was higher among students with ED than among students with no ED (13.4, SD = 5.6 and 12.0, SD = 3.3; p < 0.0001), as was the academic stress score (9.0, SD = 5.4 and 10.2, SD = 3.8; p < 0.0001). Scores for the items "worry about becoming severely ill" and "worry about a relative becoming severely ill" were higher among students with ED than among students with no ED (3.8, SD = 3.3 and 3.4, SD = 3.1; p < 0.001, and 8.1, SD = 2.5 and 7.5, SD = 2.7; p < 0.0001, respectively). Students with ED consulted a healthcare professional more frequently (21.7%) than students with no ED (11.3%) (p < 0.0001). These results are detailed by ED category in Table 1. Psychologists were the most frequently consulted healthcare professionals. There was no difference in the category of healthcare professional consulted according to the presence or absence of an ED (Figure 5). Results of the multivariate analysis are displayed in Table 2. The significant associated factors (p < 0.05) were the following. Female gender and social humanities and law/economics curricula were risk factors for all ED categories. Students in years 1 and 2 had a higher risk of bulimic and restrictive ED than students in year 3 or above. Lower food security scores were associated with a higher risk for all ED categories.
Depression, academic stress, and consultation of a healthcare professional were associated with ED regardless of category. Regarding health behaviors, a high PNNS-GS2 score was a protective factor against the risk of bulimic, hyperphagic, and restrictive ED. A low frequency of moderate physical activity was associated with an increased risk of bulimic ED and hyperphagic ED, and a low frequency of vigorous physical activity was associated with an increased risk of hyperphagic ED. Discussion This cross-sectional study provides a screening of ED fourteen months after the beginning of the COVID-19 pandemic, declared on 11 March 2020 [3], and shows that ED affected one in two female students and one in three male students in May 2021. These results highlight an increase in ED and support the results of a recent study conducted among students of the same university over the past decade, which showed that one in three women and one in seven men had an ED [24]. Lin et al. [23] demonstrated increasing volumes of inpatient and outpatient young adults with eating disorders since the COVID-19 pandemic began. Bulimia remained the most prevalent ED, as before the COVID-19 pandemic [24]. In our study, students with bulimia were especially worried about becoming severely ill. Among a population of patients, more severe COVID-19-related post-traumatic symptomatology was reported in patients with bulimia than in patients with anorexia or hyperphagia [34]. Severe anxiety was also associated with an increase in hunger, emotional over-eating, and a decrease in enjoyment of food.
Our study identified depression and added COVID-19-related stress (academic disruption and fear of infection) as factors associated with a risk of ED, which may also explain the increased prevalence of ED [16,18]. We found an increase in food insecurity and an association between food insecurity and each category of ED; food insecurity had already been found to be associated with increased binge-eating disorders in the general population before the COVID-19 pandemic [35]. One possible explanation is that individuals who lack adequate resources to regularly purchase enough food to meet their nutritional needs undergo cycles of food restriction. These bouts of restriction may increase the risk of binge eating via food cravings or the biological effects of starvation. Another, possibly complementary, explanation is that economic strain creates stress, which in turn may promote binge eating [36]. At the beginning of the COVID-19 pandemic, patients with anorexia nervosa had increased restrictions and feared not being able to find foods consistent with their meal plan, while patients with bulimia nervosa or a binge eating disorder had increased binges [37]. Our study shows that students with ED consulted a healthcare professional twice as often as students with no ED during the COVID-19 pandemic, apparently not foregoing care. Direct care was often replaced by telemedicine in order to continue providing care while minimizing the risk of transmission of COVID-19 [38]. However, by disrupting typical modes of service delivery, such as in-person office visits with a healthcare provider, the COVID-19 pandemic may have exacerbated the already pervasive problem of unmet treatment needs among individuals with an eating disorder [39]. Telemedicine education should be further developed in the curriculum of healthcare students to be effective beyond the COVID-19 pandemic [40]. Regarding health behavior, a decrease in physical activity was observed among university students during the pandemic, with a greater risk of absence of physical activity among university students with hyperphagic ED and bulimic ED. In clinical populations, patients with anorexia are known for physical hyperactivity [41] and patients with bulimic/hyperphagic ED for sedentary behavior [42]. Fernandez-Aranda et al. reported changes in eating habits in patients with ED linked to an increase in restrictive diets, due to concerns about weight and shape, and to phases of binge eating, driven by a more sedentary lifestyle, restrictions on outdoor activities, reduced physical exercise, alterations in the sleep-wake rhythm, and fear of contagion [43]. Limitations Caution is advised when generalizing these findings, for the following reasons: first, investigating health behaviors may lead to errors in self-reporting, particularly shifts in how body weight is perceived, reflecting cognitive distortions that could increase the risk for disordered eating in some individuals [44]; second, this was a convenience sample, and voluntary participation could have led to representativeness and self-selection bias, as our sample had more women and healthcare students; third, the study was cross-sectional and therefore does not allow causal interpretation between risk factors and ED. Because the study was anonymous and self-administered, social desirability bias was limited. Conclusions Protecting the mental health of students is a public health issue that appears even more critical in the context of a pandemic.
It also appears important that students be able to maintain physical activity, social ties, and financial resources. This is also an opportune time to rethink prevention and early identification in the post-COVID-19 era and in the future, should another pandemic hit. Students with self-stigma could endorse e-therapy, which has been effective in reducing eating disorder symptoms and comorbid depressive or anxiety symptoms.
Prenatal Sonographic Features of CHARGE Syndrome CHARGE syndrome is a rare autosomal dominant disorder associated with coloboma (C), heart defects (H), choanal atresia (A), retardation of growth and/or central nervous system development (R), genitourinary anomalies (G), and ear abnormalities (E). Prenatal diagnosis of the syndrome is very rare but may be suspected when a combination of such abnormalities is identified. We describe a prenatally suspected case of CHARGE syndrome based on the unique finding of a cardiac defect (double-outlet right ventricle, DORV) in combination with minor clues, including a structurally malformed ear with persistent non-response to acoustic stimulation (which has never been described prenatally elsewhere), renal malrotation, and growth restriction. Postnatal diagnosis was made based on confirmation of the prenatal findings and the additional specific findings of bilateral coloboma, choanal atresia, and ear canal stenosis. Finally, molecular genetic testing by whole exome sequencing of the neonate and her parents revealed a novel de novo heterozygous frameshift c.3506_3509dup variant in the CHD7 gene, confirming the clinical diagnosis of CHARGE syndrome. In conclusion, we describe unique prenatal features of CHARGE syndrome. Educationally, this is one of the rare examples of CHARGE syndrome comprising all six of the specific anomalies as originally described; it is also supported by the identification of a specific genetic mutation. The identified genetic variant has never been previously reported, thereby expanding the mutational spectrum of CHD7. Finally, this case can inspire prenatal sonographers to increase awareness of subtle or minor abnormalities as genetic sonomarkers. Introduction CHARGE syndrome is a rare, autosomal dominant genetic disorder with an incidence of approximately 1 in 10,000 births [1]. The majority of cases (65-70%) are caused by loss-of-function pathogenic variants in the CHD7 gene [2]. CHARGE refers to a disorder associated with ocular coloboma (C), heart defects (H), choanal atresia (A), retardation of growth and/or central nervous system development (R), genitourinary anomalies (G), and ear abnormalities (E). This syndrome was first reported in 1979 by Hittner et al. [3] and Hall [4]. The term CHARGE was introduced by Pagon et al. [5] in 1981. Conventionally, diagnosis is usually made by clinical criteria. Though there is no consensus, most clinicians refer to the criteria proposed by Blake et al. [6] and further modified by Verloes [7], who highlighted the importance of the 3C major criteria (coloboma, choanal atresia, and hypoplasia of the semicircular canals). Other minor criteria include rhombencephalic dysfunction (brainstem and cranial nerves III to XII, including sensorineural deafness), malformation of the ear, malformation of the mediastinum (heart, esophagus), and mental retardation. Verloes [7] proposed the following criteria. Typical CHARGE: three majors (3C triad) OR two majors plus two minors; partial CHARGE: two majors plus one minor; and atypical CHARGE: two majors but no minors OR one major plus two minors. After the identification of mutations in the CHD7 gene resulting in several phenotypic abnormalities, Hale et al. [8] recently proposed the inclusion of CHD7 testing results in the clinical criteria.
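The Verloes rules quoted above amount to a simple decision table over the counts of major and minor criteria, which the following sketch encodes directly; it is an illustrative transcription of the stated rules, not a validated diagnostic tool.

```python
# Illustrative encoding of the Verloes [7] CHARGE classification rules.
# majors = count of 3C major criteria present (coloboma, choanal atresia,
# semicircular canal hypoplasia); minors = count of minor criteria present.

def verloes_charge_class(majors: int, minors: int) -> str:
    """Classify per Verloes; returns the category or 'criteria not met'."""
    if majors == 3 or (majors == 2 and minors >= 2):
        return "typical CHARGE"
    if majors == 2 and minors == 1:
        return "partial CHARGE"
    if (majors == 2 and minors == 0) or (majors == 1 and minors >= 2):
        return "atypical CHARGE"
    return "criteria not met"

print(verloes_charge_class(2, 2))  # typical CHARGE
print(verloes_charge_class(1, 2))  # atypical CHARGE
```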
Most cases are diagnosed in the neonatal period or childhood. Prenatal diagnosis has rarely been reported [9-13]. The syndrome is usually associated with severe morbidity and may even be life-threatening. Therefore, early prenatal diagnosis is critical, whether by clinical criteria or molecular genetic testing, because it can inform the prognosis, guide delivery preparation and the postnatal care plan for the parents, and even allow the option of termination of pregnancy, especially in cases diagnosed at a pre-viable stage. Nevertheless, the number of cases with a prenatal diagnosis is very limited. This may be because most abnormalities of the syndrome are prenatally subtle or present as isolated abnormalities, and because of phenotypic diversity. Accordingly, more new prenatal cases are needed to enhance the literature for future analysis, especially regarding prenatal findings and management. The aim of this study is to describe unique prenatal features of CHARGE syndrome and also to educationally illustrate an intriguing case of CHARGE syndrome comprising all six of the specific anomalies as originally described, confirmed by the identification of a pathogenic genetic variant in the CHD7 gene. Case Presentation A 32-year-old pregnant woman, G4 P2102, attended her first antenatal care visit at 23+3 weeks of gestation. Ultrasound examination for anomaly screening demonstrated cardiac defects, including double-outlet right ventricle (DORV, TOF type) with severe pulmonary stenosis (Figure 1) and a left superior vena cava. Detailed ultrasound showed no associated abnormalities. Fetal biometry was consistent with gestational age except for abdominal circumference and estimated fetal weight, which were relatively low (at the 10th percentile), reflecting some degree of growth restriction. However, detailed ultrasound on the follow-up scans at 28 weeks of gestation showed subtle abnormalities, including malrotation of both kidneys, with the hilum or renal pelvis facing posteriorly toward the abdominal wall (Figure 2). Furthermore, 3D ultrasound revealed an abnormal external ear structure (markedly prominent crus of the anti-helix) (Figure 3). Non-stress tests (NST) showed spontaneous fetal heart rate (FHR) accelerations (normal reactive tests) (23+3 weeks). Interestingly, the fetus showed persistent non-response to acoustic stimulation tests at 26, 30, 32, 36, and 38 weeks (no FHR accelerations and no fetal movement perceived on ultrasound) (Figure 4), probably reflecting auditory dysfunction. Based on the findings of heart defect, ear defect, renal defect, and growth restriction, several differential diagnoses were listed, including CHARGE syndrome. Theoretically, fetal blood sampling for molecular genetic testing should be performed. Nevertheless, since no lethal condition was identified and the couple wanted to continue the pregnancy regardless of the investigation results, prenatal invasive diagnosis was avoided and postnatal work-up was awaited instead. She had no significant underlying disease and no family history of hereditary diseases. Her pregnancy was uneventful except that she developed gestational diabetes (GDM) at 28 weeks of gestation, which was well controlled with a diabetic diet. At 38+4 weeks, she had a vaginal delivery, giving birth to a female baby with a birthweight of 2580 g (9th percentile of the WHO growth chart) and APGAR scores of 8 and 9 at 1 and 5 min, respectively, with excessive secretions in both nostrils. The left ear showed a clipped-off helix (Figure 5). Otoscopic examination showed bilateral ear canal stenosis with cerumen. Funduscopic examination revealed optic disc coloboma in the right eye and choroidal coloboma in the left eye (Figure 6). Abdominal ultrasound revealed malrotation of both kidneys but with normal size and echogenicity of the renal parenchyma; no dilatation of the pelvicalyceal system was observed. Neonatal echocardiography confirmed the prenatal findings. Because of the inability to pass a nasogastric tube, bilateral choanal atresia was suspected and was confirmed by CT scans of the head (Figure 7).
Prenatally, this case was suspected of CHARGE syndrome based on the findings of cardiac defects, ear abnormalities, growth restriction, and a renal abnormality. The diagnosis was postnatally confirmed by the additional specific findings of coloboma, choanal atresia, and ear canal stenosis. Finally, the diagnosis was confirmed by molecular genetic testing, as follows. Molecular genetic study: Trio whole exome sequencing (WES) analysis was performed for the neonate and her parents. Genomic DNA was isolated from peripheral blood leukocytes. The DNA samples were enriched with the SureSelect Human All Exon V7 kit (Agilent, Santa Clara, CA, USA) and sequenced on an Illumina HiSeq 4000 sequencer (Illumina, San Diego, CA, USA). A novel de novo heterozygous frameshift c.3506_3509dup variant (chr8:61741348 C > CTAAA, p.K1170Nfs*39) in the CHD7 gene was identified in the neonate (Figure 8) but not in her parents. This variant has not been identified in the ExAC, gnomAD, or ClinVar databases, or in the in-house database of 3206 Thai exomes. The baby underwent a left modified Blalock-Taussig shunt (MBTS) as palliative surgery at one month of age, with a successful outcome resulting in a well-oxygenated condition (oxygen saturation 98-100%). At the time of writing this report, definitive corrective surgery has not been performed. The neonate was cared for and followed up by ophthalmologists and ENT specialists (for auditory brainstem response testing).
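The HGVS notation of the identified variant can be unpacked with a toy example: because the duplicated segment is 4 nucleotides long (not a multiple of 3), every codon downstream of the duplication is read in a shifted frame. The sketch below uses a short invented sequence (not the real CHD7 coding sequence) and a deliberately minimal codon table to make this visible; in the actual transcript, the shifted frame substitutes asparagine for lysine 1170 and reaches a premature stop 39 codons later, which is what p.K1170Nfs*39 encodes.

```python
# Toy illustration of a frameshift caused by a tandem 4-nt duplication.
# The sequence and codon table are minimal inventions for demonstration.

MINI_CODON_TABLE = {
    "ATG": "M", "AAA": "K", "GGC": "G", "TTT": "F", "TGA": "*",
    "TGG": "W", "CTT": "L", "TTG": "L",
}

def translate(cds: str) -> str:
    """Translate complete codons; stop at the first stop codon ('*')."""
    protein = []
    for i in range(0, len(cds) - len(cds) % 3, 3):
        aa = MINI_CODON_TABLE.get(cds[i:i + 3], "?")
        protein.append(aa)
        if aa == "*":
            break
    return "".join(protein)

ref = "ATGAAAGGCTTTTGA"              # toy ORF, reads M K G F *
dup = ref[:10] + "GGCT" + ref[10:]   # tandem duplication of the 4 nt GGCT

print(translate(ref))  # MKGF* : reference reading frame
print(translate(dup))  # MKGWLL : downstream codons shift; the toy sequence
                       # is too short to reach a new in-frame stop codon
```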
Discussion The lessons gained from this study are as follows. (1) CHARGE syndrome may have sonographic clues that enhance the possibility of prenatal diagnosis. Though several abnormalities of the syndrome are difficult to detect by routine prenatal ultrasound, careful examination may disclose subtle anomalies. For example, in our case, at first only the cardiac defect was appreciated, but minor findings, such as ear abnormalities and kidney malrotation, were visualized on follow-up scans. Therefore, in the case of an isolated anomaly at the first scan, we emphasize follow-up scans to disclose minor late-appearing sonomarkers and enhance pattern recognition. (2) To the best of our knowledge, this is the first report on the detection of fetal auditory dysfunction using simple tools available in daily practice to support the diagnosis of CHARGE syndrome. (3) Educationally, the case presented here is one of the rare and intriguing examples of CHARGE syndrome comprising all six of the specific anomalies as originally described; it is also supported by the most solid evidence, a de novo heterozygous frameshift mutation in the CHD7 gene. Accordingly, the diagnosis in this case is absolutely certain. Prenatal diagnosis of CHARGE syndrome is critical but rarely made. This is because prenatal ultrasound has limitations in diagnosing the syndrome: it has a wide spectrum of abnormalities, and most of them, especially the primary specific defects such as coloboma, choanal atresia, and auditory deficit, cannot be visualized by prenatal ultrasound. Nevertheless, several minor criteria may be appreciated first, such as DORV in this case, leading to further disclosure of subtle abnormalities through more careful examination, serial scans for late-appearing markers, fetal MRI in selected cases (especially those with suspected CNS abnormalities [14]), or prenatal genetic testing when the pattern is mixed. Pattern recognition of the syndrome is useful, as seen in this case. Nevertheless, the anomaly patterns of several syndromes largely overlap, which must be taken into account in the differential diagnosis; examples include isolated congenital heart defects, Kabuki syndrome (typical facial gestalt, postnatal growth deficiency, congenital heart defects, hearing loss and intellectual disability, and skeletal, dermatoglyphic, genitourinary, and ophthalmologic anomalies, including coloboma), VACTERL association (vertebral abnormalities, anal atresia, cardiac defects, renal/radial ray anomalies, and esophageal and limb defects), Smith-Lemli-Opitz syndrome (IUGR, genital and cardiac anomalies), Joubert spectrum (ventriculomegaly, polydactyly, renal abnormalities, Dandy-Walker variant, cephalocele), and 22q11.2 deletion syndrome [1]. For example, some cases of CHARGE syndrome have thymus agenesis and conotruncal heart defects, highlighting the clinical overlap with 22q11.2 deletion. In cases with normal 22q11.2, CHARGE syndrome should be strongly considered. Then, thorough evaluation of the ears (including auditory function) and the choanae (if possible), as well as of other subtle clues (mild ventriculomegaly, thymus hypoplasia, arhinencephaly, etc.), should be performed. Fetal brain MRI may be useful. When the diagnosis is highly suspected, DNA analysis of CHD7 should be strongly considered. The ear structural abnormality in our case was first diagnosed subjectively, but it appears more significant when combined with functional assessment.
As is well known, fetuses are very sensitive to acoustic stimulation, which is routinely used in antenatal surveillance and elicits FHR accelerations and fetal movement. We took advantage of this stimulation to test auditory perception. Because the fetus persistently showed no response to acoustic stimulation, either by FHR acceleration or movement, in spite of having spontaneous FHR accelerations and movements, it may reasonably be concluded that the fetus was likely to have auditory dysfunction, probably associated with the structural abnormality of the ear canal and the external ear abnormality seen on 3D ultrasound. Interestingly, DORV, which is rarely described as a presenting feature, was the first clue in this case, leading to the final diagnosis of CHARGE syndrome. DORV can be isolated or part of several syndromes, especially trisomy 18. In this case, there were no other typical findings commonly seen in trisomy 18, such as abnormal hand posture, cleft lip/palate, or omphalocele. Furthermore, this is the first report of bilateral renal malrotation (without abnormality of the renal structures) as part of the prenatal features of CHARGE syndrome. This minor disorder could easily be overlooked in daily practice, but this report shows that it may be a genetic sonomarker. The insertion of the four nucleotides, TAAA, leading to a frameshift has never been previously reported. This expands the mutational spectrum of the CHD7 gene. Since the genetic variant is de novo, as evidenced by its absence in the leukocytes of the proband's parents, the recurrence risk for the next child is low. As mentioned earlier, prenatal diagnosis is very limited. Fetal MRI may be more useful in the identification of subtle CNS abnormalities. However, fetal MRI is not a primary tool for fetal anomaly screening. Therefore, the first clue for prenatal diagnosis usually arises from abnormalities on ultrasound screening. Unfortunately, most anomalies, for example coloboma and choanal atresia, are subtle and difficult to identify by standard ultrasound examination. A literature review indicates that CHARGE syndrome has been reported prenatally only a very limited number of times [9-13,15,16]. Prenatal sonographic findings that may be helpful are summarized in Table 1. Conclusions In conclusion, the case presented here is unique and educational. DORV was the first clue, leading to the detection of associated subtle abnormalities, e.g., ear abnormalities with persistent non-response to acoustic stimulation, renal malrotation, and growth restriction. Postnatal diagnosis was made based on the confirmation of the prenatal findings and the additional specific findings of coloboma, choanal atresia, and auditory canal stenosis. Finally, the diagnosis was confirmed by genetic testing (a de novo heterozygous frameshift c.3506_3509dup variant in CHD7). Educationally, this is a rare and interesting case of CHARGE syndrome comprising all six of the specific anomalies as originally described. It is also supported by the identification of a specific genetic mutation. Finally, this case can inspire prenatal sonographers to increase awareness of subtle or minor abnormalities as genetic sonomarkers. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Written informed consent has been obtained from the patient to publish this paper. Data Availability Statement: The data of this report are available from the corresponding authors upon request.
Engineered Extracellular Vesicles in Treatment of Type 1 Diabetes Mellitus: A Prospective Review Insulin replacement is an available treatment for autoimmune type 1 diabetes mellitus (T1DM). However, there are multiple limitations to treating autoimmune diseases such as T1DM by immunosuppression using drugs and chemicals. The advent of extracellular vesicle (EV)-based therapies for the treatment of various diseases has attracted much attention to the field of bio-nanomedicine. Tolerogenic nanoparticles can induce immune tolerance, especially in autoimmune diseases. EVs can deliver cargo to specific cells without restrictions. Accordingly, EVs can be used to deliver tolerogenic nanoparticles, including iron oxide-peptide-major histocompatibility complex, polyethylene glycol-silver-2-(1′H-indole-3′-carbonyl)-thiazole-4-carboxylic acid methyl ester, and carboxylated poly(lactic-co-glycolic acid) nanoparticles coupled with or encapsulating an antigen, to effectively treat autoimmune T1DM. The present work highlights advances in exosome-based delivery of tolerogenic nanoparticles for the treatment of autoimmune T1DM. Introduction Type 1 diabetes mellitus (T1DM), also known as juvenile diabetes or insulin-dependent diabetes, is an autoimmune disorder driven by T cell-mediated destruction of pancreatic β-cells that is often initiated during childhood [1]. The pathogenesis of T1DM is driven by the autoreactivity of T cells activated by alleles encoding human leukocyte antigens (HLAs) [2]. In a recently published meta-analysis, the incidence of T1DM was 15 per 100,000 people and its global prevalence was 9.5% (95% CI: 7% to 12%) [3]. T1DM affects 3-10% of people with HLA risk alleles, with little environmental contribution [4]. In patients with T1DM, the seroconversion phase is characterized by the formation of autoantibodies against insulin-producing cells [5]. A previously published study found that some HLA class II haplotypes were associated with a higher risk of T1DM [6]. A prior study implicated "natural apoptosis" (spontaneous cell death) of the β-cell population, islet amyloid polypeptide aggregates, and viral infection in β-cell death through auto-antigen production, dendritic cell (DC) activation, and antigen presentation [7]. The discomfort associated with regular subcutaneous injections, episodes of hypoglycemia, and lack of knowledge about the management of T1DM can sometimes result in severe hypoglycemia with fatal consequences. Improved and more efficient therapeutic approaches for the management of T1DM, free of these limitations, are required. Nanoparticles (NPs) and extracellular vesicles (EVs) have evolved as breakthrough therapies for the treatment of various diseases. NPs and EVs provide a cell-free approach to disease amelioration with advantages that include reduced harmful effects, site-specific delivery, and stability. EVs and engineered/tailored nanoparticles derived from gold, carbon, metal oxides, or degradable polymers have been demonstrated to have beneficial effects on target cells. Tailored NPs and EVs are being exploited for immunomodulation and the treatment of various immune-related diseases [8]. Tolerogenic NPs (TNps) provide immune tolerance by modulating the normal immune response through three approaches: (i) modulating natural tolerogenic processes, (ii) targeting pro-tolerogenic receptors, and (iii) using pharmacological immunomodulators to inhibit immune responses.
EVs play a critical role in the modulation of immune responses mediated by their cargo of cytokines, growth factors, and functional microRNAs (miRNAs) via paracrine effects. EVs derived from neutrophils and macrophages exert immunosuppressive and immunostimulatory activities by spreading alloantigens and modulating antigen presentation to T lymphocytes [9]. In the present review, we discuss the therapeutic advances and applications of EVs for the treatment of T1DM and EV-based delivery of TNps for the treatment of autoimmune T1DM. Our aim is to provide current research updates on TNp-based therapies in immunomodulation and to describe different treatment strategies for T1DM. Islet Autoimmunity T1DM patients have autoantibodies in their serum. These include insulin autoantibodies (IAA) and autoantibodies against glutamic acid decarboxylase 65 (GADA), islet antigen-2 (IA-2A), zinc transporter 8 (ZnT8A), Imogen 38, pancreatic duodenal homeobox factor 1 (PDX1), chromogranin A (CHGA), heat shock protein 60 (hsp60), pro-pre-insulin, and islet cell antigen-69 (ICA-69) [10,11]. The presence of a single major autoantibody in the serum of T1DM patients since childhood poses a lower risk than the presence of two autoantibodies. Maternal transmission of islet-specific autoantibodies increases the risk of autoimmune diabetes [12]. However, according to another study, the transfer of islet autoantibodies from mother to fetus does not contribute to fetal cell damage [13]. Genome-wide association study findings revealed that HLA genes account for 50% of the genetic risk of developing T1DM, demonstrating that specific auto-antigen peptides contribute to the pathogenesis of T1DM [14]. In a meta-analysis, non-HLA factors, including polymorphisms within the insulin variable number of tandem repeats (INS-VNTR), protein tyrosine phosphatase non-receptor type 22 (PTPN22), cytotoxic T-lymphocyte associated protein 4 (CTLA4), and interleukin-2 receptor subunit alpha (IL2RA), together with increased T cell activation and proliferation, contributed to the pathogenesis of T1DM [15-18]. Postmortem pancreas investigations of T1DM patients at different time intervals revealed infiltration of CD8+ and CD4+ T cells, macrophages, and B cells, indicating damage to β-cells [19]. CD4+ cells specific for the C-peptide of the proinsulin precursor of insulin have been detected in the pancreas of T1DM patients. Islet autoantibodies, which evolve from interactions between autoreactive B cells and CD4+ T cells, are diagnostic biomarkers that differentiate between T1DM and T2DM. The T1DM alleles DR4, DQ8, and DQ2 increase the genetic risk of developing T1DM in humans [20]. CD4+ T cells stimulate B cell-mediated antibody production, as well as CD8+ T cell-mediated responses, which stimulate islet-resident macrophages [21,22]. CD8+ T cells play an important role, as revealed by Faridi et al. [23], who focused on the generation of novel targets against CD8+ T cells. In another study, IAA in T1DM patients were predominantly composed of IgG1 (83%), IgG4 (42%), and IgG2 (17%) [24]. The detailed mechanism by which CD4+ and CD8+ T cells evade tolerance and modulate the immunological response in T1DM remains unclear. Clarifying this mechanism may enable the discovery of novel auto-antigenic targets. The primary triggers of the autoimmune response are also unclear. However, specific autoantigens processed by antigen-presenting cells (APCs) may be contributing factors in T1DM. These APCs include DCs, macrophages, and B cells present in the pancreatic islets.
The presentation of these antigens to naïve T cells, mediated by HLA, generates autoreactive CD4+ T cells that become active and produce cytokines following the activation of β-cell-specific cytotoxic CD8+ T cells. This immunological cascade attracts activated T cells to the pancreatic islets and stimulates macrophages and other T cells, which initiate islet cell destruction [25]. Treatment of Autoimmune T1DM Based on clinical trials, autoimmune T1DM can be managed by supplementation with immunosuppressive drugs, such as cyclosporine, azathioprine, and prednisone, during the initial phase of T1DM onset, which helps in better management. However, these immunosuppressive drugs bear several limitations and adverse effects, including an increased risk of infection, malignancies, and other clinical complications [26]. To overcome these limitations, safer treatment approaches currently in use include the monoclonal antibodies teplizumab, rituximab, abatacept, and otelixizumab [26]. Another approach involves the use of antibiotics, such as vancomycin, for the treatment of T1DM-related autoimmune disorder [27]. Notably, immunosuppressive drugs and antibiotics do not treat the underlying autoimmunity, and an antigen-specific approach is required to tackle autoimmune disorders such as T1DM. Antigen-specific immunotherapies for T1DM are also being explored. A previously published study in this direction described the delivery of soluble antigens, including GAD, insulin, and proinsulin, using different routes of administration, including intraperitoneal, intravenous (i.v.), intranasal, subcutaneous (s.c.), and oral, for immunotherapy in a T1DM murine model [28]. Another study involving non-obese diabetic (NOD) and transgenic mice revealed that the infusion of BDC2.5 via s.c. and i.v. injection effectively protected against autoimmune T1DM [29]. Previously published studies [30,31] also highlighted cell-based treatment approaches for T1DM. The current paradigm of cell-free therapies is based on paracrine factors, including exosomes derived from different cell lineages. The exact mechanism underlying the modulation of the immune system by these exosomes is still not fully understood. However, these exosomes are hypothesized to regulate the immune function of cells, including macrophages, natural killer (NK) cells, B cells, and T lymphocytes. Biology of EVs EVs are membrane-bound nanovesicles of endosomal origin with inherited cargo-loading characteristics that are released into extracellular fluids by cells [32]. According to the Minimal Information for Studies of Extracellular Vesicles 2018 (MISEV 2018) guidelines, EVs can be defined as components of the complete secretome secreted by the cell, without any specific distinguishable marker to differentiate EV subtypes and their subcellular origin [33]. According to the MISEV 2018 criteria, two types of EVs are secreted by cells: exosomes and microvesicles. These types are differentiated based on their mode of biogenesis rather than size (Table 1). EV biogenesis is a housekeeping phenomenon of cells, evident as the inward invagination of the plasma membrane within the cytosol, forming early and late endosomes (LEs). LEs fuse to form multivesicular bodies (MVBs) that undergo further invagination to form intraluminal vesicles (ILVs) [34]. These ILVs fuse with the plasma membrane of the cell to release exosomes into the extracellular space via exocytosis [28]. Sorting of EVs (exosomes) follows one of two pathways.
The first pathway is ESCRT-I (endosomal sorting complex required for transport I)-dependent cargo sorting. This pathway includes the identification and sequestration of ubiquitinated proteins at specific sites on the endosomal membranes, enabling an association between ESCRT subunits I, II, and III, which then initiate the budding process. Budding is terminated by the Vps4 protein factor, which is involved in the detachment of the ESCRT-III complex from the MVB membrane [34]. The ESCRT-independent mechanism of exosome sorting involves proteins and lipids such as tetraspanins (CD81) and ceramides [34]. EVs are enveloped in a lipid bilayer anchored with functional proteins on their surface, including surface proteins such as clusters of differentiation (CD) and major histocompatibility complex (MHC) molecules. The protein content of exosomes depends on the source cells of the EVs and the cell stimulus (i.e., the microenvironment) [35]. Cell-derived EVs are a rich source of proteins, including heat shock proteins, cell adhesion proteins, trafficking and membrane fusion proteins, tetraspanin membrane proteins, cell signaling proteins, and transcription proteins [34]. Moreover, several lipids and RNAs bearing therapeutic and diagnostic value are also present in EVs [34]. Tolerogenic Role of EVs EVs play a vital role in the regulation of the immune system, which may help prevent harmful immune responses in various diseases [35]. The immune modulation by EVs may be attributed to the processing of antigenic peptides and their presentation on EV surfaces, followed by antigenic peptide transfer. As previously discussed [36], EVs contribute to antigen presentation via three primary mechanisms (Figure 1). The first mechanism involves the direct presentation of antigens: for example, EVs derived from DCs bind to T cells directly via MHC-peptide complexes and costimulatory adhesion molecules. The second mechanism relies on indirect presentation of the antigen, in which EVs carrying antigenic peptides transfer them to MHC molecules of APCs, followed by T cell activation. The last mechanism is referred to as "cross-dressing," in which captured EVs are transferred to the surface of APCs and present their MHC-peptide conjugates directly to T cells for activation [36]. EVs enable the exploration of cell-free strategies for modulating immune responses via MHC-complexed antigens, including tolerance of β-cells and autoreactive T cells. APCs secrete MHC-peptide conjugates that present the transmembrane glycoprotein intercellular adhesion molecule-1 (ICAM-1) to EVs, which modulates the immune system via T cell activation. EVs secreted by APCs harbor functional peptide-MHC II and MHC I complexes as well as CD80 and CD86; these EVs might present antigens and initiate the activation of T lymphocytes [37]. EVs enhance T-cell-mediated responses and also contribute to the improvement of humoral responses, as they present native antigens that initiate the activation of B lymphocytes [38]. In an immune-related exosome study using an animal model, ovalbumin-loaded exosomes stimulated interferon-gamma (IFN-γ) production, which in turn activated humoral responses [39]. In another study, exosomes derived from B cells harbored peptide-MHC-II complexes that could enable prolonged antigen presentation to T cells [40].
A study involving mesothelioma patients reported that exosomes derived from pleural effusions containing transforming growth factor-beta (TGF-β) and NK group 2 member D (NKG2D) ligands inhibited CD8+ T cells and NK cells by downregulating the expression of NKG2D receptors [41]. In a similar study, placental cell-derived exosomes possessing NKG2D ligands, such as UL-16 binding proteins 1-5 (ULBP1-5), modulated the surface expression of NKG2D on NK, CD8+, and γδ T cells, thereby downregulating their cytotoxic activity [35]. In another study, placental cell-derived exosomes containing Fas ligand (FAS-L) could mediate apoptosis of CD4+ T cells [41]. Tolerogenic EVs play an important role in immune system modulation, which can be helpful in the management of autoimmune diseases. A recently published murine-model-based study reported that exosomes derived from DCs converted Th1/Th17 to Th2/Treg responses through miRNA-146a and potentiated the suppression of autoimmune myasthenia gravis (MG) [42]. Inflammatory bowel disease is another autoimmune disease; it has been reported that DC-derived exosomes pretreated with soluble egg antigen can promote antigen tolerance and epithelial barrier function, thereby facilitating Treg expansion and inhibiting Th1 cell proliferation [43]. Other authors reported that alpha-fetoprotein (AFP)-expressing, DC-derived exosomes possess anti-tumor activity mediated by the activation of IFN-γ-expressing CD8+ T cells with a simultaneous decrease in Tregs [44]. Exosomes derived from a unique group of CD4+CD25+ Tregs were reported to protect against allograft rejection and aid in the prolonged survival of kidney transplant patients by suppressing T cell proliferation [45].
The mechanism of suppression by Treg-derived exosomes is still not fully understood; this effect may occur via the transfer of exosomal miRNAs to recipient cells [46]. Alternatively, Treg-derived exosomes harboring CD73 can mediate the suppression of T cell proliferation [47]. Thus, immune-cell-derived exosomes can be engineered using various modification methods. Popular methods, including freeze-thaw, co-incubation, microfluidics, electroporation, and click chemistry, may permit the efficient modulation of the immune system, thereby targeting autoimmune diseases such as T1DM. Data from preclinical and clinical trials are required to support this hypothesis. TNps and T1DM The development of tolerance therapies requires precise identification and screening of the antigenic targets of autoreactive immune cells, such as T cells. Small interfering ribonucleic acid (siRNA) has been used to inhibit the expression of chemokine receptor 2 (CCR2) [48]. Moreover, inhibition of chemokine receptors has been identified as a master regulator in the pathogenesis of diabetes mellitus, especially T1DM. In a clinical study, the CCR1/2 allosteric inhibitor reparixin improved outcomes during allogeneic islet infusion and limited islet damage [49]. In vitro, fabricated TNps consisting of dextran-coated iron oxide conjugated with siRNA against β2-microglobulin downregulated MHC class I expression [50]. T1DM is a T-cell-mediated disease associated with MHC alleles. The primary role of MHC molecules is immune regulation through antigen presentation, especially in autoimmune diseases including T1DM. Therefore, modulation of MHC class I function using TNps could have a significant therapeutic impact on treatment approaches for T1DM. The immunosuppressive and anti-inflammatory characteristics of fabricated NPs in the regulation of the immune system are summarized in Table 2. Shah et al. [51] fabricated a diblock-polymer-based conjugate system with rapamycin and evaluated the system's performance in an autoimmune disease model. Rapamycin improves hepatic insulin sensitivity in patients with T1DM [52]. Therefore, engineering approaches using rapamycin are important for the fabrication of TNps for the treatment of autoimmune diseases such as T1DM. Furthermore, iron oxide NPs with peptide conjugate systems are tolerogenic through MHC I and II modulation in autoimmune disorders [53]. Iron oxide NPs are not toxic to human health because, upon their degradation, the contents are processed via natural iron metabolism pathways; there are few side effects and negligible suppression of immune function [53]. In another tolerogenic approach, gold NPs (AuNPs) were used with polyethylene glycol (PEG) to deliver T cell epitopes via the uptake of these NPs by DCs. This mechanism was demonstrated to expand Foxp3+ Tregs, reducing the severity of experimental autoimmune encephalomyelitis [54]. These NPs were further evaluated in a murine NOD model for the treatment of T1DM [55]. The possible mechanism of these AuNPs in T1DM relies on the induction of tolerogenic responses in DCs via induction of the suppressor of cytokine signaling 2 (Socs2), which results in the inhibition of nuclear factor κB (NF-κB) activation and proinflammatory cytokine production [55]. T cells can thus be evaluated for their potential use in the treatment of autoimmune diseases, particularly T1DM. Serr et al.
suggested the development of targeted therapy for T1DM islet autoimmunity based on miRNA181a and/or nuclear factor of activated T cells 5 (NFAT5) signaling [56], since increased miRNA181a activity boosts NFAT5 activity while inhibiting FOXP3+ Treg induction [56]. Tolerogenic iron oxide NPs surface-engineered with the proinsulin autoantigen and 2-(1H-indole-3-carbonyl)-thiazole-4-carboxylic acid methyl ester (ITE) can be used for the early diagnosis of T1DM when employed with magnetic resonance imaging combined with magnetic quantification [57]. The use of antisense oligonucleotides against CD40, CD80, and CD86 suppresses DC activation [58]. The infusion of autologous DCs pretreated with antisense nucleotides can significantly delay the progression of T1DM [58]. However, no significant changes in immunological function were observed in a phase I clinical trial of T1DM patients receiving these pretreated DC infusions [58]. Phillips et al. [59] encapsulated antisense oligonucleotides into synthetic microspheres (microparticles) and injected them s.c. The microparticles were taken up by DCs, which then exhibited a suppressive phenotype, downregulating the expression of T-cell-activating modulators; this downregulation significantly reversed hyperglycemia in the NOD mouse model. T1DM is mediated by a CD8+ T cell immunological mechanism; thus, the delivery of autoantigens with the simultaneous release of TNps may promote the immunomodulation of CD8+ T cells. In another study, the delivery of a cognate peptide antigen encapsulated within poly(lactic-co-glycolic acid) NPs could re-educate immunity toward a tolerant state, as demonstrated in a transgenic T1DM mouse model [60]. Spontaneous death of β-cells in T1DM is attributed to DC activation [7]. The application of such TNps for the modulation of DCs has several advantages relative to the conventional approach of DC modulation. These include: (i) protecting the antigen cargo from protease action, (ii) enabling co-delivery of several NPs using EVs as vehicles, (iii) providing controlled release and delivery, and (iv) reducing non-specific target recognition. These tolerogenic biological NPs provide a safer therapeutic approach and are potentially valuable in the treatment of T1DM. Methods to Fabricate Engineered Tolerogenic EVs with NPs Tolerogenic EVs exhibit lower loading efficiency due to their small size. However, these EVs have low toxicity and tend to diffuse through the basement membrane. Thus, tolerogenic EVs could be a promising therapeutic approach [61,62]. EVs are composed of several biological molecules, including membrane proteins and lipid bilayers. The modification potential of EVs may be exploited for the delivery of drugs and other TNps for the treatment of various autoimmune diseases. There are two engineering-based approaches available for the modification of EVs: engineering of EV-secreting cells and post-isolation engineering. Production of EVs by Cell Engineering Cells secreting EVs can be engineered by two approaches: culture of cells in media/environments that impose stresses, including hypoxia, serum starvation, and inflammation [63][64][65], and transfection of cultured cells with modulators that include plasmid DNA, miRNAs, miRNA antagonists, and Y RNA. In a cardioprotective study, EVs derived from progenitor cells grown under hypoxic conditions displayed an increased capacity to induce tube-like structures compared to EVs derived under normoxic conditions [66]. Cells secreting EVs can also be modulated by changing the culture medium.
EVs derived from human adipose stem cells cultured in differential endothelial medium showed increased levels of miRNA-31 [67]. Modulation of cells by external agents via transfection is also possible. For example, in one study, miRNA181a was transfected into human mesenchymal stem cells, which led to a significant pro-reparative state in peripheral blood mononuclear cells [68]. The advantages and associated limitations of EV modification are summarized in Table 3. A genetic engineering approach to modify EVs for the delivery of novel payloads is also popular among researchers. Alvarez-Erviti et al. [69] used genetic engineering to deliver siRNA using EVs. The main advantage of this approach is that it generates a homogeneous population of EVs without toxicity, and the versatile method allows the loading of RNA, DNA, and peptides of choice into exosomes. Limitations of this method include the choice of donor cells for harvesting exosome populations: tumor-derived exosomes may interact with the pre-metastatic niche and initiate negative effects, and the studies available thus far are insufficient to determine the negative effects of exosomes derived from tumor cells. The use of adenoviral genes is also a limiting factor, since in some cases humans have developed an immune response that limits gene expression. In another study, curcumin-loaded mouse lymphoma EL-4 exosomes were prepared by simple mixing of curcumin and exosomes; the anti-inflammatory activity of these exosomes was then investigated in a murine model of lipopolysaccharide (LPS)-mediated septic shock [70]. Post-Isolation Engineering Post-isolation engineering provides a better approach for modulating EVs while preserving their biological content and function. This method incorporates bioengineering of EV loading, targeting, and delivery into target cells. EV-modulating strategies include passive loading (incubation of cargo with EVs or donor cells) and active loading (extrusion, freeze-thawing, electroporation, sonication, chemical transfection, click chemistry, and antibody binding) [71]. EVs have been modified and loaded with curcumin using an incubation approach; modified versions of these anti-inflammatory EVs were used to treat LPS-induced septic shock [70]. In this study, physical entrapment and chemical conjugation were used to load the NPs with curcumin [70]. The hydrophobic nature of EVs, due to the presence of a lipid bilayer, enabled the easy incorporation of hydrophobic curcumin into NPs that self-reassembled, with size-dependent selective distribution evident among tissues [70]. Kim et al. used the simplest reported co-incubation method to incorporate drugs into EVs [72]. EVs can also be loaded using an electroporation approach applied for a short period of time. This approach enables drugs or NPs to penetrate the double-layered lipid membrane of EVs under an applied electric field; drug loading of EVs using electroporation has been described [72]. In another study, EVs derived from DCs were loaded with doxorubicin by electroporation to inhibit tumor growth [73]. In another approach to EV modification, a sonication method (six cycles of 30 s on/off for 3 min, followed by a 2 min cooling period) was used to load drugs into EVs [72]. EVs derived from cardiac progenitor cells have also been enriched with miRNA-322 by electroporation to treat myocardial infarction [74].
Furthermore, the EV surface has been modified with streptavidin and peptides via linkers attached to carboxylic and amine groups using copper-free click chemistry [75]. Engineering of cells and post-isolation engineering of EVs each have advantages and disadvantages. The genetic engineering approach produces standardized, controlled EVs with desired traits. However, its limitations include alterations in the biological activities of EVs due to the desired gene transfection and an uncontrollable density of modification (number of epitopes attached per unit surface area of the EV). Limitations of post-isolation engineering include low yield, the use of harsh chemicals that disrupt the EV composition, heat produced by the electric field, and sonication damage. These approaches should be applied in a controlled manner to preserve the structure and function of the particular EVs. Role of Post-Engineered EVs with TNps in T1DM Several engineering-based strategies have been developed to fabricate tolerogenic EVs containing NPs for the treatment of T1DM. Immunomodulatory NPs or microparticles carrying antisense oligonucleotides against CD40, CD80, and CD86 were delivered to NOD mice, and T1DM was prevented by augmentation of Foxp3+ Treg cells [59]. In a murine model of diabetes mellitus, co-culture of islets and bone marrow stem cells increased the survival and functionality of islet β-cells, mediated by exosomes through a paracrine effect [76]. A clinical study also revealed a role for exosomes derived from mesenchymal stem cells in the suppression of immune targeting of allogeneic grafts [77]. These findings indicate that EVs are beneficial in islet β-cell restoration because of their regenerative, anti-apoptotic, immunomodulatory, and angiogenic properties. The delivery of antisense oligonucleotides against the primary transcripts of the APC costimulatory molecules CD40, CD80, and CD86 can inhibit DC activation and has been proposed as a preventive therapy for diabetes [78]. This approach could readily be translated further via the delivery of antisense oligonucleotides mediated by EVs using a genetic engineering approach. Another study revealed delayed progression of T1DM via the delivery of antisense-nucleotide-treated autologous DCs [79]. These findings could also be translated into a genetic engineering approach via the modification of autologous DCs with antisense nucleotides in the culture medium. Subcutaneous injection of microparticles incorporating antisense oligonucleotides reversed hyperglycemia in a diabetic murine model [79,80]. These oligonucleotides could be further explored via EV-based delivery using post-isolation engineering approaches for the treatment of autoimmune T1DM. TNps of poly(lactic-co-glycolic acid) have also been functionalized with anti-CD4 antibody and interleukin-2 for targeting the T cell response [81]. Tolerogenic NPs targeting specific antigens cause antigen-specific immunosuppression, which may effectively treat T1DM by restoring T cell immune tolerance. The methyl ester of 2-(1H-indole-3-carbonyl)-thiazole-4-carboxylic acid (HCTCAME) has been used as a tolerogenic agent to induce tolerogenic DC activation [82]. Similarly, the β-cell antigen proinsulin has been adsorbed on the surface of AuNPs for T1DM management [83]. Alternatively, the β-cell antigen proinsulin and HCTCAME could be incorporated into EVs using post-isolation engineering methods, which may produce multiple therapeutic effects in T1DM.
EVs can provide an abundant supply of cellular NPs that can modulate immune functions in an immunostimulatory or immunoregulatory manner and induce antigen-specific tolerance of β-cell-autoreactive T cells, particularly in T1DM. A recently published study related to T1DM showed that EVs derived from mesenchymal stem cells prevent T1DM onset by modulating Th1 and Th17 cells [84]. Several risk factors and immune components are involved in the pathogenesis of T1DM. Therefore, biomaterials, TNps, and EVs that modulate immune functions would be beneficial for the prevention and treatment of T1DM. Factors Affecting Tuning of TNps to Boost Immunomodulation Boosting targeted immunomodulation using TNps requires the optimization of several physicochemical parameters. These factors are useful for boosting immune responses and are also responsible for maintaining the quality of responses within the cells. The use of engineered nanoparticles presents a better approach for cargo loading [85,86]. Numerous potential materials can be used in the fabrication of immune-cell-targeting NPs to modulate immune responses. These materials must meet the primary requirements of biocompatibility, non-toxicity, and ease of modification of shape, surface chemistry, and size to enable effective results. Liposomes, as an organic starting material, have been employed in two published studies for the fabrication of nano-formulations [87,88]. In one of these studies, phosphatidylserine liposomes (PL) loaded with insulin peptides were fabricated to mimic apoptotic cells for detection by APCs. The PL were used in a spontaneous mouse model of autoimmune diabetes [88]. The authors reported that PL containing insulin peptides activated tolerogenic DCs, thereby impairing autoreactive T cell proliferation. The size of NPs plays a significant role in boosting and tuning the immune response: the decreased surface-to-volume ratio of larger TNps affects their interactions with immune cells. NP uptake proceeds via four endocytic routes: pinocytosis, macropinocytosis, phagocytosis, and clathrin/caveolar-mediated endocytosis; immune cells commonly adopt pinocytosis and macropinocytosis [89]. Table 4 summarizes the importance of the material and its size in the reversal of T1DM. Moreover, the shape of engineered NPs significantly affects the immune modulation mechanism [90]; gold-based nanorods reportedly display more efficient uptake by macrophages than nanospheres [91]. Thus, both the size and shape of engineered TNps contribute to the tuning and boosting of immune modulation during T1DM therapy. Several factors associated with NPs affect their uptake and interaction with immune components, and adjusting these factors is crucial for fabricating engineered NPs for immune modulation. The surface chemistry of NPs, such as charge and hydrophilicity/hydrophobicity, is another avenue for significant interaction with immune cells. A previously published study demonstrated that peptide-coated nanoparticles, upon systemic delivery, bound MHC class II and activated the expansion of type 1 CD4+ T cells in rodent models, demonstrating a positive modulating role in autoimmune mechanisms [95]. Similarly, another study reported the utility of T-lymphocyte-derived exosomal microRNAs (miRNAs), including miR-142-3p, miR-142-5p, and miR-155, in initiating β-cell apoptosis, suggesting that these miRNAs may serve as therapeutic targets [96,97].
T1DM is mostly not associated with comorbidities such as micro- and macrovascular complications, including diabetic retinopathy, nephropathy, neuropathy, and cardiovascular disease, whereas T2DM frequently presents with such complications [97]. EVs have not only demonstrated a therapeutic immunomodulatory role in T1DM but have also shown promise as a treatment for patients with T2DM who have such complications. On the contrary, one previously published study reported that EVs can induce insulin resistance, thereby contributing to the development of T2DM through uncontrolled hyperglycemia [97]. However, a similar study supported a therapeutic role for EVs in T2DM, and the authors of previously published studies showed that EVs can act as therapeutic targets in T2DM patients with cardiovascular disease [98,99]. The pathophysiology of such therapeutic effects is thought to be mediated by miRNAs, as suggested by these studies [100]. NPs with highly dense positive or negative surface charges exhibit colloidal stabilization owing to electrostatic repulsive forces, and NPs with a positive surface charge are efficiently internalized by cells and exhibit high immunogenic potential [101]. Some relevant studies also establish the role of immune-cell-derived EVs as therapeutics [102]. Notably, these factors contribute to immune response modulation and should be considered in the fabrication of engineered TNps. Conclusions Cellular uptake of engineered EVs containing TNps could provide the basis of a treatment strategy for T1DM. The characteristics of these TNps and EVs, including size, shape, and ease of systemic circulation, significantly modulate immune cell function. Numerous engineering approaches for EV modification, including genetic and post-isolation approaches, provide an opportunity for their translation into tolerogenic EVs without the limitations associated with drug-based or cell-based therapies in T1DM. EVs may present a safer approach for the treatment of T1DM than chemical-based interventions. However, more preclinical and clinical trials are required to support the statements made in this review.
Two new species of Satsuma A. Adams, 1868 from Taiwan (Pulmonata, Camaenidae) Abstract Two new sinistral species of the genus Satsuma A. Adams, 1868, Satsuma squamigera sp. n. and Satsuma adiriensis sp. n., from southern Taiwan are described. Satsuma squamigera sp. n. is characterized by a microsculpture comprising coarse, irregularly-spaced ridges and dense, easily-dislodged triangular scales on its sinistral shell, an angulated periphery, and a partly-opened umbilicus. This species inhabits secondary forests in lowland hills. Satsuma adiriensis sp. n. is characterized by a thin, fragile, smooth shell with a microsculpture of coarse, loose ridges, a rounded periphery, a completely-opened umbilicus, and an elongated penial verge formed by two main pilasters. This new species was collected in a mountainous, mid-elevation, broad-leafed forest. Introduction The family Camaenidae, which includes the Bradybaeninae, is widely distributed in Asia and Australasia (Wade et al. 2007). Recent studies have elucidated the systematics of this family by means of molecular tools (e.g., Wade et al. 2007, Hoso et al. 2010, Criscione and Köhler 2014); however, significant gaps persist in the documentation of local faunas, such as in the genus Satsuma A. Adams, 1868. This genus is distributed in East Asia (Schileyko 2004), containing more than 100 species inhabiting Japan, China, the Philippines, and Taiwan (Minato 1988, Wang et al. 2014, Adams and Reeve 1850). Some Vietnamese species, currently assigned to other genera, are likely part of the genus Satsuma as well (Schileyko 2011). Species of Satsuma are characterized by conical, brownish shells varying in shape, size, color, chirality, and banding (Schileyko 2004). The reproductive system of this genus features an epiphallic flagellum and a penial caecum, while a dart sac, accessory sac, and mucous glands are absent (Kuroda and Habe 1949, Schileyko 2004). To date, 46 species have been described from Taiwan; most of them are endemic to Taiwan and narrowly distributed (Hsieh et al. 2013, Wu and Tsai 2014, 2015, 2016, Wu and Wu 2017a, 2017b, Hwang et al. 2017). Previous studies have suggested that there are potentially undescribed species in Taiwan, especially in mountainous areas (Wu et al. 2007, 2008, Hwang et al. 2017). In this study, we describe two new Taiwanese species from mountainous areas of lowland and mid-elevation, based on shell morphology and genital anatomy. Materials and methods Specimens of the new species were collected in southern Taiwan (Figure 1). Live adults were drowned in water for 12 hours, then boiled briefly in hot water at 95 °C. Whole snails were fixed and preserved in 95% ethanol. Immediately before dissection, the snails' tissues were softened with warm water, and the body was removed from the shell. Empty shells were then cleaned, oven-dried, and stored at room temperature. Reproductive systems were dissected under a stereomicroscope (Leica MZ7.5). Drawings were made using a camera lucida attachment. We used the methods described by Kerney and Cameron (1979) to measure shell characteristics to 0.1 mm and to count the number of whorls to 0.25 whorls. Measurements of genitalia were obtained from digital images using ImageJ 1.48k (Schneider et al. 2012). We followed Gómez's (2001) terminology in describing the reproductive system. The WGS84 coordinates of localities were recorded.
A distribution map was created using the open-source software Quantum GIS 2.18.1 (QGIS Development Team 2016) with the topographic databases ASTER GDEM V2, released by NASA and METI (downloadable from https://asterweb.jpl.nasa.gov), and GADM 2.8, released by Global Administrative Areas (downloadable from http://gadm.org/). The type specimens have been deposited in the National Museum of Natural Science, Taichung, Taiwan (NMNS). Satsuma squamigera sp. n. (Figure 1A) Diagnosis. Shell sinistral, with coarse, irregular ridges and fine striations; surface with dense, fine, erect, triangular scales that fall off easily; periphery angulated; umbilicus partly opened; penial caecum short, internally with an elongated verge formed by two main pilasters. External morphology. Light brown with irregular, small, dark brown spots and a distinct yellowish line running from the head between the tentacles to the collar. Tentacles dark brown. Etymology. From squamigera (Latin, adjective in the nominative feminine singular case), meaning scale-bearing, for the scaly shell surface. Distribution. This species was found in southern Pingtung County, including the type locality, Da-han-shan forest road (22°24.20'N; 120°45.31'E, alt. 1555 m). Ecology. All specimens were collected in mountainous, lowland, broad-leafed forest. Mature adults were collected in mid-May and February, from the ground, rocks, or fallen tree trunks. This species is sympatric with the congeners Satsuma bacca (Pfeiffer, 1866), Satsuma batanica pancala (Schmacker & Boettger, 1891), and Satsuma longkiauwensis Wu, Lin & Hwang, 2007. Remarks. Satsuma squamigera sp. n. is distinguished from all other sinistral species by having dense and curved scales on the whole shell surface. When fully matured, the scales typically fall off, leaving crescent-shaped granules. Some intact scales may remain beside the sutures, on the base of the last whorl, or inside the umbilicus. The new species is similar to S. pekanensis (Rolle, 1911) and S. submeridionalis (Zilch, 1951) in shell shape and angulated periphery. In comparison to S. pekanensis, the new species has a shortened spire and an extended flagellum (Chang 1989). The new species differs from S. submeridionalis in having a slender base of the pedunculus of the bursa copulatrix and a regularly thickened proximal vagina (Wang et al. 2014). Satsuma adiriensis sp. n. External morphology. Light brown with dense, irregular, dark brown to black spots and a distinct yellowish line running from the head between the tentacles to the collar. Tentacles dark brown. Etymology. For Adiri, the indigenous Rukai name of the type locality; adjective of feminine gender. Distribution. Known from mid-elevation forests of Kaohsiung, Tainan, and Pingtung (Figure 1E-H). Ecology. All specimens were collected in mountainous, mid-elevation, broad-leafed forest. The single live adult was collected in July, from a tree trunk. This species is sympatric with the congeneric species S. albida (Adams, 1870) and S. friesiana (Moellendorff, 1884) at Shan-ping, S. amblytropis (Pilsbry, 1901) at Mt. Fan-bao-jian, and an unknown Satsuma at the type locality, A-li. Despite its wide distribution in the mountainous areas of southwestern Taiwan, this species is quite rare. Remarks. Satsuma adiriensis sp. n. is similar to S. contraria (Pilsbry & Hirase, 1909), distributed in Kenting, Pingtung, in having a sinistral, semi-transparent shell with a completely open umbilicus.
The new species, however, has a smaller shell width, a rounded periphery on the final 1/4 of the last whorl, a sub-vertical columellar lip, a sinuous upper lip, coarse ridges on the surface, a slender pedunculus of the bursa copulatrix, a longer penial caecum and flagellum, and a shorter penis than the latter species (Hwang and Ger 2018). The new species shares a sinistral, depressed conic shell with Satsuma formosensis (Pfeiffer, 1866) and S. yaeyamensis (Pilsbry, 1894), which are found in northern Taiwan and the Ryukyu Islands. Satsuma adiriensis differs from these two species by its thin, semi-transparent shell with loose, coarse surface ridges, a sub-vertical columellar lip joining the basal lip at a weak angle, and a bluntly angulated periphery on the first 3/4 of the last whorl. Discussion In this study, two new species of sinistral Satsuma were described based on shell and reproductive system characteristics. This work brings the number of known sinistral Satsuma species to seventeen. Among these seventeen species, eleven are distributed in Taiwan, three in the Ryukyu Islands, two in southern China, and one on Batan Island, Philippines. The diversification of Satsuma has been explained by allopatric speciation (Kameda et al. 2007), prey-predator coevolution and chirality (Hoso et al. 2010), and arboreal behavior (Wu et al. 2008). Periostracal ornamentations such as granules and hairs are commonly seen in confamilial genera, e.g., Chloritis Beck, 1837, Moellendorffia Ancey, 1887, Aegista Albers, 1850, and many genera from Australia (Solem 1984, Hirano et al. 2014, Criscione and Köhler 2016). In the genus Satsuma, granules on embryonic whorls are commonly seen (personal observations) but rarely reported. This under-reporting may be due to the ease with which these granules wear off, or to their simply being so small as to evade observation. Three sinistral species, S. perversa (Pilsbry, 1931), S. yaeyamensis, and S. batanica pancala, have been observed to have granulate embryonic whorls (Azuma 1995, personal observations); however, these species do not have scales covering the whole shell surface, as does S. squamigera sp. n. Short, hooked hairs have been observed over the entire shell surface of the sinistral species S. uncopila (Heude, 1882). Granules on the entire shell surface are also reported in some dextral species, e.g., S. ferruginea (Pilsbry, 1900), S. textilis (Pilsbry & Hirase, 1904), S. japonica granulosa (Pilsbry, 1902), S. j. heteroglypta (Pilsbry, 1900), S. okiensis (Pilsbry & Hirase, 1908), and S. cristata (Pilsbry, 1902). The hairs are thought to promote the snails' adherence to leaves when humidity levels are high (Pfenninger et al. 2005). The evolutionary significance of these ornamentations, varying in size, shape, and position, remains an open question; it will not be adequately answered until a more complete phylogeny and comparative studies of the genus Satsuma become available. Author contributions CC Hwang performed the anatomical studies, executed this study, and wrote the manuscript; SP Wu helped with data collection and manuscript writing.
An Air Terminal Device with a Changing Geometry to Improve Indoor Air Quality for VAV Ventilation Systems This study aimed to develop a new concept for an air terminal device for a VAV (variable air volume) ventilation system that would improve overall ventilation efficiency under a varying air supply volume. In VAV systems, air volume is modified according to the thermal load in each ventilated zone. However, lowering the airflow may cause a lack of proper air distribution and lead to the degradation of hygienic conditions. To combat this phenomenon, an air terminal device with an adapting geometry to stabilize the air throw, such that it remains constant despite the changing air volume supplied through the ventilation system, was designed and studied. Simulations that were performed using the RNG k-ε model in the ANSYS Fluent application were later validated on a laboratory stand. The results of the study show that, when using the newly proposed terminal device with an adaptive geometry, it is possible to stabilize the air throw. The thermal comfort parameters, such as the PMV (predicted mean vote) and PPD (predicted percentage of dissatisfied), proved that thermal comfort was maintained in a person-occupied area regardless of the changing airflow through the ventilation system. Introduction The building sector accounts for 40% of primary energy use, and a considerable fraction of this energy is utilized to construct a desirable indoor environment for occupants [1,2]. At the same time, we spend more than 90% of our time indoors and have higher requirements for indoor thermal and environmental comfort [3,4]. Good indoor air quality, energy-saving performance, and flexible area control have made VAV (variable air volume) air-conditioning systems widely popular in office, commercial, and industrial buildings [5][6][7]. To save energy, VAV systems regulate airflow according to the current needs in a ventilated zone [8,9]. When less air is needed, less energy is consumed by the system. These changes result in lowering the power needed to supply the fan in the air-handling unit and, as a consequence, in saving energy [8,10]. The fan power scales with the cube of the rotational speed and, equivalently, of the air flow:

\[ \frac{P_1}{P_2} = \left(\frac{n_1}{n_2}\right)^3 = \left(\frac{\dot{V}_1}{\dot{V}_2}\right)^3 \quad (1) \]

where \(P_1\), \(P_2\) are the electric power consumed by the fan, \(n_1\), \(n_2\) are the rotational speeds of the fan, and \(\dot{V}_1\), \(\dot{V}_2\) are the air flow volumes through the fan. Equation (1) shows that, by lowering the air flow by 20%, it is possible to lower the energy consumption of the fans by almost 50% [11][12][13][14][15][16], demonstrating the energy-saving potential of VAV systems. However, when lowering the airflow, the system may not be able to remove all the contaminants [9] and may cause their accumulation within buildings, leading to dead zones [9]. The recommended parameters for systems with variable air flow, including the fresh air rate and criteria for the indoor environment (thermal, air quality, noise, light), are provided by standards EN 15251:2012 [17], ISO 7730 [18], and ANSI/ASHRAE-62.1 [19]. Despite these regulations, the problem of high contaminant concentrations and a lack of thermal comfort has been demonstrated both in buildings built to the classic standard [20][21][22] and in airtight buildings (passive and zero-energy) [23,24]. This problem occurs in households, sports facilities [25,26], schools [27][28][29][30], kindergartens [31], office environments [32], etc., meaning it is a concern for the entire building sector.
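As a quick numerical check of the cubic relationship in Equation (1), the minimal sketch below tabulates the relative fan power at reduced airflow, assuming the affinity laws hold exactly:

```python
# Relative fan power at reduced airflow per the cubic fan law in Equation (1),
# assuming the affinity laws hold exactly (a simplified sketch).
for fraction in (1.0, 0.9, 0.8, 0.7):
    relative_power = fraction**3
    print(f"airflow {fraction:.0%} -> fan power {relative_power:.1%}")
# airflow 80% -> fan power 51.2%, i.e., the "almost 50%" saving cited above
```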
Regarding VAV systems, studies have shown that, when they are not properly regulated, there is a risk of contaminant accumulation [8,13,33,34]. To counteract the problems concerning VAV systems, a laboratory stand with an adaptive ATD (air terminal device) was created and CFD (computational fluid dynamics) simulations were carried out. The aim was to test the possibilities of a device with a steady air throw, meaning that the range of air leaving the ATD would remain constant despite the changing flow in the VAV system. This could prevent the accumulation of contaminants in specific zones when the airflow is lowered. Thanks to the new design of the ATD and the research carried out, a target zone could be continually ventilated despite the changes in the flow, and fresh air would be constantly supplied to the occupants. Laboratory tests of air terminal devices have been carried out by various researchers, proving that their geometry influences airflow. Kalmar used different types of air terminal devices to evaluate their influence on the comfort of occupants [35], while Rabani et al. assessed heating in an office cubicle using an active supply diffuser in a cold climate [36]. Both found that the adaptation of ATD geometry improved the conditions in the ventilated zone. Nielsen studied wall-mounted air terminal devices used in displacement ventilation and their influence on the velocity distribution close to the floor [37]. The study showed that openings between obstacles placed directly on the floor generate a flow similar to the air movement in front of a diffuser. Similar studies were done by Hurnik, who discussed the difference in the geometry of a ceiling inlet of a VAV system that had a changing geometry, used to keep the fresh air from centering directly below the air supply instead of dispersing throughout the ventilated zone [38]. Additionally, CFD simulations are a powerful tool for estimating the airflow patterns and thermal environment of various HVAC (heating, ventilation, and air conditioning) systems. They were used for the estimation and control of the indoor environment and space ventilation with a VAV system by Sun and Wang [39], Du et al. [40], Gangisetti et al. [41], and Nada et al. [42], among others. Furthermore, they were used to analyze the influence of structures on airflow, for example, by Mu et al. [43]. In their research, they designed a novel damper torque airflow sensor for VAV terminals and used CFD methodology to analyze the airflow characteristics at different speeds and positions of the damper. Similar studies using CFD to show the influence of an element's geometry in VAV systems were done by Hurnik [44], Liu et al. [45], and Pasut et al. [46]. Given this evidence that laboratory measurements and CFD analysis are valid choices for analyzing VAV systems, both methods were used in this study. The aim of the new ATD was to maintain a steady air throw under changing conditions, so that a desired zone would be properly ventilated. The air throw is the distance from the ATD, along the center of the penetrating air current, to the point where a minimal air speed is measured. For the purpose of this study, the air speed that marked the end of the ventilated zone was assumed to be 0.5 m/s. It was chosen as a boundary value below which the air velocity was assumed to be too low for the air to flow further into the ventilated room.
The design of the ATD, as well as the analyses carried out in the study, shows that there is a possibility to improve the air distribution of VAV systems. Experimental Study The construction of the air terminal device was based on the change in its diameter. The design concept is shown in Figures 1 and 2. The goal was to adapt the ATD geometry by changing the inlet diameter as the flow of the system lowered, allowing the air throw to remain constant. The air flow was changed in steps, and the diameter of the ATD was changed by installing a gasket between the elements. The detailed geometry of the device can be found in [47]. To test the ATD, a laboratory stand was constructed and installed in a space with controlled environmental conditions. It was designed according to European standard EN 12238:2002 [48], and the concept of the stand is shown in Figure 3. The temperature and humidity of the controlled lab space where the experiments were conducted were equal to 20 °C and 48%, respectively. The ambient air velocity was equal to zero, as the laboratory was a closed room. A VAV system with a frequency inverter attached to the fan was installed, which allowed alteration of the airflow. The airflow itself was calculated according to the current standards and regulations [49] by using an orifice plate. The laboratory setup is shown in Figure 4. The pressure drop on the orifice was measured using a micromanometer with a range of ±3500 Pa and an accuracy of ±1% at a temperature of 20 °C. After the orifice, the air flowed into the equalizing chamber, where the flow was evened out by a series of grilles to eliminate turbulence. The air stream then flowed into the ATD; thanks to the equalizing chamber, turbulence from ducts and bends did not influence the flow into the test room. After the air flowed into the test zone, a thermoresistant anemometer was used to measure its velocity and temperature. It had a range of 0.08 m/s to 20 m/s and an accuracy of ±2%. The summarized accuracies of the instruments used in the analysis are shown in Table 1.
Velocity measurements were conducted every 30 cm from the air terminal device (Figure 5). The position of the anemometer was established for each measuring point by laser beam guidance; the anemometer can be seen in Figure 4 along with a cross laser beam. Thanks to the small diameter of the probe (6 mm), disruption of the air stream was minimized. Velocity measurements were carried out for a period of one minute. The sampling interval of the anemometer was 6 s, meaning that the final result was the average of 10 partial measurements.
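A minimal sketch of the averaging described above is given below; the readings are hypothetical stand-ins, used only to illustrate how ten samples taken at 6 s intervals over one minute reduce to a single point velocity:

```python
# Ten hypothetical anemometer readings (m/s) taken every 6 s at one measuring
# point; the reported value for the point is their arithmetic mean.
samples = [2.91, 2.88, 2.95, 2.90, 2.87, 2.93, 2.89, 2.92, 2.90, 2.94]

mean_velocity = sum(samples) / len(samples)
print(f"averaged point velocity: {mean_velocity:.2f} m/s")
```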
According to EN ISO 5167-1 [50], the mass flow was calculated by defining the correlation between the flow and the pressure drop on the orifice using the following equation:

\[ \dot{q}_m = \frac{C}{\sqrt{1-\beta^4}}\,\varepsilon_t\,\frac{\pi}{4}\,d^2\,\sqrt{2\,\Delta p\,\rho_1} \quad (2) \]

where C is the flow coefficient, β is the ratio of the diameter of the orifice to the diameter of the duct, ε_t is the expansion number, d is the diameter of the orifice (m), with its uncertainty equal to 0.0005 m, Δp is the measured pressure drop (Pa), and ρ_1 is the air density (kg/m³). When the flow is calculated according to Equation (2), the measurement uncertainty is defined according to ISO 5167 [50] and the error propagation rule as follows:

\[ \frac{\Delta \dot{q}_m}{\dot{q}_m} = \sqrt{\left(\frac{\Delta C}{C}\right)^2 + \left(\frac{\Delta \varepsilon_t}{\varepsilon_t}\right)^2 + \left(\frac{2\,\Delta d}{d}\right)^2 + \frac{1}{4}\left(\frac{\Delta(\Delta p)}{\Delta p}\right)^2 + \frac{1}{4}\left(\frac{\Delta \rho_1}{\rho_1}\right)^2} \quad (3) \]

To calculate the uncertainty shown in Equation (3), the uncertainty of each individual element must be defined. The uncertainty was calculated for the maximum flow, as it differed the most from the simulations. In Equations (2) and (3), the flow coefficient for an orifice plate is defined by the correlation given in the standard, where D is the diameter of the duct, equal to 0.315 m, and d is the diameter of the orifice, equal to 0.09 m. The calculation of the uncertainty of the flow coefficient is shown in Equation (7); the value of coefficient A was equal to 3.6477, with an uncertainty of 1.95 × 10⁻⁵. The calculation of the uncertainty of β (the ratio of the diameter of the orifice to the diameter of the duct) follows from the error propagation rule:

\[ \Delta\beta = \sqrt{\left(\frac{\Delta d}{D}\right)^2 + \left(\frac{d\,\Delta D}{D^2}\right)^2} \quad (9) \]

Consequently, Δβ was calculated to be equal to 1.9 × 10⁻⁶, and ΔC was calculated to be equal to 1.898 × 10⁻⁷. The expansion number ε_t in Equation (2) can be expressed as

\[ \varepsilon_t = 1 - \left(0.351 + 0.256\,\beta^4 + 0.93\,\beta^8\right)\left[1 - \left(\frac{p_2}{p_1}\right)^{1/\kappa}\right] \quad (10) \]

where p_1 and p_2 are the pressures upstream and downstream of the orifice, respectively, with their uncertainties equal to 0.1 Pa, and κ is the isentropic exponent of air. The uncertainty of ε_t follows from the same propagation rule (Equation (11)). The calculations gave an expansion number ε_t equal to 0.9993, with its uncertainty Δε_t equal to 3.52 × 10⁻⁶. Additionally, the density of the air in Equation (2) can be defined as

\[ \rho_1 = \frac{p_1}{R_w\,\theta_1} \quad (12) \]

where p_1 is the air pressure in the duct before the orifice (Pa), with its uncertainty equal to 0.1 Pa, θ_1 is the temperature of the air inside the duct (K), with an uncertainty of 1 K, and R_w is the gas constant of humid air, calculated using Equation (13), in which p_a is the atmospheric air pressure (Pa), with its uncertainty equal to 0.1 Pa, and p_v is the partial pressure of water vapor at temperature θ_1 (Pa). The values of the air density and the gas constant were equal to 1.192 kg/m³ and 288.15 J/(kg·K), respectively. The uncertainty of the partial pressure of water vapor is calculated from the saturation pressure p_sat of water vapor according to the dry-bulb thermometer (Equation (14)), and the uncertainty of the gas constant follows analogously (Equation (15)). To obtain Δρ_1 for Equation (3), the following equation is used:

\[ \Delta\rho_1 = \rho_1\sqrt{\left(\frac{\Delta p_1}{p_1}\right)^2 + \left(\frac{\Delta R_w}{R_w}\right)^2 + \left(\frac{\Delta \theta_1}{\theta_1}\right)^2} \quad (16) \]

The calculation results were Δp_v = 6.42 Pa, ΔR_w = 0.019 J/(kg·K), and Δρ_1 = 0.004 kg/m³. After the calculation of each individual component's uncertainty, it was possible to evaluate Equation (3), which gave Δq̇_m = 0.00026 kg/s, corresponding to a relative uncertainty of the mass flow measurement equal to 0.25%. Such a small value indicates the very high quality of the measurements and the measurement stand.
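To make the orifice-plate calculation concrete, the minimal Python sketch below evaluates Equation (2) and a simplified version of the uncertainty combination in Equation (3). The flow coefficient C and the pressure drop dp are hypothetical placeholders, so the printed numbers are illustrative only and will not reproduce the 0.25% figure derived above:

```python
import math

# Inputs: D, d, eps_t, rho1 and the component uncertainties are the values
# quoted in the text; C and dp are hypothetical placeholders for illustration.
D = 0.315       # duct diameter, m
d = 0.09        # orifice diameter, m
beta = d / D    # diameter ratio
C = 0.60        # flow coefficient -- placeholder (the paper derives it)
eps_t = 0.9993  # expansion number reported in the text
rho1 = 1.192    # air density reported in the text, kg/m^3
dp = 250.0      # pressure drop, Pa -- hypothetical reading

# Equation (2): orifice-plate mass flow
q_m = (C / math.sqrt(1.0 - beta**4) * eps_t
       * (math.pi / 4.0) * d**2 * math.sqrt(2.0 * dp * rho1))

# Simplified Equation (3): combine relative uncertainties of the inputs
# (the duct-diameter term is neglected here for brevity; the result depends
# strongly on the actual component uncertainties used).
rel = math.sqrt((1.898e-7 / C)**2 + (3.52e-6 / eps_t)**2
                + (2 * 0.0005 / d)**2 + (0.5 * 0.1 / dp)**2
                + (0.5 * 0.004 / rho1)**2)

print(f"q_m = {q_m:.4f} kg/s, relative uncertainty = {rel:.2%}")
```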
Numerical Simulation Air distribution has been extensively studied with CFD methods. CFD was first introduced in the ventilation industry in the 1970s, and it is widely used today to assist in the design of ventilation systems [51]. The purpose of the CFD study was to develop and validate a computer model that could be used for accurate airflow assessment, considering different strategies as well as different structures. CFD methods have been used by researchers before to evaluate air distribution methods [40,44,52-54]. The program ANSYS Fluent version 17.0 was chosen for the study, as it provides comprehensive modeling capabilities for a wide range of incompressible and compressible, laminar, and turbulent fluid flow problems [55], where steady-state or transient analyses can be performed. The CFD simulations were carried out for the same conditions as the laboratory measurements. This allowed the simulation to be validated and used for future research; it was also used for the investigation of the thermal comfort conditions. To test how the turbulence models available in the ANSYS Fluent application performed in this study, simulations were carried out to compare the k-ε and k-ω models, which are widely used for turbulent flow simulations [56,57]. For all cases, the ATD had the maximum diameter and maximum flow. Numerical studies were performed by selecting different turbulence models to determine the flow characteristics. The experimental and numerical results of the average velocities along the axis of the flow in the occupancy zone are compared in Table 2. The numerical results were compared with the experimental results, and the RNG k-ε turbulence model gave the best agreement. For the geometry of the experiment, an axisymmetric model was used. The geometry of the case is shown in Figure 6 and was adapted to reflect the conditions of the laboratory stand. An equalizing chamber was designed, which served as the air inlet boundary condition. The outlet boundary conditions were located along the edges of the outlet area (Figure 6) and were 15 m long, deliberately much larger than the air throw so as not to influence the simulation results. A mesh independence analysis was conducted to check how the number of elements influenced the results of the simulation. The results are shown in Table 3.
The mesh with 8,799,416 elements was used in the simulations, as it had suitable parameters and the number of elements was optimal for the simulation to converge. As shown in Figure 6, cells with different element sizes were created in different parts of the model for a better mesh structure. Smaller cell sizes were created in the regions near the ATD and the equalizing chamber, resulting in a better-quality mesh structure. The dimensional properties of these regions are given in Table 4. Additionally, the y+ parameter was calculated, as it is an important parameter concerning the wall function; it is the nondimensional distance from the wall to the first node from the wall [55]. Ideally, while using the enhanced wall treatment option, the wall y+ should be on the order of 1 (at least less than 5) to resolve the viscous sublayer [55]. In this study, the value of the parameter was below 1 for all the wall boundaries. After conducting the above analyses, it was decided that the simulations would be carried out using the RNG k-ε model with enhanced wall treatment, taking into account gravity acting in the Y-direction. The solution method settings are displayed in Table 5. The convergence criterion was set to 10⁻⁶, which is adequate according to the literature [58,59].

Table 4. Geometric properties of the mesh structure.
                       Inlet Section   Equalizing Chamber   ATD     Outlet Section
Maximum element size   20 mm           10 mm                5 mm    20 mm
Growth rate            1.2 (all sections)
Cell geometry          Quadrilateral (all sections)

To study how adapting the ATD changed the air distribution, three cases were taken under consideration for three different airflows. The flows were assessed from previous measurements done in a typical office building. The air volumes in the cases were equal to the following: the maximum flow of 330 m³/h, the medium flow of 220 m³/h, and the minimum flow of 150 m³/h. The air terminal device settings were as follows:
• ATD setting 1: all three rings opened; ATD diameter D_ATDef = 200 mm, ATD area A_ATD = 30,961 mm²;
• ATD setting 2: the largest ring closed and the two smaller ones opened; ATD diameter D_ATDef = 160 mm, ATD area A_ATD = 19,745 mm²;
• ATD setting 3: only the smallest ring opened; ATD diameter D_ATDef = 100 mm, ATD area A_ATD = 7631 mm².
Results To determine whether a change in the construction of the ATD improved the conditions of the VAV system, the first step was to see how the air throw changed without it, as a basis for comparison. The device was fixed to ATD setting 1 (all three rings opened and diameter D_ATDef = 200 mm). This ATD setting was chosen as the basis for comparison, as it does not use the new elements that interfere with the device's geometry. This is also shown in Figure 7, which presents the results without a change in the air terminal device geometry but with changing airflow, showing how the system reacts without a geometry change of the ATD. First, the maximum airflow was supplied, and the air throw was measured. Afterward, the flow was changed to the minimum without changing the diameter of the ATD. As suspected, when lowering from the maximum (330 m³/h) to the minimum flow (150 m³/h) with the ATD constant at setting 1, the throw lowered: it changed from 8 m to around 4.5 m. The results are shown in Figure 7. To counteract the lowering of the air throw shown in Figure 7, the ATD with adaptive geometry was used. During the tests with changing geometry, the diameter was altered according to the design shown in Figures 1-3.
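The stabilizing effect of shrinking the outlet can be rationalized with classical round-free-jet theory: the centreline velocity decays roughly as v_x ≈ K·v_0·√A_0/x, so the distance at which it falls to the 0.5 m/s threshold scales as x ≈ K·V̇/(v_t·√A_0). The sketch below applies this scaling to the three ATD settings; the jet constant K is a hypothetical value, since it depends on the actual diffuser:

```python
import math

K = 5.0    # jet constant -- hypothetical; depends on the diffuser type
v_t = 0.5  # threshold velocity defining the air throw, m/s (as in the study)

cases = {  # (airflow in m^3/h, effective ATD area in mm^2) per the settings
    "setting 1": (330, 30961),
    "setting 2": (220, 19745),
    "setting 3": (150, 7631),
}

for name, (flow_m3h, area_mm2) in cases.items():
    Q = flow_m3h / 3600.0                    # volumetric flow, m^3/s
    A0 = area_mm2 * 1e-6                     # effective outlet area, m^2
    v0 = Q / A0                              # outlet velocity, m/s
    throw = K * Q / (v_t * math.sqrt(A0))    # distance where v_x drops to v_t
    print(f"{name}: v0 = {v0:.2f} m/s, estimated throw = {throw:.1f} m")
# The estimated throws stay within roughly 20% of each other, whereas keeping
# the full outlet area would let the throw drop almost in proportion to flow.
```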
The airflow was changed in steps from the maximum (330 m³/h) to the medium (220 m³/h) and minimum (150 m³/h). While lowering the airflow, the diameter of the air terminal device was also altered from ATD setting 1 to ATD setting 2 and ATD setting 3 for the medium and minimum flow, respectively. The results and the comparison between the laboratory tests and simulations are shown in Figure 8, where the velocity along the axis is compared for all three cases. This figure shows that, in both the simulations and the measurements, the air throw in the test zone could be evened out by changing the geometry of the ATD. By adapting the air terminal device geometry, it was possible to stabilize the air throw when changing the air supply volume from 330 m³/h to 150 m³/h.

Figure 9 shows how the flow pattern changed in the cross-section of the airflow at distances from the ATD equal to 0.5 m, 1.5 m, 3 m, and 4.5 m. The figure shows that the airflow remained concentrated and dispersed slowly as the flow continued. The case shown in the figure was for the medium velocity and ATD setting 2. It presents the change in dispersion of the airflow within the test area during the simulations.

As the main interest of this study was the air throw, another series of tests was conducted to characterize the airflow spread. This was done to analyze the air throw not only along the axis, as in Figure 8, but across the entire test area. The distance from the ATD was measured both horizontally and vertically at the locations where the velocity reached 0.5 m/s.
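The spread measurement just described amounts to locating, at each traverse station, the outermost offsets where the velocity still reaches 0.5 m/s. A small sketch of that extraction follows; the velocity field is a synthetic jet-like profile standing in for the anemometer grid, since the raw traverse data are not reproduced here.

```python
import numpy as np

# Sketch of extracting the 0.5 m/s spread envelope from a traverse grid.
# The velocity field below is synthetic (an assumed Gaussian jet profile),
# standing in for the anemometer readings taken every 30 cm.
x = np.arange(0.5, 9.0, 0.3)            # distance from the ATD (m)
y = np.arange(-1.5, 1.51, 0.3)          # offset from the jet axis (m)
X, Y = np.meshgrid(x, y, indexing="ij")
V = 4.0 / X * np.exp(-(Y / (0.12 * X)) ** 2)   # assumed jet-like field (m/s)

envelope = []
for i, xi in enumerate(x):
    hits = y[V[i] >= 0.5]               # offsets where the velocity >= 0.5 m/s
    if hits.size:
        envelope.append((xi, hits.min(), hits.max()))

for xi, lo, hi in envelope[::5]:
    print(f"x = {xi:.1f} m: 0.5 m/s envelope from {lo:+.1f} to {hi:+.1f} m")
```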
Once again, the measurements were done every 30 cm. The tests were conducted because there is a series of fluid dynamic effects that can influence the flow pattern of the air, especially when its velocity lowers. The most important are [50] the velocity profile, flow pulsations, mechanical effects, and the surrounding atmosphere, including thermal effects. In this study, the surrounding atmosphere was not an issue, as the test zone was kept in a stable environment and the tests were performed under isothermal conditions.

Figures 10-12 show the individual cases for each airflow spread, comparing the simulation results to the measurements. Point zero on the vertical axis represents the center of the ATD, where the air flowed into the test room. The three figures were used not only to compare how the simulations reflected the measurements in the axis of the flow but also to assess how the dispersal of the fresh air into the room would change with the different ATD settings.

The airflow spread was quite concentrated, as shown in Figures 10-12, and further studies should be considered to widen the airflow. The standard deviation of the simulations from the measurements was calculated for each air spread and is presented in Figure 13a-c. In these figures, the measured air spread is represented by the continuous line, while the calculated spread is represented by the scattered points. Both the x-axis and the y-axis represent the distance from the axis of the ATD (above or below the axis of the flow). The highest discrepancy between the simulations and the measurements was registered for the maximum airflow, and the lowest for the minimum airflow.
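The discrepancy statistic quoted in the next paragraph (the share of points deviating by less than 8%) can be computed directly from paired simulation/measurement values, as sketched below with invented placeholder numbers rather than the study's data.

```python
import numpy as np

# Sketch of the discrepancy statistic: the share of points whose relative
# deviation between simulation and measurement is under 8%. The paired values
# here are hypothetical placeholders, not the study's data.
measured  = np.array([2.9, 2.1, 1.6, 1.2, 0.9, 0.70, 0.55, 0.50])
simulated = np.array([3.0, 2.2, 1.5, 1.25, 0.95, 0.62, 0.57, 0.49])

rel_dev = np.abs(simulated - measured) / measured * 100.0        # percent
share_ok = np.mean(rel_dev < 8.0) * 100.0
print(f"max deviation: {rel_dev.max():.1f}%, share below 8%: {share_ok:.0f}%")
```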
These results prove the good convergence of the CFD model, as more than 75% of the results had a discrepancy lower than 8% [59,60].

Thermal Comfort

While maintaining a steady air throw may be the answer to removing contaminants, it may not be enough to achieve proper thermal comfort conditions. It is essential for potential occupants that comfort is maintained, as it ensures the appropriate quality of the indoor environment [61,62]. There are many thermal comfort models and indices that help to define thermal comfort, each with its advantages and disadvantages [63]. However, the most advanced indices are the PMV (predicted mean vote) and the PPD (predicted percentage of dissatisfied). The PMV/PPD model was developed by Fanger [64] using heat-balance equations and empirical studies of skin temperature to define comfort. The calculation of the parameters can be found in EN ISO 7730 [18]. The PMV index is used to predict the average values of the votes of a large group of people using a seven-point thermal sensation scale on the basis of the heat balance of the human body [18]. The best case is when the PMV is equal to zero, meaning that the comfort level is neutral and no one feels uncomfortable. PMV is a function of many environmental factors, including the metabolic rate, effective mechanical power, sensible heat loss, heat exchange by evaporation on the skin, and air velocity. The detailed equations can be found in EN ISO 7730 [18]. PPD is a function of PMV and is calculated on its basis.

The thermal conditions provided by the tested ATD were evaluated by conducting a PMV and PPD analysis using the thermal comfort application suitable for ANSYS Fluent v 17. This application allows the adjustment of different parameters, including the metabolic rate and clothing resistance value, as well as the conditions for various thermal scenarios. Because the airflow patterns in the simulations were validated against the laboratory stand in the previous sections, the application is a valid tool for conducting thermal comfort analysis. Parameters representing a situation that could occur in the summer season were used in all the simulations.

The results for the maximum airflow are shown in Figure 14 and Table 6. The results in the table show that both thermal comfort parameters ranged from complete comfort to major discomfort. The PMV and PPD contours presented in Figure 14 show a detailed layout of both parameters. The PMV was equal to −1.4 just as the air flowed out of the air terminal device, meaning that the occupants would feel a sense of cold.
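For reference, both ISO 7730 quantities used in this section have simple closed forms once PMV and the local flow quantities are known: PPD follows directly from PMV, and the standard's draught-rating model estimates the percentage of occupants disturbed by local air movement. The sketch below evaluates them for the PMV of −1.4 reported at the outlet; the draught-rating inputs (24 °C, 0.5 m/s, 40% turbulence intensity) are assumed illustrations, not values from Table 6.

```python
import math

# Sketch of two ISO 7730 closed forms: PPD as a function of PMV, and the
# draught rating DR. The PMV of -1.4 is the near-outlet value from the text;
# the DR inputs are assumed placeholders, not results from the study.
def ppd(pmv: float) -> float:
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

def draught_rating(t_air: float, v: float, tu: float) -> float:
    v = max(v, 0.05)                                   # model validity floor
    dr = (34.0 - t_air) * (v - 0.05) ** 0.62 * (0.37 * v * tu + 3.14)
    return min(dr, 100.0)                              # capped at 100%

print(f"PPD at PMV = -1.4: {ppd(-1.4):.0f}% dissatisfied")
print(f"DR at 24 C, 0.5 m/s, Tu = 40%: {draught_rating(24.0, 0.5, 40.0):.0f}%")
```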
The zone in which the occupants would have a lack of comfort continued up to 6 m from the ATD. Beyond that length, in the area from 6 to 8 m, the occupants would feel thermal comfort. The PPD results show a similar pattern: in the first 6 m of the air stream, the occupants would feel a cooling sensation, but after this area, the occupants would be in a zone of comfort. Similar results can be observed for the medium and minimum airflows, as seen in Figures 15 and 16. Table 6 presents the extreme PMV and PPD results in the test zone for all cases (not including the equalizing chamber and ATD).

Considering the PMV and PPD results, while using the ATD, there is a risk of a draught and/or a cool sensation close to the element. However, in the area between 6 and 8 m from the ATD, thermal comfort is maintained, and fresh air is sufficiently supplied to that area in all three ATD settings. The ATD would not be suitable in cases such as a ceiling element for office or residential buildings, in which the floor height is lower than 6 m. It may be applied as a wall-mounted element for installations in large rooms that use mixing ventilation, as well as a ceiling-mounted element for objects such as industrial production halls, which are much taller than a standard building.

Conclusions

A new type of ATD with an adaptive geometry was proposed to maintain a steady air throw for VAV ventilation systems. A prototype of the element was built and analyzed through laboratory tests and CFD simulations. The geometry of the device was altered according to the airflow changes in the ventilation system. CFD simulations and laboratory tests were conducted for three different ATD settings and three different airflows. Both showed that, with the changing geometry, the air throw was stable despite the flow changing from 330 m³/h to 150 m³/h. Without the change in the geometry of the ATD, the air throw lowered from 8 m to around 4.5 m, meaning that, if occupants were stationed 8 m away from the element, the system would not provide them with fresh air when the conditions changed. With the adaptive air terminal device, it was possible to maintain a steady air throw into the ventilated zone.

The airflow spread, however, was quite concentrated, as shown in Figures 10-12, and further research should be considered to widen the airflow. This could be a limitation in the use of the element. Additionally, thermal comfort conditions were calculated and represented by the PMV and PPD. In each case, thermal comfort was maintained at a distance between 6 m and 8 m. However, in the area closer to the ATD, there was a decrease in comfort and a risk of draught, meaning that this prototype should not be used for small spaces or as a ceiling device in office or residential buildings.
The presented air terminal device could be used for VAV systems that use wall-mounted elements to distribute air, or in large buildings, such as production halls, that have an average height over 6 m. In these cases, the risk of draught would be eliminated, and the system with the ATD could improve the air quality and maintain thermal comfort for occupants. Further studies should be undertaken to eliminate the possibility of draught close to the ATD so that it can be used in a broader spectrum of VAV systems. Additionally, when applying the ATD to different applications, it should be adapted to the conditions in the installation, as they may vary from those in this study.
TIP60 is required for tumorigenesis in non-small cell lung cancer

Abstract

Histone modifications play crucial roles in transcriptional activation, and aberrant epigenetic changes are associated with oncogenesis. Lysine acetyltransferase 5 (TIP60, also known as KAT5) is reportedly implicated in cancer development and maintenance, although its function in lung cancer remains controversial. Here we demonstrate that TIP60 knockdown in non-small cell lung cancer cell lines decreased tumor cell growth, migration, and invasion. Furthermore, analysis of a mouse lung cancer model with lung-specific conditional Tip60 knockout revealed suppressed tumor formation relative to controls, but no apparent effects on normal lung homeostasis. RNA-seq and ChIP-seq analyses of inducible TIP60 knockdown H1975 cells relative to controls revealed the transglutaminase enzyme TGM5 as downstream of TIP60. Investigation of a connectivity map database identified several candidate compounds that decrease TIP60 mRNA, one of which suppressed tumor growth in cell culture and in vivo. In addition, TH1834, a TIP60 acetyltransferase inhibitor, showed comparable antitumor effects in cell culture and in vivo. Taken together, suppression of TIP60 activity shows tumor-specific efficacy against lung cancer, with no overt effect on normal tissues. Our work suggests that targeting TIP60 could be a promising approach to treating lung cancer.

KEYWORDS: artemisinin, KAT5, lung cancer, TGM5, TIP60

| INTRODUCTION

Lung cancer is the leading cause of cancer-related deaths in the USA and worldwide. Despite the marked development of treatments such as targeted therapy or immunotherapy in the past two decades, more than 100,000 Americans died from lung cancer in 2021. 1

Conversely, TIP60 also functions in p53 activation, contributing to the induction of apoptosis 5-7 and is required for expression of KAI1, a tumor suppressor in prostate cancer. 8 Thus, TIP60 activity appears to be context dependent, and aberrant lysine acetyltransferase activity can either promote or suppress tumorigenesis in colon, breast, and prostate cancers. 8-11 Here, given that a function for TIP60 in lung cancer is also controversial, we investigated TIP60 function using in vitro cell culture and in vivo lung cancer models and searched for TIP60 effectors. We found that TIP60 serves as a coactivator to promote tumorigenesis in the context of lung cancer and that targeting TIP60 could serve as a treatment for lung cancer.

| MATERIALS AND METHODS

The materials and methods are described in Data S1.

| RESULTS

| Low levels of TIP60 expression are required for tumorigenesis in lung cancer

First, we examined the TIP60 expression level in various human organs using GTEx datasets and found that TIP60 is ubiquitously expressed at various levels in multiple organ systems, including the lung (Figure 1A). We then investigated TIP60 expression in various NSCLC lines: three (H1975, HCC827, and PC9) that harbor activating EGFR mutations, three (H358, H460, and A549) that harbor KRAS mutations, and one each that harbors a BRAF mutation (H1395), the EML4-ALK fusion gene (H3122), or a ROS1 fusion gene (HCC78), as well as BEAS-2B, an immortalized human bronchial epithelial cell line. Western blot analysis showed that all lines expressed four TIP60 splice variant isoforms at various levels, although six out of nine NSCLC lines expressed lower levels of TIP60 than did BEAS-2B cells (Figure 1B). To confirm these findings in clinical samples, we analyzed the data from the TCGA lung adenocarcinoma dataset.
Comparative analysis revealed significantly lower TIP60 expression in lung adenocarcinoma relative to normal lung tissues (Figure 1C), suggesting that TIP60 serves as a tumor suppressor in lung cancers. To test this hypothesis, we selected H1975 and A549 cells (each harboring EGFR or KRAS mutations and both expressing lower TIP60) and established TIP60-overexpressing (OE) lines. Immunoblot analysis and qPCR confirmed TIP60 overexpression in H1975-OE and A549-OE cells (Figure 2A and Figure S1A); however, contrary to our hypothesis, TIP60 OE did not decrease cell growth of either line (Figure 2B,C). Furthermore, TIP60 OE had no significant effect on migration and invasion in H1975 (Figure 2D,F) and A549 cells (Figure 2E,G). Different splice variant isoforms also had no significant effects on tumor growth (Figure S1B-F), suggesting that TIP60 does not function as a tumor suppressor in lung cancer.

Next, to investigate the effects of TIP60 silencing in cell culture, we tried to generate TIP60 knockout cells in H1975 using CRISPR-Cas9 but obtained no homozygous knockout cells (data not shown). Thus we generated H1975 or A549 cells that harbored DOX-inducible small hairpin RNA (shRNA) targeting TIP60. Doxycycline treatment of both lines specifically suppressed TIP60 expression at both the mRNA (Figure S2A,B) and protein levels (Figure 3A,B) in two different shRNA clones of each (shTIP60-1 and shTIP60-2). We then observed that, following DOX treatment and consequent TIP60 suppression, H1975 and A549 cells (each harboring shTIP60-2) significantly decreased cell growth relative to controls not treated with DOX (Figure 3C,D). To assess migration activity in these lines, we performed a wound healing assay and found that TIP60 knockdown significantly decreased H1975 cell migration by day 1 (Figure 3E) and A549 cell migration by day 3 (Figure 3F). Furthermore, TIP60 knockdown significantly suppressed the invasive activities of H1975 and A549 cells relative to controls not treated with DOX (Figure 3G,H). Analysis of both lines using the corresponding shTIP60-1 clones showed comparable cell growth, migration, and invasion activities (Figure S2C-E).

To confirm that the enzymatic activity of TIP60 is required for tumorigenesis, constructs of wild-type (WT) TIP60, TIP60-G380A (an inactive mutant, mut), 12,13 or an empty vector were transiently transfected into TIP60 knockdown cells (H1975 shTIP60-2 under DOX treatment). Immunoblot analysis confirmed continuous suppression of TIP60 with DOX treatment in empty-vector-transfected cells, and TIP60 overexpression in TIP60-WT- or TIP60-mut-transfected cells (Figure S3A). Wild-type TIP60 increased the acetylation of histone H4 (Figure S3A), resulting in a significant increase in tumor cell growth, migration, and invasive activities, whereas TIP60 carrying the inactive mutation had no effect on either acetyltransferase activity or tumor growth (Figure S3A-D). Interestingly, knockdown of TIP60 expression in BEAS-2B cells showed no effect on cell growth and migration activities (Figure S4A-C).
Taken together, these results indicated that lung cancers show relatively low TIP60 expression, which is essential for their survival, and that further suppressing TIP60 acetyltransferase activity antagonizes their tumorigenicity.

| Tip60 knockout inhibits tumorigenesis in mouse lung cancer

To assess the effects of TIP60 silencing in vivo, we generated a mouse lung cancer model with lung-specific conditional Tip60 knockout (Figure 4). Tip60 knockout mice showed no tumor formation in the lung (Figure 4E). These results indicate that TIP60 expression is required for lung tumorigenesis.

| Identification of candidate TIP60 targets by RNA-seq and ChIP-seq

Next, to identify TIP60 effectors, we performed RNA-seq and ChIP-seq in shTIP60-1, shTIP60-2, and control H1975 cells. Analysis for DEGs revealed a large number of both downregulated and upregulated genes in H1975-shTIP60 cells relative to control cells, with more genes being downregulated (Figure 5A,B). Furthermore, hierarchical clustering analysis of DEGs showed consistent expression changes in H1975-shTIP60-1 and shTIP60-2 relative to control cells (Figure 5C). TIP60 reportedly acetylates multiple lysine residues on histone H4; 4 thus we used an anti-pan-acetyl H4 antibody for ChIP-seq. As shown in Figure 5D and Figure S6A, we identified 13 overlapping genes whose expression and histone H4 acetylation signal were both reduced by TIP60 suppression. Among them, we first focused on TGM2, a member of the transglutaminase family, which is reportedly associated with tumor growth or poor patient prognosis in colorectal carcinoma, glioblastoma, and pancreatic cancer; 17-19 however, TGM2 knockdown had no effect on tumor growth in H1975 cells (data not shown). Thus we next focused on TGM5, which is also a member of the transglutaminase family. H4 acetylation levels at the TGM5 gene significantly decreased in both H1975-shTIP60-1 and -2 cells relative to controls (Figure 5E), suggesting that TGM5 is a target of TIP60. We next examined the TCGA database to perform Kaplan-Meier survival analysis of lung adenocarcinoma patients. The median overall survival of the patients in the high TGM5 score group was significantly shorter than that of the patients in the low TGM5 group.

| Targeting TIP60 suppresses tumor progression in lung cancer

To identify compounds that might inhibit TIP60 expression, we used a connectivity map 20 and detected five high-scoring compounds (Figure 6A). We investigated the TIP60-inhibiting efficacy of these compounds and found that artemether inhibited TIP60 expression relative to the other compounds in both H1975 and A549 cells (Figure S7A). Artemether is a derivative of artemisinin, a natural product derived from the Chinese herb Artemisia annua L. that is widely used as an antimalarial drug. 21 Artemisinin downregulated TIP60 expression at lower concentrations than artemether in H1975 and A549 cells (Figure 6B and Figure S7B); thus we selected artemisinin for further experiments. Dose-dependent downregulation of TIP60 expression was observed in H1975 and A549 cells, but TIP60 downregulation was less potent in BEAS-2B cells (Figure 6B). Interestingly, BEAS-2B cells showed a significantly higher IC50 for artemisinin than did H1975 or A549 cells (Figure 6C).

FIGURE 4. Tip60 knockout lung cancer model mice show no tumor formation in the lung. (A) PCR of genomic DNA to confirm the genotypes of homozygous Tip60 knockout (F/F), heterozygous Tip60 knockout (F/wt), or wild-type (wt/wt) mice, as well as the EGFR transgene: I, CCSP-rtTA/Cre/Tip60 F/wt; II, EGFR TL/CCSP-rtTA/Cre/Tip60 wt/wt; III, EGFR TL/CCSP-rtTA/Cre/Tip60 F/wt; and IV, EGFR TL/CCSP-rtTA/Cre/Tip60 F/F.
(B, C) Appearance and weight of lungs in each mouse group: I (n = 14), II (n = 13), III (n = 24), IV (n = 14). Data are the mean ± SD. The p-value was calculated using one-way ANOVA followed by the Tukey-Kramer multiple-comparison test. **p < 0.005. ns, not significant. (D, E) Hematoxylin and eosin staining (D) and magnetic resonance imaging (E) in the mouse groups indicated above. Note that no tumors are seen in Tip60 knockout mice. Images are representative, and white arrows indicate tumors. Scale bars: 200 μm.

Accordingly, the MTS assay revealed significant inhibition of H1975 and A549 cell viability by artemisinin treatment compared with that of BEAS-2B cells (Figure 6D). Furthermore, artemisinin treatment significantly inhibited cell growth, migration, and invasive activities relative to cells treated with DMSO in H1975 and A549 cells (Figure S7C-H).

Next, to determine whether artemisinin treatment induces cell death by suppressing TIP60, we treated H1975-OE, A549-OE, or corresponding control cells with artemisinin and assayed Caspase-3/7 activity as an apoptotic marker. Artemisinin treatment decreased TIP60 expression in both control and OE cells, although TGM5 expression was inhibited only in controls (Figure 6E). Caspase-3/7 activity was increased by artemisinin treatment in H1975 and A549 control cells (Figure 6F), while the upregulation of Caspase-3/7 activity in response to artemisinin treatment seen in control cells was significantly attenuated in H1975-OE and A549-OE cells (Figure 6F). These results suggest that artemisinin treatment induces NSCLC cell apoptosis.

Next, we generated mouse xenograft models by injecting A549-TIP60 OE or control (Cntl) cells subcutaneously into nude mice. After tumors reached optimal volume (100-200 mm³), we administered artemisinin at 200 mg/kg or vehicle orally once daily for 4 weeks. Artemisinin treatment did not alter body weight in either mouse line (Figure S8A). However, artemisinin treatment significantly inhibited tumor growth in mice bearing A549-Cntl xenografts relative to vehicle-treated mice, whereas the antitumor efficacy of artemisinin treatment was less potent in mice implanted with A549-OE cells (Figure 6G,H).

To demonstrate that TIP60 acetyltransferase activity is required for tumorigenesis, we treated lung tumors with TH1834, a TIP60 acetyltransferase inhibitor, 22 in cell culture and in vivo. TH1834 treatment suppressed histone H4 acetylation and TGM5 expression in a dose-dependent manner in H1975 and A549 cells (Figure 7A). We next treated H1975 and A549 cells with TH1834 and observed a significant inhibition of cell growth, migration, and invasive activities in both cell lines (Figure 7B-F). Lastly, to investigate the effects of TH1834 in vivo, we used xenograft mouse models injected with A549 cells. After tumors reached optimal volume (100-200 mm³), we administered TH1834 at 10 mg/kg or vehicle intraperitoneally five times per week for 3 weeks. 23 Body weight remained unchanged by TH1834 treatment (Figure S8B). TH1834 treatment significantly inhibited tumor growth in mice bearing A549 tumors relative to vehicle-treated mice (Figure 7G,H). Taken together, these findings suggest that targeting TIP60 could have an antitumor effect in the context of lung cancer.

| DISCUSSION

In this study, we showed that TIP60 knockdown in an in vivo model of lung cancer inhibits lung tumor formation and progression. We also reveal that TGM5 is downstream of TIP60 and contributes to tumor progression in lung cancer.
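As a methodological aside, IC50 comparisons such as the one in Figure 6C are conventionally obtained by fitting a four-parameter logistic (Hill) curve to viability data. The sketch below illustrates that fit on invented placeholder data; it is not the study's dataset or analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch only: fitting a four-parameter logistic (Hill) curve to
# dose-response viability data to extract an IC50. The concentrations and
# viabilities below are invented placeholders, not the study's measurements.
def hill(c, top, bottom, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** slope)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)   # uM, assumed
viab = np.array([98, 95, 88, 70, 45, 22, 10], dtype=float)     # % viability

(top, bottom, ic50, slope), _ = curve_fit(
    hill, conc, viab, p0=[100.0, 0.0, 50.0, 1.0], maxfev=10_000
)
print(f"fitted IC50 ~ {ic50:.0f} uM (Hill slope {slope:.2f})")
```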
Furthermore, artemisinin or TH1834 treatment suppressed tumor growth in cell culture and in vivo, suggesting that targeting TIP60 might be a novel treatment for lung cancer (Figure 8). 24 TIP60 has a sequence distinct from that of GCN5 and acetylates different histones, 26 suggesting that TIP60 rather than GCN5 is crucial for lung tumorigenesis. Here, we showed that TIP60 was expressed at low levels in multiple lung cancer lines and in clinical lung tumor tissues relative to expression seen in normal tissues, initially suggesting that TIP60 serves as a tumor suppressor. However, TIP60 overexpression neither inhibited tumor progression nor induced tumor cell apoptosis. In fact, TIP60 knockdown in H1975 and A549 cells inhibited cell growth, migration, and invasion activities, which is consistent with a previous report. 27 In contrast, another paper reported antitumor activity of TIP60 in lung cancer cells. 28 Our findings are consistent with previous studies suggesting that some cancer cells require low TIP60 levels for survival and with the observation that eliminating already low TIP60 protein levels induces cancer cell death. 27,29

Interestingly, homozygous Tip60 deletion had no overt effect on normal lung tissues, although Tip60 loss is lethal for embryogenesis and hematopoietic stem cell maintenance. 15,30 These results suggest that TIP60 function is context dependent and critical for lung cancer cells but not for normal lung cells. Furthermore, despite evidence that TIP60 serves as a coactivator, TIP60 overexpression did not promote tumor progression. Assuming that the minimal expression required for TIP60 activity is set at a lower point in lung cancer and has already reached a plateau, TIP60 overexpression might have no effect on tumor progression. Indeed, 50% loss of TIP60 (via heterozygous knockout) still exceeded the threshold level for TIP60 activity, resulting in tumorigenesis, whereas loss of all TIP60 expression (via homozygous knockout) completely inhibited tumor formation in mouse lungs. It has been previously reported that reducing PU.1 expression to 20%, but not 50%, of normal levels promoted the development of acute myeloid leukemia, 31 indicating graded control of tumor formation by specific molecules. Of note, the relative difference in TIP60 expression levels between normal (higher) and tumor (lower) cells indicates that TIP60 suppression could have tumor-specific efficacy against lung cancer.

Our ChIP-seq and RNA-seq analyses revealed TGM5 to be downstream of TIP60 and a potential contributor to tumor progression and migration capacity. In addition, involucrin, which is cross-linked to membrane proteins by transglutaminase, and TGM2 are also among the 13 genes found to be downstream of TIP60, suggesting that transglutaminases play specific roles in TIP60 regulatory activities. Transglutaminase family members are cross-linking enzymes that catalyze the formation of isopeptide bonds between the γ-carboxamide group of protein-bound glutamine and the ε-amino group of lysine residues, 32 and they function in cell adhesion, differentiation, and signal transduction. 33,34 However, TGM5 function in cancer remains unclear, whereas TGM2 is known to promote tumor cell differentiation, mobility, invasion, and survival. 33
To efficiently inhibit TIP60, we used a connectivity map database and selected artemisinin. We report that artemisinin treatment inhibited TIP60 expression and tumor progression. TIP60 regulates transcription of various genes, including NF-κB, MYC, and CCND1, 39-41 all of which are also blocked by artemisinin and associated with its antitumor activities, 42-44 suggesting that artemisinin exerts multiple anticancer effects via TIP60 regulation. However, artemisinin is not an ideal and specific compound for clinical suppression of TIP60, in part because high concentrations are required to inhibit TIP60 expression. To improve its bioavailability and efficacy, several derivatives have been synthesized, among them dihydroartemisinin and artesunate. In early-phase clinical trials, combination therapies of artemisinin derivatives with chemotherapy showed safety and high efficacy, 45,46 although the data were limited and larger-scale clinical trials are needed. Artemisinin may also have antitumor activity independent of TIP60 inhibition. Accordingly, we utilized TH1834, a TIP60 acetyltransferase inhibitor, to treat lung tumors in cell culture and in vivo. Tumor growth was suppressed, although a high concentration was still required for treatment. Therefore, a novel drug that specifically targets TIP60 is urgently needed. Recently, the degradation of targeted proteins using PROTACs has emerged as a promising therapeutic modality. PROTACs, which consist of three parts (a ligand of the protein of interest (POI), a ligand of an E3 ubiquitin ligase, and a linker), induce ubiquitylation and subsequent proteasomal degradation of the POI. 47 As PROTACs can target epigenetic proteins 48 or undruggable targets such as DNA-binding proteins (e.g., transcription factors), 49 the generation of a PROTAC targeting TIP60 could be an attractive approach to lung cancer treatment.

In summary, we show here that TIP60 is required for the malignant transformation of pulmonary epithelial cells. Understanding this mechanism is significant, as it could lead to novel and much-needed therapy for lung cancer.

ACKNOWLEDGMENTS

We thank all members of the Kobayashi laboratory for helpful discussions.

FUNDING INFORMATION

This work was supported by NIH CA240257 (HW, SSK), CA197697 (DGT), and CA218707 (DBC).

DATA AVAILABILITY STATEMENT

The RNA-seq and ChIP-seq data are available from the NCBI GEO database (GSE207202 and GSE207201). All other data are available in the main text or the supplementary materials.
INVESTIGATION OF IRON MEDIATED MOLECULAR TRANSFORMATIONS AND SYNTHESIS OF 2-AMINO-α-CARBOLINE AND ITS ANALOGUES RELEVANT FOR DNA ADDUCT FORMATION

The discovery of new synthetic methods and routes for making new bonds is one of the highest incentives to an organic chemist. Innovative, efficient, effective, economical, and exceptionally environmentally friendly synthetic methods are being continuously discovered. Transition-metal-catalyzed C-H activation has been unearthed as one of these methods, especially iron-catalyzed direct C-H activation. Fundamental to making more discoveries is understanding the mechanism behind a specific reaction. To this end, the mechanism behind iron-catalyzed C-H activation was investigated. Results indicate that the intermediary species for the iron catalyst is an Fe(I) species, but the attempt to synthesize this Fe(I) species failed. We also confirmed that Fe(II) is definitely not the active species for this reaction. The nitrogen-based ligand might not have afforded us the Fe(I) complexes, but there was evidence of the reduction of the Fe(acac)3 due to the formation of the biphenyl product.

In the same vein of discovery, we identified a mild, one-pot, FeCl2-mediated procedure to produce 3-substituted allylic alcohols from α,β-unsaturated ketones. The addition of an organolithium nucleophile produced a tertiary allylic alcohol as an intermediate, which underwent a 1,3-OH migration assisted by FeCl2. The proposed mechanism indicates that a syn-facial migration occurs for the major product, and we obtained yields as high as 98% from this one-pot reaction.

New synthetic methods are also very beneficial to the world outside chemistry. The study of the carcinogen 2-amino-α-carboline (AαC) and its interaction with DNA involved the synthesis of the AαC-DNA adduct. We report the synthesis of AαC and 2-nitro-α-carboline (AαC-NO2), which also facilitate the synthesis of DNA adducts for biophysical studies. An attempt was made to synthesize the fluorinated 2-nitro-α-carboline, which would enable the use of 19F NMR for monitoring these studies. Cyclization of the coupled product for the fluorinated analogue could not be achieved due to the electron deficiency in the ring systems. AαC was obtained in a good yield of 88%, and the AαC-NO2 was obtained in 60% yield.

Understanding how iron works is important because it has been established as a very versatile metal, able to be used in various chemical processes. A chemical process of interest is tertiary allylic alcohol rearrangement, an applied method by which certain molecules, especially drugs, have been synthesized. 3 Iron, an inexpensive first-row transition metal, has not been used for tertiary allylic alcohol rearrangement, though other rare and first-row transition metals, like chromium 4 and rhenium, 5 have been used. This prompted the investigation into using iron for this important rearrangement.

Finding new synthetic routes is also beneficial to the biological world. The effects of carcinogens on human cells have been studied extensively, and these studies have focused on the interactions of the carcinogen with DNA. To facilitate these studies, there is the need for the synthesis of DNA adducts of these carcinogens, a research frontier identified through previous studies. 6
It was found to be cost effective to synthesize the carcinogen of interest to be used for the synthesis of the DNA adducts needed for these biophysical studies, hence the need to find quick, efficient, and effective methods to synthesize these carcinogens.

WHY C-H ACTIVATION?

Synthetically, the direct functionalization of a C-H bond into either a C-C or a C-heteroatom bond would be of great advantage to the synthesis world, especially the pharmaceutical and fine chemical industries. Over a long period of time, organic chemists have explored ways to make molecular transformations, for example through the use of nucleophiles such as Grignard reagents 7-12 and organocuprates. 13 In recent times, the innovative methods being explored involve direct functionalization of the C-H bond. 1 The need for direct C-H activation arises primarily to eliminate the need for prefunctionalization of the C-H bond into halides or other leaving groups. By exploring direct C-H functionalization, the organic chemist adds to the body of sturdy and reliable methods already available. In the discussion of green and sustainable chemistry, direct C-H activation also eliminates a number of problems related to waste. Direct C-H activation also saves time, money, and effort, which is a key advantage for industry. 1,2

Iron Catalyzed C-H Activation

Until recently, the most widely used transition metals for direct C-H activation had been limited to the precious 4d and 5d transition metals like palladium, 47,48 rhodium, 49 iridium, 50 and ruthenium. 51 Most recent interest has been in iron, 52 cobalt, 1 and nickel, 53 mainly due to the availability and relatively low cost of these metals compared to the precious metals. Iron, compared to nickel and cobalt, is also very inexpensive and nontoxic. The advantages in the use of iron spurred an intense investigation into its versatility. It is interesting to note that these iron-catalyzed reactions sometimes give better yields than their palladium counterparts, 54 as shown in Scheme 1.

The mechanism of iron-catalyzed cross-coupling reactions has been studied extensively compared to that of direct C-H activation. 11,55-58 The multifaceted nature of the electronic structure of iron makes the study of its intermediates in reaction cycles quite an exasperating task. Even though there are a number of physical methods that can be used to study or monitor these iron-catalyzed C-H bond transformations, the intrinsic mechanism of the iron species in the cycle has not been identified yet. There are a couple of working theories, but without direct probing of the iron species generated in situ, there are restrictions on how these theories can be applied. So far, two general mechanisms of how the iron activates the substrate have been proposed: (1) a low-valent iron complex undergoes oxidative addition into a C-H bond of the substrate to form the organoiron complex; (2) the iron complex deprotonates the substrate before the formation of the Fe-C bond, also known as direct deprotonation metalation. 59

Building on the pioneering work done by Nakamura et al., 54 and in a study to extend the scope of their reaction conditions to biaryl heterocycles, since these heterocycles form the basis for many biological and pharmaceutical molecules, the iron-catalyzed ortho-directed arylation of heterocycles such as pyridines, furans, and thiophenes was investigated, with good yields, as shown in Scheme 1.4. The reaction gave complete conversion of starting material in 15 minutes and yields as high as 88%.
Investigations led to findings that questioned the mechanism proposed by Nakamura. We observed that the Fe(II) species proposed by Nakamura et al. was not present, because the use of Fe(acac)2 did not yield any product. A reduced imine was detected as a byproduct; this implied a hydride was being formed during the reaction, which can only be introduced into the reaction if an Fe(I)-H species is formed. This led to an investigation into finding the intermediary species for this reaction. This will be discussed further in Manuscript 1.

Scheme 1.4: Iron catalyzed direct arylation of heterocycles 54

1,3-TERTIARY ALLYLIC ALCOHOL REARRANGEMENT

Iron as a metal is very versatile. 56 It has been shown to be comparable to palladium and, in some cases, a better metal catalyst. 68 Iron complexes have been found to be better Lewis acids in the Mukaiyama aldol condensation reaction of silyl ketene acetals and aldehydes. The iron complexes had better turnovers and stability to moisture and air, and they were able to give good enantioselective aldol additions. 69 Iron-catalyzed Negishi reactions were also found to be robust and highly reproducible, and they made studies of the reaction mechanism manageable. 41 In view of these trends, can iron outperform other known systems?

Allylic rearrangements involve the movement of a double bond in an allylic system, usually facilitated by the addition of a nucleophile or an oxidant. The most practically known ones are the Cope and Claisen rearrangements, which are shown in Scheme 1.5. 70-73 3,3'-Disubstituted allylic alcohols are often difficult systems to synthesize, though they are crucial in the synthesis of many natural products.

Scheme 1.5: Cope and Claisen rearrangement

The most common methods of synthesizing these disubstituted molecules involve nucleophilic additions using organometallic reagents to propargyl alcohols and conjugate addition of nucleophiles to ynones. 74-78 Findings have shown that migration of the hydroxy group in tertiary allylic alcohols can help synthesize some of these complex and difficult systems, but, until recently, not much attention had been given to this use of the tertiary alcohol 1,3-rearrangement.

Uses of Tertiary Allylic Alcohol

Iron has been involved in a few substitution reactions via π-allylic systems, 99-102 but the use of iron for the 1,3-rearrangement has not been explored yet. We explored iron for this allylic rearrangement, considering that other transition metals have long been used for this same type of rearrangement. It was interesting to discover that iron can break and functionalize strong C-O bonds, a unique synthetic characteristic that most conventional transition metals lack. We reported a one-pot, iron-mediated 1,3-rearrangement of tertiary allylic alcohols. This involved the addition of organolithium reagents to α,β-unsaturated ketones, which then undergo a 1,3-rearrangement in the presence of FeCl2 (Scheme 1.8). We believe the mechanism for this novel reaction involves a cleavage of the C-O bond of the tertiary allylic alcohol and the intermediacy of an allylic cation. This is discussed further in Manuscript 2.

SYNTHESIS OF 2-AMINO-α-CARBOLINE AND ITS ANALOGUES FOR DNA ADDUCT BIOPHYSICAL STUDIES

A large number of HAAs have been found to be more carcinogenic and more mutagenic than well-known carcinogens like nitrosamines, aflatoxin B1, and benzo[a]pyrene. 108 Studies have also shown that these HAAs are implicated in many cancers, such as those of the lung, breast, colon, stomach, and prostate. 109-112
Alpha-carboline, a subgroup of the HAAs, has been found as the core structure of some natural products, 113,114 as shown in Figure 1.2. Alpha-carbolines have a structure similar to indoles and carbazoles and have served as building blocks in medicinal chemistry 115 and optoelectronic materials. 116 They were isolated and identified several years ago and were found to be mutagenic against Salmonella typhimurium TA 98 (TA 98). They differ from their isomers, β-carboline and γ-carboline, 117 in structure and activity. The latter carbolines had been identified to exhibit anticancer, antimalarial, and antidopamine 118 effects; thus, interest in the therapeutic potential or mutagenicity of α-carboline intensified. Studies show a direct correlation between cancer development and the consumption of cooked meat. 105 AαC has also shown antitumor activity towards glioblastoma multiforme, but unfortunately, exactly how the HAA interacts with DNA to cause mutations is understudied. How HAAs are metabolized in vivo has already been reported. 104 The reactive intermediate is the N-acetoxyamine, which is easily transferred to DNA by the N-acetyltransferases found in vivo. Synthetically, the best way to convert an amine to an N-acetoxyamine is by first converting the amine to a nitro group, then partially reducing the nitro group to a hydroxylamine, before acetylating the hydroxylamine. There has been a report, though, of the use of a polyaniline-nanofiber-supported FeCl3 to acetylate an alcohol or amine to an acetoxy group. 122 To study the biophysical interaction between AαC and DNA and how this leads to mutation, we looked into the synthesis of AαC and then 2-nitro-α-carboline (AαC-NO2).

Synthesis of Alpha-carboline

Various methods have been reported for the synthesis of AαC (Scheme 1.9). These include a modified Graebe-Ullmann reaction, intramolecular Diels-Alder reaction, annulation of substituted benzenes and pyridines, photocyclization, and transition-metal-catalyzed cross coupling.

a. Graebe-Ullmann reaction

This coupling reaction was discovered in the early 1900s, but it has since been modified to aid the synthesis of α-carbolines.

Total synthesis

Total synthesis involves reacting the most reactive form of the AαC with guanine, one of the nucleic acid bases, before it is reacted with a sugar, followed by the addition of a phosphate group. The resultant nucleotide then undergoes automated oligonucleotide synthesis. The advantage of this route is the selectivity in terms of building a specific nucleotide sequence. It eliminates the worry of having to find exactly where the AαC bound to the DNA. The disadvantage of this method is that it is expensive, time-consuming, and involves many steps.

Biomimetic

The biomimetic approach involves the reaction of the reactive form of the AαC with a nucleotide sequence. The advantages of this route are that it is quick and requires fewer resources and fewer steps. The disadvantage, however, is knowing the exact location of the adduct. Should there be more than one guanine in the sequence, the AαC will react with all of them, making detection and monitoring quite cumbersome and difficult. The best method to suit our aim was the modified Graebe-Ullmann method, since it has fewer steps and the starting materials are readily available; this is discussed further in Manuscript 3.
The use of iron for catalysis dates as far back as 1940 2 and 1970, 3 and these iron-catalyzed reactions are ideal because iron is economically sustainable due to its abundance, 4 biologically nontoxic, 5 and inexpensive. The reactions have also been proven to be efficient and easily scalable. Iron-catalyzed reactions have also been found to be robust and highly reproducible, and they are known to be 'green'; iron is easily processed, and its environmentally benign waste 6 is appealing to the pharmaceutical and fine chemical industries.

Even though this discovery of iron's reactivity and its use for cross-coupling reactions was made before the use of palladium- 7-9 and nickel-based catalysis, 10 Nakamura and co-workers were the first to report a direct C-H bond functionalization at low temperature (0 °C), which they discovered during a cross-coupling reaction between diphenylzinc and 2-bromopyridine. 16 Optimization of the reaction conditions resulted in arylation of the ortho hydrogen on phenylpyridines, phenylpyrazoles, phenylpyrimidines, and benzoquinoline. These ortho hydrogens were susceptible to the transformation regardless of the electronic effect of the substituents. 17 Other groups then researched the mechanism of iron-catalyzed C-H activation. 18-20 It should also be noted that iron has been found to activate not just C(sp²)-H, but also C(sp)-H and C(sp³)-H bonds, using Fe complexes to form Fe-C bonds in the presence or absence of directing groups. 21,22 The proposed mechanism for this iron-catalyzed ortho-directed arylation is shown in Scheme 2.2.

Scheme 2.2: Nakamura's proposed mechanism for iron catalyzed ortho directed arylation

As the interest in iron-catalyzed direct C-H activation increased, the inquiry into the mechanism also began. In this respect, two general mechanisms of how the iron activates the substrate have been proposed: a low-valent iron complex undergoing oxidative addition into a C-H bond to form the organoiron complex, or the iron complex deprotonating the substrate before forming the Fe-C bond, also known as direct deprotonation metalation. 23 These proposed mechanisms involve either a double electron transfer or a single electron transfer between the substrate and the iron. Unfortunately, the active Fe species involved in these reactions depends on many conditions, such as the ligand, the type of Grignard reagent (nucleophile) used, and the presence or absence of a β-hydride on the nucleophile. 24

The key step of interest was the reduction of Fe(acac)3 by a Grignard reagent. The active Fe species is dependent on the type of Grignard reagent used. Nakamura used an aryl Grignard reagent, which has been shown to form clusters of anionic Ph3Fe(II) or Ph4Fe(III) in THF. 25 This implies the presence of both Fe(II) and Fe(III) species in solution. Computational studies also confirmed the Fe(II)/Fe(III) system but added the additional information of the presence of an Fe(I) species after the C-C bond formation through reductive elimination. 26 Nakamura also proposed the main intermediary species to be an Fe(II)/Fe(III) system for a similar C-H activation using organoborons. 27 Though the Fe(II)/Fe(III) system seems to explain many of these iron reactions, the fast reaction rates and side products have led others to propose other mechanisms, including an Fe(I)/Fe(III) system, 28,29 Fe-oxo species, 30,31 and Fe(−I) species 32 for other Fe-catalyzed processes.
Our group's previous work 33 expanded Nakamura's ortho-directed arylation of arenes containing directing groups to include azole and thiophene heterocyclic substrates (Scheme 2.3). The investigation led to the conclusion that Fe(III) catalysts, preferably Fe(acac)3, were required for the reaction. Importantly, the Fe(II) catalyst precursor, Fe(acac)2, did not catalyze the reaction. The by-product obtained was the reduced imine (7), implying a hydride was being produced in the catalytic cycle. The use of the radical scavenger TEMPO did not affect the reaction, implying the catalysis did not proceed through radicals and eliminating the single-electron-transfer mode of activation. High amounts of Grignard reagents had to be used to account for the high amounts of biphenyl produced, even though the additive KF was added to reduce the homo-coupling, as shown in Scheme 2.3.

Scheme 2.3: Iron catalyzed direct ortho arylation of arylamine.

RESULTS AND DISCUSSION

This led us to propose a mechanism that involves an Fe(III)/Fe(I) system (Scheme 2.4a). The deprotonation of the hydrogen happens as the Fe binds to the carbon to form species (13), which then undergoes reductive elimination to give product (14). The ligand dtbpy is added to maintain the Fe(I) species (15).

The imine ((E)-N,1-diphenylethan-1-imine) and arylamine product ((E)-N-([1,1'-biphenyl]-2-yl)-1-phenylethan-1-imine) were synthesized using the methods from previous work. 31

Synthesis of Imine

To an oven-dried 50 mL RBF with a stir bar were added 20 g of 3 Å molecular sieves and 30 mL of toluene. The aniline (60 mmol) and acetophenone (50 mmol) were then added to yield an orange-red color.

Synthesis of Fe(I) species (4)

The Fe(II) complex (0.02 mmol) was dissolved in 1.5 mL of anhydrous toluene and cooled to −40 °C; then the Grignard reagent, tolylMgBr (0.06 mmol), was added dropwise, and the mixture was stirred for 20 minutes. The mixture was then allowed to warm to room temperature over 40 minutes under nitrogen. The solvent was reduced to 2/3 of its volume, and the mixture was cooled to −20 °C in order to obtain red crystals.

Investigation of Iron Catalysts: For GC/MS Analysis

An oven-dried Schlenk vial was evacuated under vacuum three times, with the tube backfilled with nitrogen intermittently. The iron catalyst (0.055 mmol) and 1,2-bis(diphenylphosphino)benzene (dppbz, 0.118 mmol) were added to the Schlenk vial. Then the PhMgBr Grignard reagent (0.168 mmol) was added over 15 min. After 15 min, 0.1 mL of the reaction mixture was taken into a GC vial, and 990 µL of EtOAc was added for GC/MS analysis.

Investigation of Iron Complexes for Direct C-H ortho Arylation

The imine (0.55 mmol), chlorobenzene (2 mL), iron complex (0.055 mmol), and dppbz (0.118 mmol) were added sequentially to an oven-dried Schlenk vial. The vial was evacuated under vacuum three times, with the tube backfilled with nitrogen intermittently, before the addition was done. The mixture was then cooled to −78 °C for 15 minutes. The Grignard reagent (0.168 mmol) was added over 15 min, and the reaction was then allowed to warm to room temperature. 0.1 mL of the reaction mixture was taken into a GC vial containing 990 µL of EtOAc and used for GC/MS analysis.

3,3-Disubstituted allylic alcohols are more difficult to synthesize than their mono-substituted counterparts. Though methods such as the conjugate addition of nucleophiles to ynones and the addition of organometallic reagents to propargylic alcohols have been described, 9-13 one method that has received little attention is the 1,3-migration of the hydroxy group in tertiary allylic alcohols.
This transformation has been catalyzed primarily by oxo catalysts, 14-16 though oxidative palladium catalysis has also been employed to perform the migration and oxidize the allylic alcohol to a β-disubstituted enone. 17 This form of rearrangement can also be carried out by enzymes to form enones. 2 Rhenium-assisted rearrangement has likewise been used in the synthesis of semisquarates. 18 Trifluoroacetic acid has also been used to isomerize allylic alcohols, and this method has been applied to the synthesis of valerenic acid, which binds to both the GABAA and 5-HT5A receptors and is used as a treatment for insomnia. 19 Acid-assisted allylic alcohol rearrangement was also used in the synthesis of two quinolone natural products isolated from Pseudonocardia sp. 20 Additionally, one can envision that this rearrangement could be used to create artemisinin-like antimalarial drugs via Singh's synthetic route (Scheme 3.1). 21 Iron has recently been studied as a catalyst for a number of coupling reactions. [12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28] The most widely used of these processes are alternatives to traditional palladium-catalyzed transformations, such as the Kumada coupling and C-H arylation and alkylation reactions. 22,29,30 Even a few examples of substitution reactions via π-allyl iron intermediates have been reported. [31][32][33][34][35] Scheme 3.1: Application of allylic 1,3-migrations Iron is preferable to late-transition-metal catalysts because of its low cost and toxicity. Additionally, iron catalysts are often capable of performing synthetic steps that conventional late-transition-metal catalysts cannot, such as breaking and functionalizing strong C-O bonds. Herein, we describe a one-pot addition of an organolithium reagent to an α,β-unsaturated ketone, followed by an iron-mediated 1,3-rearrangement reaction. We propose that this novel reaction proceeds via the formal cleavage of a C-O bond and the intermediacy of an allylic cation. RESULTS AND DISCUSSION During the course of our studies on iron-mediated reactions involving organolithium and organomagnesium nucleophiles, 30,36 we discovered that the addition of an organolithium reagent to an α,β-unsaturated ketone in the presence of an iron salt does not give the expected 1,4-addition products; rather, an isomeric allylic alcohol (2) is formed. Cyclohex-2-enone (1) was chosen as a reliable and simple substrate for reaction optimization (Table 3.1). A number of iron(II) and iron(III) salts were screened, of which FeCl2 proved to be the most efficient reagent; in general, iron(III) salts were inferior to iron(II) salts. Diethyl ether (with or without BHT stabilization) was the most suitable solvent for the rearrangement, while the other common ether solvents, THF and Me-THF, produced low or no yields of 2. Colder temperatures (entry 10) allowed the formation of the 1,2-addition product, 3, but hindered the formation of the desired rearranged product 2. The reaction proceeded in both inhibited and purified Et2O, indicating that the BHT preservative was not involved in the reaction and that commercial, stabilized ether was a suitable solvent for the addition/migration; however, the addition of one full equiv of BHT hindered the reaction (entry 11). Reduction of the FeCl2 loading from one equiv to 10 mol% resulted in an 8% yield (entry 13), indicating that the reaction required a stoichiometric amount of the iron reagent.
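For readers keeping track of the loadings discussed above, catalyst loading is simply the mole ratio of iron salt to limiting substrate. The short helper below is purely our own illustration (the 1.00 mmol scale is a made-up example echoing the 1 equiv vs. 10 mol% comparison in the text, not a quantity from the study):

def equivalents(reagent_mmol, substrate_mmol):
    """Equivalents of a reagent relative to the limiting substrate."""
    return reagent_mmol / substrate_mmol

substrate = 1.00  # mmol of cyclohex-2-enone (1), hypothetical scale
for fecl2 in (1.00, 0.10):
    eq = equivalents(fecl2, substrate)
    print(f"FeCl2 {eq:.2f} equiv = {100 * eq:.0f} mol%")

# Prints:
# FeCl2 1.00 equiv = 100 mol%
# FeCl2 0.10 equiv = 10 mol%

In other words, "one equiv" of FeCl2 corresponds to a 100 mol% (stoichiometric) loading, which the entry 13 result shows is required here.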
The scope and limitations of the addition-rearrangement with respect to both linear and cyclic α,β-unsaturated ketone substrates were also investigated (Table 3.2). Cycloalkenones were found to be the best substrates, with cyclohex-2-enone (1) giving the highest yield. Linear α,β-unsaturated ketones (entries 3-5) produced only the 1,2-addition product. a Aryllithium was synthesized via lithium-halogen exchange from the aryl bromide and butyllithium. We then investigated the scope and limitations of organolithium and Grignard nucleophiles (Table 3.3). Alkyllithium reagents did not yield any product, presumably because of their basicity. The naphthyllithium reagent produced only the 1,2-addition product 15, indicating that the 1,3-migration may be inhibited by steric hindrance. Only trace amounts of the product were obtained from a Grignard reagent (entry 6). We also investigated the feasibility of this method with heterocyclic aryllithiums, but we obtained only trace amounts of products 20 and 22. To probe the mechanism of the reaction, the biphenyllithium reagent was added to 1 and the mixture was stirred for 3 hrs. Purification by flash chromatography gave 3. FeCl2 (1 equiv) was then added to a solution of 3 (1 equiv), and the reaction was stirred at room temperature for 3 hrs, providing 2 after column chromatography (Scheme 3.2). This confirmed that the tertiary allylic alcohol (3) is an intermediate in the formation of 2. In a diastereoselectivity experiment (Scheme 3.3), the major product was the trans allylic alcohol. This indicates that the 1,3-migration proceeds primarily via a syn-facial pathway, since less energy is needed for the iron-oxo species to approach from the same face of the allylic cation than to approach from the face opposite the methyl group, giving the syn product. Based on these data and the previous work by McCubbin, 37 we propose the mechanism shown in Scheme 3.4. The organolithium reagent reacts with the α,β-unsaturated ketone to give the tertiary alkoxide (1,2-addition). FeCl2 coordinates to the alkoxide (28) and LiCl is formed. The iron-oxo species (29) cleaves the C-O bond, forming an allylic carbocation (30), and the iron-oxo species then attacks the 3-position of the allylic cation, forming a new C-O bond. The major product of this process arises from syn migration of the iron-oxo species. We hypothesize that the intimate ion pair (30) could explain the formation of both diastereomers (26 and 27) and the preference for the trans isomer (26). DFT calculations indicate that both the trans (26) and cis (27) rearranged products have similar ground-state energies, so the observed 2:1 selectivity likely arises from kinetic control; a 2:1 ratio corresponds to a difference in transition-state energies of only RT ln 2, about 0.4 kcal/mol at room temperature. When aryllithium nucleophiles are used, the final allylic alcohol is conjugated, which we believe is the overall driving force for the reaction. Scheme 3.3: Diastereoselective OH-migration Finally, the extent of the OH-migration was investigated (Scheme 3.5). Phenyllithium was added to a solution of the conjugated dienone, 32, and after an hour FeCl2 was added. The mixture was then stirred at room temperature for 24 hours, producing 33, a 1,2-addition product, and 34, the product of a 1,5-OH migration. The 1,3-migration product (35) was not observed, likely because it is less conjugated than 33 or 34. This result corroborates the proposed OH-migration mechanism and confirms that the overall driving force for the reaction is the creation of an extended conjugated system. CONCLUSION In summary, we have developed a novel iron-mediated process that isomerizes allylic alcohols.
The system can be used to effect the transformation of cyclic α,β-unsaturated enones to 3,3-disubstituted allylic alcohols. Future work in this field could involve enhancing the diastereoselectivity of the process and applying it to the synthesis of medicinal compounds. EXPERIMENTAL SECTION All reactions were carried out in oven-dried glassware under a nitrogen atmosphere unless stated otherwise. Yields refer to chromatographically and spectroscopically pure compounds unless stated otherwise. 1 (1.0 equiv) was added. The reaction was allowed to run at room temperature overnight. Silica was then added to the reaction. Purification by flash column chromatography using hexane and ethyl acetate provided the corresponding desired 1,3-rearranged allylic alcohols. Studies have shown the presence of AαC in the urine of smokers, 4 and its concentration in tobacco smoke is even higher than that of 4-aminobiphenyl (4-ABP), a well-known human bladder carcinogen. 5 AαC has also been found to cause lung cancer in mice 6 and to be mutagenic toward Salmonella typhimurium. 7 It is interesting to note that the isomer of α-carboline, β-carboline, has been found to have anti-cancer, 8 anti-malarial, and anti-dopaminergic activity. α-Carboline, though, was not known for these properties until a recent study showed α-carboline to be an antitumor agent against glioblastoma multiforme. 9 Figure 4.1: Natural products with AαC as a backbone structure 2-Amino-α-carboline (AαC) was isolated and identified from soybean several decades ago and has been found to be the second most consumed HAA, with a daily dietary intake of 5 ng/kg/day; 10 nevertheless, there is not enough biophysical data on its interaction with DNA. Though no definitive correlation between AαC and tumors in humans has been established, studies in animals 6 found a direct correlation between consumption of cooked meat and cancers. 11 How AαC interacts with DNA and causes the resultant mutations remains to be unraveled, which makes the biophysical study of the mechanism through which AαC causes these mutations crucial. A factor that needed to be considered in synthesizing fluorinated AαC was a means to detect the DNA adduct formed after synthesis, since 1H NMR is mostly not useful for small or minor DNA conformers. 19F NMR is considered a powerful tool to help detect 23,24 and track the molecule of interest; hence, fluorinated analogues of AαC were also synthesized. The conventional means to form an N-acetoxyamine from an arylamine is to partially reduce a nitro group to a hydroxylamine before acylation; thus, nitro-α-carboline (AαC-NO2) had to be synthesized to initiate the synthesis of the DNA adduct. 25 The oxidation methods employed included Oxone and acetone in the presence of a base (NaHCO3) 10 (Scheme 4.7) and the use of Na2WO4 with H2O2 in MeOH, though the latter gave lower yields than the former and purification was difficult. This method involves concerted metalation-deprotonation (CMD), or a double C-H activation. Though this proposed scheme has many advantages, including inexpensive starting materials and fewer steps, many challenges were faced during the experimental work. The Goldberg coupling gave low yields of the desired product 12 due to the formation of side products. Cyclization was also not achieved, owing to the absence of electron-donating groups on the rings. In order to increase the efficiency of this pathway, the modification shown in Scheme 4.9 can be made.
DNA ADDUCT REACTION Our biomimetic synthesis of the DNA adduct imitates the pathway of HAA metabolism as explained above. The AαC-NO2 is partially reduced to the hydroxylamine. The hydroxylamine will be acetylated using acetic acid or trifluoroacetic acid to form the very reactive N-acetoxy species, as shown in Scheme 4.10. Scheme 4.10: Biomimetic pathway for synthesis of DNA adduct In order to obtain the reactive N-acetoxy-α-carboline, the AαC-NO2 will be partially reduced using Pd/C and hydrazine to obtain the hydroxylamine, which is then acetylated with acyl chloride to obtain the N-acetoxy-α-carboline, as shown in Scheme 4.11. Scheme 4.11: Synthetic pathway to form N-acetoxy-α-carboline CONCLUSION In summary, we have synthesized 2-amino-α-carboline (AαC) and 2-nitro-α-carboline (AαC-NO2) in high yields. Future work will involve synthesizing the fluorinated analogue and the DNA adduct for studies of the thermodynamic stability and other biophysical parameters of the adduct, to ascertain how the interaction of the carcinogen AαC with DNA leads to mutation. EXPERIMENTAL SECTION All reactions were carried out in oven-dried glassware under a nitrogen atmosphere unless stated otherwise. Yields refer to chromatographically and spectroscopically pure compounds unless stated otherwise.
Interlink between the gut microbiota and inflammation in the context of oxidative stress in Alzheimer's disease progression ABSTRACT The microbiota-gut-brain axis is an important pathway of communication and may dynamically contribute to Alzheimer's disease (AD) pathogenesis. Pathological alterations of the commensal gut microbiota, termed dysbiosis, can influence intestinal permeability and break down the blood-brain barrier, which may trigger AD pathogenesis via redox signaling and neuronal, immune, and metabolic pathways. Dysbiosis increases oxidative stress. Oxidants affect the innate immune system through the recognition of microbial-derived pathogens by Toll-like receptors, initiating the inflammatory process. Most gut microbiome research highlights the relationship between the gut microbiota and AD, but the contributory connection between precise bacteria and brain dysfunction in AD pathology has not been fully demonstrated. Here, we summarize the current information on the fundamental connections between oxidative stress, inflammation, and gut dysbiosis in AD. This review emphasizes the involvement of the gut microbiota in the regulation of oxidative stress, inflammation, and immune responses, including central and peripheral cross-talk, and it provides insights for novel preventative and therapeutic approaches in AD. Introduction Alzheimer's disease (AD) is the most common cause of neurodegenerative dementia. Cerebral extracellular amyloid β (Aβ) aggregation and intracellular neurofibrillary tangle (NFT) formation are the primary histological hallmarks of AD. 1 Nowadays, AD is a foremost global health affliction. The global prevalence of dementia rises exponentially with age: at 60-64 years it is around 0.7-1.8%, and over 90 years it goes up to 63.9% worldwide. 2 In 2021, about 6.2 million Americans were affected by AD; this number is projected to rise to nearly 13.8 million by 2060. Although death rates from stroke, heart disease, and HIV declined between 2000 and 2019, the AD death rate unfortunately increased by around 145%. During the COVID-19 pandemic, the AD death rate increased by 6% in the USA. 3 In 2021, the total cost of dementia-related healthcare was estimated at around $355 billion in the USA. 3 However, the precise biological changes of AD, the reasons for the different rates of progression among affected individuals, and how AD can be prevented, slowed down, or stopped are still largely mysterious. Therefore, studying the mechanism of AD pathogenesis and finding new treatment strategies for the prevention and cure of AD is one of the most important challenges to be tackled in AD research. Numerous lines of evidence support oxidative stress as one of the important causes of cell damage in AD. 4,5 The abundance of oxidative products that alter the major histopathology is increased in the AD brain. 6 Oxidative stress is mostly generated by reactive oxygen species (ROS) and reactive nitrogen species (RNS), and extreme accumulation of ROS induces neuronal damage. [4][5][6] Clearing or inhibiting surplus ROS/RNS from the brain may therefore be a fruitful treatment for AD. Oxidative stress will be discussed in more detail in the sections below. Recent evidence suggests that age-related attenuation of gut microbiota biodiversity is an important contributor to AD pathogenesis. 7,8 Gut microbiota can regulate multiple neurochemical pathways through the "microbiota-gut-brain axis".
"microbiota-gutbrain axis" states to a bidirectional network communication between the central nervous system (CNS) and the gastrointestinal (GI) tract connecting various overlying pathways such as the autonomic, neuroendocrine, and immune systems including bacterial metabolites and neuromodulator molecules directly affecting the brain function ( Figure 1). Although the microbiota-gut-brain axis helps the appropriate function of the digestive tract, it also controls the biochemical signals between the sympathetic nervous system, endocrine glands, and specific brain regions such as the hypothalamus and the frontal cortex. 11 Gut microbiota communicates to the brain through four significant routes; first, vagus nerve activation which joins the muscular and mucosal layer of the GI tract to the brain stem; secondly, secretion of serotonin from enterochromaffin cells (EC) which are present in the gut epithelial lining; thirdly, dysfunction of microglia; and last direct transfer of the chemical signals (toxins, short-chain fatty acids, γ-aminobutyric acid, etc.) to the brain. [12][13][14] These four routes may work in combination to progress Aβ signaling cascade from the gut to the brain. (a) Schematic representation of bidirectional communication between gut and brain through "microbiota-gut-brain axis". The communications are mainly carried out by neural, endocrine, and immunological pathways. (b) in the lumen gut microbiota, microbial-derived metabolites such as short-chain fatty acids (SCFAs), neurotransmitters, amino acids, and bacterial amyloid interacts with the host immune system. These interactions affect the host metabolism and may activate the vagus nerve. Therefore these interactions are key to maintaining the overall health of the host. During dysbiosis, unfavorable conditions may cause the activation of corticotropin receptor, subsequently triggering adrenocorticotrophic hormone release and finally influencing cortisol release which leads to the loss of intestinal barrier integrity. 9,10 As a result, intestinal and blood-brain barrier permeability is increased. Due to increased permeability, there is an increase in reactive oxygen species (ROS) in the neurons and microglia which may cause oxidative stress in neurodegenerative diseases such as Alzheimer's disease. GI tract encloses trillions of commensal microorganisms and~1000 of its species which regulate a variety of metabolic functions and preserve membrane barrier functions mainly in the gut. 9 Within two years after birth, the community of microbes stabilizes the host GI tract, but depending upon peripheral factors such as age, diet, health, genetics, lifestyle, and environment differ their composition among individuals. 10 Furthermore, neurotransmitter γ-aminobutyric acid (GABA) can be produced by some beneficial gut microbiota like Bacteroides, Bifidobacterium, Parabacteroides, and Escherichia spp. Therefore, gut microbiota controls the level of neurotransmitters in the host organism. 15 With increasing age, pathological changes in the gut microbiota composition, termed as dysbiosis leads to inflammation, disrupts the blood-brain barrier (BBB), activates the immune system, and produces ROS and this is also seen in AD circumstances 16 ( Figure 1). Increasing abundance of Escherichia, Shigella spp., Bacteriodetes, and decreasing population of Bifidobacterium spp. with disturbed bacteriodetes versus firmicutes ratio might quicken inflammation and Aβ aggregation in AD. 
Therefore, a healthy, well-balanced gut microbiome may reduce or prevent the detrimental effects of oxidative stress in AD. Here, we summarize the current research on the association between oxidative stress and the gut microbiota in AD. In a nutshell, this review provides evidence of the interlink between the gut microbiota and inflammation in the context of oxidative stress and of the role of gut-microbial metabolites in AD. Oxidative stress in AD Oxidative stress is defined as the imbalance between ROS/RNS and antioxidant levels in cells. It disrupts redox signaling pathways and contributes to microglial dysfunction in AD. 19 ROS are mainly formed as secondary products of the leaky electron transport chain (ETC) (complexes I and III) in mitochondria. 4 In addition, monoamine oxidases (MAO) (present in the outer mitochondrial membrane), 20 an isoform of nitric oxide synthase (NOS) (present in neurons), 21 NADPH oxidases (NOX) (present in the plasma membrane and phagosomes of polymorphonuclear neutrophils, and abundant in the cortex and hippocampus regions of the brain), 22 non-heme iron enzymes such as lipoxygenases (present in the cytoplasm), 23 and xanthine oxidase, cytochrome P450 monooxygenase, cyclooxygenase, and D-amino oxidase (present in the cytoplasm) 24 are important ROS producers. Peroxisomes also contribute to ROS formation. 25 Oxidative stress is augmented by the formation of superoxide (O2•−, generated by one-electron reduction of molecular oxygen, O2), hydrogen peroxide (H2O2), peroxynitrite (ONOO−), and hydroxyl radicals (•OH), the latter produced by the Fenton and Haber-Weiss reactions in AD. The multivalence of transition metals such as iron, copper, aluminum, and zinc aids free radical formation, causing oxidative stress. Lipid peroxidation, protein oxidation, nucleic acid damage, and the formation of advanced glycation end-products (AGEs) are the four main reactions that cause cellular damage under oxidative stress. 4 Oxidative stress biomarkers such as malondialdehyde, 4-hydroxynonenal, and F2-isoprostane (lipid oxidative damage), protein carbonyls and 3-nitrotyrosine (products of protein oxidation), and 8-hydroxydeoxyguanosine (nucleic acid oxidation) are observed at high concentrations in the blood and cerebrospinal fluid (CSF) of AD patients. 26 In brief, dysfunction of cellular organelles such as mitochondria 27 and the endoplasmic reticulum due to the unfolded protein response (UPR), 28 accumulation of metal ions in neuritic plaques, 29 and hyperactivation of microglia followed by upregulation of NADPH oxidase 30 are all characterized by ROS/RNS production in AD. 31 Oxidative stress can accelerate the aggregation of Aβ and vice versa. 4,26 Additionally, hyperphosphorylated tau proteins may reduce the activity of the NADH-ubiquinone reductase enzyme, increasing ROS production and mitochondrial dysfunction in AD. 32 Therefore, the relationship between oxidative stress and AD (Figure 2) is well documented in previous studies, which indicates the importance of the antioxidant defense system in the brain. Oxidative stress: the role of gut microbiota By modulating mitochondrial activity, commensal and pathogenic bacteria can change cellular oxidative stress in the gut. 33 Formylated peptides produced by commensal bacteria bind to G protein-coupled receptors (GPCRs) on macrophages and neutrophils and trigger inflammation. 34,35 As a result, superoxide is produced by NOX-1, which increases cellular ROS. 36
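For reference, the iron-driven radical chemistry invoked earlier in this section is conventionally summarized by the Fenton reaction and the iron-catalyzed (net) Haber-Weiss cycle; the equations below are standard textbook chemistry rather than material from the review itself:

\mathrm{Fe^{2+} + H_2O_2 \rightarrow Fe^{3+} + OH^- + {}^{\bullet}OH} \quad (\text{Fenton})
\mathrm{O_2^{\bullet-} + Fe^{3+} \rightarrow O_2 + Fe^{2+}}
\mathrm{O_2^{\bullet-} + H_2O_2 \xrightarrow{Fe} O_2 + OH^- + {}^{\bullet}OH} \quad (\text{net Haber-Weiss})

The net reaction shows why superoxide and hydrogen peroxide, each only moderately reactive on its own, together sustain production of the far more damaging hydroxyl radical whenever redox-active iron is available.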
In addition, nitrate and nitrite compounds can be converted to nitric oxide (NO) by the gut Lactobacilli and Bifidobacterium, creating a high abundance of NO in the gut epithelia. NO is also produced from L-arginine via NOS by Streptococcus and Bacilli. 37 Although nanomolar concentrations of NO are considered neuroprotective and serve as a neurotransmitter of nonadrenergic, noncholinergic enteric neurons, higher concentrations of NO produce ROS/RNS, which further form hydroxyl radicals, resulting in detrimental effects on neuroinflammation, axonal degeneration, and neurodegenerative disorders. 38 Furthermore, Salmonella typhimurium, E. coli, Mycobacterium, and Streptococcus anginosus can produce hydrogen sulfide from sulfur-containing amino acids (e.g. cysteine) in the GI tract by sulfur metabolism. At high concentrations of hydrogen sulfide, cyclooxygenase activity is inhibited, which shifts metabolism toward glycolysis, resulting in reduced mitochondrial oxygen consumption and ATP production and overexpression of pro-inflammatory effects (Figure 2). 39,40 Additionally, dietary trimethylamine N-oxide (TMAO) accelerated oxidative stress by reducing superoxide dismutase levels, increasing malondialdehyde and glutathione peroxidase, and exacerbating inflammation through the production of pro-inflammatory cytokines such as IL-6, IL-10, IL-1β, and TNF-α in the plasma and liver of male apolipoprotein E knock-out (ApoE−/−) mice. 41 Figure 2: Exogenous sources such as toxins or radiation may cause ROS/RNS production. ROS production may accelerate amyloid beta (Aβ) aggregation and tau hyperphosphorylation in Alzheimer's disease. Streptococcus and Bacilli increase RNS production, whereas Salmonella and E. coli inhibit ATP production, which accelerates oxidative stress and inflammation and thus may significantly influence amyloid-beta aggregation in AD. Oxidative stress and inflammation in AD Oxidative stress and inflammation are intimately connected in pathophysiological conditions in which redox homeostasis is interrupted. 42 Growing evidence indicates that inflammation/neuroinflammation is a significant contributor to AD development and exacerbation. 42,43 Inflammation in the brain is mainly mediated by microglial and astroglial states. Microglia have an important role in brain development and function. Deposition of Aβ increases neuronal injury and inflammation by triggering microglial activation in AD. Activated microglia release pro-inflammatory cytokines such as IL-6, TNF-α, and IL-1β, which regulate inflammation. 44 Interestingly, IL-1β regulates the amyloid precursor protein, the parent protein of Aβ. 45 CSF IL-1β is higher in AD patients than in age-matched controls. 46 Through activation of the p38 mitogen-activated protein kinase and glycogen synthase kinase-3β pathways, highly concentrated IL-1β accelerates tau hyperphosphorylation and NFT formation in the triple transgenic AD mouse model. 47 Additionally, microglia have a significant role in Aβ clearance and degradation. 48 Over time, the phagocytic efficiency of microglia gradually declines, reducing Aβ clearance and proteolysis 49 and increasing Aβ deposition and aggregation. 50 This creates an overproduction of pro-inflammatory cytokines, which triggers more ROS production and AD onset. 19 Therefore, microglia are among the important non-neuronal cells involved in plaque formation and the initiation of oxidative stress in the AD brain.
Microglial degeneration can occur through triggering receptor expressed on myeloid cells 2 (TREM2), CX3C motif chemokine receptor 1 (CX3CR1), GABA, and other inflammatory cytokine mediators. Expression of TREM2 on the microglial cell surface increases phagocytic activity and causes ROS production. 51 During phagocytic activity, microglia produce ROS such as O2•−. Active microglia also generate O2•− via the NOX-2 pathway. H2O2 and NO• also promote local inflammation by attracting microglia involved in oxidative stress, 19 which may drive AD progression. Accumulating evidence shows that astrocytes are intensively involved in maintaining oxidative stress under physiological or pathological conditions. 52,53 Astrocytes play a dual role in maintaining the homeostasis of ROS/RNS regulation. 52 Under physiological conditions, astrocytes act as neuroprotectors of the CNS against oxidative injury: various antioxidants are secreted, endogenous antioxidative systems such as nuclear factor E2-related factor 2 (Nrf2) are stimulated, excitatory amino acids are removed, neurotransmitters are taken up and metabolized, energy and neurotrophins are produced, and finally ROS/RNS are degraded. 52,54 Under pathophysiological conditions, by contrast, astrocytes are triggered by stimulation from microglial activation and neuronal degradation. As a result, excessive ROS/RNS are produced from impaired mitochondria, antioxidant production is reduced, and pro-inflammatory cytokines are elevated, which leads to the detrimental effects seen in AD. Aβ and pro-inflammatory cytokines can trigger astrocytes, causing activation of the NF-κB pathway, whereupon more pro-inflammatory cytokines and chemokines are produced. 55 Astrocytes are the vital cells that control glutamate homeostasis, which indirectly maintains oxidative stress. 56 In pathological circumstances, excessive glutamate is secreted from the pre-synaptic membrane and collects in the synaptic cleft, which may allow a large influx of Ca2+ by over-activating N-methyl-D-aspartate (NMDA) and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors. This results in calcium overload and ROS production in mitochondria and causes neurotoxicity. 57 Oligodendrocytes, the myelin-forming cells that myelinate neuronal axons, contribute to increased oxidative stress in the CNS. Oligodendrocytes have a lower glutathione content and a higher iron content, which makes them more susceptible to oxidative stress and damage. 58 Oligodendrocytes are also exposed to Aβ, leading to oxidative stress and resulting in demyelination. 59 Demyelination reduces neuronal action potential conduction, with increased inflammation and oxidative stress, thereby contributing to cognitive impairment in AD. Oxidative stress and inflammation: role of the gut microbiota Gut microbiota plays numerous roles in the host: improving the immune response, protecting against pathogen colonization 60 at the intestinal epithelial barrier, 61 and participating in the production/regulation of oxidative stress and inflammation. 62 Preclinical 63 and clinical studies 64,65 show that modifications of the gut microbiota are connected with AD progression. Colonization by Gram-negative bacteria such as E. coli and Shigella leads to a surge in the formation of bacterial amyloids and lipopolysaccharides (LPS), which causes peripheral systemic inflammation, resulting in dysfunctional GI permeability and impaired BBB function. 66,67
Dysbiosis causes an increase in pathogenic bacteria such as Escherichia, Shigella, Pseudomonas, Proteobacteria, and Verrucomicrobia and reduces beneficial bacteria, e.g. Bifidobacterium, Bacteroides fragilis, Bacillus fragilis, Eubacterium hallii, Eubacterium rectale, and Faecalibacterium prausnitzii. 18 Gut microbial metabolites like indole-3-pyruvic acid and short-chain fatty acids (SCFAs) decrease progressively from mild cognitive impairment (MCI) to AD patients. 68 The abundance of SCFA-producing bacteria such as Clostridia, Clostridiales, Ruminococcaceae, Firmicutes, and Ruminococcus declines, which indicates AD progression through host-microbe cross-talk signals. 68 Other studies revealed that Blautia, Desulfovibrio, Escherichia-Shigella, and Akkermansia are markedly altered in APP/PS1 transgenic mice. 69,70 A high abundance of Enterotype I and III bacteria is connected with the incidence of dementia. 71 The levels of Bifidobacterium, Blautia, Lactobacillus, and Sphingomonas were higher than those of Anaerobacterium, Papillibacter, and Odoribacter in AD patients. 72 Additionally, the abundance of Firmicutes, Proteobacteria, Tenericutes, Enterobacteriaceae, Coriobacteriaceae, Mogibacteriaceae, Phascolarctobacterium, and Coprococcus was higher in MCI patients than in age-matched controls. 73 The abundances of Firmicutes, Actinobacteria, E. rectale, B. fragilis, and Fusobacteriaceae were decreased, whereas the levels of Bacteroides, Tenericutes, E. coli, B. subtilis, Escherichia, Shigella, Lactobacilli, Bifidobacteria, Prevotellaceae, and Verrucomicrobia were increased in the feces compared with respective control groups in both AD animal models and AD patients. 74 A higher abundance of fungi such as Phaffomyceteceae, Sclerotiniaceae, Cystofilobasidiaceae, Togniniaceae, Trichocomaceae, Botrytis, Cladosporium, Kazachstania, and Phaeoacremonium and a lower abundance of Meyerozyma were found in MCI patients. 75 For example, dietary amines can be metabolized to trimethylamine by the gut microbiota, whereas the liver, upon nutrient absorption, converts trimethylamine to TMAO. Trimethylamine availability can be influenced by gut dysbiosis. Increased levels of TMAO can promote the upregulation of inflammatory cytokines that raise oxidative stress in AD. 76 TMAO was found to be elevated in the cerebrospinal fluid of AD and MCI patients. 77 Therefore, beneficially modifying the gut microbiota can play a positive role in reducing ROS through the production of SCFAs such as butyrate, whereas dysbiosis may accelerate systemic inflammation, microglial activation, and BBB damage via elevated TMAO, further promoting AD progression. 78 Interlink between oxidative stress, inflammation, and TLRs in AD Toll-like receptors (TLRs) are type 1 transmembrane pattern recognition receptors composed of an extracytoplasmic leucine-rich repeat (LRR) domain, a single membrane-spanning helix, and a signaling Toll-interleukin-1 receptor (TIR) domain. Based on their spatial distribution, TLRs are mainly divided into two groups (see the sketch below). One group (TLR1, TLR2, TLR4, TLR5, TLR6, and TLR11) is found on the plasma membrane and can be activated by microbial products, e.g. lipids, lipoproteins, and proteins. The other group (TLR3, TLR7, TLR8, and TLR9) is located in cytoplasmic compartments and can be triggered by nucleic acid species. 79
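The two-group classification just described can be restated compactly as a lookup structure. The Python sketch below is our own illustration of the grouping given in the text; the names and structure are not part of the review:

# TLR classification as summarized in the text above (illustrative only).
TLR_GROUPS = {
    "plasma_membrane": {
        "receptors": ["TLR1", "TLR2", "TLR4", "TLR5", "TLR6", "TLR11"],
        "activated_by": ["lipids", "lipoproteins", "proteins"],
    },
    "cytoplasmic_compartments": {
        "receptors": ["TLR3", "TLR7", "TLR8", "TLR9"],
        "activated_by": ["nucleic acid species"],
    },
}

def localization(tlr):
    """Return where a given TLR is found, per the grouping above."""
    for place, info in TLR_GROUPS.items():
        if tlr in info["receptors"]:
            return place
    return "unknown"

# localization("TLR9") -> "cytoplasmic_compartments"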
TLRs stimulate downstream signal transduction by identifying microbe-derived pathogens through damage- and pathogen-associated molecular patterns (DAMPs and PAMPs) and initiate inflammation. Firstly, TLRs change their conformation upon binding PAMPs or DAMPs and then dimerize to engage downstream signaling adaptors such as myeloid differentiation primary response protein 88 (MyD88), TIR domain-containing adaptor molecule (TIRAP), TIR domain-containing adaptor protein inducing interferon-β (TRIF), and TRIF-related adaptor molecule (TRAM), which stimulate specific transcription factors and innate immune responses. 80 Except for TLR3, MyD88 is a common adaptor protein for all TLR-mediated signaling pathways. 81 Upon TLR engagement, the MyD88-dependent pathway can activate downstream signaling through phosphorylation of IκB at Ser32 and Ser36, causing its proteasomal degradation. As a result, free nuclear factor kappa B (NF-κB) translocates to the nucleus and activates target genes that produce pro-inflammatory cytokines such as TNF-α, IL-1, and IL-6 and oxidant enzymes such as NOX and iNOS, which increase ROS levels. ROS accelerate NF-κB activation by enhancing IKK phosphorylation, either by initiating protein kinase D (PKD) or by obstructing protein phosphatase type 2A (PP2A). 82 In addition, TLRs also promote ROS production within mitochondria and activate NOX, either by direct communication at the cell membrane or by increasing the phosphorylation of its p47phox subunit within the cytoplasm. 82 As a result, the intracellular ROS level is increased, which helps mobilize and dimerize TLRs and amplifies TLR responses (Figure 3). Therefore, TLR-mediated inflammation plays a significant role in oxidative stress. Although 11 human and 13 mouse TLRs have been identified, recent evidence indicates that TLR2, TLR4, and TLR9 are mainly involved in AD onset (Table 1). [80][81][82]101 Although the function of TLR2 in AD is controversial, some researchers believe that TLR2 can identify Aβ42 and trigger the secretion of pro-inflammatory cytokines (e.g. TNF-α, IL-6, and IL-1). 83,84 It was found that decreased levels of Aβ and higher toxicity of Aβ1-42 can cause more cognitive decline in TLR2 knockout APP/PS1 mice. 85 It has been previously documented that uptake of Aβ42 by microglia is significantly increased by activation of TLR2 expression. 86 Recently, we found that the integrity of the gut barrier was markedly decreased in Tg2576 AD mice, which might increase the bacterial amyloid (curli) burden in the gut before the appearance of Aβ deposition in the brain. 87 Curli stimulated TLR2 activation and co-localized with the neuroendocrine marker PGP 9.5 within the epithelium and sub-mucosa of the gut in AD mice, 87 indicating vagus nerve activation by bacterial curli. Therefore, activation of TLR2 can stimulate AD pathogenesis, 87,88,92,102 which suggests that inhibition of TLR2 in the periphery might be beneficial for AD. In-vivo and in-vitro studies also showed that phagocytosis and Aβ clearance in microglia were increased by TLR2 deficiency. 89 Secretion of pro-inflammatory cytokines and Aβ accumulation were reduced by the inhibition of TLR2, which improved spatial learning performance in AD mouse models. 90,91 TLR4 also plays a double role in AD pathogenesis. Jin et al. found that cytokines such as IL-1β, IL-10, IL-17, and TNF-α were increased in TLR4 mutant Mo/Hu APPswe PS1dE9 transgenic mice (TLR4M AD mice), indicating that TLR4 signaling may be involved in AD pathogenesis. 93 Song et al.
also observed that TLR4 mutation decreased microglial activation and accelerated Aβ deposition in AD mice, suggesting that microglial TLR4, which mediates Aβ-induced neurotoxicity, increases the clearance of Aβ deposits in the brain. 94 In addition, Qin et al. found that neuroinflammation could support neuronal autophagy, indicating that mild TLR4 stimulation attenuates AD-related tauopathy in tau-transgenic AD mice. 95 Go et al. found that TLR4 signaling in microglia was changed in the AD mouse model (TgAPP/PS1) and suggested that this alteration of TLR4 signaling might help in understanding Aβ accumulation in the brain. 96 In addition, recent evidence indicated that activation of TLR9 signaling can defend neurons from stress, 97 polymorphism of TLR9 may reduce the risk of AD, 98 and TLR9 knockout mice show impaired synaptic function. 99 The use of TLR9 agonists in AD mice caused the levels of Aβ aggregation and tau hyperphosphorylation to decline, which might improve cognitive deficits. 100 Therefore, immunomodulation through TLR9 may act as a probable therapeutic approach for AD but needs further investigation. Oxidative stress and the Nrf2-Keap1 pathway in AD The central cellular defense mechanism against oxidative stress is regulated by the nuclear factor E2-related factor 2 (Nrf2)-Kelch-like ECH-associated protein 1 (Keap1) signaling pathway. [103][104][105] Under homeostatic conditions with a low abundance of ROS, Keap1 binds Nrf2, which is then ubiquitinated by cullin-dependent E3 ubiquitin ligases and removed by proteasomal degradation. Under stimulation by elevated ROS levels, Nrf2 detaches from Keap1, translocates into the nucleus, binds to antioxidant response elements (ARE), and finally promotes the expression of downstream antioxidant enzyme genes (GCLC, GCLM, HO-1, and NQO1), which reduces oxidative stress in AD. 106 Numerous lines of evidence report that Nrf2 expression levels are progressively reduced in the brain with increasing age, leading to poorer immediate clearance of the resulting ROS from the cytoplasm. [107][108][109] Consequently, oxidative damage and synaptic structural damage occur in neuronal cells, which is one of the primary contributors to AD. Current accumulating evidence indicates that activation of the Nrf2-Keap1 pathway ameliorates oxidative stress in AD (Figure 4). Oxidative stress and the Nrf2-Keap1 pathway: role of the gut microbiota Numerous lines of evidence suggest that cellular ROS are produced in gut epithelial cells by the catalytic action of NADPH oxidases in response to gut bacteria. 110,111 Enzymatic ROS production by NADPH oxidase 1 (Nox1) 112 in the epithelia is accelerated by pathogens, whereas symbiotic bacteria such as Lactobacilli control intestinal epithelial cell proliferation, 113 recovery post-injury, 114 and modification of epithelial NF-κB signaling. 115 Jones et al. found that Nox1 is required to activate the Nrf2 pathway for the epithelial cytoprotection that is mediated by the gut bacteria Lactobacilli in Drosophila and mice. The activated Nrf2 pathway boosts cytoprotective genes against environmental oxidative stress (Figure 4). Lactobacilli thus exert their promising influence on host gut-epithelial tissue through activation of Lactobacilli-dependent Nox-Nrf2 signaling, 116 and the Nrf2 signaling pathway initiates cytoprotection within the gut epithelial cells. 117
Crosstalk between autophagy and the Nrf2 pathway in AD Autophagy is a significant metabolic process that eliminates misfolded proteins in cells. Initiation of autophagy and autophagosome formation is another promising avenue for the treatment of AD. 107 Initiation of the Nrf2 pathway and autophagy are both helpful for reducing AD progression, and there is a reciprocal relationship between autophagy and oxidative stress. After its release from Nrf2, Keap1 binds the phosphorylated ubiquitin-binding protein p62 and finally triggers autophagy via the proteasome or lysosome pathway. 118 The degradation of p62 is controlled by autophagy under normal conditions, whereas oxidative stress activates p62 through stimulation of the p62-Keap1-Nrf2-ARE pathway. 119 The released Nrf2 induces the expression of several autophagy genes, including genes of autophagy initiation (ULK1), substrate recognition (SQSTM1 and CALCOCO2), autophagosome formation (ATG4D, ATG7, and GABARAPL1), autophagosome extension (ATG2B and ATG5), and lysosomal clearance (ATG4D), suggesting that Nrf2 can stimulate autophagy. 120,121 In addition, p62 initiates tau protein degradation by selective autophagy 122 and protects neuronal homeostasis. 123 Hence, the p62-Keap1-Nrf2 positive feedback axis can act as a neuroprotective mechanism 124 and connects the Nrf2 and autophagy pathways in AD pathogenesis 107 (Figure 4). Crosstalk between autophagy and the Nrf2 pathway: role of the gut microbiota Recent evidence has demonstrated that modulation of the gut microbiota induces potentially beneficial effects on neurochemical pathways that can slow down AD progression. 125 Upon treatment of AD mice with SLAB51, autophagic markers such as beclin-1 and LC3-II were increased and the level of p62 was decreased, indicating activation of autophagic flux. SLAB51 treatment ameliorated brain damage, with improved cognition and reduced levels of Aβ, through the partial recovery of the ubiquitin-proteasome system and autophagy pathways. 125 Téglás et al. also found that probiotic supplementation (with Bifidobacterium longum and Lactobacillus acidophilus) for twenty weeks delayed the progression of AD in 3-month-old male APP/PS1 transgenic mice, and they found that Nrf2 levels were elevated in the probiotic-supplemented mice. Probiotics were found to stimulate antioxidants such as superoxide dismutase (SOD), 126 finally promoting the cellular defense mechanism against oxidative stress in AD. Through the microbiota-gut-brain axis, Lactobacillus reuteri can promote the production and absorption of indole-3-aldehyde and indole-3-propionic acid into the brain, 127 which results in suppressed neuroinflammation and improved astrocyte activation (Figure 4). Overall, it is clear that intervention with probiotics has a promising effect on suppressing oxidative stress and improving cognition in AD. Neuroprotection: role of the gut metabolites A rising line of evidence shows that the gut microbiota and its metabolites, such as polyphenols, SCFAs, antioxidants, and vitamins, regulate many biosynthetic pathways that may have favorable or unfavorable effects on the host system (Table 2). 12 Gut microbiota can change the function of brain-derived neurotrophic factor (BDNF) through either the kynurenine pathway or the action and availability of SCFAs in the brain. 144
Hence, a potent gut microbiota is of utmost importance and can maintain neuronal health through antioxidative or anti-inflammatory pathways. The gut microbiome also regulates the permeability of metabolites across the BBB and improves intestinal barrier integrity, hindering colonization of the gut by pathogens; such colonization can reverse these protective effects. Recent studies revealed that vitamin B and vitamin K can improve neuronal health in brain development and function. 139,145 Deficiencies in vitamin B and vitamin K correlate with decline in the memory functions of AD patients. Vitamin K2 (menaquinone-4) has antioxidative properties and markedly inhibits rotenone-induced p38 activation, ROS production, and caspase-1 activity, finally re-establishing mitochondrial membrane potential. 146 Vitamin K is produced by Escherichia coli, Klebsiella pneumoniae, Propionibacterium, and Eubacterium; Bacillus subtilis and E. coli produce B2 (riboflavin); Bifidobacterium, Lactococcus lactis, and Streptococcus thermophilus produce B9 (folic acid); and Lactobacillus reuteri and Propionibacterium freudenreichii produce B12 (cobalamin). 139 There might be great benefit in exploring these bacterial interactions with the host in the context of AD. With the help of the gut bacteria, dietary amino acids (e.g. tyrosine, tryptophan, and phenylalanine) are metabolized into SCFAs, indole derivatives, neurotransmitters, organic acids, amines, and ammonia. 147 The products of tryptophan metabolism, such as tryptamine and other tryptophan derivatives, act as neuroactive molecules. 148 Indole propionic acid (an indole derivative) acts as an antioxidant, beneficially promoting a decline in neuroinflammation. 149 Metabolic products of arginine, such as agmatine, ameliorate ROS production and show therapeutic effects in CNS disorders. 150 In-vitro and in-vivo studies also showed that agmatine defends astrocytes and microglia from the detrimental effects of oxidative stress. 151 Useful metabolites like the SCFAs formed by intestinal bacteria help decrease ROS by controlling the activity of mitochondria. 12 SCFAs decrease oxidative stress by reducing microglial activation in the brain. 152 In addition, SCFAs prevent neurotoxic Aβ aggregation in AD by obstructing Aβ40/42 assembly. 153 The abundance of Bifidobacterium spp. decreases and that of Proteobacteria spp. increases with age, which might contribute to AD pathogenesis. Bifidobacterium has an important role in maintaining hippocampal plasticity and memory functions through the regulation of cholesterol levels and serum leptin levels. 154,155 Taken together, metabolites produced by the gut microbiota prevent and ameliorate oxidative stress associated with the CNS, depending upon the gut health of the individual. Metabolic and neuroactive metabolites produced by the gut microbiota have beneficial effects on host health. Gut bacteria like Lactobacilli and Bifidobacterium can produce neurotransmitters such as γ-aminobutyric acid (GABA), which can control glucose homeostasis and change behavioral activity in the host. 156 Tiwari et al. showed that synaptic plasticity in the hippocampus was changed by reduced levels of GABA, with augmentation of glutamatergic neurotransmission, in the AβPPswe-PS1dE9 mouse model of AD. 157 GABA-producing bacteria like Lactobacilli may rescue metabolic and depressive-like behavioral abnormalities in mice. 158 In addition, Cyanobacteria produce β-methylamino-L-alanine (BMAA), a neurotoxin causing cognitive impairment. 159
Therefore, the gut microbiota plays a main role in modulating ROS production in the CNS. The role of the gut microbiota in modulating host oxidative stress, both peripherally and centrally, looks very promising and needs further attention, with more evidence, to treat AD patients before symptoms occur in the brain. Conclusion There is ample evidence from our own research and from other work highlighted in this review that study of the interrelationship of the gut microbiota and the brain has led to transformative advances in neuroscience research. Gut dysbiosis acts as an important player in modulating the microbiota-gut-brain axis and may contribute to increased inflammation and accelerate amyloid β aggregation via TLR-associated signaling cascades upon AD onset. Gut microbiota can influence AD pathogenesis via increased neuroinflammation, elevated oxidative stress, dysregulated neurotransmitters, reduced SCFAs, elevated TLRs, and increased toxins; all these pathways are interlinked. In this review, we primarily explored the role of dysbiotic gut microbiota in triggering oxidative stress and inflammation. In addition, we have also focused on studies concerning the maintenance of eubiosis by inhibiting pro-inflammatory cytokines, increasing anti-inflammatory cytokines, preventing oxidative stress and inflammation, and improving bacterial metabolites with beneficial functions to improve host health. To date, most gut microbiome research mainly emphasizes the connections between the gut microbiota and AD, although the contributory relationships between specific bacteria and brain dysfunction in AD have not been demonstrated functionally. Therefore, the specific function of precise gut microbiota in AD patients remains elusive. If we could understand these interactions and molecular mechanisms in detail, then commensal gut bacteria could be utilized as targets for novel noninvasive diagnosis and future treatment strategies for AD. Furthermore, increased use of germ-free, specific gene knockout, and humanized sporadic AD animal models for the characterization of the gut microbiota and its cross-communication with the host in AD pathophysiology is needed. Targeting the potentially beneficial bacteria (as described in this review) as a novel intervention strategy may prevent or slow AD onset or counteract its development. In addition, new treatment strategies such as fecal microbiota transfer, probiotics (beneficial bacterial cocktails), and beneficial metabolite supplementation to improve brain and gut health in pre-symptomatic AD patients may have translational value. The efficacy of the gut microbiota and its effective metabolites in neurodegenerative and neuroprotective mechanisms therefore remains open for further investigation, which may provide new insights into the pathology and treatment of AD. Acknowledgments All diagrams were generated using BioRender. Disclosure statement No potential conflict of interest was reported by the author(s). Funding This work was supported by the National Institutes of Health (NIA 5R01AG070934-03 to B.P.G). Author Contributions T.K.D. and B.P.G. constructed the design and outlined the content. T.K.D. reviewed and analyzed the literature, drafted the manuscript, and prepared the figures; B.P.G. obtained the funding, edited, and supervised the writing of the review. All authors critically revised the manuscript.
All authors have read and agreed to the published version of the manuscript.
An unusual case: right proximal ureteral compression by the ovarian vein and distal ureteral compression by the external iliac vein A 32-year-old woman presented to the emergency room of Bozok University Research Hospital with right renal colic. Multidetector computed tomography (MDCT) showed compression of the proximal ureter by the right ovarian vein and compression of the right distal ureter by the right external iliac vein. To the best of our knowledge, right proximal ureteral compression by the ovarian vein together with distal ureteral compression by the external iliac vein has not been reported in the literature. Ovarian vein and external iliac vein compression should be considered in patients presenting to the emergency room with renal colic or low back pain and a dilated collecting system. INTRODUCTION The ureters are a pair of tubular structures that carry urine from the kidneys to the bladder by peristaltic movement. The ureters are about 25-30 cm long and 3 mm in diameter. In the abdominal cavity, several anatomical structures neighbor the ureters. If a complication occurs in one of the neighboring structures, it can cause an obstruction of the ureter. This includes complications related to the renal artery (1), retrocaval ureter (2), pregnancy (3), and the testicular vein (4). In addition, abdominal masses, ovarian vein syndrome (5-7), and iliac artery aneurysm (8) are also associated with ureteral obstructions. CASE REPORT A 32-year-old woman presented to the emergency room of Bozok University Research Hospital with right renal colic. Written informed consent was obtained from this patient. The medical history of the patient showed that she had had dysuria, urinary frequency, hematuria, intermittent urinary tract infections, and lower abdominal, pelvic, and right flank pain for 3 years. In addition, the woman had no family history of congenital disease. The ultrasound (US) examination showed mild dilatation of the right collecting system. No urinary stone was detected on US or on the radiographs. Intravenous pyelogram (IVP) showed mild dilatation of the right proximal ureter and pelvicalyceal structures. Multidetector computed tomography (MDCT) urography revealed marked compression of the proximal ureter on its anterolateral aspect by the right ovarian vein (Figure 1). Right ureteral dilatation (8.6 mm) and grade 1 pelvicalyceal ectasia occurred due to the compression. In addition, the right distal ureter was compressed by the right external iliac vein (Figures 2 and 3). A short-segment ureteral dilatation (9.3 mm) was observed proximal to the compression. The results of the blood chemistry tests were normal (creatinine: 0.69 mg/dL, potassium: 4.1 mEq/L, blood urea nitrogen: 16.3 mg/dL). According to the urinalysis, the number of red blood cells was 7/HPF and the number of white blood cells was 2/HPF. The required analgesia was provided. The patient's renal function was not affected, and follow-up was suggested. After 3 months, the examination showed no pain, and no recurrent urinary tract infection was observed. On US examination, mild dilatation of the right kidney was observed.
DISCUSSION First Hodgkinson, and later Southwell and Bourne, reported that increased venous pressure causes dilation of the ovarian vein; as a result, the vein compresses the ureter. Dilated ovarian veins can cause dysuria, urinary frequency and urgency, gross hematuria, renal colic, lower back pain, and lower abdominal and pelvic pain. A chronic ureteral obstruction can recur in patients, resulting in long-term suffering from the disease, the need for several surgical procedures, or the development of renal dysfunction (9). The renal function of our patient was not affected. In addition, no urinary tract infection was observed and the laboratory test results were normal. To our knowledge, a ureteric obstruction caused by the external iliac vein has not been reported in the literature. However, cases of ureteral obstruction caused by external iliac artery aneurysms have been reported (8). In addition, Singh et al. reported external iliac artery aneurysms as the cause of a ureteric obstruction in a solitary kidney in four cases (10). CONCLUSION To the best of our knowledge, this is the first report of right proximal ureteral compression by the ovarian vein together with distal ureteral compression by the external iliac vein. MDCT urography can provide information in cases of vascular compression. Ovarian vein and external iliac vein compression should be considered in patients presenting to the emergency room with renal colic or low back pain and a dilated collecting system. FIGURE 1. In late venous phase contrast-enhanced CT, axial, sagittal-oblique maximum intensity projection (MIP), and three-dimensional volume rendering (3D-VR) images show the right proximal ureteral compression (black arrow) by the right ovarian vein (white arrow). FIGURE 2. In late venous phase contrast-enhanced CT, axial and sagittal-oblique images show the right distal ureter (black arrow) compressed by the right external iliac vein (white arrow). FIGURE 3. An anatomic illustration of right ureteral compression by the right external iliac vein.
THE RUSSIAN CONSTITUTION OF 1993 AND THE CONSTITUTIONALIZATION OF FEDERAL LEGISLATION: The Constitution of the Russian Federation of 1993 provided the basis and tools for large-scale societal transformations in Russia. Still, the question of whether the results of political and socio-economic reforms are irreversible and in line with constitutional ideas and norms is open to discussion. This study investigates the temporality of the process of the "constitutionalization" of Russian law using the statistics of Federal laws and Federal constitutional laws for the period 1994-2018. The article presents the outcome of the quantitative analysis as well as a discussion of the findings involving the approaches of the legal and political sciences. The research leaves open the question of the relationship between the durability of the democratic constitution and the quality and irreversibility of democratic transformations of the social system. Monitoring the dynamics of the adoption of primary laws and laws on amendments gives evidence that even a "rigid" democratic constitution can become "elastic" with age, since its ideas and meanings can often be "stretched" to apply to current cases without the need to make any changes to existing constitutional norms. The authors propose considering the conceptual possibilities of adaptive governance theory to explain the features of modern Russian lawmaking ("adaptive lawmaking," "agile lawmaking"). Introduction This work is part of a study on the role of the constitution in large-scale transformations of society. We understand the constitution both as a basis for the transformation of political and economic regimes and as a tool for the management of societal changes. It is evident that a new constitution cannot change, in an instant, the whole existing legal framework, which was at the same time a reflection, a "creator," and a guarantor of the previous social order. The "old law" keeps functioning and directly affects social relations. Some of the old laws may be neutral for democratic change, but for the most part they are either unable to regulate new institutions and social relations or contradict the new model of society. So, for social systems under transformation, the time factor plays a crucial role. This factor affects the nature and range of possible political and legal events, 1 limits the spectrum of political and legal opportunities, and determines the choice of options for action. Also, the time factor acts on the possibility of reaching the "point of no return," the transition through which ensures the irreversibility of democratic change. For these reasons, the analysis of the dynamics of the adoption of new (primary) laws directly prescribed by the constitution, as well as the analysis of the timeliness of the abolition of the "old legislation" contrary to new principles and ideas, is of interest. This kind of research could provide evidence-based data with which to assess the quality of social, political, and economic changes boosted by the adoption of the new democratic constitution and to verify that new legislation renders the democracy irreversible. Russian researchers in constitutional law do not see any particular difficulties in determining the content and periodization of the processes that took place in the system of Russian law after the adoption of the democratic Constitution of 1993 and that relate to the implementation of constitutional principles and models.
For instance, Academician Taliya Khabriyeva calls the ongoing transformations "the constitutionalization of modern Russian legislation" and identifies three main stages in this process: During the "formation" stage, in the first seven years of the Constitution['s operation] … the foundations of legislative regulation of the new socio-economic formation were created. Codes and other legal acts were developed that revealed the content and ensured the operation of constitutional values and norms … At the second - "adaptation" - stage, which covered the first decade of the new [i.e. twenty-first] century, lawmaking was aimed at solving urgent problems of political and socio-economic development and at the adjustment of legal regulators. In recent years, the third - "modernization" - stage of the development of constitutional values and norms has come. At this stage, the task of a radical transformation of the legislation is no longer set. However, this process is not limited to the current improvement of legislation. Modernization is distinguished by the scale and method of solving problems, which requires not only the adoption of new laws but also the improvement of methods and means of legal influence. 2

We decided to investigate in more detail the conclusion that after the entry into force of the new Constitution of 1993 there is a period during which the adoption of new laws (basic, primary laws) dominates, after which legislators pay more attention to the clarification and improvement of already adopted legal acts. In this paper, we will call these laws the "new laws" or "primary laws" (the individual units related to primary regulation). As for the laws used as a tool for the modification of existing legislation, we will call them "laws on amendments," although, of course, both types of the mentioned legal acts are newly adopted laws. At the same time, our aim was to analyze the dynamics of the adoption of new laws in comparison with the dynamics of the adoption of laws on amendments. The question that we hoped to clarify through this analysis centered on the relationship between the durability and stability of the democratic Constitution and the quality and irreversibility of democratic transformations of the social system. Why are we interested in issues related to statistical studies of different types of Federal laws? Many observations and expert judgments on the development of Russian legal policy motivated us to direct our attention to this subject, but two stimuli are perhaps the most potent. The first stimulus was the many academic papers and media reports that noted the rapid and uncontrolled growth of the laws on amendments in Russia. Their authors argue that this practice makes law enforcement and an understanding of legislation quite difficult. The second stimulus was the papers and the expert positions accusing the current Constitution of the Russian Federation of vagueness in respect of its norms. Some experts go further and claim the Constitution of 1993 has a sham nature. The critics base their reasoning on the observation that the constitutional views on the state order and economic system, as well as the political model, are still not implemented one hundred percent. Many authors note that a quarter of a century after the adoption of the democratic Constitution Russia still retains the "antinomic symbiosis of democracy and authoritarianism," 4 since power is exercised both by democratic and by formally undemocratic methods.
Such judgments apply equally to the current period and to the time of the "democratic years of the 1990s." It is well known that the first President of Russia, Boris Yeltsin, was repeatedly accused both by politicians and by experts of authoritarian methods of governance. 5 In particular, one of the reasons was that Yeltsin initiated and performed many reforms by decree, without waiting for the consent of the legislators or openly against their will. 6 From 1990 to the present day, a large number of enthralling discussions have taken place covering the legal and political assessment of the style of government of Russian leaders, as well as the most accurate scientific definition of the essence of the current Russian political regime. 7 Neil Robinson, for example, argues that accurate evaluations are not yet possible, because Russia has "far from finished either state or regime building." 8 Some researchers believe that the basic cause of the "incompleteness" and inconsistency of Russian democracy is the Constitution of 1993 itself. For example, the political scientist Andrey Medushevskiy argues that the new Russian Constitution is initially not able to determine the political regime rigidly, since its provisions are "unclear, misleading," "deliberately vague" and "can be interpreted in different ways." 9 One of the co-authors of the Constitution of 1993, Professor Sergey Shakhray, does not share this point of view. He believes that attempts to "blame" the constitutional act for the observed failures of democracy are the result of a naïve faith in the magic power of the written word, which is supposedly capable, only by its existence, of automatically changing the mentality of the elites and the prevailing political practices. 10 Therefore, our present interest is related to consideration of the dynamics of the adoption and clarification of Federal constitutional laws and Federal laws for the period 1994-2018 as a first step in investigating how the temporality and other characteristics of Russian lawmaking influence the irreversibility of the key social transformations prescribed by the models contained in the provisions of the Russian Constitution of 1993.

Materials and Methods

We conducted the quantitative analysis of various information on Federal constitutional laws and Federal laws adopted after the entry into force of the Constitution of the Russian Federation of 1993 using the official Database "Federal Legislation." This online database is part of the state system of Russian legal information, the "Official Internet Portal of Legal Information," which can be found at http://pravo.gov.ru (in Russian). As of 1 January 2019, the Database "Federal Legislation" contained 195,025 Federal acts adopted since 1 January 1994. We carried out a general search in the Database "Federal Legislation" using the tag "Federal law or Federal constitutional law," with the interval of dates from 1 January of each selected year to 1 January of the following year.
In each annual array of laws (legislative acts adopted by the Federal Assembly of the Russian Federation and signed by the President of the Russian Federation during the analyzed year), we did a new sampling search using additional search criteria (in Russian) for the field "act name," such as "amendments," "changes," "additions," "addenda" and so on. To find the rate of change in the number of laws in percentages, we used the compound annual growth rate. Compound annual growth rate (CAGR) is a specific term widely applied for describing the geometric progression ratio that provides a constant rate of return over a period. Also, this term is often used to describe the growth in the number of elements of a process (in our case, the legislative process), for example, the laws adopted:

CAGR(t_0, t_n) = (V(t_n) / V(t_0))^(1 / (t_n - t_0)) - 1,

where V(t_0) is the initial value, V(t_n) is the end value and t_n - t_0 is the number of years. For our calculations, we chose to take the data from 1 January of each selected year to 1 January of the following year for two reasons. First, the Constitution of the Russian Federation entered into force at the end of 1993 (25 December 1993) and, therefore, logically, the laws were adopted after that date. Second, our choice aimed at ensuring that the average annual rate of change in the number of laws was correctly calculated, and this required data for full calendar years.

General Statistics

The results of the search conducted in the Database "Federal Legislation" showed that the total number of Federal constitutional laws and Federal laws adopted in the period from 1 January 1994 to 1 January 2019 amounted to 7,912. More than 5,000 of them (5,451) are laws on amendments and additions to previously adopted legislation. Thus, the number of laws aimed at amending or clarifying existing legislation accounts for almost 69% of the total number of Federal constitutional laws and Federal laws enacted during the study period.

Dynamics of Adoption of Laws

According to the results of the analysis, about 47% of the total number of laws available in the Database "Federal Legislation" were adopted in the period from 1 January 1994 to 1 January 2010, that is, during the first sixteen years following the entry into force of the new Constitution of the Russian Federation. More than 53% of the total number of laws available in the Database "Federal Legislation" were adopted in the period from 1 January 2010 to 1 January 2019, that is, during the following nine years. At the same time, a quarter of the laws available in this database were adopted over the past four years - from 1 January 2015 to 1 January 2019. These results show that the total number of laws increased with acceleration, and the rate of change in this indicator continually increased, especially in the last few years. A comparison of the dynamics of the adoption of primary laws and the dynamics of the appearance of laws on amendments shows the following results: during 2010-2018, compared with the period 1994-2009, the average annual rate of the adoption of primary laws decreased by about a third, while the average annual rate of the adoption of laws clarifying the then current legislation increased by 2.75 times (see Table 1 below for more details). We made a comparison of the number of primary laws and the laws on amendments over the years. The graphical representation of the results obtained (see Figure 1 below) shows that since 2003 the number of laws on amendments adopted annually has far exceeded the number of primary laws.
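As a concrete illustration of the method described above, here is a minimal sketch of the two computational steps: classifying acts by keywords in their names and computing CAGR. It is an illustration under stated assumptions only: the keyword list is an English stand-in for the Russian search terms listed above, the act names are invented, and the database itself was queried through its web interface, not through any API.

# Minimal sketch of the two computational steps described above,
# under the stated assumptions (English stand-in keywords, toy data).

AMENDMENT_KEYWORDS = ("amendment", "change", "addition", "addenda")

def is_amendment_law(act_name: str) -> bool:
    """Classify an act as a 'law on amendments' if its name contains
    an amendment-style keyword, mirroring the sampling search on the
    'act name' field."""
    name = act_name.lower()
    return any(key in name for key in AMENDMENT_KEYWORDS)

def cagr(initial: float, final: float, years: int) -> float:
    """Compound annual growth rate over a whole number of years."""
    return (final / initial) ** (1 / years) - 1

# Invented act names, for illustration only.
acts = [
    "On the Government of the Russian Federation",
    "On Amendments to the Federal Law 'On Citizenship'",
]
print([is_amendment_law(a) for a in acts])  # [False, True]

# Toy CAGR: a series growing from 100 to 275 laws over 9 years
# corresponds to roughly 12% average annual growth.
print(f"{cagr(100, 275, 9):.1%}")  # 11.9%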
[Figure 1: the number of laws adopted per year, comparing primary laws with laws on amendments.]

Dynamics of Adoption and Modification of Laws That Are Directly Provided by Articles of the Constitution of the Russian Federation

Table 2 below provides information on the temporal characteristics of the processes related to the adoption of Federal constitutional laws and Federal laws that are directly provided for by the provisions of the Russian Constitution of 1993. The presented data show that the laws, which are very important for new state-building and the establishment of democracy, were adopted with a noticeable delay. For example, the Federal Constitutional Law "On the Government of the Russian Federation" was adopted only in December 1997, that is, four years after the nationwide vote for passing the Constitution of 1993. 11 Federal constitutional laws on the symbols of the new Russian state (laws on the state coat of arms, flag and anthem) passed at the end of December 2000, that is, seven years after the adoption of the democratic Constitution. 12 The Federal Constitutional Laws "On the State of Emergency" and "On Martial Law," which were urgently needed in the context of the escalation of regional conflicts, were passed only in May 2001 13 and January 2002, 14 respectively. Until their passage, the law of the Russian Soviet Federative Socialist Republic (RSFSR), adopted in 1991, was in force. The Federal Law "On the Citizenship of the Russian Federation" was adopted by the Parliament only in May 2002. 15 Some of the acts provided for in the Constitution of the Russian Federation of 1993 are still pending. These are the Federal constitutional laws on changing the status of a constituent entity of the Russian Federation (part 5 of Art. 66 of the Constitution of the Russian Federation) and on convening a Constitutional Assembly (Art. 135(2) of the Constitution of the Russian Federation).

Chronology of Modification of Several Laws Adopted in Pursuance of Constitutional Provisions / Data on the Increase in the Number of Laws

The analysis showed that many laws adopted in pursuance of the provisions of the Constitution of the Russian Federation in the field of new state-building and political development almost immediately became the object of the introduction of amendments. Also, the volume of their texts was consistently increasing. For example, after the entry into force of the Constitution of 1993 and up to the present, three Federal laws "On the Procedure for Forming the Federation Council of the Federal Assembly of the Russian Federation" were passed, sequentially one after another. The first version of this law, the law of 1995, 17 was in effect for five years with no changes or additions. The volume of its text was 943 characters. In 2000, the second version of this Federal law was adopted. During its effective period, which lasted until 2012, legislators amended and modified this law eleven times. Legislators modified this act six times by passing laws on amendments to this law directly. Five more times, changes were made through the Federal Law of 14 February 2009 "On the Modification of Individual Legal Acts of the Russian Federation in Connection with the Change of the Order of Formation of the Federation Council of the Federal Assembly of the Russian Federation" (No. 21-FZ) and by the laws on amendments to that act. The initial version of the 2000 law contained a little more than 8,000 characters, but after all of the changes and additions the length of the text increased to almost 16,000 characters.
The third Federal law regulating the formation of the Federation Council was adopted in December 2012 and is still in force. 18 During the six and a half years of the law's operation, legislators have modified it nine times. The length of the original text was slightly more than 17,500 characters. To date, with all introduced amendments, it exceeds 30,000 characters. Thus, over all of the years of the existence of the Federation Council in the contemporary history of Russia, the volume of the law regulating the order of formation of this state body has grown by a factor of about 32.5. We noted similar trends (an increase in the number of amendments and in the length of the text) concerning other acts significant for the new Russian state-building. Tables 3, 4 and 5 below provide the data on the year of each law's adoption, the number of modifications and the length of the text, respectively. For example, the Federal Constitutional Law "On the Government of the Russian Federation," adopted in 1997, is still in force. Over the course of time, legislators passed twenty laws on amendments and additions to this legal act, with the result that the text of the law on the Russian Federation government today has increased by more than a quarter in comparison with its initial text. After the adoption of the Constitution of 1993 and up to the present, three Federal laws "On the Election of the President of the Russian Federation" have been adopted, successively replacing each other. The first Federal law 19 was in force from 1995 to 1999 and had no changes. The second Federal law 20 was in force from 2000 to 2002; amendments and additions were made once. The third Federal law 21 came into force in January 2003. As of 1 January 2019, legislators have changed this law thirty-eight times. The length of the text has increased by about 14%. Comparing the length of the first Federal law and the latest version of the third Federal law "On the Election of the President of the Russian Federation," we can see that the first Federal law was more than four times more compact than the current legal act. With regard to an act of the utmost importance for any democratic state, as it guarantees the right of citizens to express their will and participate freely in elections, we may refer to two Federal laws that were adopted in succession following the entry into force of the Constitution of the Russian Federation of 1993. The first Federal Law "On Basic Guarantees of Electoral Rights and the Right to Participate in a Referendum of Citizens of the Russian Federation" 22 was in effect from 1997 to 2002. Legislators amended this law twice.

Why Was the Pace of Adoption of the Laws Prescribed by the Constitution Slow in the 1990s and Accelerating After 2000?

It is evident that the pace and quality of implementation of the principles and models that the Russian Constitution of 1993 contains depended, and continue to depend, on the impact of a complex set of various factors. We believe the critical factor is the recognition and acceptance by political elites of the value of a democratic Constitution, as well as their determination and ability to enact the necessary laws based on constitutional ideas, so that the new legislation will contribute to the transformation of social reality in strict accordance with the constitutional intent. However, it is clear that in practice a full consensus of elites is an unattainable state, especially for transforming societies.
The Russian constitutionalist Professor Marat Baglay has repeatedly pointed out that constitutional law is more closely related to politics than other branches of law, because it directly interacts with the principles of democracy and the issues of the political order. This gives rise to the struggle of various political actors around the Constitution, laws, judicial decisions and other legal acts that constitute the sources of constitutional law. 24 So, it is not surprising that the dynamics of the appearance of new laws that can create a new social order under the ideas of a democratic Constitution directly depend on the ability of political actors, opposed to each other, to impact the legislative process and its results. The essential feature of the "era of change," which began in Russia at the end of the twentieth century, was that several large-scale transformation processes took place in the country simultaneously. They influenced each other in extremely complex and unpredictable ways. As Professor Sergey Shakhray notes: … along with the change in the economic and social system, along with a deep macroeconomic and financial crisis, in Russia in the late 1980s and early 1990s a full-scale political revolution was under way. Moreover, all this systemic transformation took place under conditions of the collapse of the state and its institutions. To bring the country out of the destructive socio-economic crisis in the shortest possible time, the new Russian government led by Boris Yeltsin began "shock reforms," the first step of which was the deep liberalization of both political and economic life. It was assumed that a free market would start the engine of sustainable economic development, and that the maximum possible level of political freedom would ensure the transition to a sustainable "self-enforcing" democracy. However, as history shows, the absolutization of any solution is a risky technology: the pendulum, swung too far in one direction, is sure to swing back in the opposite direction. Today, we see many cases where states face the need to correct both "market failures" and "failures of democracy." The broadest possible implementation of the principles of political liberalism in Russia in the early 1990s led to ambiguous results: the parties and social movements supporting the course of President Boris Yeltsin and his government failed to gain a significant majority in the Russian State Duma. Table 6 below shows statistical data related to the elections to the State Duma on party lists in the first years after the adoption of the new Russian Constitution. The figures show how many parties and electoral blocs expressed and realized their intention to participate in the elections, and how badly the Deputy Corps of the State Duma was politically fragmented. As another illustration, the results of the elections to the State Duma on party lists of 1993 (the party-list proportional representation principle), presented on the official website of the Central Election Commission of the Russian Federation, can be cited (Table 7 below). The Parliament elected on 12 December 1993 consisted of many political factions that opposed each other, as well as the President and the government. So, this main legislative body of Russia was not very effective at implementing the new constitutional ideas in legislation.
It is not surprising that throughout the second half of the 1990s the adoption of new laws to ensure political, economic and social reform, as well as the implementation of the constitutional provisions, went forward with great difficulty and delay. This fact is noted in many papers. For example, one of the co-authors of the Russian Constitution of 1993, directly involved in governing the political and legal transformations of the 1990s, Sergey Shakhray, emphasizes: … the lack of a mature political culture and the "revelry" of the multi-party system led to the de facto paralysis of Parliament and legislative work in the 1990s, as has been repeatedly noted. As a result of this situation, the roles of the head of state and the Constitutional Court of the Russian Federation (which were, on objective grounds, forced to repair the "failures" in the activities of legislative bodies) have disproportionately increased and, as a consequence, the influence of Parliament has decreased. 28 However, statistics show that beginning at the turn of the twenty-first century, the pace of passing Federal laws and their overall number began to grow steadily. Many acts of recent years have been adopted with remarkable swiftness. A number of facts illustrate this conclusion. For example, in the 1990s the process of consideration and adoption of the Land Code of the Russian Federation took seven years: the Government of the Russian Federation submitted the first version to the State Duma in 1994; the legislators adopted the final release of the law, after lengthy discussions, in 2001. 29 In 2012, the Federal law establishing criminal liability for the dissemination of intentionally false information to harm someone's reputation (the "Law on Defamation" 30) passed all the procedures (it was adopted in three readings by Deputies of the State Duma, approved by the Federation Council and signed by the President of the Russian Federation) in twenty-four days. Many hypotheses exist that explain the increasing speed of the adoption of laws and the overall increase in the number of Russian regulations by different causes, including legal, technological and political factors, the needs of economic regulation and risk management in a fast-changing world, and even psychological causes. However, evidence-based studies are required to verify and support these tentative conjectures. In the meantime, we can only rely on the qualified opinions of experts. For example, we can explain the increase in the speed of passing laws by the fact that most of the legislative acts adopted today are documents on amendments and addenda to the existing legislation, not new independent units that belong to the primary regulation. 31 As a result, legislators need less time to discuss conceptual and substantive issues when they are working with draft laws on amendments than when considering large primary acts. We can also agree with the opinion of political scientists that a significant factor affecting the speed of adoption and the increasing number of laws is the political profile of the State Duma of the last convocations (after 2000). As is widely known, political forces belonging to the so-called "party in power" hold a steady majority in the modern Russian Parliament. 32
Therefore, the situation typical of the mid-1990s, when the Deputies practiced delaying or blocking the adoption of legal acts submitted to the State Duma by the President of the Russian Federation or the government, is unlikely to return. Indirectly, the results of an express analysis of the Database of Federal Bills hosted on the official website of the State Duma of the Federal Assembly of the Russian Federation 33 confirm this conclusion. We conducted a search on the array of bills submitted to the State Duma by the President of the Russian Federation and compared the statistics of "presidential" bills rejected or withdrawn from consideration by the State Duma with the statistics of "presidential" laws passed (see Table 8 below for more details). The results show that during 1993-1999 the Deputies rejected, on average, every seventh bill submitted to the State Duma by the President of Russia Boris Yeltsin. Since 2000, the Deputies have declined just over one percent of the bills initiated by Presidents Vladimir Putin (2000-2008, 2012 to present) and Dmitry Medvedev (2008-2012). The data also show that since April 2012 the State Duma has not rejected any "presidential" bills. Interesting observations can also be reported regarding the findings of the study of Federal law statistics for the period from 1 January 1994 to 31 July 2016, carried out by the Center for Strategic Research 34 and the Company GARANT. The study notes that along with the overall trend of the increasing number of Russian laws, there is a correlation between the highs and lows in the number of Federal laws adopted during the year and the dates of Federal elections (correlation coefficient -0.41). … the elections of the State Duma of the Federal Assembly of the Russian Federation affect the growth of the number of Federal laws adopted by the State Duma of the present convocation in the final year before new elections (correlation coefficient -0.24). … the elections of the President of the Russian Federation affect the reduction in the number of Federal laws adopted in the presidential election year (correlation coefficient -0.33). 35

34 The Center for Strategic Research (CSR) is a Moscow-based think tank with a focus on strategy and policy development and implementation. 35 Natalya Tkachenko, Statistical Analysis of Federal Legislation 6 (Moscow: Center for Strategic Research; GARANT, 2017).

The same study indicates that when each of the various legal branches is analyzed separately, the individual dynamics of the appearance of new laws has its own specifics and differs from the overall picture. However, there are branches of law whose rhythms coincide with the general dynamics. In particular, the study mentions such legal sectors as "Grounds of the State-Legal System," "Legislation on Taxes and Fees," "Defense, Military Duty, and Military Service, Weapons," "Regulation of Certain Types of Economic Activity" and "Criminal Law, Criminal Procedure, Criminal-Executive Law." 36 We can assume that the observed effect is associated with the implementation of constitutional ideas about the new principles of the design of the state, law and economy. The political need for the early establishment of the foundations of a new social order, as well as of its protections, stimulated the development of the legal branches that are directly related to the performance of these tasks.
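The correlations quoted above are plain Pearson coefficients between annual law counts and election-year indicators; a minimal sketch of that computation follows, with invented placeholder counts rather than the study's actual series.

# Sketch of the election-cycle correlation reported by the CSR/GARANT
# study: Pearson's r between annual law counts and a 0/1 election-year
# indicator. All counts are invented placeholders, not the study's data.
from statistics import correlation  # Pearson's r; Python 3.10+

years = list(range(2000, 2012))
laws_per_year = [190, 210, 175, 240, 215, 160, 230, 250, 170, 205, 225, 260]
duma_election_year = [1 if y in (2003, 2007, 2011) else 0 for y in years]

r = correlation(laws_per_year, duma_election_year)
print(f"Pearson r (law count vs. Duma election year): {r:+.2f}")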
Why Was 2003 a Milestone After Which the Number of Annually Adopted Laws on Amendments Began to Steadily Exceed the Number of Annually Passed Primary Laws?

The fact that around 2003 the trend towards the predominance of the adoption of new acts gave way to the trend towards the prevalence of a legislative policy of amendments and additions to the existing legislation (see Figure 1 above) was recorded not only by us but also by other authors who have studied legislative statistics. At the same time, the statistics show that in each separate branch of law the turning point comes at its own unique moment, which does not coincide with the average date found for the entire array of Federal legislation. The researcher in Federal law statistics Natalia Tkachenko writes: Within each specific branch of legislation, the change of the predominating legislative policy [i.e. the transition from the adoption of primary laws to a legal policy aimed at modifying existing legislation] occurs, as a rule, after the passage of the Basic Sectoral Law (Code). However, the time interval between the adoption of the Basic Sectoral Law and the transition to the policy of amendments as the dominating one may vary significantly in different sectors. 37 Tkachenko explains the phenomenon of the "turning point of 2003" with the suggestion that in that year a new stage of legal policy replaced the previous one: At the first stage … accumulation of legal norms with their simultaneous interconnection [happens]; at the second stage (conditionally, starting from 2002-2004) the development of the legislative system as a result of its interaction with the economic and social system and the system of society as a whole [happens], and this development is manifested in the form of changes in legislation. 38

36 Tkachenko 2017, at 6. 37 Id. at 7. 38 Id. at 45.

It seems that the phenomenon of the crossover point of 2003 on our graph (Figure 1 above), illustrating the overall dynamics of the adoption of "new" laws and laws on amendments, can also be explained by the fact that it was in 2003 that the Russian authorities launched large-scale reforms in almost all spheres of society. In particular, Russia began administrative and Federation reforms, reform of the court system, the local government system, the budget and tax system, the system of political parties, education and science, as well as the transformation of specific sectors of the economy. We can assume, since all these changes were evolutionary, that the legal support for the reforms did not require the abolition of previously existing laws, but rather their modification through the introduction of numerous amendments and addenda. However, all these hypotheses require evidence-based verification using a detailed analysis of the content of the laws and the study of their temporal characteristics.

Why Is the Number of Laws on Amendments More Than Twice the Number of Primary Laws in Current Russian Legislation?

The predominance of the legal policy aimed at modifying existing legislation over the primary regulation of social relations has been called a core trend of modern Russian lawmaking by many researchers. As we noted earlier, acts on amendments make a significant contribution to the rapid growth of the total number of Russian regulations and today constitute more than two-thirds of the total number of Federal laws.
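To make the two quantitative claims above concrete - the 2003 crossover visible in Figure 1 and the roughly two-thirds share of laws on amendments - a minimal sketch follows; the per-year series is an invented placeholder shaped like Figure 1, while the totals are the figures reported earlier in this article.

# Locating the crossover year after which laws on amendments
# permanently outnumber primary laws. The per-year series is an
# invented placeholder shaped like Figure 1; the totals below are the
# paper's reported figures.
primary = {2001: 180, 2002: 170, 2003: 150, 2004: 140, 2005: 130}
amendments = {2001: 150, 2002: 165, 2003: 190, 2004: 230, 2005: 280}

def crossover_year(primary: dict, amendments: dict) -> int | None:
    """First observed year from which amendments exceed primary laws
    in every subsequent year of the window."""
    years = sorted(primary)
    for i, start in enumerate(years):
        if all(amendments[y] > primary[y] for y in years[i:]):
            return start
    return None

print(crossover_year(primary, amendments))  # 2003 for these toy numbers

# Share of laws on amendments in the 1994-2018 totals reported above:
print(f"{5451 / 7912:.1%} of all laws are laws on amendments")  # 68.9%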
For example, according to Maria Pronina, a researcher in issues of legal technique in modern Russia, we can regard many laws on amendments as an auxiliary tool designed for a single application. After completing its mission to clarify the text of the primary legislation, the law on amendments turns into a so-called "empty shell": … the main task of the "law-shell" on amendments and additions is the inclusion of changes to the current law, and then it should self-destruct. 39 Since in practice self-destruction does not occur, "empty shell" laws continue to exist (i.e. remain in effect) and contribute to the increase of the total amount of legislation. It follows that the data showing a significant increase in the number of laws could be adjusted downwards if we excluded "empty shells" (laws on amendments that have fulfilled their purpose) from the array of existing laws. However, in the Russian Federation there is currently no official state practice aimed at providing legal acts in an up-to-date form. Only private legal information providers allow their users to see a digital copy of the law in its actual state, with all amendments included in the text of the original act. Nevertheless, digital resources may contain errors and therefore are not entirely reliable. So, we have to continue taking into account the "empty shells" among the existing (in effect) laws. Based on the analysis of the literature and our observations, we can offer several hypotheses that explain why the laws on amendments are dominant in current Russian legislation. The significant increase in the number of laws on amendments may be the result of pragmatic reasons and the routine needs of the legislative process. In case of the detection of errors in the current law, or of new phenomena of social life demanding legal regulation, the law has to be improved quickly and in short legislative forms. Practice (including the experience of many other countries) shows that "short" laws that cover a narrow range of issues require less time and fewer resources to pass. For example, the UK House of Commons Political and Constitutional Reform Committee notes in its report "Ensuring Standards in the Quality of Legislation" that the government … on the whole does not like big bills because the scope is broad, and amendments can come in on any subject … amendments can come in on new subjects late in a bill's passage, and that is quite often an area where mistakes creep in, so you might see more of that in a multi-purpose bill than in a small confined bill. 40 Agreeing to the discussion of short, narrowly focused laws on amendments is an effective way to make their passage easier and faster, but in the end this practice leads to an increase in the total number of Federal acts of this type. In a number of publications we also found a hypothesis that one might call a "conspiracy theory." This concept assumes that the endless introduction of changes to existing legislation (first of all, using acts that amend several laws that differ in subject matter) allows for the purposeful modification of the basic ideas underlying the primary laws, or even gives a new reading of the constitutional principles. These conclusions should not be discounted, as experts cite real cases of how amendments have led to a transformation in the meaning of the original concepts or legal provisions.
For instance, Svetlana Boshno and Galina Vasyuta, civil law researchers, describe in detail how the original meaning of the small- and medium-sized business concept was changed due to the amendments to the Federal law on the licensing of arms trafficking. 41 Also, one of the reasons for the predominance of laws on amendments in the Russian legal ecosystem may be the fact that legislators objectively cannot foresee all the new political, economic and social phenomena that continually arise due to the effects of a fast-changing world and which require proper regulation. Additionally, the entry into force of a new law inevitably changes social reality and causes various consequences, including unforeseen ones. So the legal framework needs to be refined and updated continually. A research direction related to the subject under discussion is the assessment of the impact of the growth in the number of laws on amendments on the state of the Russian legal framework as a whole. Experts agree that the abundance of laws on amendments and addenda makes law enforcement difficult: First, the reader studying the law published in the official source, or in a separate brochure, or in a collection, cannot be sure that this edition is current, and must verify this; secondly, the amendment has to be published in the official printed issue of the Law Collection of the Russian Federation. The time gap between the official publication of the original law and the publication of changes to this law can range from several months to several decades. 44 It is evident that the expansion of the practice of the adoption of laws on amendments negatively impacts the stability of the Russian legal system as a whole, as well as the integrity and efficiency of vital legislative acts: Thus, the stability of legislation in the field of tax law does not exceed two weeks. The Forest Code changes every 22 days, the Land Code and the Criminal Procedure Code - once a month. The Code of Administrative Offenses "lives" without amendments on average no more than ten days a year. 45

Why Does the Total Number of Laws Increase as Well as the Length of the Text of Primary Laws?

In modern legal literature, we can find various explanations of why the number of laws is growing and the text of primary laws is lengthening. Academician Taliya Khabriyeva links the extensive growth of laws to the objective processes of the constitutionalization of legislation, the emergence of new legal branches and the complication of the structure of the traditional branches of Russian law, and the tasks of the adaptation and modernization of the legal system to new political, economic and social realities. Khabriyeva points out that, With the increase in the number of laws, there is a problem of loosening the role of legislation as the most important regulator of public life. 46
The legal researcher Elena Lukyanova sees the reason for the accelerated growth of the number of Federal laws in the strengthening of political centralization in Russia: Against the background of the ongoing political centralization, we can observe the processes of the centralization of legal regulation and the curtailment of regional and judicial lawmaking. One of the notable trends in the development of law at the end of the twentieth century was the change in the system of law sources (forms): the emergence and spread in the Russian legal system of legal precedent, in the role of which were, in particular, the … Today, in the transformation of the sources (forms) of law in the Russian Federation [we can observe] a reverse trend: the strengthening of the position of the normative legal act (law) in comparison to other sources (forms) of law, in particular, judicial precedent. The normative legal act is the most convenient form for the implementation of a centrist policy; thus, in the development of the normative legal act (law), negative trends can be observed: its politicization and unreasonableness, forced adoption. 47

47 Elena G. Lukyanova, The Legal System of Russia: Modern Trends, 6 Proceedings of the Institute of State and Law of the RAS 6 (2016).

Regarding the tendency of the number of laws to increase, we can put forward several hypotheses that require further exploration and confirmation. To begin with, the increase in the number of laws can be caused by the enlargement of the range of social life phenomena which, according to legislators, are of direct concern to society (for instance, they can cause harm to society or a threat to public security). Accordingly, the area covered by public law is continually enlarging. This trend is not typically Russian, but global. As we know, in public law mandatory rules prevail. Also, state-made legislation, based on the concept of "everything which is not allowed is forbidden," is objectively more detailed and requires frequent clarification and updating. The need for dynamic updating comes from the fact that new phenomena of life occur more often than the legislator can foresee them and, more so, have time to impose a ban or give permission. Therefore, the total number of laws and the overall length of their texts are growing for reasons of harmonizing legislation with fast-changing life. We are talking about the so-called "Red Queen effect": … we must run as fast as we can, just to stay in place. And if you wish to go anywhere, you must run twice as fast as that. 48

48 The famous quote from the Red Queen in Lewis Carroll's "Through the Looking-Glass."

Another reason may be a global commitment to risk management and control, which leads to increased over-regulation worldwide. As an illustration, the results of a study by the David Levi-Faur group, which analyzed data on the growth in the number of regulatory agencies in 48 countries (16 sectors) over 88 years, can be cited. If, before the end of the 1960s, rarely were more than 5 to 6 agencies created annually, since the beginning of the 1990s more than 25 agencies were being created annually. By the end of 2007, there were more than 600 such institutional regulators in the 48 countries under study. 49 Many commentators state that modern societies live in an era of "regulatory governance" or "regulatory capitalism." 50
" 50 as the american political scientist steven vogel noted in the mid-1990s, "The freer the markets, the more rules. " 51 The tendency to increase the length of the texts of laws is also widespread. For example, British experts are no less concerned than russian experts about the increase in the volume of legislation: Whilst the number of acts has decreased since the 1980s, the mean average number of pages per act has increased significantly, from 37 and 47 pages during the 1980s and 1990s respectively, to 85 in the past decade. This continues a trend of an increasing number of pages decade on decade since the 1950s when the average was. 52 additionally, the poor quality of the bills, especially the laws on amendments, can be the reason for the increase in the number of regulations. This factor is often spoken of by russian legislators when they openly recognize that they "hurried" the adoption of a law, and "as a result, since the adoption of the document, a single year has not passed, and there are already a lot of amendments to it. " 53 They are echoed by those who must comply with the requirements of the law: The document is so raw that each company understands it in its way. moreover, each new explanation gives rise to more questions than answers. and in the autumn, new amendments will be introduced in the law that is unlikely to simplify life. 54 49 We should recognize that the adoption of poor-quality legislation, which in the russian tradition is figuratively called "raw" (a closer in meaning term -"undercooked"), is observed everywhere. as an example, Deputies of the republic of Kazakhstan, criticizing the state of national legislation that was incessantly "swelling" because of the adjustments, have attested: more than once, the laws whose "ink has not yet dried" were massively amended. We have not yet got rid of this legislative disease. alas, most of the amendmentsbecause initially the law was adopted hastily, without proper study. 55 also, the earlier cited report of the uK house of Commons Political and Constitutional reform Committee on the need to improve the quality of legislation notes that Parliament often has to adopt a large number of poorly prepared laws in a short time because of political pressure from the government and ministries: The Constitution society 56 told us that the primary reason for poorquality legislation was political: "There are very strong political pressures on governments, and individual ministers, to push through large quantities of new legislation on tight timetables and with insufficient preparation. " 57 however, it seems that the adoption of "undercooked" legislation, which entails a lot of amendments and, consequently, an increase in the total number of laws, cannot be adequately explained by the haste and lack of professionalism of Deputies, government pressure or other subjective factors. We believe that this phenomenon occurs due to the increasing influence of the challenges of the VUCA-world, which is characterized by volatility, uncertainty, complexity and ambiguity. From the desire to put growing uncertainty under control, strategies based on the principle of so-called adaptive governance have emerged. There is extensive and controversial literature relevant to the understanding and conceptualization of adaptive management. Without delving into this subject, which one of the witnesses who provided information for the Committee's report. 57 ensuring standards, supra note 40, at 9. 
since the 1990s "continues to attract considerable interest in academic and policy circles," 58 we prefer to talk about a structured, iterative decision-making process based on systematic, multilevel monitoring of changes. 59 In our opinion, this approach could help, simultaneously, to research and to transform an uncertain situation purposefully: the new information accumulated as a result of monitoring becomes the basis for the next step to improve governance and provide legal certainty. From these ideas, we can assume that in Russia, as in other countries, legislators consciously or unconsciously are increasingly beginning to use the strategy of adaptive governance. At the first stage, the problem "of keeping up with a fast-changing world" is solved by sacrificing the quality of the law that needs to pass. At the next stage, the legislators begin to finalize the law to return it to the proper level of quality, for which they use an iterative process based on the analysis of the negative consequences of the application of this law and consideration of the comments of stakeholders. This iterative strategy is close to the so-called agile practices that are used for creating software and other new products. This approach includes adaptive planning, evolutionary development, early product "delivery" and its continual improvement. Therefore, we could call this kind of legislative process "adaptive lawmaking" or "agile lawmaking." As an example, we can cite the previously mentioned Federal Law "On Basic Guarantees of Electoral Rights and the Right to Participate in a Referendum of Citizens of the Russian Federation," which, during 2002-2018, was amended more than one hundred times to update and fine-tune this act in harmony with the changing political realities.

Final Remarks

These pages present the results of a quantitative analysis of the array of Federal laws and Federal constitutional laws for the period 1994-2018. It is clear that quantitative methods, allowing us to analyze the statistics and to capture the dynamics of the development of different types of laws, are not able to describe the observed effects and explain their underlying reasons. As we showed in the section "Discussion and Conclusions" above, the obtained quantitative results can be interpreted in various ways in the subject fields of law, political science, psychology and other sciences as well. However, we can already draw several conclusions based on the evidence. In particular, our data show that the "formation" stage of the process of the constitutionalization of Russian law (in the terminology of Academician Khabriyeva) can be extended to 2003, when the trend for the modification of the then current legislation became steadily prevalent over the adoption of primary laws. The average annual rate of change in the number of laws is constantly growing, with a particularly significant increase in the last few years. This acceleration needs explanation. The phenomenon of the increasing total number of laws, more than two-thirds of which are laws on amendments, including "empty shells," also requires conceptualization and in-depth study. The causes of this phenomenon, as well as the consequences for the stability and integrity of the legal system, the state and society, need to be analyzed in detail with the involvement of various sources of information and the conceptual approaches of the social sciences.
This subject is especially important because, according to a number of experts, numerous amendments can uncontrollably modify the essential principles laid down in the original act and (what is more critical) in the Constitution as well. 60 The question of the relationship between the durability and stability of the democratic Constitution and the quality and irreversibility of democratic transformations of the social system remains open. Observations show that even a "rigid" democratic constitution can become more "flexible" with age due to legislators' opportunities to interpret the constitutional provisions in legislation and give them a sense different from the initial one. Although formally the Russian Constitution of 1993 is not flexible but rigid, practice shows that we can call it, rather, an elastic Constitution, since its ideas and meanings can often be "stretched" to apply to current cases without the need to make any changes to existing constitutional norms. It seems to us that the concept of adaptive governance looks quite promising as a means to describe the features of the modern legislative process, which can be called adaptive lawmaking. The obtained quantitative results and observations have allowed us to put forward some hypotheses that need to be verified during the next stages of the project. These stages involve the use of qualitative research methods such as, in particular, grounded theory, systematic content analysis of legal acts, 61 diachronic approaches to primary law analysis and comparative historical analysis of political and legal events.
v3-fos-license
2023-09-15T15:21:41.581Z
2023-09-13T00:00:00.000
261847667
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fped.2023.1232281/pdf?isPublishedV2=False", "pdf_hash": "600370d36185d1d9989a0a6cac191b7d9f6c931a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42726", "s2fieldsofstudy": [ "Medicine" ], "sha1": "9bcb79f7ec538539116a65787b0dce0a408ff2e8", "year": 2023 }
pes2o/s2orc
Immunology of cord blood T-cells favors augmented disease response during clinical pediatric stem cell transplantation for acute leukemia

Allogeneic hematopoietic stem cell transplantation (HSCT) has been an important and efficacious treatment for acute leukemia in children for over 60 years. It works primarily through the graft-vs.-leukemia (GVL) effect, in which donor T-cells and other immune cells act to eliminate residual leukemia. Cord blood is an alternative source of stem cells for transplantation, with distinct biological and immunological characteristics. Retrospective clinical studies report superior relapse rates with cord blood transplantation (CBT), when compared to other stem cell sources, particularly for patients with high-risk leukemia. Xenograft models also support the superiority of cord blood T-cells in eradicating malignancy, when compared to those derived from peripheral blood. Conversely, CBT has historically been associated with an increased risk of transplant-related mortality (TRM) and morbidity, particularly from infection. Here we discuss clinical aspects of CBT, the unique immunology of cord blood T-cells, their role in the GVL effect and future methods to maximize their utility in cellular therapies for leukemia, honing and harnessing their antitumor properties whilst managing the risks of TRM.

Introduction

For children with high-risk acute leukemia, allogeneic HSCT is an important and effective treatment strategy. HSCT produces potent anti-leukemia activity through the dose intensification of chemotherapy used in conditioning and, more importantly, the GVL effect mediated primarily through donor T-cells and other immune cells. Maximizing GVL must occur in balance with the prevention of graft-vs.-host disease (GVHD), in which alloimmune T-cell responses are directed against healthy tissues, resulting in patient morbidity and mortality in both its acute and chronic forms (1). Conventional HSCT includes bone marrow transplantation (BMT) and peripheral blood stem cell transplantation (PBSCT), often from family or matched unrelated donors (MUD). Cord blood has been used as a source of hematopoietic stem cells (HSCs) in transplantation for acute leukemia for over three decades (2). It has many important differences and consequent advantages compared to conventional stem cell sources, including significantly reduced relapse rates in acute leukemia, particularly for those patients with residual disease pre-transplant, and lower rates of chronic GVHD (3,4). Here we discuss the key clinical and immunological features of cord blood, and in particular cord blood T-cells, in order to shed light on the mechanisms through which this augmented disease response occurs.
Graft-vs.-Leukemia

T-cells are important mediators of GVL (5). In allogeneic HSCT, T-cell depletion, either ex vivo through graft manipulation or in vivo with T-cell depleting antibodies such as anti-thymocyte globulin (ATG) or alemtuzumab, is used to prevent GVHD. In malignancy, however, the use of T-cell depletion results in an increased risk of relapse (6-9). Conversely, relapse is reduced in those patients who develop GVHD and, when relapse occurs, remission can be restored using donor lymphocyte infusions (DLI) (10-15). These clinical observations provide clear evidence for the potent role of T-cells in the GVL effect. In recent years, advancements in adoptive T-cell therapies, particularly chimeric antigen receptor (CAR) T-cells, have allowed sustained remission and cure to be attained in patients with previously incurable hematological malignancies, which further emphasizes the importance of T-cells in controlling malignant disease (16,17).

T-cells recognize leukemia through interactions between their T-cell receptor (TCR) and human leukocyte antigen (HLA) molecules expressed on the surface of leukemia cells, which present antigenic peptide. HLA molecules are encoded by the human major histocompatibility complex (MHC), which is a highly polymorphic region of genes located on chromosome 6 (18). CD8+ T-cells recognize peptide bound to HLA class I molecules, which are expressed on all nucleated cells, whereas CD4+ T-cells recognize HLA class II molecules, which are primarily expressed on specialized antigen-presenting cells (APCs). Both CD8+ and CD4+ T-cells mediate GVL reactions through interactions with antigen presented by HLA class I and class II molecules respectively (19).

HLA offers a potentially important GVL target. In CBT, HLA-mismatch has been shown to correlate with reduced relapse rates, with higher relapse rates following transplant in patients receiving the best matched cord blood units (20-22). Following haploidentical HSCT, loss of the entire mismatched haplotype through uniparental disomy of chromosome 6p has been described in multiple patients (23). Leukemia relapse occurs as a consequence of this genetic event due to immune evasion. This suggests that in both the CBT and the haploidentical setting, mismatched HLA is a key target for GVL mediators. Loss of class I HLA expression through focal genetic deletions has also been described following matched allogeneic HSCT (24). In this setting, loss of HLA is likely to prevent presentation of important peptides involved in the GVL effect, for example, minor histocompatibility antigens (miHAs). These peptides are presented by major HLA molecules and differ between donor and recipient due to genetic polymorphisms, meaning they can be recognized by engrafting T-cells and used to elicit a GVL effect (25,26). Leukemia-associated antigens, expressed solely on leukemic cells and not normal tissues, are another potential target for GVL. There has been much work looking at identifying such antigens in order to generate effective, specific antigen-directed immunotherapy (27). Early clinical studies demonstrating that recipients of syngeneic HSCT have a higher incidence of relapse compared to allogeneic HSCT suggest, however, that it is not solely leukemia-associated antigens that are involved in the GVL effect, and that a difference between donor and recipient is required (28). In double CBT (dCBT), in which two cord blood units are infused simultaneously in order to overcome limitations of cell dose, one unit asserts dominance over the other to become the
engrafting unit. Higher CD3+ and naïve CD8+ T-cell content of the unit is positively associated with unit dominance, in which one unit rejects the other (29). NK cell alloreactivity is also important in GVL, eliminating leukemia whilst protecting against GVHD (30). Following allogeneic transplant, the development of antibodies against specific miHAs is positively correlated with survival, indicating a role for B-cells in the GVL effect also (31).

Cord blood transplantation

The first successful CBT was performed in 1988 for a patient with a diagnosis of Fanconi's anemia (32). Today, there are cord blood banks situated in many countries across the globe, with over 800,000 estimated cord blood units stored worldwide in public banks and over 4 million privately stored (33). It is feasible and practicable to collect and store cord blood stem cells through cryopreservation without deleterious effect on their viability (2). This means that cord blood has the advantage of being readily available without the need for stem cell harvest from an adult or sibling donor, which can dramatically reduce donor search time and procurement of stem cells for transplant from 2 to 4 months for bone marrow (BM) or peripheral blood stem cells (PBSCs) to as little as 2 weeks for cord blood (34,35). This can have important clinical consequences for those patients with high-risk malignancies. In addition to this, cord blood registries are able to provide donors from a larger selection of ethnic backgrounds, which allows those groups who may have previously struggled to find an appropriately matched unrelated donor to be eligible for transplant (1). Despite this, reported rates of cord blood use for allogeneic transplant have been steadily declining with the rise of haploidentical transplants as an alternative option for those without a related or unrelated fully matched donor (33,36,37).

Outcomes after cord blood transplantation for acute leukemia

The greatest risk to survival in acute leukemia is relapse. In those patients with residual disease pre-transplant and the highest disease risk, use of CBT reduces relapse rates compared to other stem cell sources (3,4). Superiority of CBT over other cell sources in preventing relapse has been replicated in multiple studies in both adults and children (3,4,38-40). In 2007, Eapen et al. showed in a retrospective registry study that in children with acute leukemia there was significantly reduced relapse risk following a 5/8 HLA-mismatched CBT compared to BMT (relative risk 0.54; p = 0.0045) (20). Milano et al. reported in 2016 significant superiority of CBT in preventing relapse when compared to MUD and mismatched unrelated donor (MMUD) stem cell sources in adults with acute leukemia and residual disease pre-transplant (hazard ratio in the HLA-mismatched group, 3.01; p = 0.02 and hazard ratio in the HLA-matched group, 2.92; p = 0.007) (4). This translated into improved overall survival for CBT recipients, with a higher risk of death in the MUD and MMUD groups (hazard ratio in the HLA-mismatched group, 2.92; p = 0.001 and hazard ratio in the HLA-matched group, 1.69; p = 0.08) (4). For those patients without residual disease the benefit from CBT was less evident, with lower hazard ratios (hazard ratio in the HLA-mismatched group, 1.36; p = 0.3 and hazard ratio in the HLA-matched group, 0.78; p = 0.33). Ando et al. showed improved 2-year overall survival with CBT compared to BMT and PBSCT in adults with acute leukemia regardless of disease status pre-transplant (76.4% vs.
62% and 67.2%; p = 0.021) (40). In a large, multi-center, retrospective review of pediatric patients with acute myeloid leukemia (AML), Horgan et al. reported dramatically reduced incidence of relapse at 2 years with CBT in comparison to other cell sources in those patients with detectable minimal residual disease (MRD) (36.2% vs. 66.2%; hazard ratio 0.46; p = 0.007). This translated into improved disease-free survival in the cord blood group (50% vs. 21%; hazard ratio 0.55; p = 0.017) (3). Excellent outcomes have also been reported by Barker et al. in adults using dCBT, with 3-year overall survival (OS) and progression-free survival (PFS) at 82% and 76% respectively (41). Taken together, these results indicate that CBT produces a significant GVL effect, which translates into better relapse and leukemia-free survival outcomes for patients, particularly those with high-risk disease.

Transplant-related mortality including infectious complications

Historically, CBT has been associated with a higher incidence of TRM and infectious complications in conjunction with delayed neutrophil and platelet engraftment (8, 20). Mortality during CBT in children has fallen over time (42). Early registry-based studies performed by Eapen et al. in both children and adults described an increased risk of TRM using CBT compared to BM grafts (20, 43). For children receiving CBT with two allelic mismatches the risk of TRM was greater than matched BM (relative risk 2.31; p = 0.003), and this risk was also evident with one allelic mismatch (relative risk 1.88; p = 0.0455) (20). In adults receiving 4-6/6 CBT there was also increased TRM in comparison to fully-matched BM (relative risk 1.69; p < 0.01) and PBSC (relative risk 1.62; p < 0.01) (43). In comparison to 1-allele mismatched BM and PBSC grafts, TRM was similar. In another study from 2001, Rocha et al. reported increased early TRM within the first 100 days post-transplant in patients receiving CBT with ATG serotherapy compared to BMT (hazard ratio 2.13; p < 0.001) (8). Increased incidence of early TRM with CBT has also been described by Weisdorf et al. within the first 3 months post-transplant (hazard ratio 2.83; p < 0.0001), but beyond 3 months TRM rates were similar between CBT and BM groups (hazard ratio 1.00; p = 0.99) (44). Konuma et al. examined outcomes of patients aged 55 and above receiving CBT without serotherapy against BM and PBSC recipients, finding reduced rates of TRM in the latter groups (hazard ratio 0.61; p < 0.001 and 0.63; p < 0.001 respectively). Increased TRM has also been reported in children receiving T-replete CBT for myeloid malignancy compared to a comparator arm (hazard ratio 2.04; p = 0.042) (3). Other studies, however, have produced conflicting data with no significant increase in TRM detected with CBT (38, 40, 45-48). Heterogeneity of patient populations, disease groups, conditioning regimens (particularly the use of T-depleting serotherapy) and outcome measures confounds direct comparison of these studies. TRM is a significant consideration in CBT and needs to be weighed against the risk to the individual patient and the risk of their disease.
The incidence of primary graft failure following CBT is around 11%-12% in adults and children, compared to 5%-6% with HLA-matched BM and PBSC grafts (49, 50). The risk of graft failure is lower in those patients undergoing transplantation for hematological malignancy (50). In CBT, for deaths primarily attributed to graft failure, the total nucleated cell (TNC) dose received is predictive (relative risk 0.4 for increasing dose; p < 0.001) but the level of HLA-mismatch is not (51). Fully HLA-matched CBT is associated with improved neutrophil engraftment (relative risk 1.8; p < 0.001), although there are no significant differences between recipients of grafts with 1 or 2 HLA mismatches (relative risk 1.0; p = 0.896). Patients can be successfully salvaged following graft failure with a second allogeneic HSCT (49, 52, 53).

Infection is a large contributor to non-relapse mortality in all HSCTs, including CBT. Viruses in particular can create a higher burden of morbidity and mortality in CBT (54). Members of the herpes virus family, including cytomegalovirus (CMV), varicella zoster virus (VZV) and human herpes virus 6 (HHV6), have been shown to have a higher incidence with CBT (55-57). There are low rates of viral transmission with CBT due to screening; these infections therefore represent reactivation rather than primary infection. CMV can require a longer duration of treatment with CBT (57). Risks from viruses can be mitigated, however, through routine testing, prophylaxis and pre-emptive treatment upon identification, and this is an evolving area with improvements in supportive care and new therapies (58, 59). For example, use of letermovir for CMV reactivation has shown promising results that may translate into better outcomes in pediatric CBT in the future (59). The use of ATG serotherapy can negatively impact infection rates in CBT and the development of T-cell responses to common viral and bacterial pathogens (60, 61). In one study, ATG use in CBT was significantly associated with increased risk of CMV, Epstein-Barr virus (EBV) and adenovirus viraemia and death from viral infections (61). This occurred in conjunction with delayed immune reconstitution. Omitting T-cell depleting serotherapy in malignancy may therefore improve both relapse and non-relapse mortality (62, 63). In adults, particularly elderly patients or those with multiple co-morbidities, reduced intensity conditioning (RIC) regimens used in conjunction with CBT can allow the graft itself to drive engraftment and GVL (64-66). Overall survival in these patients is comparable to those receiving myeloablative conditioning (MAC) regimens, although reduced non-relapse mortality is offset by increased risk of relapse in some studies (64). Other complications associated with CBT include autoimmune cytopenia and gastrointestinal complications, including the cord colitis phenomenon (67, 68).

GVHD and tolerance of HLA-mismatch

One advantage of cord blood is its greater tolerance of HLA-mismatch. Outcomes in children with acute leukemia following CBT with mismatch at one or two alleles are at least equivalent to fully matched unrelated donor transplants despite HLA disparity (20, 45). The important composite end point of GVHD-free, relapse-free survival is superior in CBT recipients compared to other stem cell sources (3, 39). Across all HSCTs, the impact of HLA-mismatch on survival also appears to be less pronounced when the underlying disease is higher risk (69).
HLA-mismatch can result in better outcomes when transplanting for malignancy. Use of HLA-mismatch in CBT correlates with reduced relapse risk in both children and adults (20, 21). Sanz et al. demonstrated in adults with AML that an HLA-mismatch of 2 or more alleles was associated with a lower 5-year incidence of relapse, without an increase in TRM (mismatch ≥2: 22%; mismatch <2: 44%; p = 0.04) (21). Yokoyama et al. further examined the relationship between HLA-mismatch and outcomes in CBT for acute leukemia, and reported inferior survival only in children receiving CBT with 4 or more allelic mismatches (hazard ratio 2.03; p = 0.011) (22). Overall survival was comparable between all other groups. In addition to this, a significantly higher incidence of relapse was noted in adults who had received a fully HLA-matched 8/8 CBT (hazard ratio 1.53; p = 0.0037). Altogether, these are important observations that greater HLA-mismatch can be tolerated by CBT and that this could mediate a greater GVL effect without excessive TRM.

The incidence of grade II-IV acute GVHD is higher in CBT than for matched sibling donor (MSD) transplants, reported at around 35%-50% (40, 44, 45, 70). Although there is variation within the literature, the risk of grade II-IV acute GVHD is similar between CBT and MUD transplants if bone marrow is used as the cell source, but lower in CBT if PBSCs are utilized (20, 40, 44). CBT confers a lower risk of acute GVHD than MMUD transplants (39, 43). There is reduced risk of severe grade III-IV acute GVHD with CBT compared to both MUD and MMUD transplants (39, 40, 48). Despite the high incidence of acute GVHD with CBT, this does not translate into increased rates of chronic GVHD. The risk of developing chronic GVHD is lower in CBT than both MUD and MMUD transplants using both BM and PBSC cell sources (39, 40, 43-45, 70). The incidence of chronic GVHD using CBT is between 5%-28%, compared to 44%-53% for MUD transplants (3, 40, 44). Haploidentical transplantation is another alternative donor stem cell source option, often considered in place of CBT. Lower chronic GVHD rates are seen consistently with CBT in comparison to haploidentical transplantation (71, 72). Table 1 summarizes the literature on acute and chronic GVHD based on donor type and cell source.

There is increased risk of development of chronic GVHD in adults compared to children receiving CBT (relative risk 5.7; p < 0.05) as well as BMT (relative risk 4.8; p < 0.05) and PBSCT (relative risk 10.0; p < 0.05) (74, 75). For children receiving MSD BMT, there is a reduced incidence of grade II-IV acute and chronic GVHD in younger patients aged 2-12 years than in those older patients aged 13-18 years (76). This is not seen in CBT, with similar rates of acute and chronic GVHD across all age groups of children (77).
It is important to consider when comparing CBT with MSD or MUD transplants that CBT will be HLA-matched at 5 or more alleles, rather than fully HLA-matched at 8 or 10 alleles as when using adult donors, and therefore more GVHD may be expected. In addition to this, the presence of GVHD is associated with reduced risk of leukemia relapse due to the corresponding GVL effect (13, 14). Conversely, the management of severe grade III-IV acute GVHD includes corticosteroids and other immunosuppressive agents (78). Prolonged use of immune suppression may contribute to increased risk of relapse, and its rapid withdrawal can be effective in the management of early relapse (78-80). GVHD itself can also negatively impact immune reconstitution, particularly thymopoiesis (81). Naïve T-cells induce potent alloreactive responses in xenograft studies, and there are clinical data to suggest that graft depletion of naïve T-cells reduces the severity of GVHD (82, 83). A peak of activated CD8+ T-cells expressing the activation marker CD38 in peripheral blood has been associated with development of acute GVHD (84). This implies that high numbers of naïve T-cells transferred in cord blood grafts, differentiating into effectors in response to alloantigen, could drive GVHD (85).

Research in the field of GVHD biomarkers is rapidly developing, to inform both diagnosis and prognosis of the condition (86). ST2 is one such biomarker that has been associated with development of acute GVHD after day 28 in CBT (87). Analysis of cell signaling in the pathophysiology of GVHD has highlighted the importance of the rat sarcoma/mitogen-activated protein kinase kinase/extracellular signal-regulated kinase (RAS/MEK/ERK) pathway in alloreactive T-cells. Detection of higher levels of phosphorylation of the ERK1/2 pathway in CD4+ T-cells has been described as a biomarker for the development of acute GVHD (88). MEK inhibitors have additionally been shown to preferentially inhibit cytokine production in naïve alloreactive T-cells (89). In the future, stratification of patients on the basis of biomarkers could facilitate decisions around therapeutic interventions for GVHD, allowing GVL to be maximized in those at low risk but enabling earlier intervention in those at high risk of severe GVHD, reducing the burden of TRM in CBT.

The clinical characteristics of CBT for leukemia are summarized in Figure 1. These clinical findings suggest key differences in the immunology of cord blood, and in particular T-cell biology, when compared to adult peripheral blood or bone marrow. These differences are responsible for the observed improved relapse rates, greater tolerance of HLA-mismatch and reduced rates of chronic GVHD. They also highlight the importance of improving the understanding and application of supportive care for patients undergoing CBT, to reduce the burden of TRM and further improve survival outcomes.
Clinical protocol for cord blood transplantation for pediatric acute leukemia

Our center is a large children's cord blood transplant center. In AML that is refractory to chemotherapy or relapsed, our first choice would be an unrelated cord blood donor without T-depleting serotherapy, owing to the reduced risk of relapse observed by our group and others in this high-risk cohort of patients (3, 4). We recognize the increased procedure-related risk of performing a mismatched unrelated CBT in comparison to a MUD transplant, but in higher-risk myeloid leukemia the better associated disease outcomes outweigh this consideration (90). Recent American Society for Transplantation and Cellular Therapy guidelines suggest that selection of less well-matched units could be considered for those patients with hematological malignancy (91).

We would consider refractory T-cell leukemia, mixed phenotype acute leukemia and refractory infant leukemia as similar to high-risk myeloid malignancy (90). Our conditioning regimen would typically include myeloablative busulfan if a first allogeneic transplant, without serotherapy. GVHD prophylaxis would include ciclosporin and mycophenolate mofetil, which we would aim to wean early in the absence of GVHD. Single cord blood unit selection follows a nationally defined protocol (92).

In children with relapsed leukemia after an earlier transplant procedure, we have used experimental procedures to augment GVL. This has included our use of granulocyte transfusions in conjunction with T-replete CBT to promote CD8+ T-cell expansion (93).

Cord blood graft composition

Cord blood grafts are relatively enriched for hematopoietic stem cells (HSCs) when compared to BM and, in particular, PBSC grafts (94). Clinical cord blood units for transplant, however, usually contain a 1-2 log lower cell dose than those obtained from peripheral blood or bone marrow donors, so the actual number of transferred cells is usually smaller (95). Cord blood HSCs are of a more primitive phenotype with reduced CD38 expression (96). These stem cells have a high capability for self-renewal and proliferation (96, 97). Cord blood also contains higher numbers of myeloid and lymphoid progenitors with high replication potential (98, 99). Analysis of lymphocyte subsets also shows higher absolute numbers of T, B and NK cells within a given volume of cord blood when compared to adult peripheral blood samples (100).
Lymphocyte subsets within cord blood

Lymphocyte subsets within cord blood differ from those in adult peripheral blood both in number and phenotype. Within the CD3+ compartment of cord blood there is a higher proportion of CD4+ to CD8+ T-cells when compared to those in adult peripheral blood (100, 101). Cord blood T-cells consist predominantly of the naïve (CD45RA+/CD45RO−) phenotype, whilst adult peripheral blood is mainly comprised of memory (CD45RA−/CD45RO+) T-cells (100). Cytotoxic CD8+ T-cell populations are absent in cord blood, with lower numbers of effector T-cells and of those expressing activation markers such as HLA-DR (100). Whilst naïve T-cells are the most numerous subset in cord blood, some antigen-experienced T-cells are also present, most notably those specific for maternal minor histocompatibility antigens (102).

Gamma delta (γδ) T-cells are a distinct subset of T lymphocytes defined by their expression of γδ T-cell receptors. They have effector capabilities and can promote inflammation (103). In peripheral blood, they make up only a small proportion, around 1%-5%, of all T-cells, with conventional αβ T-cells comprising the majority of all those in circulation (104). They are abundant at barrier sites. Absolute numbers of γδ T-cells are negligible in cord blood when compared to adult peripheral blood (100, 105, 106). Cord blood γδ T-cells are primarily of the Vδ1 subtype and have a naïve phenotype in comparison to Vγ9Vδ2 T-cells, which are the most numerous in adult peripheral blood (107, 108). The small number of Vγ9Vδ2 T-cells within cord blood are functionally immature (109). Cord blood γδ T-cells, however, have a highly diverse polyclonal repertoire (107, 108, 110). Receptor diversity is increasingly restricted with age and very limited in adult γδ T-cells (107, 108). In vitro, cord blood γδ T-cells readily expand and differentiate, becoming functionally cytotoxic (108). These characteristics in combination mean that cord blood γδ T-cells are being further investigated for use in cancer immunotherapy (108).

Invariant Natural Killer T (iNKT) cells are a specialized subset of T-cells that cross over between innate and adaptive immunity (111). They are defined by their restricted TCR, which can solely recognize lipid antigen presented by CD1d (111). Numbers remain stable throughout life from birth to adulthood but are highly variable between individuals (111, 112). They comprise a comparatively small proportion of all T-cells, at approximately 0.1%-0.2% on average in both cord and peripheral blood. In murine HSCT models, adoptive transfer of iNKT cells can protect against GVHD whilst preserving the GVL effect, through inhibition of alloreactive T-cell expansion and activation and induction of donor regulatory T-cell expansion (113, 114). The scarcity of iNKT cells in peripheral blood limits their utility in clinical applications, however. An early study of iNKT cells engineered from cord blood HSCs has shown promising pre-clinical results in amelioration of GVHD whilst preserving GVL (115).

NK cells are lymphocytes that form part of the innate immune system. They elicit anti-cancer effects through release of cytotoxic granules as well as activation of apoptotic pathways, and can produce inflammatory cytokines (116). Cord blood grafts are relatively enriched for NK cells, which constitute around 20%-30% of the lymphocyte population in comparison to 10%-15% in peripheral blood (101). NK cells can be divided into two sub-populations, CD56dim CD16bright and CD56bright CD16dim (116, 117). These sub-populations have distinct phenotypic properties, with CD56dim CD16bright NK cells mediating cytotoxicity through granzyme B and perforin production and CD56bright CD16dim NK cells mainly producing inflammatory cytokines such as IFNγ and TNFα (118). The predominant population of NK cells in cord blood is CD56bright CD16dim, with reduced cytotoxic capabilities and a more immature phenotype (119, 120). These cord blood NK cells have reduced expression of granzyme B and killer immunoglobulin-like receptors (KIR) and higher expression of the inhibitory receptor NKG2A (120, 121). Within cord blood, however, a distinct NK cell progenitor exists that can readily differentiate into functional, mature NK cells with the ability to produce IFNγ, TNFα, IL-10 and GM-CSF and lyse cells in vitro (122). Cord blood NK cells also express higher amounts of CXCR4, suggesting superior bone marrow homing when compared to peripheral blood NK cells (120). Cord blood is a rich source of NK cells for use in immunotherapy, with encouraging early results (123).
B-cells are immunoglobulin-producing lymphocytes that play a very important role in adaptive immunity. B-cells make up a larger proportion of the lymphocyte population in cord blood compared to peripheral blood, at around 15%-20% (101). Characteristics of B-cells differ between cord and adult peripheral blood. Cord blood contains a higher percentage of B-cell progenitors in comparison to adult and pediatric bone marrow (124). Most B-cells within cord blood exhibit the same CD20+/CD5− phenotype as B-cells found in adult peripheral blood (125, 126). There is, however, a distinct CD20+/CD5+ population of B-cells within cord blood (127). These CD5+ B-cells are specifically derived from fetal and neonatal progenitors and are characterized by IgM production that is polyreactive and indeed autoreactive, with low-affinity binding (126, 128). Additionally, cord blood B-cells produce accelerated responses to stimulation, with distinct transcriptional pathways (125).

Cord T-cell biology

Cord blood and adult peripheral blood T-cells have different functional properties, which are summarized in Table 2. In vitro studies show that cord blood T-cells have a greatly increased capacity for proliferation in response to multiple stimuli and to lymphopenia. Cord blood T-cells proliferate more rapidly than those in adult peripheral blood in response to cytokine stimulation (129, 133, 134). In assays analyzing these responses, IL-7 produced greater proliferation in cord blood T-cells, particularly the CD4+ population, whereas IL-15 stimulated the greatest proliferation in cord blood CD8+ T-cells (129). Clinically in BMT, low serum levels of IL-15 are associated with an increased risk of post-transplant relapse in AML, indicating its potential role in the GVL effect, perhaps through induction of CD8+ T-cell responses (150). CD8+ T-cells in neonates also exhibit an intrinsic ability to rapidly proliferate in response to CD3 and CD28 co-stimulation (130, 132). Cord blood CD4+ T-cells proliferate more than adult peripheral blood T-cells when stimulated with self-APCs (135). This occurs in conjunction with enhanced TCR signaling and upregulation of the MAPK signaling cell-cycle pathway (135). This greater proliferative potential is thought to arise, in part, secondary to distinct lineages of fetal and adult progenitor cells and a transcriptional program geared towards lymphopenia-induced proliferation (130, 131). In xenograft studies, greater expression of the proliferation marker Ki-67 is also seen in both cord blood CD4+ and CD8+ T-cells in comparison to those in adult peripheral blood. This indicates that there is also a higher proportion of proliferating cord blood T-cells in vivo (129, 130). Cord blood CD8+ T-cells in particular have higher levels of Ki-67 expression (129). Additionally, spontaneous telomerase expression occurs in cord blood T-cells, allowing proliferation without telomere shortening and aiding longevity of the T-cell progeny (129).

In response to antigenic stimulation, fetal CD8+ T-cells preferentially become terminally differentiated, short-lived effectors rather than memory T-cells as seen in adults (130). Naïve CD4+ T-cells in the newborn expressing CD45RA are also able to transform much more rapidly to a CD45RO+ effector memory phenotype on stimulation when compared to naïve adult peripheral blood CD4+ T-cells (136).
CD4+ helper T-cell subsets Th1 and Th2 are broadly defined by their ability to produce IFNγ/TNFα and IL-4/IL-5/IL-13, as well as by the transcription factors T-bet and GATA-3, respectively. Cord blood T-cells produce lower levels of inflammatory cytokines overall in response to stimulation when compared to adult peripheral blood (129, 144, 145). Both CD4+ and CD8+ subsets produce less IFNγ and TNFα, as well as lower levels of IL-2 and IL-4 (136, 144, 145). This includes responses to CMV peptide stimulation (147). Reduced expression of the transcription factor nuclear factor of activated T-cells c2 (NFATc2), which upregulates the expression of many cytokines involved in T-cell inflammatory responses, has been hypothesized as a mechanism for this observation (148). Low production of IFNγ specifically by CD4+ T-cells in the neonate has been associated with hypermethylation of the IFNγ promoter (151). Neonatal naïve T-cells that have been allowed to mature in vitro, however, acquire the ability to secrete IFNγ and IL-2 on secondary stimulation (136).

Cord blood T-cells display less alloreactivity when compared to peripheral blood T-cells, which may contribute to the reduced rates of GVHD seen clinically (134). Cord blood T-cells more readily undergo apoptosis in response to alloantigen (143, 149). Levels of constitutive perforin expression in cord blood CD8+ T-cells are low in comparison to those in adult peripheral blood, and cytotoxicity mediated through the FAS-ligand pathway is impaired (142, 143). Cord blood T-cells co-cultured with immature adult dendritic cells produced more IL-10 and less IFNγ than adult T-cells, which is a regulatory cytokine profile (138). Cutaneous lymphoid antigen (CLA) is expressed on T-cells involved in migration to areas of skin inflammation. There is no expression of CLA on cord blood T-cells, which could contribute to the lower incidence of chronic skin GVHD with CBT (100).

Cord blood is a rich source of CD4+/CD25+/FoxP3+ regulatory T-cells (Tregs), which are inherently programmed towards immune tolerance with low levels of immunological memory (140). Cord blood Tregs have strong suppressive capabilities against alloreactive T-cells, with high levels of IL-10 production in vitro following activation and expansion (139, 141). This suppressive effect is potent and occurs consistently, inhibiting production of T-cell activation-dependent cytokines such as IL-2, IFNγ, TNFα and GM-CSF in mixed lymphocyte reaction assays (141). In comparison to Tregs isolated from peripheral blood, cord blood Tregs also have a much greater capacity for expansion, with higher Ki-67 expression and a gene expression profile that favors proliferation and chromatin modification (139, 152). Cord blood Tregs also express higher levels of the chemokine receptors CCR9 and CCR7 than their peripheral blood counterparts, which regulate trafficking to the gut and lymph nodes respectively (152). The potential of cord-derived Tregs to prevent GVHD whilst preserving the GVL effect has been established in xenograft models (153). Further work has optimized the purification and expansion process to allow for use in clinical settings (154, 155). There are promising results from early clinical trials using cord-derived Tregs as preventative therapy for GVHD, with reduced rates of acute and chronic GVHD without an increase in relapse (154, 156).
Cord blood antitumor effects in xenograft models

The superior clinical outcomes in malignancy have been further supported in pre-clinical studies investigating cord blood T-cells as the primary mediators of enhanced antitumor activity. In xenograft models, striking antitumor effects of cord blood T-cells have been demonstrated (137, 157). In one study, cord blood mononuclear cells induced dramatic remission in mice with lung and cervical cancers, with high levels of tumor infiltration by CD3+ T-cells shown (157). Further in vitro assays demonstrated tumor-specific antigen cytotoxicity. Comparative analysis of cord blood and peripheral blood T-cells also shows that cord blood T-cells produce a greatly superior antitumor response (137). In an EBV-driven human B-cell lymphoblastoid tumor mouse model, cord blood T-cells had an enhanced antitumor effect with rapid infiltration and induction of remission. Cord blood T-cells expressed enhanced levels of the tumor-homing receptor CCR7. In contrast, peripheral blood T-cells were slower to infiltrate and had reduced antitumor activity with tumor progression. Analysis of the tumor-infiltrating cord blood lymphocytes showed them to be primarily CD8+ T-cells that had converted from naïve to central and effector memory phenotype. Importantly, these cord blood T-cells were able to mediate a greatly augmented GVL effect without exerting xenoreactivity, mirroring the clinical picture seen with CBT. Effector memory CD4+ T-cells have also been shown to exert effective GVL effects without GVHD in the mismatched HSCT setting in mouse models of chronic myelogenous leukemia (158).

In summary, T-cells contained within a cord blood graft are a distinct entity from those derived from peripheral blood and bone marrow. They are predominantly CD4+ T-cells, and both CD4+ and CD8+ T-cells are naïve (100). These subsets possess the ability, however, to quickly transform into effector T-cells with cytotoxic potential (132, 137). Both CD4+ and CD8+ cord blood T-cells are capable of rapid proliferation in response to lymphopenia, cytokines and APCs, more so than adult T-cells. There is some suggestion, however, of impairment in the production of inflammatory cytokines and in the mediation of cytotoxicity through certain cellular pathways (129, 143). There is, therefore, much more to be understood about how an enhanced GVL effect is mediated during cord blood transplantation. Cord blood CD8+ T-cells are endowed with enhanced tumor-homing capabilities due to high levels of CCR7 expression, which in conjunction with greater proliferation may contribute to the superior GVL effect (135, 137). Additionally, the primitive nature of cord blood itself could potentially reduce susceptibility to inhibitory co-signaling and induction of an exhausted T-cell state (130, 143). There are also differences in the clinical characteristics of CBT, including greater use of HLA-mismatch and omission of T-cell depleting serotherapy, which likely contribute to the phenomenon observed.
Mechanism of T-cell reconstitution following HSCT

Following transplant conditioning there is a period of profound lymphopenia and aplasia. T-cell reconstitution occurs through two separate pathways. In the first, there is thymus-independent peripheral expansion of T-cells transplanted within the graft, or of residual recipient T-cells that have escaped transplant conditioning, driven by exposure to antigen and cytokines. This is the pathway that predominates in the early post-transplant phase (159). Distinct from this, restoration of thymopoiesis from hematopoietic progenitors occurs in parallel but at a much slower rate. This thymus-dependent pathway utilizes progenitors transferred within the graft, or those derived from donor hematopoietic stem cells, to produce naïve T-cells with a greater TCR repertoire, thus promoting longevity of T-cell reconstitution with increased functionality (81, 160).

Within a lymphopenic environment, homeostatic proliferation of naïve T-cells occurs, during which they acquire the phenotype and functional properties of effector T-cells (161, 162). Interaction of cells with peptide bound to MHC molecules and with IL-7 is essential to the survival and expansion of naïve CD4+ and CD8+ T-cell populations (159, 163). With lymphopenia there is reduced competition for APCs and cytokines, allowing greater expansion to occur.

Reliance on the thymus-independent pathway alone for T-cell reconstitution can result in a reduced TCR repertoire that is skewed and oligoclonal (147). Restoration of thymopoiesis is required for the formation of de novo naïve T-cells following HSCT. This is necessary to restore a complete and long-lasting T-cell compartment capable of responding to a wide range of antigens (147, 159, 164). Thymopoiesis is the process through which progenitors derived from HSCs in the bone marrow (or, in the context of transplant, infused with the graft) proliferate and mature within the thymus and become committed to the T lineage. It is in this manner that TCR specificity is generated (159). One technique to assess re-establishment of thymopoiesis is detection of T-cell receptor excision circles (TRECs), which are extra-chromosomal fragments of DNA generated as by-products of the TCR gene rearrangement process within the thymus and can be used to identify recent thymic emigrants (RTEs) (164). They exist stably within the cytoplasm but are not replicated during cell division, which means they can be measured to quantify the functionality of the thymus (159).
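Because TREC content must be normalized to cell input, results are usually reported as TREC copies per 10^6 cells. The snippet below is a minimal, illustrative sketch of that normalization, assuming standard-curve qPCR copy-number estimates for the TREC amplicon and for a single-copy reference gene carried at two alleles per diploid genome (e.g., albumin); the function name and all numbers are invented for illustration, and this is not the assay protocol of any study cited here.

```python
# Hypothetical TREC normalization; copy numbers would come from qPCR
# against standard curves. Reference gene assumed at 2 copies per cell.
def trecs_per_million_cells(trec_copies: float, ref_gene_copies: float) -> float:
    cells_assayed = ref_gene_copies / 2.0  # two reference alleles per cell
    return (trec_copies / cells_assayed) * 1e6

# Illustrative values only: 150 TREC copies against 40,000 reference
# copies (~20,000 cells) gives 7,500 TRECs per 10^6 cells.
print(trecs_per_million_cells(trec_copies=150, ref_gene_copies=40_000))
```

Because TRECs are diluted out by peripheral division, a rising value of this normalized readout over serial samples is what indicates recovering thymic output rather than homeostatic expansion.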
T-cell reconstitution after cord blood transplantation

Cord blood T-cell reconstitution is CD4+ biased, which differs from BM and PBSC cell sources, in which CD8+ T-cells predominate (135, 165-170). Cord blood transplant without T-cell depleting serotherapy has been shown to result in rapid CD4+ reconstitution in both adults and children (135, 165, 171). This is initially attained through the thymus-independent pathway, based on the low number of detectable TRECs, and may be secondary to the increased proliferative capabilities of cord blood CD4+ T-cells in lymphopenic environments (135). Successful CD8+ T-cell reconstitution can be slower in cord blood transplant recipients than with BM and PBSCs (166, 171-173). The most prevalent early reconstituting CD4+ and CD8+ cord blood T-cells are effector memory, with an increase in naïve T-cells occurring from 6 to 9 months, suggesting thymus-dependent recovery at this time point (171). Further phenotypic analysis of early reconstituting CD8+ T-cells in CBT suggests significantly greater proportions of highly activated effector memory cells expressing CD38 than in PBSCT recipients (173). Marked CD8+ T-cell expansion can occur with CMV reactivation following CBT (171). A polyclonal TCR repertoire with normal spectratyping has been observed early, in some patients within the first month post-CBT, without T-cell depleting serotherapy (165). Conversely, delayed restoration of TCR diversity has been reported in the ATG setting (81).

There is efficient recovery of thymopoiesis with cord blood transplant in the T-replete setting, with TRECs detectable within the first 3 months post-transplant and reaching normal levels within 6 months (165, 174). This is initially comparable to haploidentical and MSD transplants but is in fact superior in cord blood at 2 years post-transplant, with higher TREC numbers and a greater TCR repertoire indicating efficient regeneration of thymic function with CBT (81, 174). This occurs despite the much lower cell dose contained within cord blood units than BM or PBSC grafts. Higher numbers of lymphomyeloid progenitors (LMPs) contained within cord blood grafts may contribute to this finding, but it may also suggest that cord blood LMPs themselves are superior in their ability to reconstitute thymopoiesis in comparison to those in other graft sources (160). With use of ATG serotherapy, however, CD4+ reconstitution following cord blood transplant is significantly delayed (147, 166, 168, 175, 176). ATG serum concentration modelling during HSCT shows cord blood CD4+ T-cells are particularly susceptible to ATG exposure, even at low levels (62, 63, 177). Further research is warranted to define when polarization into CD4+ T helper cell subsets occurs following CBT and how this may differ from other stem cell sources. Sustained CD4+ reconstitution has been associated with improved OS in children with leukemia, independent of cell source (178). In CBT specifically, CD4+ T-cell reconstitution has been positively associated with improved overall and leukemia-free survival in children and adults (62, 171). Early post-transplant, higher numbers of cytotoxic, effector CD8+ T-cells are also associated with improved OS and reduced non-relapse mortality (40).
NK cell and B-cell reconstitution after cord blood transplantation

NK cells are the first lymphocytes to engraft following CBT, achieving normal levels within the first month (166, 175, 179, 180). In the post-transplant period, T-cell lymphopenia is associated with a compensatory increase in NK cells to above physiological levels (147). NK cell reconstitution occurs more rapidly with CBT than with PBSC or BM cell sources (167, 178, 179, 181). Despite their initially immature phenotype, NK cells can quickly attain their innate cytolytic abilities following CBT, meaning they are crucial in exerting a GVL effect in the early post-transplant period for acute leukemia (116, 167). NK cells in CBT recipients show significantly higher expression of the activation markers CD69 and NKp30 in the first few weeks than in those receiving PBSCs (173).

Significant B-cell recovery starts around 3-4 months after HSCT, reaching normal levels by 6-12 months (168, 182). Adequate immunoglobulin production, however, can take several years to be achieved (183). B-cell recovery after CBT is comparatively enhanced and occurs much earlier than with BMT (166, 167, 178, 184). Similarly to NK cells, B-cell numbers have also been seen to reach higher than physiological levels following CBT, perhaps as a compensatory measure for T-cell lymphopenia (147). Immunoglobulin levels in response to commonly encountered antigens approach normal levels within the first year of CBT (95, 170). In adults, IgM recovery is comparable between dCBT and PBSCT recipients. Normal levels of IgG, however, are reached at 5-6 months following dCBT, which is not seen even at 12 months following PBSCT (185). Higher numbers of naïve B-cells expressing CD127 are seen in CBT recipients (172). Stromal cells contribute to B-cell development, and therefore the greater numbers of mesenchymal stem cells found in cord blood might contribute to this quick recovery and early functionality (96, 186).

Future directions to augment the GVL effect with cord blood transplantation

We have established that T-cells are important mediators of GVL and that CBT is associated with reduced relapse rates, secondary to the superior antitumor properties of cord blood T-cells. The exact mechanism of this is yet to be elucidated, but it may involve a combination of factors: the differing composition of cord blood grafts, with a predominance of naïve T-cells; the innate biology of cord blood T-cells themselves, particularly their ability to proliferate and transform into short-lived effectors; their greater tolerance of HLA-mismatch, allowing this to be utilized in conjunction with omission of T-cell depleting serotherapy in conditioning regimens; as well as the differing pattern of immune reconstitution following CBT, with earlier restoration of thymopoiesis due to greater numbers of LMPs. Therefore, future directions must look at further defining and augmenting the GVL effect of CBT.
Our group previously described significant T-cell expansion driven by granulocyte transfusions in conjunction with CBT (187). This expansion was predominantly of CD8+ T-cells and, importantly, was associated with prolonged remission in patients with acute leukemia. On the basis of these findings, we have conducted an early phase I/II study in which we trialed the use of third-party, pooled granulocyte transfusions in conjunction with CBT to promote CD8+ expansion in children with high-risk myeloid malignancy (93). We have reported significant and reproducible T-cell expansion occurring in association with a cytokine release syndrome in all patients except one with primary graft failure. Expanded T-cells were CD8+ and of effector memory or terminally differentiated effector memory (TEMRA) phenotype, and exhibited canonical markers of activation and cytotoxicity. The phenotype of the expanded T-cells appears to be similar to that of the highly effective tumor-eradicating T-cells from xenograft models (137). In this high-risk cohort of patients with MRD-positive acute leukemia, 90% of patients achieved hematological remission, 80% became MRD-negative and 50% of patients are alive and in disease remission with over one year of median follow-up (93).

Further research into the mechanisms of GVL is necessary to better understand and advance this field. We speculate that cord blood T-cells are biologically distinct from adult T-cells, and that the observed improved GVL without chronic GVHD likely reflects this distinct biology. Understanding this relationship between T-cell biology and clinical outcomes might allow the T-cell responses we have reported to be further improved or re-directed. Specifically identifying the targets of cord blood T-cells, separating this entity from GVHD and understanding the role of HLA-mismatch will mean that T-cell techniques can be honed, potentially improving patient outcomes.

Universal or "off-the-shelf" allogeneic CAR T-cells are also being trialed in numerous studies, in order to overcome the limitations of using autologous CAR T-cells, which rely upon the functionality of the recipient T-cell pool and can have prolonged manufacture time with high costs (188). Cord blood T-cells could be ideal candidates for this technology due to their naïve phenotype, increased replicative capabilities and reduced alloreactivity (188, 189).

The lack of availability of DLI with CBT has previously been seen as a disadvantage, but with newer methods of T-cell separation, cryopreservation and cord unit expansion, cord blood T-cells could theoretically be stored for DLI in the future (190, 191). Improvements in supportive care and utilization of targeted or less toxic conditioning regimens could dramatically reduce the burden of TRM associated with CBT. The advent of new treatments for acute GVHD, which may include cord blood-derived Tregs, and anti-viral drugs such as letermovir will be crucial in these efforts (59, 154).
Conclusion

In conclusion, cord blood transplantation offers an enhanced GVL effect that has been shown to be particularly effective in difficult-to-treat leukemia, without the risk of chronic GVHD. This effect can be further enhanced through amelioration of the effects of TRM and through trials to appropriately risk-stratify patients. Cord blood T-cells are a phenotypically distinct entity from those in adult peripheral blood or bone marrow, but the exact mechanisms through which they elicit superior antitumor effects remain to be elucidated. Further research is required, but cord blood offers exciting opportunities in the field of cellular immunology, and the landscape of CBT is likely to rapidly expand and evolve in the coming years, giving rise to novel and innovative treatment modalities.

Figure 1. Summary of the clinical characteristics of cord blood transplantation for acute leukemia.

Table 1. Summary of acute and chronic GVHD by donor type and cell source.

Table 2. Cord blood T-cell biology differences compared to adult peripheral blood T-cells.
Management of Septic Shock in Children

Paediatric septic shock is a frequently occurring disease condition that is associated with high morbidity and mortality (Watson et al, 2003). Shock is an acute, complex state of circulatory dysfunction resulting in failure to deliver oxygen (DO2) and nutrients to meet metabolic demands (VO2), which are usually increased during shock. If left untreated, multiple organ failure and ultimately death will occur (Smith et al, 2006). This strongly points out the importance of early recognition and aggressive treatment of children with shock. Comparable to adults, such an approach, termed early goal-directed therapy (EGDT), has been shown to significantly reduce mortality in paediatric septic shock. Paediatric studies have pointed out that the risk of death shows a two-fold increase with each hour of delay in the reversal of shock (Carcillo et al, 2009; Han et al, 2003; Inwald et al, 2009; Rivers et al, 2001). Hypovolaemic shock and septic shock are the most common forms of shock in children. Hypovolaemic shock is characterized by a decrease in intravascular blood volume to such an extent that effective tissue perfusion cannot be maintained. In children hypovolaemic shock is mainly caused by fluid and electrolyte loss due to vomiting and diarrhea or by acute haemorrhage. Septic shock is actually a combination of distributive shock (i.e. a decreased total vascular resistance and maldistribution of blood flow in the microcirculation) and relative as well as absolute hypovolaemia. Furthermore, impairment of myocardial function may occur, with symptoms of cardiogenic shock. The great majority of children with septic shock will not initially present in hospitals with paediatric intensive care unit (PICU) facilities. Furthermore, from a pathophysiologic perspective paediatric septic shock does not resemble adult septic shock. This strongly suggests that every physician who may be faced with these children needs to understand how to recognize paediatric shock and have basic knowledge of the principles of primary management. This chapter summarizes the pathophysiology, clinical manifestations and primary management of paediatric septic shock.

Introduction

Paediatric septic shock is a frequently occurring disease condition that is associated with high morbidity and mortality (Watson et al, 2003). Shock is an acute, complex state of circulatory dysfunction resulting in failure to deliver oxygen (DO2) and nutrients to meet metabolic demands (VO2), which are usually increased during shock. If left untreated, multiple organ failure and ultimately death will occur (Smith et al, 2006). This strongly points out the importance of early recognition and aggressive treatment of children with shock. Comparable to adults, such an approach, termed early goal-directed therapy (EGDT), has been shown to significantly reduce mortality in paediatric septic shock. Paediatric studies have pointed out that the risk of death shows a two-fold increase with each hour of delay in the reversal of shock (Carcillo et al, 2009; Han et al, 2003; Inwald et al, 2009; Rivers et al, 2001). Hypovolaemic shock and septic shock are the most common forms of shock in children. Hypovolaemic shock is characterized by a decrease in intravascular blood volume to such an extent that effective tissue perfusion cannot be maintained. In children hypovolaemic shock is mainly caused by fluid and electrolyte loss due to vomiting and diarrhea or by acute haemorrhage. Septic shock is actually a combination of distributive shock (i.e.
a decreased total vascular resistance and maldistribution of blood flow in the microcirculation) and relative as well as absolute hypovolaemia. Furthermore, impairment of myocardial function may occur, with symptoms of cardiogenic shock. The great majority of children with septic shock will not initially present in hospitals with PICU facilities. Furthermore, from a pathophysiologic perspective paediatric septic shock does not resemble adult septic shock. This strongly suggests that every physician who may be faced with these children needs to understand how to recognize paediatric shock and have basic knowledge of the principles of primary management. This chapter summarizes the pathophysiology, clinical manifestations and primary management of paediatric septic shock.

Pathophysiology of paediatric shock

The balance between DO2 and VO2 is the key factor in the pathophysiology of shock. DO2 is, also in children, determined by the cardiac output (CO) and the arterial oxygen content (CaO2) according to the formula

DO2 = CO × [(haemoglobin × 1.36 × SaO2) + (0.003 × PaO2)]   (1)

The CO is determined by the heart rate (HR) and stroke volume (SV); the latter is determined by the pre-load, afterload and contractility of the heart. The VO2 is increased in septic shock. Hence, the body will try to compensate for this by increasing the DO2 through various mechanisms, including increasing the HR and the venous vascular tone to optimize cardiac pre-load. Tachycardia is one of the earliest compensatory mechanisms. If this compensation is inadequate to meet cellular oxygen demands, the systemic vascular resistance (SVR) will be increased, allowing perfusion of vital organs such as the heart and brain. In addition, oxygen extraction will be increased. Of importance, children are able to maintain a normal blood pressure during this stage. This phase of shock is termed compensated shock. Oxygen debt will occur if these mechanisms fail and the shock is not reversed. Under normal conditions oxygen debt will occur when the DO2 : VO2 ratio is 3 : 1. However, as a result of the increase in VO2 during septic shock, oxygen debt will already occur at a DO2 : VO2 ratio of 2 : 1. Microvascular perfusion becomes marginal and cellular function deteriorates, affecting all organ systems (uncompensated shock). If not adequately managed, irreversible shock will occur. Vital organs will be damaged to such an extent that death is inevitable.

There are considerable differences in the pathophysiology of septic shock between children and adults. Vasomotor paralysis is the predominant cause of mortality in adults (Parker et al, 1987). Myocardial dysfunction in adult septic shock manifests mainly as a decreased ejection fraction with either normal or increased CO. This is because adults are capable of increasing their CO by tachycardia in combination with ventricular dilatation allowing an increase in SV (Parker et al, 1984). In contrast, paediatric septic shock is mainly characterized by severe hypovolaemia; the decrease in CO, and not SVR, is associated with mortality (Carcillo et al, 2002). This is because especially younger children have higher baseline HRs compared to adults; hence they cannot increase their HR much further without impairing CO. Furthermore, children are not capable of substantially increasing their SV (Feltes et al, 1994). This means that children need to be resuscitated with fluids aggressively. Nevertheless, the haemodynamic response of fluid-resuscitated children is different from that of adults.
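To make formula (1) concrete, the short sketch below works the arithmetic for a hypothetical infant. Every patient value is an assumed example rather than a reference standard, and the factor of 10 is the conventional conversion of CaO2 from mL O2/dL to mL O2/L so that it can be multiplied by CO in L/min.

```python
# Worked example of formula (1); all patient values are illustrative.
def oxygen_delivery_ml_min(co_l_min: float, hb_g_dl: float,
                           sao2: float, pao2_mmhg: float) -> float:
    """DO2 (mL O2/min) = CO x CaO2 x 10."""
    cao2 = (hb_g_dl * 1.36 * sao2) + (0.003 * pao2_mmhg)  # mL O2/dL
    return co_l_min * cao2 * 10

# Assumed infant: CO 1.5 L/min, Hb 12 g/dL, SaO2 0.97, PaO2 90 mmHg
print(round(oxygen_delivery_ml_min(1.5, 12.0, 0.97, 90)))  # ~242 mL O2/min
```

Because the terms multiply, halving the haemoglobin or the CO in this example roughly halves DO2, which illustrates why transfusion, oxygenation and cardiac output are all levers for restoring the DO2 : VO2 balance.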
Ceneviva and co-workers evaluated 50 children with fluid-refractory, dopamine-resistant septic shock (Ceneviva et al, 1998). The majority had low CO in combination with high SVR, but 22% had low CO and low SVR. Furthermore, haemodynamic profiles changed frequently during the first 48 hours. Another interesting difference between children and adults relates to the VO2. The VO2 is mainly determined by oxygen extraction in adults, whereas in children it is mainly determined by the DO2 (Carcillo et al, 1989). This indicates that all efforts must be made to maintain an adequate DO2.

Symptoms of paediatric shock

The early diagnosis of paediatric shock warrants a high index of suspicion and knowledge of disease conditions that predispose children to shock. It is imperative to understand the reference values for vital parameters in children. Early signs of septic shock may be subtle and easily missed. Tachycardia is the earliest presenting symptom. Blood pressure will be normal during compensated shock, but the pulse pressure is widened. Children will have plethora, warm extremities and bounding pulses ("warm shock"). If the shock is not reversed, signs of failure of the compensatory mechanisms can be noted, including cold extremities and a prolonged capillary refill time ("cold shock"). Of note, the capillary refill time has little discriminative value in paediatric shock. Hypovolaemic children may still have a capillary refill time that is within the normal limit (2 seconds). Many children with fever have tachycardia and warm extremities on physical examination. Not all of these children are in shock. For early recognition of shock it is then absolutely necessary to evaluate the mental state of the child. In general, children in shock are lethargic and have decreased consciousness, but the opposite (i.e. agitation, restlessness, anxiety) also occurs. Underlying mechanisms most likely include a combination of cerebral hypoperfusion, metabolic alterations and production of cytotoxic substances. Oxygen debt will occur when the shock is not recognized and thus not treated properly. Clinically, the child suffers from depressed consciousness, poor skin perfusion, decreased urinary output and hyperventilation to compensate for the metabolic acidosis.

The contribution of laboratory tests is limited. In contrast with adult septic shock, blood gases and serum lactate levels are not diagnostic for paediatric shock but may be used for monitoring the effectiveness of treatment (Brierley et al, 2009). Repeated evaluation and monitoring of the patient remains the most effective physiologic monitor.

Management of paediatric shock

The American College of Critical Care Medicine (ACCM) published clinical guidelines for the haemodynamic support of neonates and children with septic shock in 2002 and revised them in 2009 (Brierley et al, 2009; Carcillo et al, 2002). These guidelines advocate, among other measures, early recognition, adequate fluid resuscitation and timely and appropriate antibiotic therapy. Notwithstanding the fact that the efficacy of these guidelines has not been confirmed in a randomized clinical trial, data strongly suggest that adherence to these guidelines results in improved survival (de Oliveira et al, 2008; Dellinger et al, 2008; Han et al, 2003). Han and co-workers evaluated 91 patients with septic shock who were referred to their PICU (Han et al, 2003). Shock reversal within 75 minutes and adherence to the ACCM guidelines were associated with > 90% survival.
Worrisome, however, was that adherence to these guidelines was achieved in < 30% of all patients. A study of 200 children with severe sepsis in the United Kingdom showed a drop from 25% to 6% in mortality when shock was reversed, although only 8% of patients were managed according to the ACCM guidelines (Inwald et al, 2009).

The primary goal of the primary management of paediatric shock is to prevent organ failure caused by oxygen debt, through optimizing and balancing DO2 and VO2. This means that it is important to maintain blood pressure above the critical point below which flow cannot be effectively maintained. Thus, shock should be clinically diagnosed before hypotension occurs. Clinical targets include age-appropriate HR and blood pressure, normalization of the capillary refill time, normal consciousness and adequate urinary output (> 1 mL/kg/hour) (Figure 1) (Brierley et al, 2009). After each intervention it is evaluated whether or not these clinical targets have been achieved.

Recognition and management during the first 15 minutes

Within the first five minutes the child is evaluated according to the Paediatric Advanced Life Support approach, i.e. a structured approach examining Airway, Breathing and Circulation (Figure 1). The diagnosis of septic shock is confirmed when tachycardia, fever and symptoms of inadequate tissue perfusion are present. These symptoms include altered consciousness, as well as a shortened capillary refill time, bounding pulses and widened pulse pressure (in case of "warm shock") or a prolonged capillary refill time, weak pulses, mottled skin and decreased urinary output (in case of "cold shock"). The next step is to administer 100% oxygen via a non-rebreathing mask (flow 10-15 L/min) and to insert two peripheral lines. Blood is drawn for haematological and biochemical studies and blood culture. Subsequently, aggressive fluid resuscitation is mandated (Carcillo et al, 1991). This means that within 15 minutes three fluid boluses of 20 mL/kg (max 500 mL each) are administered. Crystalloid fluids are the first choice. After each bolus the child is evaluated to determine whether the clinical targets have been met. Rapid and sufficient fluid administration is significantly associated with improved survival (Ceneviva et al, 1998). Of importance, antibiotics must be administered within the first 15 minutes. Although not confirmed in paediatric studies, adult data indicate that mortality doubles for each hour of delay in the administration of antibiotic treatment (Kumar et al, 2006). Electrolyte disturbances or hypoglycaemia are corrected in this first phase of primary management.

Management after the first 15 minutes

After 15 minutes it is evaluated whether the clinical targets have been met. If not, the shock is classified as "fluid-resistant". The next step would then be to refer the patient to a PICU facility. The next therapeutic intervention depends upon the haemodynamic profile of the child. If the child has a haemodynamic profile that is compatible with "cold shock", fluid administration is continued and dopamine 10 microgram/kg/minute is started via a peripheral line while, in the meantime, a central venous line is inserted. If the child has a haemodynamic profile that is compatible with "warm shock", fluid administration is continued and norepinephrine 0.1 microgram/kg/minute is started. Fluid administration is continued until the liver becomes palpably enlarged or crackles are noted on pulmonary auscultation.
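As an illustration of the time-critical steps just described, the sketch below encodes the weight-based bolus arithmetic (20 mL/kg per bolus, capped at 500 mL, up to three boluses in 15 minutes) and the cold/warm branch taken at reassessment. The function names and dictionary layout are our own didactic constructs, not part of any guideline; the quoted drug doses are those given in the text, and this is not a clinical decision tool.

```python
# Didactic sketch only; parameters taken from the protocol described above.
def crystalloid_bolus_ml(weight_kg: float) -> int:
    """Single bolus: 20 mL/kg, capped at 500 mL."""
    return min(round(20 * weight_kg), 500)

def first_15_minutes(weight_kg: float) -> dict:
    return {
        "oxygen": "100% via non-rebreathing mask, 10-15 L/min",
        "access": "two peripheral IV lines; bloods + blood culture",
        "fluid_boluses_ml": [crystalloid_bolus_ml(weight_kg)] * 3,
        "antibiotics": "within the first 15 minutes",
    }

def after_15_minutes(profile: str) -> str:
    # Reassess clinical targets; if unmet, shock is 'fluid-resistant'.
    if profile == "cold":
        return "continue fluids + dopamine 10 microgram/kg/min (peripheral)"
    if profile == "warm":
        return "continue fluids + norepinephrine 0.1 microgram/kg/min"
    return "targets met: continue monitoring"

print(first_15_minutes(14))        # 14 kg child -> three 280 mL boluses
print(after_15_minutes("cold"))
```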
Nevertheless, cumulative fluid administration of up to 200 mL/kg may be necessary (Maar, 2004). Fluid administration should not be discontinued because of the assumed possible development of pulmonary oedema, acute respiratory distress syndrome (ARDS) or cerebral oedema (Brierley et al, 2009). As an alternative to crystalloids, colloids such as albumin may be considered at this stage (Boluyt et al, 2006). Also, endotracheal intubation and initiation of mechanical ventilation should be strongly considered in order to optimize DO2. As discussed, in paediatric shock VO2 is dependent upon DO2. Furthermore, especially small children have a small functional residual capacity (FRC) that is easily compromised by pulmonary capillary leakage or when the child becomes fatigued. Also, VO2 may rise by 15-30% due to the increased work of breathing during septic shock (Butt, 2001; Carcillo et al, 1989). Last, but not less important, sedation and mechanical ventilation may be needed to facilitate invasive procedures such as insertion of central lines. Also, the increased intrathoracic pressure reduces left ventricular afterload, which may be beneficial when there is a low CI/high SVR state. Nevertheless, early intubation may still be a subject of debate. One of the arguments often used is the vasodilatory effect of agents used for induction. This effect may further compromise DO2 in the septic child. We advocate the use of ketamine as the induction agent (Yamamoto, 2000). Ketamine is a centrally acting N-methyl-D-aspartate (NMDA) receptor antagonist allowing cardiovascular stability. We would also advocate refraining from the use of etomidate because of its negative effects on adrenal gland function (Brierley et al, 2009).

The use of corticosteroids and sodium bicarbonate during the first hour of primary management of paediatric shock is also the subject of heavy scientific debate. Corticosteroids are definitely indicated for children with purpura fulminans, or for children with a recent history of prolonged corticosteroid use or proven abnormalities in the hypothalamic-pituitary-adrenal gland axis (Langer et al, 2006). In addition, the use of corticosteroids may be considered when children do not respond to infusion of vaso-active drugs ("catecholamine-resistant shock") (Brierley et al, 2009). Sodium bicarbonate is usually administered to correct metabolic acidosis, as it is presumed that vaso-active drugs function less well in an acidic environment (Tabbutt, 2001). However, the metabolic acidosis is caused by insufficient tissue perfusion. This indicates that it is necessary to optimize tissue perfusion rather than to correct the acidosis with sodium bicarbonate (Dellinger et al, 2008). Also, two studies performed in critically ill adults with septic shock and pH ≥ 7.15 have shown no beneficial effect on haemodynamic variables when patients were treated with sodium bicarbonate (Boyd et al, 2008).

It is unclear whether optimizing haemoglobin (Hb) levels through transfusion of red blood cells (RBC) is beneficial. One group of investigators could not confirm a beneficial effect on VO2 despite optimization of CaO2 in paediatric shock (Mink et al, 1990). Nevertheless, it is currently recommended to maintain Hb > 10 g/dL (Brierley et al, 2009). Fresh Frozen Plasma (FFP) is indicated for active haemorrhage or a prolonged activated partial thromboplastin time (APTT); in clinical practice usually twice the age-dependent reference value (Brierley et al, 2009).
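The transfusion thresholds quoted above lend themselves to a simple check. The helper below is an illustrative sketch only: the Hb target and the APTT rule come from the text, but the function name is our own, and the age-dependent APTT reference must be supplied by the caller because no specific reference values are given here.

```python
# Illustrative only; thresholds taken from the text (Hb > 10 g/dL target,
# FFP for active haemorrhage or APTT > ~2x the age-dependent reference).
def blood_product_advice(hb_g_dl: float, aptt_s: float,
                         aptt_ref_s: float, active_bleeding: bool = False):
    advice = []
    if hb_g_dl <= 10.0:
        advice.append("consider RBC transfusion to maintain Hb > 10 g/dL")
    if active_bleeding or aptt_s > 2 * aptt_ref_s:
        advice.append("consider FFP")
    return advice or ["no transfusion trigger met"]

print(blood_product_advice(hb_g_dl=8.4, aptt_s=75.0, aptt_ref_s=30.0))
```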
Management after the first hour

One hour after presentation it is determined whether or not the shock has been reversed. If not, the patient is recognized as having fluid-refractory, dopamine-resistant shock. The patient is managed in the PICU. Treatment goals in this phase are similar to those of the golden hour (i.e. age-appropriate HR and blood pressure, normalisation of the capillary refill time, normal consciousness and urinary output > 1 mL/kg/hour), but now also include maintenance of an age-appropriate perfusion pressure (mean arterial pressure minus central venous pressure), a cardiac index (CI) between 3.3 and 6.0 L/min/m2, a central venous oxygen saturation (SvO2) > 70%, a normal anion gap and normal lactate. Fluid replacement should be continued and directed at these endpoints. The type of haemodynamic support depends upon the haemodynamic profile of the child (i.e. low CO/high SVR, high CO/low SVR, or low CO/low SVR) (Figure 2). It therefore seems rational to use haemodynamic monitoring devices such as pulse contour analysis or Doppler ultrasound to assess the haemodynamic profile, especially since the profile may change frequently. Irrespective of the haemodynamic profile, support should be targeted at a CI between 3.3 and 6.0 L/min/m2. Pollack and co-workers have shown that a CI within this range was associated with the best outcome in paediatric shock (Pollack et al, 1985). Also, SvO2 should be maintained > 70%. The SvO2 can be used as a surrogate marker of the CO. Oliveira and co-workers randomized 102 children with septic shock to be managed using the ACCM guidelines with or without monitoring of the SvO2 (de Oliveira et al, 2008). Their SvO2 goal-directed therapy resulted in lower mortality (28-day mortality 11.8% vs. 39.2%, p = 0.002) and fewer new organ dysfunctions (p = 0.03). However, this strategy was associated with more crystalloid (28 (20-40) vs. 5 (0-20) mL/kg, p < 0.0001), blood transfusion (45.1% vs. 15.7%, p = 0.002) and inotropic (29.4% vs. 7.8%, p = 0.01) support in the first 6 hours of admission. For patients with low CI, normal blood pressure and high SVR (i.e. "cold shock" with normal blood pressure), it is recommended to reduce ventricular afterload. This can be achieved using either epinephrine or dobutamine. Some have argued for additionally using a short-acting vasodilator such as nitroprusside or nitroglycerin to recruit the microcirculation. Alternatively, the use of type III phosphodiesterase inhibitors such as milrinone may be considered (Barton et al, 1996). These agents have a synergistic effect with beta-adrenergic agents because they increase intracellular cyclic adenosine monophosphate. For patients with low CI, low blood pressure and low SVR (i.e. "cold shock" with low blood pressure), it is recommended to titrate vasopressor therapy. In general, dopamine is the first-line vasopressor therapy. At high infusion rates, the alpha-adrenergic effects of dopamine predominate. Alternatively, norepinephrine or high-dosage epinephrine may be considered. Once adequate blood pressure is achieved, a vasodilator can be added to improve the SvO2 by recruiting the microcirculation. Finally, patients with persisting high CI and low SVR despite fluid administration and norepinephrine may benefit from agents such as vasopressin or phenylephrine. Of importance, CO may be reduced when these agents are used, so close monitoring of the CO and/or SvO2 is mandated.
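As an illustration of the goal-directed endpoints just listed, the sketch below checks one set of measurements against the targets quoted above (CI 3.3-6.0 L/min/m2, SvO2 > 70%, perfusion pressure computed as mean arterial pressure minus central venous pressure). The function names and the 55 mmHg perfusion-pressure threshold in the example are illustrative placeholders, not guideline values; age-appropriate thresholds must come from the guidelines themselves.

def perfusion_pressure(map_mmhg: float, cvp_mmhg: float) -> float:
    """Perfusion pressure = mean arterial pressure minus central venous pressure."""
    return map_mmhg - cvp_mmhg

def targets_met(ci: float, svo2_pct: float, map_mmhg: float,
                cvp_mmhg: float, min_perfusion_mmhg: float) -> dict:
    """Return a per-endpoint pass/fail summary for one assessment."""
    return {
        "cardiac_index": 3.3 <= ci <= 6.0,
        "svo2": svo2_pct > 70.0,
        "perfusion_pressure": perfusion_pressure(map_mmhg, cvp_mmhg) >= min_perfusion_mmhg,
    }

# Example assessment of a child who has not yet reached any of the endpoints.
print(targets_met(ci=2.9, svo2_pct=65.0, map_mmhg=60.0,
                  cvp_mmhg=8.0, min_perfusion_mmhg=55.0))
# -> {'cardiac_index': False, 'svo2': False, 'perfusion_pressure': False}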
When the shock cannot be reversed and co-morbidities that fuel the shock (such as pericardial effusion, pneumothorax, hypoadrenalism, ongoing blood loss, increased intra-abdominal pressure, or necrotic tissue) have been excluded, high-flow veno-arterial extra-corporeal membrane oxygenation (ECMO) or high-flux continuous renal replacement therapy (CRRT) with flows > 35 mL/kg/hour may be considered. Yet, these modalities may be qualified as a last resort, and their effects on final outcome remain to be established.

Conclusion

Early recognition and aggressive primary management of paediatric septic shock are significantly associated with improved patient survival. Tachycardia, fever and altered consciousness are the first clinical manifestations of paediatric septic shock. Primary management includes aggressive fluid resuscitation, adequate oxygen delivery through early intubation and mechanical ventilation, and early referral to a paediatric intensive care unit. Future research should be directed towards obtaining stronger scientific evidence to confirm the components of the ACCM guidelines.
Neuroimaging supports the representational nature of the earliest human engravings

The earliest human graphic productions, consisting of abstract patterns engraved on a variety of media, date to the Lower and Middle Palaeolithic. They are associated with anatomically modern and archaic hominins. The nature and significance of these engravings are still under question. To address this issue, we used functional magnetic resonance imaging to compare brain activations triggered by the perception of engraved patterns dating between 540 000 and 30 000 years before the present with those elicited by the perception of scenes, objects, symbol-like characters and written words. The perception of the engravings bilaterally activated regions along the ventral route in a pattern similar to that activated by the perception of objects, suggesting that these graphic productions are processed as organized visual representations in the brain. Moreover, the perception of the engravings led to a leftward activation of the visual word form area. These results support the hypothesis that these engravings have the visual properties of meaningful representations in present-day humans, and could have served such purpose in early modern humans and archaic hominins.

It would be useful to provide images of what these look like - they could be very different visually, and if they are not visually standardized like the other stimuli, this variation would need to be addressed in the discussion. Thirdly, I think readers should be told more explicitly about the importance of this study, since it's really a first-time study (wow!!) and it makes a big contribution to 2 disciplines. The authors are too modest and they should make more of that! On page 9, I might have missed it, but what statistical test was used to compute the first correlation "of their activation profiles in 10 regions, compared to their scrambled version, was computed in each hemisphere separately for each subject."? Also on page 9, "Then, the resulting Pearson's correlation coefficients of each subject were Fisher z-transformed and analysed using a univariate t-test that was significant in both the left (t(25) = 7.6, p < 0.0001) and right (t(25) = 4.8, p < 0.0001) hemispheres. This finding confirms that both Engravings and object perception recruit the ventral pathway in a similar way in both hemispheres." Where are these data, please? I perhaps missed them in the text?

METHODS
Several elements of detail are missing from the Methods. On page 13, "The pictures of objects (256 * 256 pixels) were represented by/consisted of nameable human-made artefacts". Were these pictures taken from a database or created? How was their nameability validated? On page 15, "Each of the 15 stimuli within a block was displayed for 300 ms, including two repetitions, and the participants were asked to detect the stimuli." How did participants respond? By button press? Was their accuracy of response recorded?

Abstract - hominins is twice misspelled (with M).

Are the interpretations and conclusions justified by the results? No
Is the language acceptable? Yes
Do you have any ethical concerns with this paper? No
Have you any concerns about statistical analyses in this paper? I do not feel qualified to assess the statistics
Recommendation? Major revision is needed (please make suggestions in comments)

Comments to the Author(s)
Mellet and colleagues conducted a study to test whether Palaeolithic engravings elicit representations in the visual pathway similar to those elicited by objects, and this similarity was indeed found.
There is only one engraving shown in the paper (there should be more shown), and based on this one example these engravings look like complex shapes. Therefore, I don't see any reason why they would not elicit responses in visual regions, even the ones higher in the hierarchy, as complex shapes have been shown to elicit responses in these regions. Therefore, I find this question not very novel. I could understand an added value of testing the actual engravings, but I can't even evaluate them as they are not shown in the manuscript or in supplementary material. In general, there is not enough information in the manuscript to fully evaluate it, and the unreadability of some of the figures makes it hard to evaluate the findings. I see several issues with this manuscript that I would invite the authors to address.
1. As previously mentioned, the stimuli of the engravings are not shown, and that seems to be crucial for understanding the value of this paper. Other stimuli classes are also not shown, and the authors do not state what categories the objects were sampled from and what words were used. All this information is important for this paper.
2. Why is the stimulus presentation time not the same for objects, scenes and words as for engravings (300 ms vs 200 ms)? If they are being directly compared, as few parameters as possible should differ between the conditions.
3. It is not enough to say O3-1 and O3-2 correspond to LO. To make this claim the authors should show an overlap of these regions using an LO mask, for example from the Wang atlas.
4. Different ROIs should be discussed more. It is not enough to say that 10 ROIs elicit a given response pattern. It is important to discuss more what these regions are implicated in and why the result makes sense. The authors try this procedure for LO; however, even there they discuss only a part of the picture. The authors say that "LO is sensitive to the shape, but not the semantics". There is a body of literature that claims otherwise, and this should also be discussed.
5. The authors partly acknowledge that they can't claim that these engravings elicited similar patterns of responses as objects in early humans. However, this should be stressed more. The fact that the visual cortex did not expand that much during evolution does not mean that the visual representations in the cortex of early humans and of people nowadays were the same. A potentially stronger argument could be that the visual representations in humans and macaque monkeys are similar and therefore it is likely that the visual representations of early humans were similar; however, we can't explicitly test that.
6. The resolution of the figures is not acceptable in the paper. I can't even read the values on the y-axis of Figure 2 and therefore cannot comment on the results.
7. Labels should be added in panels B and D of Figure 3, as otherwise the dots are not readable.
Minor comments:
1. Table 1 could be presented in a color-coded way to enable rapid detection of significant values, e.g., significant values colored in green.
2. The header should be "funding" not "fundings", as the latter word does not exist in the English language.

Decision letter (RSOS-190086.R0)

24-Apr-2019

Dear Dr Mellet,

The editors assigned to your paper ("Neuroimaging supports the representational nature of the earliest human engravings") have now received comments from reviewers.
We would like you to revise your paper in accordance with the referee and Associate Editor suggestions, which can be found below (not including confidential reports to the Editor). Please note this decision does not guarantee eventual acceptance. Please submit a copy of your revised paper before 17-May-2019. Please note that the revision deadline will expire at 00.00am on this date. If we do not hear from you within this time then it will be assumed that the paper has been withdrawn. In exceptional circumstances, extensions may be possible if agreed with the Editorial Office in advance. We do not allow multiple rounds of revision so we urge you to make every effort to fully address all of the comments at this stage. If deemed necessary by the Editors, your manuscript will be sent back to one or more of the original reviewers for assessment. If the original reviewers are not available, we may invite new reviewers.

To revise your manuscript, log into http://mc.manuscriptcentral.com/rsos and enter your Author Centre, where you will find your manuscript title listed under "Manuscripts with Decisions." Under "Actions," click on "Create a Revision." Your manuscript number has been appended to denote a revision. Revise your manuscript and upload a new version through your Author Centre.

When submitting your revised manuscript, you must respond to the comments made by the referees and upload a file "Response to Referees" in "Section 6 - File Upload". Please use this to document how you have responded to the comments, and the adjustments you have made. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response.

In addition to addressing all of the reviewers' and editor's comments please also ensure that your revised manuscript contains the following sections as appropriate before the reference list:

• Ethics statement (if applicable)
If your study uses humans or animals please include details of the ethical approval received, including the name of the committee that granted approval. For human studies please also detail whether informed consent was obtained. For field studies on animals please include details of all permissions, licences and/or approvals granted to carry out the fieldwork.

• Data accessibility
It is a condition of publication that all supporting data are made available either as supplementary information or preferably in a suitable permanent repository. The data accessibility section should state where the article's supporting data can be accessed. This section should also include details, where possible, of where other relevant research materials such as statistical tools, protocols, software etc. can be accessed. If the data have been deposited in an external repository this section should list the database, accession number and link to the DOI for all data from the article that have been made publicly available. Data sets that have been deposited in an external repository and have a DOI should also be appropriately cited in the manuscript and included in the reference list.

If you wish to submit your supporting data or code to Dryad (http://datadryad.org/), or modify your current submission to Dryad, please use the following link: http://datadryad.org/submit?journalID=RSOS&manu=RSOS-190086

• Competing interests
Please declare any financial or non-financial competing interests, or state that you have no competing interests.
• Authors' contributions
All submissions, other than those with a single author, must include an Authors' Contributions section which individually lists the specific contribution of each author. The list of authors should meet all of the following criteria: 1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published. All contributors who do not meet all of these criteria should be included in the acknowledgements. We suggest the following format: AB carried out the molecular lab work, participated in data analysis, carried out sequence alignments, participated in the design of the study and drafted the manuscript; CD carried out the statistical analyses; EF collected field data; GH conceived of the study, designed the study, coordinated the study and helped draft the manuscript. All authors gave final approval for publication.

• Acknowledgements
Please acknowledge anyone who contributed to the study but did not meet the authorship criteria.

• Funding statement
Please list the source of funding for each author.

Once again, thank you for submitting your manuscript to Royal Society Open Science and I look forward to receiving your revision. If you have any questions at all, please do not hesitate to get in touch.

Reviewer: 1
Comments to the Author(s)
This paper presents a very original study - which I believe is the first of its kind - to study the brain activation of people viewing prehistoric engravings. The authors found similarity between how the brain views engravings and pictures of nameable objects. The topic is very innovative and, as such, it does not have any precedents to follow. Therefore, the approach had to be created by the authors. While they combine archaeological and neuroscience backgrounds and use methods from both disciplines, the combination is new, and thus the way they created the stimuli for the experiments has no comparison in previous literature. The methodology for neuroimaging follows standard protocols (for example, ones I've used in my previous work), so I believe the experimental design is sound. The choice of stimuli using a scrambled version of each stimulus category is excellent.

In the introduction (page 3), the authors state "If the abstract patterns intentionally engraved by hominins were perceived as structured forms, with a potential meaning, their perception should engage the ventral route cortex". Here I wondered what's the opposite hypothesis? Would it be that the ventral route is NOT engaged? Or would another network be more active?

The first major thing I think is missing in this paper is some discussion about symbolism in hominins. As these results have great implications for the topic, there should be more discussion about it. At least readers should be given a brief introduction to the debates, with some key references. It would be a shame to exclude it because it's what makes this paper so original. It is crucial to define "symbols or icons", a term that's used several times in the paper.

My second main point is that I feel the conclusion is too strong given the lack of discussion about it. "Our findings support the hypothesis that these engraved patterns were used by human cultures of the past to store and transmit coded information" (page 12-13)...... "We conclude that they were probably used as icons or symbols by both modern and archaic hominins."
This paper doesn't contain any discussion of all the big debates around the topic of symbolic capacity in hominins (of which the last author is an expert); therefore the authors cannot make this kind of strong conclusion. The fMRI results themselves (which is all that's discussed in the paper) give only one element of data for this hypothesis. Instead, I suggest rewording as "Our fMRI results lend an element of support to the hypothesis that these engravings could have been used as icons or symbols by both modern and archaic hominins, as was previously suggested by other work (+ cite several references)". Hominin symbolism papers should be mentioned in order to give readers enough background to appreciate the paper, including readers from both archaeology and neuroscience, whether they are familiar with the topic or not.

This issue is related to the archaeological engravings that were chosen, as they are listed in Table S1 - the prehistoric engravings which were used as stimuli contain an extremely wide range of types, eras, styles, geographic origins.... What makes us think we can judge them all the same? Maybe some were used as symbols or some were not. It would be useful to provide images of what these look like - they could be very different visually, and if they are not visually standardized like the other stimuli, this variation would need to be addressed in the discussion.

Thirdly, I think readers should be told more explicitly about the importance of this study, since it's really a first-time study (wow!!) and it makes a big contribution to 2 disciplines. The authors are too modest and they should make more of that!

On page 9, I might have missed it, but what statistical test was used to compute the first correlation "of their activation profiles in 10 regions, compared to their scrambled version, was computed in each hemisphere separately for each subject."?

Also on page 9, "Then, the resulting Pearson's correlation coefficients of each subject were Fisher z-transformed and analysed using a univariate t-test that was significant in both the left (t(25) = 7.6, p < 0.0001) and right (t(25) = 4.8, p < 0.0001) hemispheres. This finding confirms that both Engravings and object perception recruit the ventral pathway in a similar way in both hemispheres." Where are these data, please? I perhaps missed them in the text?

METHODS
Several elements of detail are missing from the Methods. On page 13, "The pictures of objects (256 * 256 pixels) were represented by/consisted of nameable human-made artefacts". Were these pictures taken from a database or created? How was their nameability validated? On page 15, "Each of the 15 stimuli within a block was displayed for 300 ms, including two repetitions, and the participants were asked to detect the stimuli." How did participants respond? By button press? Was their accuracy of response recorded?

Abstract - hominins is twice misspelled (with M).

Reviewer: 2
Comments to the Author(s)
Mellet and colleagues conducted a study to test whether Palaeolithic engravings elicit representations in the visual pathway similar to those elicited by objects, and this similarity was indeed found. There is only one engraving shown in the paper (there should be more shown), and based on this one example these engravings look like complex shapes. Therefore, I don't see any reason why they would not elicit responses in visual regions, even the ones higher in the hierarchy, as complex shapes have been shown to elicit responses in these regions. Therefore, I find this question not very novel.
I could understand an added value of testing the actual engravings, but I can't even evaluate them as they are not shown in the manuscript or in supplementary material. In general, there is not enough information in the manuscript to fully evaluate it, and the unreadability of some of the figures makes it hard to evaluate the findings. I see several issues with this manuscript that I would invite the authors to address.
1. As previously mentioned, the stimuli of the engravings are not shown, and that seems to be crucial for understanding the value of this paper. Other stimuli classes are also not shown, and the authors do not state what categories the objects were sampled from and what words were used. All this information is important for this paper.
2. Why is the stimulus presentation time not the same for objects, scenes and words as for engravings (300 ms vs 200 ms)? If they are being directly compared, as few parameters as possible should differ between the conditions.
3. It is not enough to say O3-1 and O3-2 correspond to LO. To make this claim the authors should show an overlap of these regions using an LO mask, for example from the Wang atlas.
4. Different ROIs should be discussed more. It is not enough to say that 10 ROIs elicit a given response pattern. It is important to discuss more what these regions are implicated in and why the result makes sense. The authors try this procedure for LO; however, even there they discuss only a part of the picture. The authors say that "LO is sensitive to the shape, but not the semantics". There is a body of literature that claims otherwise, and this should also be discussed.
5. The authors partly acknowledge that they can't claim that these engravings elicited similar patterns of responses as objects in early humans. However, this should be stressed more. The fact that the visual cortex did not expand that much during evolution does not mean that the visual representations in the cortex of early humans and of people nowadays were the same. A potentially stronger argument could be that the visual representations in humans and macaque monkeys are similar and therefore it is likely that the visual representations of early humans were similar; however, we can't explicitly test that.
6. The resolution of the figures is not acceptable in the paper. I can't even read the values on the y-axis of Figure 2 and therefore cannot comment on the results.
7. Labels should be added in panels B and D of Figure 3, as otherwise the dots are not readable.
Minor comments:
1. Table 1 could be presented in a color-coded way to enable rapid detection of significant values, e.g., significant values colored in green.
2. The header should be "funding" not "fundings", as the latter word does not exist in the English language.

Author's Response to Decision Letter for (RSOS-190086.R0)
See Appendix A.

Recommendation? Accept as is
Comments to the Author(s)
I am satisfied that the authors made good edits in response to the reviewers' comments. I have no further suggestions.

Comments to the Author(s)
The authors have addressed my comments.

04-Jun-2019

Dear Dr Mellet,

I am pleased to inform you that your manuscript entitled "Neuroimaging supports the representational nature of the earliest human engravings" is now accepted for publication in Royal Society Open Science. You can expect to receive a proof of your article in the near future. Please contact the editorial office (openscience_proofs@royalsociety.org and openscience@royalsociety.org) to let us know if you are likely to be away from e-mail contact.
Due to rapid publication and an extremely tight schedule, if comments are not received, your paper may experience a delay in publication. Royal Society Open Science operates under a continuous publication model (http://bit.ly/cpFAQ). Your article will be published straight into the next open issue and this will be the final version of the paper. As such, it can be cited immediately by other researchers. As the issue version of your paper will be the only version to be published, I would advise you to check your proofs thoroughly, as changes cannot be made once the paper is published.

Appendix A

Comment: This paper doesn't contain any discussion of all the big debates around the topic of symbolic capacity in hominins (of which the last author is an expert); therefore the authors cannot make this kind of strong conclusion. The fMRI results themselves (which is all that's discussed in the paper) give only one element of data for this hypothesis. Instead, I suggest rewording as "Our fMRI results lend an element of support to the hypothesis that these engravings could have been used as icons or symbols by both modern and archaic hominins, as was previously suggested by other work (+ cite several references)". Hominin symbolism papers should be mentioned in order to give readers enough background to appreciate the paper, including readers from both archaeology and neuroscience, whether they are familiar with the topic or not.

Answer: We have focused the discussion on the fMRI results in order to limit over-interpretations and avoid reaching conclusions based on controversial hypotheses proposed by researchers working in other disciplines rather than on our own results. The state of the debate and the ambiguities implicit in the interpretation of the archaeological record are now presented in the introduction section, and we think there is no point in reinjecting them again in the discussion. However, we have softened the conclusion, as suggested by the reviewer. The sentence now reads: "Although our results do not allow us to reach definitive conclusions on the nature of these representations, they support for the first time with experimental data the hypothesis that they have been used as icons or symbols by both early modern and archaic hominins, as suggested in previous works (d'Errico, 2003; Henshilwood et al., 2009; Rodríguez-Vidal et al., 2014; Villa and Roebroeks, 2014; Majkic et al., 2017; Majkić et al., 2018)".

Comment: This issue is related to the archaeological engravings that were chosen, as they are listed in Table S1 - the prehistoric engravings which were used as stimuli contain an extremely wide range of types, eras, styles, geographic origins.... What makes us think we can judge them all the same? Maybe some were used as symbols or some were not. It would be useful to provide images of what these look like - they could be very different visually, and if they are not visually standardized like the other stimuli, this variation would need to be addressed in the discussion.

Answer: The images of all the engravings are now presented in the supplementary material.

Comment: Thirdly, I think readers should be told more explicitly about the importance of this study, since it's really a first-time study (wow!!) and it makes a big contribution to 2 disciplines. The authors are too modest and they should make more of that!

Answer: We have added a sentence in the introduction section in which we underline the novelty of the study.
"We report here the first attempt to shed light on the function of Paleolithic engravings by mapping the brain regions involved in their perception". Comment: On page 9, I might have missed it, but what statistical test was used to compute the first correlation "of their activation profiles in 10 regions, compared to their scrambled version, was computed in each hemisphere separately for each subject."? Answer: In that section, we raise the question as to whether there a significant relationship between the profile of activation during engravings and objects perception when individual variability is considered. We first computed a Pearson's correlation between these two profiles of activation in each subject. This produced 26 x 2 hemispheres Pearson's r coefficients (one per participant and per hemisphere). No statistical test were performed at this stage since the distribution of r coefficient does not allow to use a parametric test (see below). Comment: Also on page 9, "Then, the resulting Pearson's correlation coefficients of each subject were Fisher ztransformed and analysed using a univariate t-test that was significant in both the left (t(25) = 7.6, p < 0.0001) and right (t(25) = 4.8, p < 0.0001) hemispheres. This finding confirms that both Engravings and object perception recruit the ventral pathway in a similar way in both hemispheres." Where are these data, please? I perhaps missed them in the text? even evaluate them as they are not shown in the manuscript or in supplementary material. In general, there is not enough information in the manuscript to fully evaluate it and the unreadability of some of the figures makes it hard to evaluate the findings. Response: To be precise, the goal of our research was not that of contrasting the perception of Paleolithic engravings and objects. It involved contrasting the earliest engravings with four visual categories, including objects, and their scrambled version, in order to evaluate what areas were more elicited by the earliest engravings, which present different degrees of complexity. Although we concur with this reviewer that involvement of areas higher in the hierarchy was a reasonable expectation, at least for the more complex engravings, the only available and highly widespread theory before our research to explain the emergence of this behaviour in the evolution of our genus (Hodgson, 2006(Hodgson, , 2014 was predicting that the emergence and perception of these engravings exclusively involved the primary visual cortex. In the revised version of the manuscript, we explain previous hypotheses in more detail and present the images of all objects and the tracings of all the engravings included in the experiments. The resolution of the figures has also been improved. As a result of these changes, the novelty of the study is now more apparent and the dataset more explicit. Testing the photos of the actual objects would have introduced a bias in the research since engravings occur on media of different colour, texture and state of preservation. I see several issues with this manuscript that I would invite the authors to address. Comment: As previously mentioned the stimuli of the engravings are not shown and that seems to be crucial for understanding the value of this paper. Other stimuli classes are also not shown and the authors do not state what categories the objects were sampled from and what words were used. All this information is important for this paper. Response: We thank the reviewer for this suggestion. 
All the stimuli used are now provided as supplementary material.

Comment: Why is the stimulus presentation time not the same for objects, scenes and words as for engravings (300 ms vs 200 ms)? If they are being directly compared, as few parameters as possible should differ between the conditions.

Response: The conditions including the engravings and strings of Linear B characters were the subject of a behavioural pre-test that allowed us to optimize the presentation times for these stimuli. The stimuli for the other conditions were provided by another team and came from previously published studies (Kauffmann et al., 2015; Roux-Sibilon et al., 2018), for which presentation times were 100 ms longer. Since the two presentation times resulted from a well-argued choice, we preferred not to modify them. We are aware that it is preferable to limit the differences, but it is important to note that the visual categories were not compared directly: we compared the difference between intact stimuli and their scrambled versions (whose presentation times were identical), and we only compared the visual categories on the basis of this difference (an illustrative sketch of one common scrambling procedure is given at the end of this document). The potential biases related to the difference in presentation time are therefore eliminated.

Comment: It is not enough to say O3-1 and O3-2 correspond to LO. To make this claim the authors should show an overlap of these regions using an LO mask, for example from the Wang atlas.

Response: We plotted the maximum activations of several studies which located LO and superimposed them on the O3-1 and O3-2 regions. As one can see below, these peaks project quite well onto these two regions (yellow: O3-1, blue: O3-2), given that the SD of the peak coordinates varies from 7 to 10 mm. In addition, it is in these two regions that the activation is maximal in the contrast "objects minus scrambled objects" from the present study. This contrast is typically used to locate the LOC. We added the following sentence in the revised version of the manuscript: "LO is defined as the brain area showing the greatest activation while viewing a known or novel object compared to its scrambled version (Malach et al., 1995; Grill-Spector et al., 2001). As shown in Table S3 (supplementary material), these two regions exhibited the largest activation in the "objects minus scrambled objects" contrast in the present study."

Comment: Different ROIs should be discussed more. It is not enough to say that 10 ROIs elicit a given response pattern. It is important to discuss more what these regions are implicated in and why the result makes sense. The authors try this procedure for LO; however, even there they discuss only a part of the picture. The authors say that "LO is sensitive to the shape, but not the semantics". There is a body of literature that claims otherwise, and this should also be discussed.

Response: Besides the ventral regions for which a specificity has been shown (such as LOC, VWFA, FFA etc.), many regions have not yet been specifically implicated in the processing of a particular visual category. It is possible that these regions have a more general purpose (Grill-Spector, 2003). It has also been proposed that the representation of a percept is reflected by a distinct pattern of response across all of ventral cortex, and this distributed activation produces the visual perception (Haxby et al., 2001). In this context, it is difficult to discuss the involvement of each region, and we preferred to discuss the overall activation pattern along the ventral pathway.
Nevertheless, when some regions corresponded to well-documented functional areas (LOC, VWFA...) we discussed them in accordance with the existing literature. The reviewer raised an important issue regarding the role of LO in semantics. To our knowledge, there is no study that has reported such sensitivity in the lateral occipital (LO) part of the LOC. On the contrary, several studies have shown that activity in this region (mainly based on adaptation paradigms) is not affected by a change in visual category (Grill-Spector et al., 1999; Vuilleumier et al., 2002; Chouinard et al., 2008; Kim et al., 2009; Margalit et al., 2017). The situation is more nuanced with regard to the ventral part of the LOC (called pFs). At least two studies reported an effect of visual categories in the left fusiform gyrus (Koutstaal et al., 2001; Simons et al., 2003). However, none of the studies cited above reported this activation, and they concluded that the entire LOC is not sensitive to semantic information. We added the following sentence in the revised manuscript: "Concerning the ventral part of the LOC, it has been shown that the left fusiform gyrus, which is involved in the visual processing of engravings, is sensitive to semantic information (Koutstaal et al., 2001; Simons et al., 2003). However, the studies mentioned above did not report such a property".

Comment: The authors partly acknowledge that they can't claim that these engravings elicited similar patterns of responses as objects in early humans. However, this should be stressed more. The fact that the visual cortex did not expand that much during evolution does not mean that the visual representations in the cortex of early humans and of people nowadays were the same. A potentially stronger argument could be that the visual representations in humans and macaque monkeys are similar and therefore it is likely that the visual representations of early humans were similar; however, we can't explicitly test that.

Response: The reviewer is right in pointing out the main difficulty of our approach. Inferring the cognitive abilities of fossil human populations from the functional study of the modern human brain is, of course, not a simple undertaking. As suggested by the reviewer regarding the visual cortex, anatomical-functional differences probably do not represent a main bias. Evolution does not seem to have profoundly modified its structure (Ponce de León et al., 2016; Holloway et al., 2018). The study of functional homologies between monkeys and humans points to a preservation of major functional subdivisions, at least with regard to low-level visual areas and the ventral pathway (Orban et al., 2004). As pointed out by the reviewer, this does not guarantee with certainty that the representations in the visual cortex of our ancestors were identical to ours. The relatively small impact of evolution on this cortex suggests that inferences about the past can reasonably be made from results obtained when working with the modern brain. Another aspect that needs to be considered is that, if these engravings were familiar to the past hominins, who purposely produced them, they were not familiar to our participants. This may have an effect on the neural networks mobilized. We are currently conducting the same study with archaeologists who are experts in this type of material in order to compensate for this familiarity bias.
The following paragraph has been added at page 8 (in red in the revised version): "There is of course no guarantee that the brain areas activated by the engravings were, in our ancestors, identical to ours. Regarding the visual cortex, anatomical-functional differences probably do not represent a main bias. Evolution does not seem to have profoundly modified its structure (Ponce de León et al., 2016; Holloway et al., 2018). Moreover, investigations of functional homologies between monkeys and humans point to a preservation of major functional subdivisions, at least with regard to low-level visual areas and the ventral pathway (Orban et al., 2004). Since these regions appear to have been moderately impacted by the evolution of the brain, it is reasonable to think that the present results also apply to other representatives of the Homo lineage".

Comment: The resolution of the figures is not acceptable in the paper. I can't even read the values on the y-axis of Figure 2 and therefore cannot comment on the results.

Response: We have improved the resolution of the figures and increased the size of the characters for better readability.

Comment: Labels should be added in panels B and D in Figure 3, as otherwise the dots are not readable.

Response: As requested by the reviewer, we added the labels in all the panels of Figure 3.

Minor comments:

Comment: Table 1 could be presented in a color-coded way to enable rapid detection of significant values, e.g., significant values colored in green.

Response: Table 1 now includes a color code to display significant activations, deactivations and left-right asymmetries.

Comment: The header should be "funding" not "fundings", as the latter word does not exist in the English language.
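As promised above, here is a small numerical sketch of the correlation analysis described in the responses: per subject, correlate the engraving and object activation profiles across the 10 ventral ROIs, Fisher z-transform the resulting Pearson r values, and test them against zero with a one-sample t-test. This is our own illustration with simulated placeholder data, not the authors' code; only the shape of the procedure (26 subjects, 10 ROIs, Fisher z followed by a univariate t-test per hemisphere) is taken from the text.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_rois = 26, 10  # as described in the response above

# Placeholder activation profiles (subject x ROI), one per condition; in the
# study these would be per-ROI contrast estimates vs. the scrambled baseline.
engravings = rng.normal(size=(n_subjects, n_rois))
objects_ = engravings + rng.normal(scale=0.8, size=(n_subjects, n_rois))

# Per-subject Pearson correlation between the two 10-ROI profiles.
r = np.array([stats.pearsonr(engravings[s], objects_[s])[0]
              for s in range(n_subjects)])

z = np.arctanh(r)  # Fisher z-transform, making the values approximately normal
t, p = stats.ttest_1samp(z, popmean=0.0)  # univariate t-test against zero
print(f"t({n_subjects - 1}) = {t:.2f}, p = {p:.2g}")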
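The responses above also rely repeatedly on "scrambled versions" of the stimuli as a low-level control. The excerpts do not say how the scrambling was done, so the sketch below shows one common method only, Fourier phase scrambling, which preserves an image's amplitude spectrum while destroying its shape information; treat it as an illustration of the general idea, not as the authors' actual procedure.

import numpy as np

def phase_scramble(img: np.ndarray, seed: int = 0) -> np.ndarray:
    """Scramble a grayscale image by adding a random (conjugate-symmetric)
    phase field to its Fourier phase, keeping the amplitude spectrum intact."""
    rng = np.random.default_rng(seed)
    f = np.fft.fft2(img)
    # Taking the phase of the FFT of real-valued noise guarantees the required
    # conjugate symmetry, so the inverse transform is real up to float error.
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    scrambled = np.fft.ifft2(np.abs(f) * np.exp(1j * (np.angle(f) + noise_phase)))
    return np.real(scrambled)

# Example on a synthetic 256 x 256 "stimulus" (the pixel size quoted in the review).
img = np.zeros((256, 256))
img[96:160, 96:160] = 1.0         # a simple square shape
print(phase_scramble(img).shape)  # (256, 256); same power spectrum, shape destroyed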
The Essence of Ethical Leadership and Its Influence in Achieving Employees' Job Satisfaction

This study strongly conveys that ethical practices have to be followed by organizations to ensure employees' job satisfaction; achieving employees' job satisfaction is essential for the success and longevity of organizations. The research structurally consists of four sections. The first section deals with the essence of ethical leadership, explaining the need for ethical practices and the benefits they bring to businesses. The second section defines the ethical factors which, if not taken into account, reduce the effectiveness of leadership, and the typical ethical mistakes of managers, which lead to a reduction in the transparency of relationships in the team, discourage creative thinking and form a negative working environment (indifference to employees, delay in wage payments, inhibition of career opportunities for workers, unfavorable conditions for increasing labor productivity, etc.). The third section deals with the prerequisites of ethical leadership (honesty in relations with employees, clients, shareholders and the public; integrity and consistency; balance of relations; consistency of standard expectations and real attitude to work; compliance with guarantees and obligations; loyalty of management to employees and vice versa - of employees to the company; taking into account not only the financial but also the emotional long-term effects of business decisions; etc.). The fourth section explains the role of ethical leadership in maintaining employees' job satisfaction. The author points out that, to achieve employees' job satisfaction, leaders should remain aware of the real, rather than formal, reasons for employees' dissatisfaction, create a dynamic stress-free work environment, stimulate the development of the common interests of the staff and managers, motivate employees to communicate fearlessly on matters concerning the job, take a proactive intermediary position in working conflicts, etc. Within each of these sections, the author summarizes the methodical achievements of researchers from different countries of the world, and defines applied ethical practices that can be adopted by the leaders of different companies, regardless of sector specificity and the dominant type of management.

What is Ethical Leadership?
Leadership is the art of persuading followers to do certain activities or tasks that have been set as goals to be achieved; the goals could be political, organizational or corporate. Irrespective of the requirements, the leader, as protagonist, shall persuade his/her followers towards achieving the desired goals. The role of the leader is to provide direction and motivation to individuals towards the desired goal (Katarina et al, 2010: 32). A great leader is not just one who influences his/her followers but one who can instill ethical behavior and create a high-performance organization. Great leadership achieves positive, substantial and sustainable results through people, or employees, in an organization (James and Don, 2010: 190). Brown et al (2005: 120) define "ethical leadership here as the demonstration of normatively appropriate conduct through personal actions and interpersonal relationships, and the promotion of such conduct to followers through two-way communication, reinforcement, and decision-making". According to Brown et al, the ethical leader is a leader with qualities such as honesty, trustworthiness, fairness and care towards his/her organization and followers, making him/her a legitimate and credible role model. To uphold ethical standards, ethical leaders should frequently communicate with their followers, providing them clarity and ensuring that they do not deviate from the set standards. The essential characteristic of ethical leaders is that they act as proactive role models of ethical conduct (Brown and Treviño, 2006: 597).

The methodology of the study: Data for this study were drawn from a review of secondary sources. The literature for this study has been presented in four different sections. Section I deals with the essence of ethical leadership: it deals with the need for ethical leadership and elaborates on the benefits associated with following ethical practices in business. The subsequent section, i.e., Section II, specifies why leadership fails, explaining the causes of leadership failure. Section III deals with the prerequisites of ethical leadership, presenting comprehensively the preconditions required to confirm a leader as an ethical leader. The last section, i.e., Section IV, emphasizes the role of ethical leadership in achieving employees' job satisfaction; this section points out the causes of employees' job dissatisfaction that leaders should remain aware of and also focuses on the best ethical practices to be followed to achieve employees' job satisfaction.

Need for ethical leadership

The term ethics is understood from two important dimensions. Firstly, ethics refers to well-founded standards that prescribe what is right and what is wrong. Ethics prescribes what humans ought to do in terms of rights, obligations, benefits to society, and justice and fairness. Ethics refers to those ethical standards that impose reasonable obligations to refrain from crimes and frauds. Ethical standards include those standards that enjoin virtues of honesty, compassion and loyalty: not to lie, not to exploit employees or natural resources, to trade fairly, to save the Planet, etc. Secondly, ethics refers to the continuous effort of studying one's own ethical standards, moral beliefs and moral conduct, and striving to ensure that we, as individuals and organizations, live up to standards that are reasonable and solidly based (Velasquez et al, 2010).
Leadership is said to be ethical when it favors good governance. The essence of good governance is to fight against fraud and corruption and to assist in achieving organizational and individual goals. Good governance is fundamentally about honesty, trustworthiness, diligence, transparency, competence and fairness, and it is possible through ethical leadership. There is an urgent need in societies to encourage and promote ethical leadership supported by a strong moral code of conduct. The essential characteristics that a leader of any institution, whether in government or a private company, is supposed to have are integrity, trustworthiness, transparency and equity (V. Mungunda, 2015).

Benefits associated with following ethical practices

Following a code of ethics is an integral part of an ethical organization. Therefore, every organization, for its success, must actively promote ethical policies and procedures and shall initiate programs to ensure awareness among employees and their commitment towards ethical practices. Figure 1 displays the benefits available to businesses for following ethical practices.

An ethical firm contributes improvement to society and enjoys its patronage: The members of society give firms or businesses the legal recognition to exist. Every business firm uses society's scarce resources such as land, raw materials and the services of skilled employees. In a true sense, the firm owes an obligation to the society in which it exists. This implies that business organizations are expected to create wealth, generate income and enhance the social welfare of the society in which they dwell. An ethical organization endeavors to honor its obligations towards society by producing goods and services and creating wealth. Ethical businesses generate incomes for society by providing employment opportunities. Further, ethical businesses enhance social welfare through increased economic efficiency, an improved decision-making process, acquiring and utilizing advanced technology and equipment, stable channels of distribution, and sales of products to consumers at a competitive price. These ethical efforts of the firm result in enjoying the patronage of members of society (Fernando, A.C., 2010: 2.13).

High productivity and strong teamwork: Commonly, a disparity exists between preferred values and the values actually reflected by the behaviors of employees at the workplace. Such disparities, if not checked, would result in chaos, affect production, and have a direct impact on the sales and revenues of the business. An ethical organization focuses its attention on imparting ethical programs to employees so that there is proper alignment between the values preferred by the organization and the values reflected by the behaviors of employees. In simple terms, a focus on the expected values in the workplace builds integrity among employees. When employees feel a strong alignment between their values and those expected by the organization, they react with strong motivation and performance; this leads to high productivity and strong teamwork (Larry, 2006: 111).
Ethical policies and practices create a strong public image of the company: Ethical organizations establish policies and practices that ensure all employees are treated fairly and ethically. For the existence of an ethical environment, companies usually develop an ethical philosophy and implement it with specific guidelines to ensure that the policies and practices are uniformly followed (Mohanty, 2008: 421).

Ethical practices help to resolve ethical dilemmas at the workplace: Often, employees encounter the risk of facing an ethical dilemma. Ethical dilemmas can be challenging and critical. An organization practicing ethical policies and procedures focuses on overcoming ethical dilemmas by assisting employees through educational programmes, which insist that employees consult the company code of ethics for formal guidance; this avoids confusion and provides clarity. Further, ethical programmes provide the opportunity for employees to seek guidance from their immediate supervisor and to learn from their experience and knowledge; in this way, ethical programmes create an ethical climate. A.C. Fernando (2010: 9.8) suggests that "It is necessary that each company puts in place an ethics programme and makes it known to all its employees so that they know its values, mission and vision and comply with the policies and code of conduct, all of which create its ethical climate".

Corporate reputation: Ethical practices help firms to achieve a good corporate reputation. A firm is said to have achieved corporate reputation particularly when its customers prefer doing business with the firm even though similar products and services are available from other companies at similar cost and quality. Corporate reputation is earned by companies with the support of all the stakeholders, most particularly the customers and employees. Riccardo (2002: 28) writes: "Organizational scholars see corporate reputation as rooted in the experience of employees. A company's culture and identity shape a firm's business practices as well as the kinds of relationships that managers establish with key stakeholders".

Section II: Why Leadership Fails?

Leadership is the result of what a leader does. The action taken by the leader reflects the inner vision and basic character the leader has. The success of leadership depends on various factors; arguably, one of the essential factors is influencing employees, because influencing employees infuses commitment among them. This commitment helps employees give their free time to solving day-to-day problems, serving and satisfying customers, and thinking creatively both for the betterment of the organization and for themselves. With influential leadership, employees perform better. Better leadership allows employees to work collaboratively across organizational lines, gives them the liberty to voice their opinions, and authorizes healthy, open discussions on important matters involving health and future benefits in an organization.
Leaders who are great at listening to diverse opinions can create a positive working environment; such leaders can remove obstacles and provide effective means for employees to perform their jobs effectively. Influential leaders create environments that are trustworthy and collaborative, with openness and sharing. Sadly, the derailment of leadership takes place in an organization if a leader gets distracted from his/her objective, for numerous reasons. Some of the reasons that may result in the failure of leadership are presented in this study. Figure 2 shows the major causes of leadership failure.

Lack of emotional intelligence: Emotional intelligence helps an individual to monitor and control one's feelings; it helps to deal with matters with wit and etiquette that would be liked and appreciated by one and all. Mastering emotional intelligence helps augment normal intellectual workings. Essentially, a leader could be a genius, but handling subordinates' concerns and grievances without emotional intelligence, he/she can unsurprisingly fail as a leader (Adam Lee, 2002: 267). Leaders, being human, may often face difficulties with their thinking abilities, but they are often assumed by subordinates to think big. When their emotionality does not correlate with subordinates' sensibilities, this has an adverse effect on leadership.

Ambiguous communication: When the attention of leaders shifts, they become unclear about their own and organizational purposes. When leaders lack understanding of their own purpose, they communicate ambiguously with their followers; this creates confusion among followers, as the intent of the leader is not easily understood by them. Confusion among followers becomes the main reason for a lack of commitment among them. When leaders fail to communicate effectively, the organization's valuable time and energy are wasted because the task or assignment has to be redone. As the organization pays its employees for the work done, redoing the work is a costly affair that adversely impacts the financial results of the organization (Gregory, 2016: 21-22).

Failure to take risks: The human tendency is to fear when facing the pressures of life. Leaders who lack the desire to succeed develop a fear of failure. When engrossed in fear of failure, leaders lose the ability to take reasonable risks. A lack of risk-taking ability results in redundant leadership skills, does not support innovation, and becomes a cause of leadership derailment. Leadership derailment occurs when there is no connectivity between the skills and competencies of the leader and the qualities required for higher job responsibilities (Benjamin, 2013: 80).

Lack of integrity: Leadership is highly dependent on a leader's integrity. Integrity is possible through a leader's competency and good character. When the character of a leader diminishes, his/her integrity ceases. When integrity ceases, this is a sure sign of failure. Arif and Brian (2013: 107) opine that "Leaders are always seen as strong figures and examples to the employees. Therefore, leaders who carry good leadership characteristics will benefit the organization and become an icon within the team". When a leader's integrity is strong, he/she can influence individuals and groups and motivate them to achieve common goals. With integrity, he/she can define values and norms and maintain the organization's persona.
Poor managerial skills: Leaders are often perceived as humans with unlimited energy, yet they meet disaster when they lack enthusiasm. Without enthusiasm they fail to take care of their physical and emotional needs, their leadership falters, and team spirit suffers. Ade et al. (2016: 262) suggest that "Leadership is a relationship, authority and respect, which can be improved in various ways. Effective leaders maintain teamwork. That refers to all skills as a 'people person'". The effective human, social and technical skills that a manager exhibits in teamwork thus confirm his or her efficient leadership.

Deficiency in assuming responsibility: A successful leader delegates the task but not the ultimate responsibility, because delegating both the work and the responsibility does not reduce the leader's own responsibility. Even when a leader has delegated responsibility along with the authority needed to motivate the worker, the leader remains accountable for the results. A leader who delegates a task and assumes that he or she is no longer responsible for the results is not an efficient leader. John et al. (2004: 323) suggest that "the simple act of delegating the work, and the authority and responsibility for performance, does not reduce the leader's responsibility or overall authority for the success of the team".

External delight instead of internal satisfaction: Leaders who prefer external pleasure to internal satisfaction lose their ground and their followers. Leaders who work not for inner satisfaction but for personal gain reject constructive suggestions from honest critics and love to be surrounded by people who tell them what they prefer to hear. When results are achieved in an organization and the leader begins to believe that the success is due to his or her own efforts rather than collaborative effort, this is a sure sign of distorted leadership. Successful leaders possess a strong future orientation, remain unbiased, show commitment to both personal and organizational improvement, stay focused on innovation, ensure the sustainability and continuity of proven practices, and take an uncompromising approach to driving necessary change (Mark, 2009: 85).

Lack of self-leadership skills: It is essential that a leader exercise self-leadership. A leader cannot effectively guide others if he or she lacks self-leadership skills. To master self-leadership, a leader must first know himself or herself, and must be able to control and communicate core organizational values, expectations and beliefs to employees. A leader cannot influence followers without first conducting a personal SWOT analysis of strengths, weaknesses, opportunities and threats. Leadership succeeds when employee engagement is positively associated with employees' perception of the leadership style of their immediate boss, and this positive perception arises when leaders embrace visionary leadership instead of classical or transactional styles. Ahmad Zairy et al. (2013: 94) argue that "Employee engagement is perceived as subsuming negative outcome from the employees when the supervisors are adopting classical or transactional leadership styles. Whereas when the leaders are embracing visionary and organic leadership, employee engagement is regarded as having a positive association with the employees' perception".
Lacking dynamism: An effective leader watches for changing market trends, for technological changes and their impact on the business, for potential threats to the organization, and for difficulties that may hinder the organization's success; a leader who fails to do so is not effective. A leader's success is measured by how much influence he or she exerts on individuals or groups to achieve common goals. Naser et al. (2016: 130) argue that "Leadership is defined as an organized process that influences an organizational group to achieve a common goal or specific targets".

Acting unethically towards followers: Some leaders lack the skills and abilities to make the right choices. Ineffective leaders do not recognize that their actions are wrong; some do not understand the significance of virtue in business and cannot distinguish a right act from a wrong one. Such poor leadership skills result in unethical behavior towards followers. Some leaders face conflicting motivations or weakness of will and develop a narrow framing of what leadership and ethics are supposed to be. Others fail to understand the connection between themselves and their followers; when leaders can no longer inspire their followers, they cease to be leaders (Olsson Center for Applied Ethics, 2012). Ineffective leaders also lack the insight to identify the causes of problems in the organization: they blame their followers for any difficulties that arise, and their ineffective managerial actions may unreasonably cast innocent employees as the accused.

Section III: Prerequisites of Ethical Leadership

This section presents the prerequisites of ethical leadership: the basic preconditions for considering a person's leadership ethical. Figure 3 represents these prerequisites.

Honesty in actions: Organizations and businesses need to be honest in all their actions, particularly in every communication. Society appreciates businesses that are trustworthy and do not deceive their customers by misrepresenting or exaggerating facts, and this is possible when the leaders heading the firm are honest and ethical. Richard (2015: 40) believes, "Effective leaders are ethical leaders. One aspect of being an ethical leader is being honest with followers, customers, shareholders, and the public, and maintaining one's integrity. Honesty refers to truthfulness and non-deception".

Integrity and its consistency: Employees' integrity towards their work represents the company's integrity towards society. Integrity means having a consistent personality, validated through the proper alignment of standards, expectations and attitudes towards work, customers, superiors, suppliers, and society as a whole. Employees' integrity is directly related to leaders' integrity. Sam Eldakak (2014: 32) says, "Integrity is the pillar of the ethical code of conduct which assists a leader in maintaining balanced relationships and also motivates the leader to uphold moral values in daily activities".

Promise keeping and building trust: Organizations are expected to build trust by fulfilling the promises made to other businesses such as suppliers and vendors. The trust a company builds and the commitments it honors attract other businesses to deal with it. That trust is earned and retained only when leaders practice and display trustworthy acts. Jeffrey (2015: 89) describes promise-keeping thus: "Adhering to the organization's stated values is, in ethical terms, promise keeping".

The sense of loyalty: Ethical businesses, through effective and ethical leadership, instill a sense of loyalty in their employees so that employees remain loyal both to the company and to their team. When employees demonstrate loyalty, trust is built up at the workplace. Jean and Gully (2014: 85) note that "Loyalty refers to a person's commitment to another person, task, or organization". Loyalty keeps a person ethically sensitive and motivates adherence to ethical policies, rules and behavior.
Fairness, justice and equity: Ethical leadership creates an environment in which no superior takes undue advantage of subordinates' difficulties and mistakes. The ethical organization focuses on achieving fairness, equity and justice. Pravin (2010: 620) opines: "when individuals practice ethical values within an organization, it does not mean that the entire organization is ethical. Only when the entire organization practices fairness and justice in a systematic way can it be called an ethical organization. The foundation of the ethical organization is mutual trust and respect".

The sense of caring: Businesses guided by ethical leadership always consider the financial, emotional and long-term business consequences of an action. When a business takes care of the welfare of its stakeholders, a sense of caring develops that renders the firm ethical, and caring for employees boosts their morale. The sense of caring is an important aspect of ethical conduct (David, 2013: ix).

Respecting law: Law stands above all. Organizations progressing through ethical leadership never break the rules; an ethical organization always obeys the laws governing its business activities. Profits are essential for a business's survival: revenues are needed to meet day-to-day expenses as well as future expansion plans. However, profits should be earned in a legal and ethical manner, which is possible when the leadership is ethical. Praveen and John (2013: 522) observe that "Businesses must conform to laws and regulation as part of the contract between business and society that allows them to operate. This is now called a license to operate".

Ethical practices and respect towards cultures: Businesses led ethically deliver the highest quality of products and services to their customers and strive constantly to improve them; such practices brand them as ethical firms. Being ethical means pursuing excellence in everything the business does. Businesses should respect cultural diversity and treat customers, suppliers, employees and vendors with respect, because every stakeholder deserves dignity. Advances in communication and technology have minimized the world's borders, creating a new global economy that brings together people from countries with different cultures, values, laws and ethical standards. International businesses must understand the values, cultures and ethical standards of their own countries and remain sensitive towards other cultures; this fosters trust and respect (Ferrell et al., 2015: 276).

Good leadership qualities: Businesses should create an ethical environment in which decisions rest on values and ethical standards, and this is possible only when the leaders themselves are ethical. Katarina et al. (2010: 33) describe the qualities ethical leaders should have: "The appropriate and desired behaviour is enhanced through culture and socialization process of the newcomers. Employees learn about values from watching leaders in action. The more the leader 'walks the talk', by translating internalized values into action, the higher level of trust and respect he generates from followers". Leaders must therefore have ethical qualities that inspire their employees.
Accountable for stakeholders: To remain ethical, businesses must embrace accountability. Ethical leadership leads the firm to take accountability for every action that affects its employees, suppliers and community. Mollie and Patricia (2008: 38) say "Accountability is not an obsolete concept in contemporary business life. The notion of accountability will remain meaningful and significant in business ethics discourses".

Corporate social responsibility: Ethical leadership supports and develops transparency in all matters, including employee-related matters, which helps companies attract and retain the best talent. Atiya et al. (2015: 112) observe that "The role of ethical leadership in influencing the performance of the employees rests on the pedestal of behavioral motivation, inspiration and individualized consideration". To retain the best talent, ethical leaders must focus on employees' job satisfaction. Motivating and inspiring employees is an essential task, but from a broader perspective leadership must satisfy all stakeholders: lenders, government, employees, suppliers, customers and others. Satisfying all stakeholders is possible when businesses contribute to sustainable development through strategies of corporate social responsibility (CSR). CSR is a business concept that contributes to sustainable development by delivering economic, societal and ecological benefits to all stakeholders. Ramon Mullerat (2010: 14) elucidates that "Through CSR, companies voluntarily decide to respect and protect the interests of a broad range of stakeholders while contributing to a cleaner environment and a better society through an active interaction with all". CSR is essential to a business's success and existence; even a company that proceeds with only a superficial understanding of social responsibility must still address environmental protection, economic indicators, sponsorship, quality control, and occupational health and safety (Tóth, 2009: 18). CSR remains essential to businesses because it arises from the ethical responsibility of the organization.

Section IV: Role of Ethical Leadership in Achieving Employees' Job Satisfaction

Ethical leaders are perceived as having broad ethical awareness and concern for all stakeholders, most particularly, in this context, the employees. To develop responsibility among employees, leaders must create a friendly work environment, communicate about ethical issues, and serve as role models so that an ethical environment persists. Leaders who exhibit ethical behavior are more likely to consider the needs and rights of employees and to treat them with equity and justice (Shukurat, 2012: 234). Certainly, employees prefer to work for a responsible employer or leader. Leaders achieve employee satisfaction by winning employees' confidence, and they win that confidence only after fulfilling their ethical obligations towards employees.

Employees' job satisfaction: Job satisfaction denotes how content an employee is with his or her job, including liking for the work itself, the nature of the work environment, and the nature of supervision.
Employee satisfaction is not as simple as it may appear; it comprises multidimensional psychological responses and reflects the employee's emotional state towards the various factors involved in his or her job. A study by Kooskora and Mägi (2010) on the relations between ethical leadership behavior and employee satisfaction concluded that ethical leadership is positively associated with employee job satisfaction. Their study suggests that ethical leadership can foster employee trust, loyalty towards the organization and its leaders, pride in the company, organizational commitment, a positive workplace environment and organizational climate, employee recognition and empowerment, awareness of organizational activities, and participation in decision making (Kooskora and Mägi, 2010: 10). To secure employee satisfaction, leaders should remain aware of the specific causes of employees' job dissatisfaction and should also know the practices that contribute to job satisfaction.

This section elucidates the causes of employees' job dissatisfaction and the ethical practices to be followed to achieve employees' job satisfaction.

Causes of employees' job dissatisfaction: Leaders should remain aware of the causes of employees' job dissatisfaction; knowing and overcoming these causes helps leaders instill confidence in employees. Figure 4 represents the main causes of employees' job dissatisfaction.

Indifference towards staff: The major imperfections associated with poor leaders include contradictory promises, indecision, failure to deliver on expectations they themselves set, inflexibility in thought and action, and indifference towards staff. Poor leaders can easily frustrate even motivated employees (Ludger, 2012: 52). Leaders should remain flexible in their thoughts and actions and avoid becoming indifferent towards their staff.

Underpayment: Salaries and other benefits should correspond to employees' educational qualifications, work experience, skills and abilities. Underpaid employees suffer the stress of paying monthly bills from a limited income, which surely causes job dissatisfaction. Ranulfo and Orlando (2003: 25) suggest that "The employee may be satisfied when pay is commensurate with the relative difficulty and importance of the job". To keep employees motivated, salaries and other benefits should be proportionate to the hard work employees put into completing their tasks at the workplace.
Lack of career advancement: Employees feel motivated when their bosses include them in long-term plans and show appreciation through promotions. Promotions help employees stay with the company and succeed in their long-term plans. Where career advancement is lacking, employees become dissatisfied and less motivated to be productive. Career advancement is an important motivational tool: promotion not only provides additional financial benefits but also gives employees the feeling of having mastered the previous level (Gary, 2001: 214). Organizations should follow transparent employee-appreciation policies at the workplace, as these are the basis for employees' career growth.

Monotonous work: Most employees perform their duties better when the work itself is challenging; overcoming challenges at work builds self-esteem and confidence. Monotonous work, by contrast, causes boredom and even harms productivity. A study on work design and job satisfaction concluded that workers perform effectively when they are satisfied, and satisfied workers contribute to productivity. To enhance productivity and the quality of work life, the organization should therefore emphasize job design that includes worker autonomy, social interaction, a sense of accomplishment through challenging jobs, the utilization of a variety of employee skills, and self-evaluation of performance (Onimole, 2015: 206).

Ethical practices: Leaders should focus on the following ethical practices at the workplace so that employees' job satisfaction is achieved. Figure 5 represents the essential ethical practices to be followed by leaders to ensure employees' job satisfaction.

Focus on a friendly and respectful environment: Leaders should focus on creating a friendly and respectful environment. Money is not the only criterion for employees' job satisfaction, though it is arguably an important one for motivating and retaining staff. Employees report to their duties enthusiastically when they work in a low-stress environment and, more importantly, when they find friendly people and a respectful atmosphere at work; these factors draw them to work each day. Motivating staff starts with being a good manager. Ironically, a company may offer the best pay and benefits, employee-friendly policies and other perks, but if the staff is headed by a bad manager, all these features are neutralized and employees become demotivated. Respectful treatment of all employees at all levels is cited as a leading contributor to employees' job satisfaction (Harvard Business Review, 2017, Chapter 7). Leaders need to practice and demonstrate sound principles and ethics to ensure that employees are respected, appreciated and rightly motivated. The best way for leaders to instill an ethical mentality in the organization is to lead by example.
Focus on commonalities among employees at different levels of management: To ensure employees' satisfaction at work, leaders should devise policies that foster commonalities among peers and reporting managers, and the human resources department should apply such commonality criteria during recruitment. Commonalities among staff and managers generate greater satisfaction at work: when employees connect easily with co-workers and immediate supervisors, a personal relationship develops that helps employees communicate job-related matters fearlessly, and the difficulties that creep in because of workforce diversity can be managed. To mitigate the challenges of a diversified workforce, organizations often adopt an affinity-group strategy. Jeffery (2011: 53) elucidates: "One popular strategy organizations are using for recruiting and retaining diverse talent is the support of employee network or affinity groups. Affinity groups can be formed around any commonality shared by employees, including ethnicity, age, disability, family status, religion, sexual orientation, and usually, have some association with culture or perspective that has faced challenges in either the society or the organization". Management should take care, however, that personal relationships do not lead to undue leniency by immediate supervisors towards their staff. This risk can be mitigated by policies that clearly define the duties and responsibilities of staff and managers at each level of management. To earn their staff's trust, leaders should instill a sense of integrity through sound, ethical practices.

Focus on dynamism at work: Leaders should foster employee autonomy, dynamism in functioning, and diversity in the work environment. Dynamism creates its own challenges for employees and helps them find ways to overcome difficulties at work. Management should look for ways to stretch employees so that they acquire new skills; the work given to employees should contrast with their daily routines, allowing a wide range of responsibilities and ongoing learning. Creating a challenging environment and permitting a measure of autonomy lifts employees' morale and makes the workplace dynamic. Bob and Peter (2010: 74) opine that "All employees need to have a say in how they do their work, to make it more meaningful. When employees find their work meaningful, they become more engaged and effective". To ensure dynamism in the organization, leaders are expected to offer out-of-the-box solutions to the organization's diverse problems.
Focus on financial and non-financial rewards: Ethical leaders give due weight to both financial and non-financial rewards when granting them to employees. When employees are rewarded for performing well, they feel satisfied with their jobs. Non-financial benefits, such as a spacious office and other perquisites, can significantly increase job satisfaction, while financial benefits such as paid vacations and profit sharing generate feelings of ownership among employees. Both kinds of incentive induce employees to work hard and to like their job and the organization. Sampath and Sanjid (2005: 125) say that "The term 'incentive' means an inducement which rouses or stimulates one to action to the desired direction. An incentive has a motivational power; it influences the decision of individuals on putting in efforts towards task performance". The organization should grant financial and non-financial rewards with consistent transparency and fairness; this not only settles matters amicably, with fairness, justice and equity, but also avoids demotivating the staff concerned.

Focus on a stress-free environment: In organizations, a frequent cause of employee stress is the fear of losing one's job. To ensure an ethical environment, leaders should focus on creating a stress-free environment, which is a blessing to employees. Its characteristics include transparent overall policies, transparent appreciation policies, and the timely implementation and communication of revised policies. Manikanta Belde (2016: 52) elucidates that "Many big companies know the importance of relaxation, and so they provide free time to take naps for employees. Companies at present are providing restrooms for relaxation, and this has helped the company's employees become more productive and in turn, helping organizations accomplish success". A stress-free environment relieves employees of undue stress, fatigue, tension, high blood pressure and the like; it provides not only good health but also a healthy and safe workplace. A stress-free organization is possible only when leaders show care and kindness to their employees.

Conclusions

The study shows that ethical businesses help enhance social welfare. Ethical businesses focus on imparting ethics programmes to employees so that the organization's preferred values align with the values reflected in employees' behavior. Ethical businesses also work towards good corporate governance and help employees overcome ethical dilemmas. Employees' job satisfaction is essential for the success and longevity of organizations. To achieve it, the study suggests that leaders remain flexible in their thoughts and actions and avoid becoming indifferent towards their staff; that they pay salaries and other benefits proportionate to the hard work employees put into their tasks at the workplace; and that they emphasize employee autonomy, dynamism in functioning, and diversity in the work environment. Dynamism creates its own challenges for employees, helps them find ways to overcome work difficulties, and keeps them motivated.
For ensuring an ethical environment, leaders shall focus on a stress-free environment; such an environment is a blessing to employees and surely contributes to their job satisfaction.

Figure 1. Benefits to businesses.
Figure 2. Causes of leadership failure.
Figure 3. Prerequisites of ethical leadership.
Figure 4. Causes of employees' job dissatisfaction.
Figure 5. Ethical practices to ensure employees' job satisfaction.
Playing Analog Games Is Associated With Reduced Declines in Cognitive Function: A 68-Year Longitudinal Cohort Study

Abstract

Objectives: Playing analog games may be associated with better cognitive function but, to date, studies have not had extensive longitudinal follow-up. Our goal was to examine the association between playing games and change in cognitive function from age 11 to age 70, and from age 70 to 79.

Method: Participants were 1,091 nonclinical, independent, community-dwelling individuals, all born in 1936 and residing in Scotland. General cognitive function was assessed at ages 11 and 70, and hierarchical domains were assessed at ages 70, 73, 76, and 79 using a comprehensive battery of 14 cognitive tests. Games playing behaviors were assessed at ages 70 and 76. All models controlled for early-life cognitive function, education, social class, sex, activity levels, and health issues. All analyses were preregistered.

Results: Higher frequency of playing games was associated with higher cognitive function at age 70, controlling for age 11 cognitive function, and the majority of this association could not be explained by control variables. Playing more games was also associated with less general cognitive decline from age 70 to age 79 and, in particular, less decline in memory ability. Increased games playing between ages 70 and 76 was associated with less decline in cognitive speed.

Discussion: Playing games was associated with less relative cognitive decline from age 11 to age 70, and with less cognitive decline from age 70 to 79. Controlling for age 11 cognitive function and other confounders, these findings suggest that playing more games is linked to reduced lifetime decline in cognitive function.

Age-related cognitive decline is associated with impaired decision making and everyday functional abilities (Tucker-Drob, 2011). In the search for interventions that could reduce the rate of cognitive decline, brain training and other digital cognitive games have come under particular and extensive study (Anguera et al., 2013; Ball et al., 2002; Ngandu et al., 2015). Whether digital cognitive games and so-called brain training have a protective effect on cognitive functions is, however, controversial (Kelly et al., 2014; Kueider, Parisi, Gross, & Rebok, 2012; Lampit, Hallock, & Valenzuela, 2014; Simons et al., 2016): effects are often not robust (Lampit et al., 2014) or do not transfer beyond the context of the games (Melby-Lervåg, Redick, & Hulme, 2016), and positive effects that are demonstrated do not always last (Zhang & Kaufman, 2016). However, evidence also suggests that more traditional, analog games such as board games (Dartigues et al., 2013), cards (Kuo, Huang, & Yeh, 2018), and crosswords and Sudoku (Ferreira, Owen, Mohan, Corbett, & Ballard, 2015) all offer some protection against cognitive decline.

Nevertheless, previous studies of analog games have had limitations. In cross-sectional analyses, more game playing was associated with higher cognitive function (Ferreira et al., 2015; Brooker et al., 2018), but these studies could not examine cognitive change over time or control for many confounding effects; crucially, they did not test for confounding by cognitive ability from earlier in life. A longitudinal examination of the association between board game playing and dementia found a protective effect among more regular players, but this effect disappeared when mental health was controlled for (Dartigues et al., 2013).
A lone randomized controlled trial found evidence that cognitively stimulating card and board games improve executive functions (Kuo et al., 2018), but the trial was limited by a small sample size and brief follow-up; moreover, it found no associations between playing games and reasoning or episodic memory abilities. Overall, evidence for positive effects of playing analog games is no stronger than that for digital games.

In the present study, we use data from participants in the Lothian Birth Cohort of 1936 (LBC1936) to address the question: does playing analog games protect against cognitive decline in older age? The LBC1936 are unusually valuable for seeking an answer; they provide historical data on early-life cognitive function and other life course variables, as well as repeated and detailed measurements of several cognitive functions in later life, analog games playing habits, and potential confounders. We preregistered hypotheses and analyses that evaluated game playing's relationship with (a) change in general cognitive function from age 11 to 70 years, and (b) change in four specific cognitive subdomains measured on four occasions between ages 70 and 79: visuospatial or fluid ability, processing speed, memory, and crystallized ability.

Study Sample

The Lothian Birth Cohort 1936 (LBC1936) is a community-dwelling sample of 1,091 initially healthy individuals. All were born in 1936 and were at school in Scotland on 4 June 1947, when most took part in a group-administered intelligence test: the Moray House Test (MHT) No. 12. They were followed up in four waves of one-to-one cognitive and health testing between 2004 and 2017, at mean ages 70 (N = 1,091), 73 (N = 866), 76 (N = 697), and 79 (N = 550) years. Further details on the background, recruitment, attrition, and data collection procedures are available elsewhere.

Cognitive Functions

The MHT is a broad cognitive ability test that includes word classification, proverbs, spatial items, and arithmetic. At age 11 it correlated about 0.8 with the Terman-Merrill revision of the Stanford-Binet test, providing concurrent validity (Deary, Whalley, & Starr, 2009). At Wave 1 of follow-up, in older age (mean age 70 years), participants repeated the MHT, for which concurrent validity was again independently established (Deary, Johnson, & Starr, 2010). In the four waves of data collected in older age, 14 individually administered cognitive tests were used to assess three subdomains of cognitive function that decline with age (fluid ability, processing speed, and memory), as well as crystallized ability. A general factor of cognitive function was also hierarchically modeled from the subdomains. The tests are fully described and referenced in an open-access protocol article, and the details of how the tests map onto the different domains are given in Supplementary Appendix 1.

Eleven participants were excluded from our analyses of cognitive change from age 11 to 70 because they either had a history of dementia or scored less than 24 on the MMSE at age 70. Thirty-seven participants were excluded from our analyses of change from age 70 to 79 because they had a history of dementia or scored less than 24 on the MMSE at any point between ages 70 and 79.

Playing Games

As part of a larger questionnaire on social and physical activities (Gow, Corley, Starr, & Deary, 2012), LBC1936 participants indicated in Wave 1, at age 70, how often they generally engaged in each activity.
The particular item investigated here was "Playing games (like cards, chess, bingo, or crosswords)." Participants endorsed one of "Every day or about every day," "Several times a week," "Several times a month," "Several times a year," or "Less than once a year/never." Responses were assigned ordinal values of 1-5, with "Every day or about every day" registering as a 5. Using a repeated assessment of this question in Wave 3, at age 76, we also created binary and ordinal variables: the first indicated which individuals reported any increase in games playing frequency between ages 70 and 76, and the second indicated the degree of change in games playing frequency between 70 and 76.

Covariates

Three sets of variables were evaluated as potential confounders. The first set comprised our sociodemographic covariates: sex, years of education, and the participant's adult social class. The second set consisted of variables describing other activities, assessed by questionnaire as described previously in this sample by Gow et al. (2012); 984 questionnaires were received at age 70. An aggregate variable representing overall sociointellectual activity was generated using principal components analysis. The complete details of these analyses are available in Supplementary Appendix 2. The third set of covariates included major medical diagnoses that are risk factors for cognitive decline. At age 70, participants self-reported any history of high blood pressure, stroke, diabetes, or cardiovascular disease. Each was coded as a binary variable, with a 1 indicating that the individual had a history of the disease.

Statistical Analyses

Unless otherwise noted, all analyses were preregistered on the Open Science Framework (OSF) before the data were requested from the LBC1936 database manager and downloaded (see https://osf.io/wdgm3/). All analyses were carried out in the R statistical programming language (version 3.5.1).

Playing Games, Cognitive Change From Age 11 to 70, and Confounding by Life Circumstances

Our first set of analyses examined cognitive change between ages 11 and 70 using data collected at age 70: detailed health and cognitive testing information at age 70 as well as historical data from each individual's life. The independent variable (exposure) of interest in our regression models was the ordinal playing games variable, and the other covariates were also included as independent variables. The dependent (outcome) variable was either cognitive function at age 70, assessed with the MHT, or the change in MHT score from age 11 to 70. Linear regressions were modeled with base R functions. A life course path model was specified after the initial preregistration and data download, but before any analysis was carried out. This approach allowed us to model all downstream associations of age 11 cognitive function, education, and social class (Scarmeas & Stern, 2003) with games playing behavior, age 70 cognitive function, and each other; Figure 1 illustrates the paths modeled in this way (at this stage, the numbers in the figure may be ignored). We also specified a path model wherein we controlled for the confounding effect of these same life course variables and included other sociodemographic and health variables as controls. This model is fully described and presented in Supplementary Appendix 3. These models were fitted using the "psych" package (version 1.8.10) with the Preacher and Hayes (2004) bootstrap method, and path modeling used the "lavaan" package (version 0.6-3).
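As a concrete illustration of the regression just described, the following is a minimal sketch of how such a model could be fit with base R. It is a hedged example, not the authors' actual script: the data frame `lbc` and all column names (mht70, mht11, games, and so on) are hypothetical placeholders, and standardizing the continuous measures makes the games coefficient comparable to the reported standardized betas.

```r
# Hedged sketch with hypothetical column names, one row per participant.
lbc$z_mht70 <- as.numeric(scale(lbc$mht70))  # age-70 Moray House Test score
lbc$z_mht11 <- as.numeric(scale(lbc$mht11))  # age-11 Moray House Test score
lbc$z_games <- as.numeric(scale(lbc$games))  # games frequency, ordinal 1-5

# Age-70 cognitive function regressed on games playing plus the
# sociodemographic, activity, and health covariates described above.
m_age70 <- lm(z_mht70 ~ z_games + z_mht11 + sex + education + social_class +
                activity + hypertension + stroke + diabetes + cvd,
              data = lbc)
summary(m_age70)  # the z_games coefficient tests the first hypothesis
```

Swapping the outcome for a standardized age 11-to-70 change score, as in the second model described above, would require only changing the left-hand side of the formula.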
Playing Games and Trajectories of Cognitive Change in the Eighth Decade

The second set of analyses examined contributions to cognitive change between ages 70 and 79, using latent growth curve models. Latent growth curve models allowed us to bring a hypothesis-testing approach to longitudinal, correlational data. The correlational aspect of the data was captured by a known hierarchical model (Altschul et al., 2018; Ritchie et al., 2016) consisting of a general cognitive function variable with four subdomains of function beneath it. Latent variables representing each subdomain were specified from their cognitive tests, as described in the Cognitive Functions section. For example, the crystallized ability latent variable captured the common variance of the National Adult Reading Test, the Wechsler Test of Adult Reading, and a phonemic verbal fluency test. Each subdomain captured the common variance among its constituent tests in this way, and the general function latent variable captured the common variance among the four subdomains. A representative path model diagram is presented in Supplementary Figure S1.

The longitudinal aspect of the analysis was achieved by specifying latent variables that capture change in the cognition latent variables across the four waves. For each of the general cognitive function and four cognitive subdomain variables, we modeled an intercept (i.e., each individual's baseline performance) and a slope (i.e., the trajectory of change between ages 70 and 79) as latent variables. These latent variables could be analyzed as if they were directly measured. Thus, cognitive intercepts and slopes were regressed onto the same covariates as in our analyses of cognitive change from age 11 to 70. We were thereby able to assess whether playing games was associated with the baseline (age 70) level of general cognitive ability or any of the specific cognitive subdomains, and with decline in any of these cognitive functions. The cognitive intercept and slope latent variables were regressed on all covariates simultaneously, except for the variables measuring change in playing games; each of those two variables was added as a covariate in a distinct additional growth curve model and associated only with the slope variables of the cognitive functions. Model coefficients were estimated using full information maximum likelihood; that is, the estimation optimizer attempted to use all data from all participants, even individuals who did not complete all waves. Standard errors were bootstrapped from 1,000 bootstrap draws, and p values were computed using the χ2 test statistic. For each model, univariate or multivariate, a critical value of α < 0.05 was considered significant. All growth curve models were fit using "lavaan"; a minimal code sketch of such a specification is given below, before the eighth-decade results.

Figure 1. Life course path diagram of the regressions among sociodemographic variables, cognitive functions, and playing games. Arrows indicate the direction of the regression paths, with the numbers indicating std β weights and std errors (in parentheses). All paths are significant at p < .05, except for the path from education to playing games, printed in italics.

Playing Games at Age 70 and 76

The study sample is described in Table 1. Frequency of playing analog games at age 70 was generally high, with 320 participants (33% of 961) reporting that they played games every day or nearly every day.
The distribution was U-shaped: the second most frequent category (n = 195, i.e., 20%) was the lowest, that is, participants who played games less than once per year or never. At age 76 the responses were similar, with 222 participants (again, 33% of 682) reporting that they played games every day or nearly every day, and the second largest category (n = 123, i.e., 18%) reporting playing games less than once a year or never. In between, the distribution was similarly U-shaped. Some individuals did change their games playing habits, however: 160 increased their frequency of playing games to some degree. Reliability of the item was generally good (ρ = .63, ICC(3,1) = 0.64), consistent with previous work suggesting that individuals' self-reporting of playing games is accurate (Waris et al., 2019). Reliability is discussed in greater depth in Supplementary Appendix 4.

Playing Games and Cognitive Function From Age 11 Years to Age 70

Our first hypotheses predicted that playing games would predict higher cognitive function at age 70 and positive change in cognitive function from age 11 to age 70. The differences in cognitive function between ages 11 and 70 are shown in Figure 2. Average performance generally increased from age 11 to 70, but the differences visibly widen between less and more frequent games players. Formal regression modeling demonstrated that playing games was positively associated with age 70 cognitive function (std β = 0.094, t = 4.07, p < .001; Supplementary Table S1). Higher age 11 cognitive function, being female, having higher social class, and having had more education were also associated with higher age 70 cognitive function (Supplementary Table S1). A key result is that playing games was also associated with positive cognitive change between ages 11 and 70 (std β = 0.095, t = 4.07, p < .001; Supplementary Table S1); lower age 11 function, being female, having higher social class, and having had more education were likewise associated. In both of the above models, the association between playing games and cognitive test score was equivalent to a gain of approximately 1.42 IQ-like points per standard deviation increase in playing games. This would be like increasing one's frequency of playing games from monthly to several times a week.

Our second hypothesis predicted that, if we controlled for the possible confounding pathways of age 11 cognitive function, education, and social class, playing games would still have a positive association with age 70 cognitive function. We modeled the expected life course relationships among these variables, as shown by the path model in Figure 1. Age 11 cognitive function has a positive downstream association with education, social class, and age 70 cognitive function, as well as with playing games. Education and social class have their own positive downstream associations, and social class is slightly, negatively associated with games playing. Thus, despite controlling for the direct and indirect associations of age 11 function, education, and social class with playing games, playing more games was still associated with higher cognitive function at age 70 (std β = 0.083, z = 3.24, p = .001; Supplementary Table S2). In this model, there was a 1.25 IQ-like point gain from age 11 to age 70 per standard deviation increase in playing games.
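Before turning to the eighth-decade results, the following hedged sketch shows how a latent growth curve model of the kind described under Statistical Analyses might be specified in lavaan. It simplifies the paper's hierarchical measurement model to a single observed memory composite per wave; the data frame and all variable names are hypothetical placeholders rather than the study's actual code.

```r
library(lavaan)

growth_model <- '
  # Latent intercept (baseline at age 70) and slope (change to age 79);
  # loadings 0, 1, 2, 3 reflect the roughly equal 3-year wave spacing.
  i =~ 1*mem_w1 + 1*mem_w2 + 1*mem_w3 + 1*mem_w4
  s =~ 0*mem_w1 + 1*mem_w2 + 2*mem_w3 + 3*mem_w4

  # Baseline level and decline regressed on games playing and covariates.
  i ~ games + mht11 + sex + education + social_class + activity
  s ~ games + mht11 + sex + education + social_class + activity
'

# growth() fixes the observed-variable intercepts at zero and estimates
# latent means; missing = "fiml" retains participants who missed later
# waves, mirroring the full information maximum likelihood approach.
fit <- growth(growth_model, data = lbc, missing = "fiml",
              se = "bootstrap", bootstrap = 1000)
summary(fit, standardized = TRUE)
```

In a specification like this, a positive coefficient on games in the slope equation corresponds to less decline among more frequent players.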
Playing Games and Cognitive Function Change From Age 70 Years to Age 79

Our third set of hypotheses concerned cognitive change across the eighth decade. We predicted that more games playing reported at age 70, and any increase in playing games between ages 70 and 76, would predict less relative decline in cognitive functions between ages 70 and 79. The differences in cognitive decline across the eighth decade, by level of analog games playing at age 70, are shown in Figure 3. Across general cognitive function and all cognitive subdomains, intercept differences are apparent: individuals who played more games had higher baseline cognitive functions at age 70. Slope differences among levels of games playing are notable in general cognitive function and in the memory subdomain. Mean decline occurred at all levels of games playing across the eighth decade, but decline in these variables was more severe among less frequent players.

Latent growth curve models found that playing games was associated with higher general and subdomain cognitive function intercepts (Supplementary Table S4). That is, individuals who played games more often had better baseline cognitive performance at age 70 (β = 0.338, z = 5.886, p < .001), even controlling for age 11 cognitive function and other sociodemographic and health variables. This reproduces the results derived from the age 70 test scores reported above; in this analysis, however, the outcome was a comprehensive, multi-test, multi-domain, hierarchical model of cognitive function. A key result was that playing more games was associated with less decline in general cognitive function from age 70 to age 79 (β = 0.068, z = 2.523, p = .012; Table 2). The association between playing games and less cognitive decline was also significant for the memory subdomain (β = 0.204, z = 3.114, p = .002). For the other cognitive subdomains, all estimates were in the same direction but not significant. In IQ-score terms, a standard deviation more games playing is associated with 1.02 points less reduction in general ability, and 3.06 points less reduction in memory ability, over the eighth decade.

Increased games playing between ages 70 and 76 was associated with reduced decline in the processing speed subdomain (β = 0.110, z = 2.689, p = .007; Supplementary Table S5). This result was significant only for the ordinally constructed variable capturing relative behavior change. The binary variable, which identified only whether an individual played more games or not, likely lost too much information in the transformation and lacked the power to detect this effect, though it always trended in the expected direction (Supplementary Table S6).

Sensitivity Analyses in Growth Curve Models

We carried out non-preregistered sensitivity analyses that included all individuals in our growth curve models, that is, including the 37 who had been removed for having low MMSE scores indicating cognitive impairment. Including these individuals generally did not alter our primary findings: playing games was still associated with less decline in general cognitive function (β = 0.057, std error = 0.022), and increased games playing was associated with less decline in speed (β = 0.095, std error = 0.052). Effect sizes were generally the same, albeit smaller, though the association between playing games and the memory slope was much reduced (β = 0.017, std error = 0.007).
This is not surprising, as the memory subdomain is particularly linked to mild cognitive impairment (Allaire, Gamaldo, Ayotte, Sims, & Whitfield, 2009); including impaired individuals appears to attenuate our models' ability to determine reliable estimates of associations with cognitive domains.

Discussion

In this study, we found consistent evidence that playing more analog games is associated with significantly less relative cognitive decline from age 11 to age 70, and also less cognitive decline from age 70 to age 79. In our models we introduced a consistent set of well-validated sociodemographic and health variables as potential confounders. Whereas there were some confounding influences from these variables, the association between cognitive variables and playing games was robust to the inclusion of all covariates. Our results revealed particularly strong positive relationships with general cognitive function and with memory. Those LBC1936 participants who increased their games playing frequency from age 70 to age 76 appeared to experience associated benefits, but only in the speed subdomain. This is likely due to the greater sensitivity of speed to age-related decline (Ritchie et al., 2016; Verhaeghen & Salthouse, 1997).

The association between playing games and cognitive function cannot be entirely explained by cognitive reserve (Scarmeas & Stern, 2003); that is, when we account for the associations with hallmarks of cognitive reserve (early life cognitive function, education, activity, and social class), there is still a distinct association between playing more games and experiencing less cognitive decline. After controlling for these particular confounders, 64% (Supplementary Table S3) of the relationship between playing games and later life function appears to be due to playing the games themselves. This supports the use-it-or-lose-it hypothesis: in this sample, mental exercise in the form of playing games might help slow cognitive decline and even increase cognitive function.

A strength of the present study is the longitudinal sample used. LBC1936 provides data on many life course factors, including a validated measure of early-life, premorbid cognitive function. The same cognitive test was given at ages 11 and 70, allowing a direct comparison of performance across nearly 60 years. From age 70 onward, LBC1936 has data from 14 validated cognitive tests that fit a four-subdomain hierarchical model of cognitive function. The general factor and subdomains were all strongly associated with early life cognitive function, and by accounting for this and other variables' influence we could examine and identify the particular subdomains of cognitive function associated with playing games.

Figure 3. Trajectories of cognitive change across groups with different games playing habits. Data are plotted only for completers, that is, those individuals who participated in all four waves of data collection. General and subdomain cognitive scores are derived from the first-wave hierarchical model in our latent variable analyses: tests were standardized according to the characteristics of Wave 1, factor loadings were set by Wave 1, and factor scores for all waves were then estimated. Frequency of games playing increases with brighter lines: dark maroon = "never/less than yearly," light purple = "several times a year," turquoise = "several times a month," light green = "several times a week," yellow = "every day/almost every day."
An additional strength of our study is that all of our analyses were preregistered: variables, model structures, and fit and significance cutoffs were all specified in advance. Moreover, sociodemographic variables, particularly educational attainment and social class, made it possible to disentangle the association of playing games with later life intelligence from these early life influences. We also had access to health and activity variables that allowed us to account for their potential relationships with lifestyle choices and cognitive function.

This study had several limitations as well. Our sociodemographic data were retrospective, and sociodemographic, activity, and health data were self-reported. Additionally, retention across waves has consistently been approximately 80%, and between waves the major causes of attrition, death and frailty, are insurmountable. Whereas we were able to use the larger Wave 1 (age 70) sample for our univariate analyses of change between ages 11 and 70, our sample size and power were hindered by the decrease in sample size across the waves. In particular, our analyses of increases in games-playing frequency may have suffered from insufficient power, as they were limited by the reduced number of responses from Wave 3, at age 76, and by the relatively small number of individuals who actually increased their frequency of playing games (n = 160). The recruitment process for samples of older people tends to self-select volunteers who are often better educated and aging well, which is true of LBC1936. Findings might therefore be biased toward affluent and/or higher cognitive function individuals, who might be inclined to play games more often.

Previous studies have also focused on particular games, for example Sudoku (Ferreira et al., 2015) or board games (Dartigues et al., 2013). Our study was not so detailed; we could only examine playing games in aggregate, limiting the specificity of our conclusions. Previous research has also suggested that the social component of playing games may play an important role in its positive benefits (Kuo et al., 2018; Ngandu et al., 2015). From previous work (Gow et al., 2012), and as reproduced in our analyses, more social activity does not appear to be associated with reduced cognitive decline.

Unlike many digital cognitive training studies (Anguera et al., 2013; Ball et al., 2002; Ngandu et al., 2015) and one analog games study (Kuo et al., 2018), this was not a randomized controlled trial. Without an experimental manipulation, we cannot conclude that we have controlled for all confounds that might bias our results, and causal effects cannot be inferred. Randomized controlled trials are extremely difficult to carry out when the entire life course is under study, but we were able to test whether multiple early life factors, including cognitive function from age 11, confound associations between playing games and later life cognitive function. This finding is in line with those of Staff et al. (2018), who studied the cognitive associations of a self-reported intellectual engagement trait whilst controlling for early life cognitive function. However, intellectual engagement and social activities are confounded by early life variables (Gow et al., 2012; Von Stumm & Deary, 2012).
Our analyses controlled for these activities along with the same early life variables, and our results nevertheless suggest that playing games, a self-reported behavior rather than an intellectual or personality trait, is associated with higher cognitive function and less decline. This study suggests that the capacities used while playing games generalize to the tests we used to assess cognitive function. The effects also have long-term consequences: individuals who reported playing more games at age 70 performed better on cognitive tasks 3, 6, and 9 years later, and among individuals who changed their games playing habits, speed declines were smaller for those who increased their playing.

Playing more games is associated with less cognitive decline overall, particularly in general cognitive function and in the memory subdomain. This finding links previous research between games, intellectual activity, and dementia prevention, as deficiencies in the memory subdomain are related to mild cognitive impairment (Allaire et al., 2009) and subsequent dementia (DeCarli et al., 2004). Whereas the memory subdomain drove much of the overall relationship between playing games and cognitive change, the other subdomains all trended in the same direction, though they were not themselves significantly associated. Future work ought to study individual games and their associations with cognitive functions.

Apart from well-documented factors like smoking, physical exercise, and fitness, there are few known behavioral decisions people can make that will positively influence their cognitive functions. Previous studies on games (Dartigues et al., 2013; Kuo et al., 2018) and other intellectual activities (Staff et al., 2018) have not provided compelling evidence that these activities protect against subsequent cognitive decline. This study goes further by specifically suggesting that playing games is related to reduced decline. It will be a challenge to determine how best to apply these findings in practice, as our results suggest that a lifetime of playing games is the best way to capture the benefits. At present, evidence is still limited, but there do not appear to be any harmful effects of playing games. Thus, analog games are an affordable and fun activity that could protect against cognitive decline. In conclusion, this preregistered study of cognitive change from age 11 to age 70, and then from age 70 to 79, has shown that playing more games might improve the long-term outlook for one's cognitive health. Additional longitudinal investigations, including randomized controlled trials with larger numbers, are needed to clarify the avenues through which analog games might protect against cognitive decline.

Supplementary Material

Supplementary data are available at The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences online.

Funding

This work was supported by The University of Edinburgh Centre for Cognitive Ageing and Cognitive Epidemiology, which is funded by the Biotechnology and Biological Sciences Research Council and Medical Research Council (MR/K026992/1), and D. M. Altschul is funded by an MRC Mental Health Data Pathfinder award (MC_PC_17209). The research was independent from the funders, who had no role in the design or presentation of this study. The LBC1936 data were collected using a Research Into Ageing programme grant; this research continues as part of the Age UK-funded Disconnected Mind project.
Data and Material Availability Data and analytic code used in this publication are available to bona fide researchers upon request to the Lothian Birth Cohorts study coordinator via a standard application procedure.
Traits of cancer patients and CAM usage Background The use of Complementary Alternative Medicine (CAM) methods is increasing and is therefore also gaining importance in conventional Western medicine. Identifying personal traits that indicate who uses CAM and why can help physicians achieve successful physician–patient interaction, and thus improve patients' compliance and trust towards their physician. Patients and methods A questionnaire was distributed to cancer patients in an outpatient clinical and a rehabilitation setting. Multiple regression analyses were run to examine possible predictors of CAM use, such as gender, age, level of education, spirituality, attentiveness, self-efficacy and resilience. To differentiate within CAM users, two dependent variables were created: "holistic and mind–body methods", such as yoga, meditation or homeopathy, and "material based methods", such as food supplements or vitamins. Results Higher level of education, younger age and religion-independent attentiveness were significant predictors of the use of "material based methods". Female gender, higher education and religious spirituality were detected as significant predictors of "holistic and mind–body methods". Conclusion This study is among the first to take a more detailed look at how numerous personal traits are associated with the use of CAM methods and to differentiate between the applied methods. Our findings should be considered by conventional health care providers and could be integrated into a holistic assessment, to offer information about complementary medicine and to meet patients' needs. CAM-definition and usage Complementary medicine refers to a heterogeneous group of therapies that traditionally fall beyond the range of conventional medicine, but can be used alongside conventional treatment. In severe diseases that require aggressive therapies, such as chemotherapy in cancer treatment, complementary medicine can support the patients' well-being and compliance. It can contribute to a more wholesome approach in patient care and meet patients' demands that cannot be satisfied by conventional medicine (Ernst 2000). A study in 2011 showed that 66.5% of cancer survivors reported having used complementary medicine alongside the conventional treatment of their disease (Mao et al. 2011). Furthermore, meta-analyses have shown that CAM use has been increasing over the last decades (Frass et al. 2012). This underlines the growing importance of integrating CAM into traditional health care structures. CAM ranges from non-material methods, such as prayer, massage, music or meditation, to material methods, such as vitamin supplements, homeopathy, Chinese teas and many more (Kang et al. 2014). Biologically based therapies, relaxation techniques, prayer and meditation in particular were found to be the most frequently applied methods among participants of a German online survey in 2014 (Huebner et al. 2014). Research indicates that the wide range of CAM methods is used for different reasons (Huebner et al. 2014). Whereas prayer, meditation and music were found to be used with the intention of maintaining a feeling of control over life, other typical aims of using CAM were immune enhancement and pain control (Mao et al. 2011; Kang et al. 2014). Traits associated with CAM use It has been shown in various studies that women and patients with a higher level of education tend to be the typical CAM users among cancer patients (Molassiotis et al. 2005; Frass et al. 2012; Dubois et al. 2019).
Nevertheless, only a few studies have identified other characteristics that are significantly associated with CAM use in cancer patients. Research in the general population has shown that health behaviors, spirituality and openness are strong predictors of CAM use (Thomson et al. 2014; Dessio et al. 2004). Spirituality and the use of CAM were identified as two associated concepts in various studies (Trinkaus et al. 2011; James and Bah 2014; Ellison et al. 2012), which suggested that spirituality could be a good predictor of CAM use. Spirituality and spiritual needs cannot always be approached through religiousness; instead, the two should be treated as independent concepts (Thoresen and Harris 2002). The necessity of meeting patients' spiritual needs also in a non-religious way, especially for those patients who lack a religious community to turn to, has been highlighted. Studies have shown that spiritual requirements are often not satisfied by conventional medicine (Balboni et al. 2007), but can be met in an appropriate way by CAM methods (Hsiao et al. 2008). Research indicates that spirituality is positively correlated with an active coping style, quality of life and well-being in cancer patients (Holland et al. 1999; Peterman et al. 2002; Trinkaus et al. 2011). These findings provide a convincing incentive to encourage patients in their spiritual requirements. Measuring and approaching the spiritual needs of cancer patients with non-religious CAM methods, such as meditation, could support compliance and help meet patients' holistic needs during cancer treatment in a safe way. Attentiveness is a concept that can be perceived as a character trait; it can be enhanced by training, for example by meditation or prayer (Zale et al. 2018). Attentiveness is traditionally an essential teaching in ancient Asian religions, such as Buddhism (Baltzell and Cote 2017). Recent decades have seen a surge in mindfulness research in Western societies, with a focus on the attempt to define the concept without integrating pre-existing religious doctrines. A definition by Brown and Ryan (2003) described mindfulness as the ability to be in the present, paying attention to oneself and the moment. It has been established in various studies that mindfulness promotes well-being and health (Baer 2003; Creswell et al. 2019). Furthermore, it has been investigated as a predictor of social behavior (Lakey et al. 2007). However, the relation between CAM use and mindfulness is poorly studied. Self-efficacy is comparable with positive self-esteem, the ability to find solutions for personal challenges within oneself (Flammer 2015). Self-efficacy is positively correlated with health behavior (Strecher et al. 1986). Interestingly, self-efficacy can be enhanced by training methods and plays a critical role in behavioral therapy (Bandura 2004). In this context, it seems reasonable to consider self-efficacy as a potential predictor of CAM use. Resilience is a concept that has been defined in various ways, which might explain why measuring resilience can be challenging (Rosenberg et al. 2013). An important principle of the definition of resilience is the ability to recover from adversities, such as severe disease (Cosco et al. 2016). Scores to measure resilience have been proven to be valid (Cosco et al. 2016), and the association between resilience and the use of alternative methods has been a field of interest (Davidson et al. 2005). Nevertheless, research using resilience as a predictor of CAM usage is still sparse.
The interest has risen in identifying more predictors of CAM use in cancer patients, in order to provide better patient-centered care. By focusing on patients' personal traits, less tangible characteristics could gain relevance. Identifying the personal characteristics of cancer patients can help predict interest in CAM and CAM use. This can be relevant for health care providers in order to meet patients' needs for information, prevent negative interactions between conventional and complementary therapies and improve compliance. The aim was to identify traits of cancer patients, such as spirituality, resilience, attentiveness and self-efficacy, and to investigate their association with CAM use. Participants The questionnaire was distributed to patients of the outpatient oncology department at "Jena University Hospital" and the rehabilitation facility "Paracelsus-Klinik am See" between September and November 2018. The patients were informed that participation in the survey would be anonymous. For this study, we collected data from 308 patients. Demographic information was incomplete for one-third of the questionnaires; however, the data on the other predictors of CAM use were included in the regression analyses. Questionnaires The survey comprised six sections: one on personal data (age, gender, education), four questionnaires investigating personality properties and one questionnaire on CAM use. To measure resilience, the RS-11, a reliable short version of the RS-25 questionnaire, was used (Schumacher et al. 2005); reliability: α = 0.86 (von Eisenhart Rothe et al. 2013). Resilience is defined as a protective personality property that has been positively correlated with healthy adaptation. Patients were asked to rate how well the given statements, concerning belief in their own abilities, applied to their usual behavior (Likert scale 1 = "doesn't apply at all" to 7 = "fully applies"). Studies of the questionnaire show a positive correlation with well-being and a negative correlation with tendencies towards mood disorders (von Eisenhart Rothe et al. 2013). The ASKU (Allgemeine Selbstwirksamkeit Kurzskala, English: Short Scale for Measuring General Self-efficacy Beliefs) is a three-item scale (Beierlein et al. 2012). It is a self-assessment instrument used to investigate the subjectively perceived expectation of personal competence to resolve difficulties in everyday life and to cope with critical situations. The items were measured on a 5-point Likert scale (1 = "doesn't apply at all" to 5 = "fully applies"). Reliability and validity were found to be sufficient, with a reliability of ω = 0.81 to ω = 0.86 tested in two samples (Beierlein et al. 2012). The TPV (Transpersonelles Vertrauen, English: transpersonal trust) is a valid and reliable instrument to assess patients' spiritual and religious concepts, using a Likert scale (0 = "doesn't apply at all" to 3 = "fully applies") and consisting of 11 items. Reliability was found to vary from α = 0.89 to α = 0.95 (Albani et al. 2002). The FFA-14 (Freiburger Fragebogen für Achtsamkeit, English: Freiburg questionnaire for mindfulness) asks patients to rate statements about inner attitudes towards themselves. We used a short version of the originally 30-item survey.
This version has been shown to measure the construct of mindfulness independently of pre-existing theoretical knowledge about meditation or Buddhist philosophy, on a 4-point Likert scale (1 = "doesn't apply at all" to 4 = "fully applies"). A test for reliability showed a Cronbach's α of 0.93 (Walach et al. 2004). The fifth section originally consisted of the CAM questionnaire developed by the working group Prevention and Integrative Oncology of the German Cancer Society (Huebner et al. 2014). To simplify, we shortened this section, focusing on whether patients were interested in CAM, for what reasons, and whether they had used CAM during the past three months. Finally, a list of current complementary medicine therapy options and nutritional supplements was prepared. The patients were asked to indicate whether they had used each method in the past three months. The CAM section consisted of closed questions that could only be answered with "yes", "no" or "I am not sure", and questions with multiple possible answers and the option to add their own experiences in an open text field. Statistical analyses Analyses were conducted in SPSS (version 25). Binary logistic regressions were run to test whether sociodemographic variables (age, gender and education), resilience, self-efficacy, spirituality (transpersonal trust) and mindfulness were associated with the dependent variable CAM use (use vs. no use). To differentiate more precisely within the group of CAM users, two dependent variables for CAM use were created, one called "CAM use-biological-based methods", the other "CAM use-holistic and mind-body methods". The variable "CAM use-biological-based methods" contains complementary methods that are also frequently prescribed by conventional medical practitioners, such as vitamins B, C, D and E, and trace elements such as zinc and selenium. The variable "CAM use-holistic and mind-body methods" includes methods that would most likely not be prescribed by conventional medical practitioners. We categorized the use of medicinal plants such as mistletoe, Chinese medicine such as acupuncture and teas, prayer, meditation, yoga and other relaxation methods, homeopathy, consultation of a healer, the use of amygdalin ("vitamin B17") and various dietary methods into this variable. Two regression models were run, one for each dependent CAM variable. To assess the predictive value of the predictors, odds ratios (ORs) were calculated in the logistic regressions. An OR above 1 indicates a positive relationship between the predictor and the dependent variable, and an OR below 1 implies a negative association. Tests for multicollinearity were run in the logistic regressions. The VIF (variance inflation factor) ranged between 1.10 and 1.51, indicating that multicollinearity was not an issue (Ziegel and Myers 1991). No outliers or influential cases were detected using Cook's distance, standardized DFBetas and standardized residuals. Ethical vote The survey was approved by the Ethics Committee of Jena University Hospital of Friedrich-Schiller-University Jena. Demographic data For this study, we collected data from 308 patients. Of these, 51.1% (N = 101) of participants were female and 48.5% (N = 95) male. The largest proportion of participants (55.9%, N = 114) belonged to the 50- to 70-year-old age group. A detailed overview of the characteristics of the study sample can be found in Table 1.
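As a concrete illustration of the statistical workflow described above, the following is a minimal sketch in Python rather than SPSS; the data file and column names are hypothetical, and the snippet only mirrors the general approach (binary logistic regression, odds ratios with confidence intervals, and a VIF check), not the exact models:

```python
# Illustrative sketch of the analysis workflow (hypothetical data/columns).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("cam_survey.csv")  # hypothetical questionnaire data

# Binary outcome: use (1) vs. no use (0) of holistic and mind-body methods.
y = df["holistic_mindbody_use"]
X = sm.add_constant(df[["female", "age_group", "education",
                        "resilience", "self_efficacy",
                        "spirituality", "mindfulness"]])

fit = sm.Logit(y, X).fit()

# Odds ratios: exponentiate the coefficients and their 95% CI bounds.
# OR > 1 indicates a positive association, OR < 1 a negative one.
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "CI 2.5%", "CI 97.5%"]
print(or_table)

# Multicollinearity check: VIFs near 1 (the study reports 1.10-1.51)
# indicate that the predictors are not problematically correlated.
vif = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
print(pd.Series(vif, index=X.columns))
```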
Interest in CAM and CAM use Among the 55.9% (n = 160) of patients who indicated they were interested in CAM, 48.1% (n = 77) stated that they had developed that interest only since the diagnosis of their cancer. 47.5% of the participants stated that they had used CAM in the past months and were asked to specify the applied methods. The most common practice used by the study population was food supplements: 31.8% of the CAM users reported having taken nutritional supplements, which we categorized as biological-based methods. The second most frequently applied practice, with 10.8%, was praying, followed by homeopathy with 10.5%; we categorized both of these methods as holistic and mind-body. Table 2 shows a detailed overview of the frequency of occurrence. Predictors for CAM use "biological-based methods" The first regression aimed to identify predictors of the use of "biological-based CAM methods". In this model, female gender was not a significant predictor of CAM use. Nevertheless, the results showed a tendency toward increased use of CAM methods by women (OR = 0.453, CI 0.190-1.079, p = 0.074). Patients with lower education used biological-based methods of complementary medicine significantly less than patients with higher education (OR = 0.238, CI 0.083-0.685, p = 0.008). The effect was weaker when comparing patients with mid-level education with participants with higher education (OR = 0.622, CI 0.221-1.757, p = 0.371). Patients of the lowest age group (younger than 50) used significantly more biological-based methods of CAM than the older participants (50-70 years) (OR = 6.080, CI 1.636-22.599, p = 0.007). When comparing participants of the age group 50-70 with the oldest age group (older than 70 years), no effect on the use of CAM was detectable (OR = 0.908, CI 0.323-2.547, p = 0.854). Of the other independent variables, only attentiveness showed a significant effect on the use of biological-based methods of CAM (OR = 1.079, CI 1.001-1.163, p = 0.047). None of the other traits could be linked to CAM usage in this regression model (Table 3). Predictors for CAM use "holistic and mind-body methods" The aim of the second regression model was to identify predictors of the use of holistic and mind-body methods of CAM. This time, female gender was a significant predictor: women used significantly more holistic and mind-body methods of CAM than men (OR = 0.363, CI 0.153-0.864, p = 0.022). Higher education compared with basic education also showed a significant effect on the use of holistic and mind-body methods of CAM. Patients with higher education reported significantly higher use of holistic and mind-body methods of CAM than patients with a lower level of education (OR = 0.242, CI 0.085-0.691, p = 0.008). Again, when comparing levels of CAM use between participants with middle and higher education, no effect was detectable (OR = 0.995, CI 0.344-2.876, p = 0.993). Age was not a significant predictor in this regression model. However, contrary to the results of the first regression, a non-significant trend in the results indicated that participants older than 70 years had higher levels of holistic CAM use than the younger participants (OR = 0.395, CI 0.143-1.090, p = 0.073). In this regression, another strongly significant predictor of the use of holistic and mind-body methods was spirituality (OR = 1.125, CI 1.064-1.189, p < 0.001). None of the other predictors were significant in the second model, as shown in Table 4.
Discussion This study investigated sociodemographic variables and personal traits as predictors of CAM use. By differentiating within the complementary methods and dividing them into the groups "holistic and mind-body methods" and "biological-based methods", different predictors were identified. The percentage of CAM users was 47.9%. This number is comparable with the results of other German studies (Micke et al. 2009; Dubois et al. 2019). Applied methods in the study population Confirming earlier reports, food supplements, such as vitamins B, C, D and E, selenium and zinc, were the most frequently applied methods in the study population (Sparber et al. 2000; Molassiotis et al. 2005; Huebner et al. 2014). The stated percentage share of use was also in line with previous research. Furthermore, prayer, exercise such as yoga, meditation and relaxation methods were reported to be popular CAM methods, each used by approximately 10% of the participants. These numbers also reflect the results of previous studies (Micke et al. 2009; Huebner et al. 2014). As these non-biologically based methods do not interfere with conventional cancer therapies, they could and should be supported by physicians. Contrary to this, methods like medicinal plants, which were also popular among the study population with 8.5% indicated use, can have side effects and provoke interactions when not discussed with the physician. Sociodemographic variables as predictors By establishing two groups of CAM methods, different predictors could be identified. Concerning well-established sociodemographic predictors of CAM use, the study showed results consistent with various other studies that have investigated CAM use in the past (Richardson et al. 2000; Dubois et al. 2019; Frass et al. 2012). Younger age and higher education showed a significant association with the use of biological-based methods of CAM; female gender was not a significant predictor in this regression, but a positive trend was detectable. Within the group of holistic and mind-body methods, female gender and higher education were identified as significant. Even though age was not a significant predictor in this group, the relation between CAM use and younger age was positive. Characteristics like higher education and younger age might indicate better access to information about complementary methods or even greater knowledge about the disease and possible therapy options. Apart from that, higher education is associated with higher economic status, which offers the possibility of using methods and consulting alternative healers that might not be covered by health insurance. Another possible explanation for higher CAM use in young patients might be a higher level of social integration and thus more support from others who might have had positive experiences with complementary methods. Spirituality It seems reasonable to assume that people with high levels of spirituality are interested in, and possibly more open to, alternative therapies, seeking to satisfy their spiritual needs. One possible explanation for such an interest might be the correlation of spirituality with an active coping style (Holland et al. 1999). Choosing an alternative method and applying it might give patients a sense of control and active participation in the process of healing. Also, spiritual needs are often not met by conventional medicine, and patients do not always have a religious community in which to find support (Balboni et al. 2007).
However, CAM methods seem to give appropriate significance to the psychological aspect of healing and the spiritual needs of cancer patients (Hsiao et al. 2008). In this study, spirituality was identified as a strong predictor of what we defined as "holistic and mind-body methods" of CAM. Concerning the biological-based methods, such as the use of vitamins and other food supplements, no positive relation with spirituality could be found. Other studies have identified spirituality, when differentiated from religiosity, as a predictor of both biological-based and holistic and mind-body methods (Hsiao et al. 2008; Smith et al. 2008). While at first glance this seems to contradict our data, the most probable explanation is that the TPV instrument measures the personal relation to the Holy Spirit and not other dimensions of spirituality. In fact, the needs of patients with a high level of piety, as measured in our study, might well be satisfied only by mind-body methods, while other dimensions might be more related to social activities or a sense of responsibility for oneself, which may entail the use of social contacts or biological-based CAM methods. Attentiveness Attentiveness was identified as another significant predictor of CAM usage. The concept of mindfulness and attentiveness has been shown in various studies to promote health and well-being (Baer 2003; Brown and Ryan 2003; Baltzell and Cote 2017). Techniques with Buddhist origins, such as yoga and meditation, have promoted mindfulness as an essential component of their practice for centuries (Baltzell and Cote 2017). By contrast, Western society has only recently discovered the benefits of mindfulness training. Literature on this topic is emerging, and numerous definitions of attentiveness are being developed. Not only can attentiveness be trained, but individuals also differ in their tendencies toward self-care and self-awareness. If seen as a character trait or personality property, mindfulness can be measured and has been identified as a predictor of social behaviors (Lakey et al. 2008; Ruedy and Schweitzer 2010). However, research on how mindfulness as a personality property affects decision-making in disease and the use of CAM is sparse. We identified attentiveness as a significant predictor of the use of biological-based methods of CAM, such as vitamins or other food supplements like selenium and zinc, but not as a predictor of what we defined as "holistic and mind-body methods", e.g., practices of Buddhist origin like yoga, tai chi and qi gong. This is interesting when considering the origins of mindfulness training. However, the results are in line with the assumption that mindfulness promotes health. People with higher levels of mindfulness might have a stronger interest in their bodily functions and greater knowledge about what strengthens their body and mind. It can be assumed that they have greater resources and capacities to inform themselves about possible treatment options. Furthermore, our findings accord with a study published in 2008 indicating that higher levels of mindfulness promote a less defensive and more open communication style (Lakey et al. 2008). This could enable patients to communicate openly with their physician about possible alternative treatment options and thus explain the patients' choice of physician-approved alternative methods.
Self-efficacy and resilience It seems reasonable to assume that, in line with attentiveness, other personal traits like self-efficacy and resilience also promote interest in self-care and participation in the healing process. It has been established in various studies that resilience and self-efficacy relate positively to health behavior and well-being (Strecher et al. 1986; Cosco et al. 2016). However, in this study we were not able to demonstrate a significant relationship between these two personal traits and the use of CAM. Our findings are in line with a former study from our group (Ebel et al. 2015), in which self-efficacy likewise did not show a significant effect on CAM use. Yet research in this field is still very sparse; further research with a bigger study sample might be needed to validate our results and to find explanations for them. Limitations Our results were collected in a rehabilitation center; this might represent a special group of patients who have been living with their diagnosis for a while and have had time to reflect upon personal coping methods, e.g., the use of CAM. Another point that could be considered a limitation of the work is that the TPV, the instrument we used to measure spirituality within our study population, measures only one dimension of spirituality: it evaluates spirituality in the sense of religion, the relation to the Holy Spirit. More studies are needed to learn more about the influence of all dimensions of spirituality, including altruism, love, awe and gratitude, on the needs and coping of cancer patients in the health care system, and to develop support methods addressing these needs in a more differentiated way. A further limitation is the missing differentiation between tumor types. Furthermore, there might be systematic differences between participants and patients who decided not to participate, as participation was voluntary. Conclusion This study is among the first to take a more detailed look at how numerous personal traits relate to the use of CAM methods and to differentiate between the applied methods. We showed that, in addition to sociodemographic predictors like age, sex and education, personal traits also predict the use of CAM. Furthermore, it demonstrates that within the group of CAM users there are clear differences between the participants of the study. While the use of "holistic and mind-body methods" is associated with higher levels of spirituality, a predictor of "biological-based methods" is attentiveness. Our findings should be considered by conventional health care providers and could be integrated into a holistic assessment, to offer information about complementary medicine and to meet patients' needs. Physicians may need to improve their understanding of the personal traits influencing the use of CAM methods, and therefore decision-making and health behavior. This might ease the way toward more open communication between patients and physicians, build mutual confidence and potentially facilitate patients' decisions to use CAM methods that are viable from a health perspective.
Assessment of the Role of C3(H2O) in the Alternative Pathway In this study we investigate the hydrolysis of C3 to C3(H2O) and its ability to initiate activation via the alternative pathway (AP) of the complement system. The internal thioester bond within C3 is hydrolyzed by water in plasma because of its inherent lability. This results in the formation of non-proteolytically activated C3(H2O), which is believed to have C3b-like properties and to be able to form an active initial fluid-phase C3 convertase together with Factor B (FB). The generation of C3(H2O) occurs at a low but constant rate in blood, but the formation can be greatly accelerated by interaction with various surfaces or with nucleophilic and chaotropic agents. In order to more specifically elucidate the relevance of C3(H2O) for AP activation, its formation was induced in solution by repeated freeze/thawing or by methylamine or KSCN treatment, and the products were named C3(x), where x can be any of the reactive nucleophilic or chaotropic agents. Isolation and characterization of C3(x) showed that it exists in several forms with varying attributes, where some have more C3b-like properties and can be cleaved by Factor I in the presence of Factor H. However, common to all these variants is that they are less active partners in the initial formation of the AP convertase compared with the corresponding activity of C3b. These observations support the idea that formation of C3(x) in the fluid phase is not a strong initiator of the AP. It is rather likely that the AP mainly acts as an amplification mechanism of complement activation that is triggered by deposition of target-bound C3b molecules generated by other means. INTRODUCTION The alternative pathway of complement (AP) is initiated by C3b and Factor B forming a Mg2+-dependent complex, as reviewed in Lachmann (1) and Harrison (2). This initial complex formation is followed by the cleavage of Factor B by Factor D into Ba and Bb to form the active, labile enzymatic complex C3bBb, the AP C3 convertase, which in the fluid phase has a half-life of 90 s as determined in vitro using purified components (3, 4). The C3bBb convertase can then cleave native C3 molecules into C3a and C3b. These C3b molecules trigger a positive feedback loop reaction, with each new C3b molecule potentially being able to form a new AP convertase complex. Thus, in order for the AP to commence and form an initial AP convertase, C3b needs to be available in the fluid phase. As an indication of C3b formation, the anaphylatoxin C3a/C3a desArg is constantly generated, with a half-life in plasma of ∼30 min (5). The C3a levels are elevated in proportion to the concentration of C3 (i.e., the C3a/C3 ratio is constant), which was evident in a normal/obese population with a wide range of C3 concentrations in the blood plasma (6). This turn-over of C3 has been explained by the tick-over theory, put forward by Lachmann et al. in the early 1970s (7, 8). This theory states that low amounts of C3b are constantly generated in sufficient quantity to be able to interact with Factor B (FB) and form an initial fluid-phase AP convertase. The origin and configuration of this C3 species have not been fully elucidated, but in the early 1980s Pangburn et al. described the continuous hydrolysis of the internal thioester in C3, generating a "C3b-like" molecule with no hemolytic activity (9, 10). Based on these findings, the tick-over of native C3 to C3(H2O) has been the prevailing mechanism invoked to explain the tick-over theory and activation of the AP.
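The tick-over and positive feedback dynamics described above can be caricatured in a toy kinetic model. All rate constants below are hypothetical placeholders except the convertase decay rate, which follows from the ~90 s fluid-phase half-life mentioned above; FB consumption and regulation by Factor H and Factor I are deliberately ignored, so this is a conceptual sketch, not a quantitative model of the AP:

```python
# Toy caricature of AP tick-over and amplification (concentrations in uM,
# time in s). All rates are assumed except K_DECAY (from the ~90 s
# fluid-phase half-life of C3bBb); regulation and FB depletion are omitted.
import numpy as np
from scipy.integrate import solve_ivp

K_TICK = 1e-6               # 1/s, spontaneous C3 -> C3(H2O) tick-over (assumed)
K_FORM = 1e-3               # 1/s, convertase formation from fluid-phase C3b (assumed)
K_CAT = 5e-2                # 1/(uM*s), C3 cleavage by the convertase (assumed)
K_DECAY = np.log(2) / 90.0  # 1/s, decay of the labile C3bBb complex

def ap(t, y):
    c3, c3b, conv = y       # native C3, C3b-like species, active C3bBb
    cleave = K_CAT * conv * c3
    return [
        -K_TICK * c3 - cleave,                # C3 consumed by tick-over and cleavage
        K_TICK * c3 + cleave - K_FORM * c3b,  # C3b generated, then recruited
        K_FORM * c3b - K_DECAY * conv,        # convertase forms and decays
    ]

# Start from ~5.5 uM native C3 (a rough plasma level) and no C3b/convertase.
sol = solve_ivp(ap, (0, 3600), [5.5, 0.0, 0.0])
print(sol.y[0, -1])  # with these toy rates, C3 consumption accelerates over time
```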
The C3b-like activity of C3(H2O) was linked to its ability to bind FB and form an AP convertase, and to its susceptibility to cleavage by Factor I in the presence of Factor H. The convertase-forming properties were assessed in a purified system by mixing C3(H2O), FB and Factor D (FD) to form AP convertases. The generated AP convertases were allowed to cleave native C3, and the native C3 remaining after cleavage of C3 to C3a and C3b by C3(H2O)Bb was measured in a sensitive hemolytic assay (10). One of the caveats in the earlier studies was that in all the presented experiments, the reaction was amplified by either isolated C3 nephritic factor (C3Nef) or purified properdin, which both stabilize the AP convertase. Neither of these components is present in this form under physiological conditions. C3Nef is related to C3 glomerulonephritis (C3GN) and, as later shown, purified properdin preparations contain a large fraction of aggregates (Pn), which cause fluid-phase complement consumption when added to serum, in contrast to the physiological oligomer forms (P2, P3, and P4) (11). More recently, the non-physiological activity of the properdin aggregates was confirmed, since Pn and unseparated properdin were shown to bind to numerous surfaces, in contrast to the P2-P4 forms, which showed selectivity for zymosan and necrotic cells (12). Also, later studies showed that C3(H2O) consists of a mixture of C3 populations: one with a native C3-like configuration, C3(H2O*), and one with a "C3b-like" form (13). The former can return to the shape of native C3 with hemolytic activity, while the latter is in an irreversible "C3b-like" state. This means that a preparation will contain various forms of C3, including contaminating native C3 that can be cleaved into C3b and participate in the generation of AP convertases, thereby distorting an accurate evaluation of the true properties of C3(H2O). In order to more precisely elucidate the relevance of C3(H2O) for AP activation, we prepared different forms of C3(H2O), which from now on will be called C3(x), where x is the reactive nucleophilic or chaotropic agent. We separated the various populations of C3(x) and demonstrated that the conditions previously described to create C3b-like molecules were variable and insufficient to convert all native C3 to C3(x). Confirming previous studies (13, 14), different fractions of C3(x) were identified, and it was also shown that different agents [methylamine, KSCN, repeated cycles of freezing/thawing (F/T)] generated distinct populations. A schematic illustration of the structural rearrangement of the different forms of C3(x) is shown in Figure 1. In general, we show that all forms of C3(x) were able to form C3(x)Bb convertases and were susceptible to cleavage by Factor I in the presence of Factor H, but all forms were much more sluggish compared with C3b, and no AP convertase activity was observed in the presence of Factor I and Factor H. Theoretically, only one initial C3(x) molecule is needed to start the positive feedback loop and commence AP activation, but due to its inefficiency, this mechanism, as demonstrated on various activating surfaces, is likely to be largely overruled by C3b generation mediated by the surface-bound classical pathway (CP)/lectin pathway (LP) convertase.
This implies that the AP is mainly an amplification mechanism (15), apart from situations in which activation depends on insufficient regulation, as in paroxysmal nocturnal hemoglobinuria (PNH), atypical hemolytic uremic syndrome (aHUS), age-related macular degeneration (AMD), etc. MATERIALS AND METHODS Purified C3 (18) in which the thioester had been disrupted by repeated (10 times) freezing (−20 °C) and thawing [room temperature (RT)] was dissolved in PBS (phosphate-buffered saline, pH 7.4) and used as standard in these assays. In hemolytic tests for the CP and the AP (19) this preparation of C3 was found to be devoid of activity, confirming that the thioester was disrupted. Assessment of the Rate of C3(x) Generation The C3(x) ELISA was then used to measure the rate at which C3(H2O) is formed in plasma. Freshly drawn human whole blood in K2-EDTA Vacutainer® tubes (BD, Plymouth, UK) from healthy volunteers was centrifuged to obtain plasma. The plasma was then incubated, continuously rotating at 20 rpm at 37 °C, in PVC tubes pre-coated with heparin (Corline Systems AB, Sweden) without an air bubble, and samples were collected for C3(x) ELISA analysis at different time points from 0 to 180 min. Preparation of C3(x) With Nucleophilic and Chaotropic Agents Native C3 [purified in-house from human plasma, according to Hammer et al. (18)] was incubated with the nucleophilic agent methylamine (0.2 M) or potassium thiocyanate (KSCN; 0.33 M) for 30 min at 37 °C in VB++ (veronal-buffered saline containing 5 mM Na-barbiturate, pH 7.4; 145 mM NaCl; 0.15 mM Ca2+; 0.5 mM Mg2+) adjusted to pH 8.0. After the incubation the C3 was dialyzed back into VB++, pH 7.3. Some of the methylamine-treated C3 was subjected to repeated F/T cycles (10 times, from −20 °C to RT) to remove any traces of native C3. Ion Exchange Chromatography Cation-exchange chromatography was used to identify different C3 populations within native C3, C3(methylamine), C3(methylamine F/T), and C3(KSCN). The chromatography was performed using a Mono S 5/50 GL column (GE Healthcare, Bio-Sciences AB, Uppsala, Sweden) coupled to the NGC™ Chromatography System. The flow rate was set to 0.5 mL/min at RT, and a gradient was established from 0 to 0.85 M NaCl in 20 mM phosphate buffer, pH 6.8. Fractions were collected for further analysis. Identification of C3(x) Forms in Human Serum Human whole blood from a healthy volunteer was collected in Serum/Cat BD Vacutainer® tubes (BD, Plymouth, UK). The collected serum was incubated at 37 °C for 24 h, followed by protein precipitation using 15% polyethylene glycol (PEG)-4000 at 4 °C for 60 min, and then centrifuged at 13,000 × g for 10 min. Finally, the pellet was resuspended in 20 mM PBS, pH 6.8. The protein solution was diluted 1:3 before adding it to the NGC™ Chromatography System with the MonoS column. The same "Cation exchange MonoS C3" protocol as described above was used (flow rate 0.5 mL/min, 85% gradient, 0-0.85 M NaCl in 20 mM phosphate buffer, pH 6.8). Twenty-five fractions were collected for further analysis of C3 and C3(x). The C3c ELISA was performed according to the protocol for detection of C3 levels described in Henningsson et al. (22). The fractions were diluted 1/200 and added to a 96-well plate coated with a polyclonal anti-C3c antibody (Dako, Glostrup, Denmark). This was followed by detection using biotinylated anti-C3c diluted 1/6,400 and streptavidin conjugated to HRP (GE Healthcare, Chicago, IL, USA). A serum pool (consisting of serum from 46 donors) at a concentration of 500 µg/L was used as a standard.
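For time courses such as the 0-180 min rate assessment described above, a formation rate can be estimated with a simple linear fit. The numbers below are invented for illustration only and are not the study's data:

```python
# Illustrative only: estimate a linear C3(x) formation rate (made-up data).
import numpy as np

t_min = np.array([0, 15, 30, 60, 120, 180])         # sampling times, minutes
c3x_nM = np.array([2.0, 2.9, 3.6, 5.1, 8.0, 11.2])  # hypothetical C3(x) readouts

slope, intercept = np.polyfit(t_min, c3x_nM, 1)     # slope in nM per minute
print(f"estimated C3(x) formation rate: {slope * 60:.1f} nM/h")
```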
The C3(x) analysis was performed according to the assay described above. Wes Capillary Electrophoresis/Blotting Immunoassay An automated Western blot-like assay was used. Simple Western 12-230 kDa size assays were carried out under reducing conditions using a Wes™ analyzer (ProteinSimple, Santa Clara, CA, USA) according to the manufacturer's manual. In brief, C3/C3b/C3(met)/C3(KSCN) (2.8 µg) were incubated together with Factor B (4 µg) and Factor D (0.02 µg) (Complement Technology Inc, Tyler, TX, USA) in VB++ at 37 °C, and samples were collected after 1, 5, 15, 30, 60, and 120 min. Following incubation, the samples were diluted 5× in 0.1× Wes Sample Buffer to the final concentrations seen in Table 1. In order to follow the cleavage of FB to Bb and Ba, a mouse mAb against human complement factor Bb (10 µg/mL) (Bio-Rad, Kidlington, UK), detecting both intact FB and the Bb cleavage product, was used as the primary antibody for the Wes analysis, together with the Wes anti-mouse detection module. The subsequent electrophoretic protein separation and immunodetection were performed using the default Simple Western™ settings. The quantified immunodetected signal, i.e., the area under the curve, was analyzed using the Compass software (version 4.0.0, ProteinSimple™), which also converted the electropherograms into virtual blots. The experiments were repeated four times. Factor I Cleavage Analysis Cleavage of native C3, C3b, C3(met), and F/T C3(met) (167 µg/mL) by Factor I (17 µg/mL, Complement Technology Inc) in the presence of Factor H (33 µg/mL, purified in-house from human serum) was analyzed by SDS-PAGE electrophoresis on a 4-20% gradient gel (Mini-PROTEAN® TGX™ Precast Gels, Bio-Rad, Hercules, CA, USA), essentially according to Hammer et al. (18). The samples were boiled under reducing conditions using 100 mM DTT, and the proteins were visualized on the gel using Coomassie Brilliant Blue staining. Hemolytic Assay of the Alternative Pathway The hemolytic tests for the AP were performed as in Nilsson and Nilsson (19). In short, 50 µL of C3-depleted serum (Complement Technologies Inc., USA) was carefully mixed with 25 µL of C3 (positive control), C3b (negative control) or the C3(x) preparations, i.e., C3 after treatment with methylamine and F/T. Native C3 was tested at five different concentrations (75-365 µg/mL final concentration), and C3b, C3(x) methylamine and C3(x) F/T were added at a final concentration in the middle of this concentration range (see Table 2). The serum samples were then added to 100 µL of 50% (v/v) rabbit erythrocytes and agitated at 37 °C for 20 min, and the reaction was then stopped by the addition of VB-EDTA. The activity of the test serum was compared with that of a reference serum and expressed in percent. Estimation of C3(x) Formation at Different pH C3 (100 µg/mL) was incubated in sodium phosphate buffers with pH ranging from 4.3 to 7.3 for 60 min at 37 °C. The level of C3(x) formation was measured using multiplex xMAP according to the protocol described above. The ability of these preparations to cleave FB after addition of FD was measured by Wes immunoassay. Similarly, FB consumption was also measured after first neutralizing the C3 preparations to pH 7.4. The samples were prepared for the Wes immunoassay by mixing the C3 preparations treated at pH 4.3-7.3 (70 µg/mL), FB (100 µg/mL), and FD (0.5 µg/mL), followed by incubation at 37 °C for 5, 30, and 60 min and a 5× dilution in 0.1× Wes Sample Buffer.
Wes immunoassay was performed under reducing conditions and, for detection, the primary mouse mAb against human complement factor Bb (1 µg/mL) was used together with the Wes anti-mouse detection module. Generation of C3(x) in Plasma by Incubation With Nucleophilic Agents Blood from healthy volunteers, who had not been receiving any medication for a minimum of 10 days prior to donation, was collected in Vacutainer® tubes (BD, Plymouth, UK) in the presence of the specific thrombin inhibitor lepirudin (50 µg/mL, Refludan™, Aventis Pharma) and centrifuged to obtain plasma. Aliquots (20 µL) of ammonium hydroxide solution at physiologically relevant final concentrations (0-3.2 mM) were added to 480 µL of lepirudin plasma and incubated for 60 min at 37 °C. The levels of generated C3(x), C3a and sC5b-9 in the samples were analyzed by ELISA. C3a and sC5b-9 were assessed according to Nilsson et al. (21) and Mollnes et al. (23). Each experiment was performed separately using blood from different donors. Activation of the Alternative Pathway on Different Surfaces Four different surfaces, i.e., lipopolysaccharide (LPS)-coated polystyrene (PS), bare PS, glass, and polypropylene (PP) tubes, were selected for the evaluation of surface-induced activation of the AP in lepirudin plasma. The LPS-coated tubes were prepared by adsorption of LPS (1 mg/mL) from Escherichia coli O55:B5 (Sigma Aldrich) to clean PS tubes for 1 h at RT, followed by careful washing with PBS. All tubes were dried before use. The lepirudin plasma was added to the four different types of tubes and incubated statically at 37 °C. Two parallel series of experiments were performed: one with the addition of Mg-EGTA (0.5 and 10 mM final concentration, respectively) and the other without, where the volume was compensated with the corresponding amount of PBS. Samples were collected after 0, 15, 30, 60, and 120 min, and EDTA (10 mM final concentration) was immediately added to stop further activation. The surface-mediated complement activation was monitored as the generation of C3a, measured by sandwich ELISA. The experiment was repeated with three different blood donors. Statistics All experiments were repeated at least three times. Data are presented as mean values ± SEM or as a representative image. Statistical calculations (one-way ANOVA with Bonferroni's multiple comparisons test) were made using GraphPad Prism version 6.0 (GraphPad Software, La Jolla, CA, USA). P < 0.05 was considered significant. Correlation between the different parameters was calculated with the non-parametric Spearman correlation test. Differences between groups (patients vs. controls) were calculated using the Mann-Whitney U-test. Ethics Ethical approval for blood collection was obtained from the regional ethics committee in Uppsala (registration number 2008/264). RESULTS Validation of an Assay for Non-proteolytically Activated C3 [C3(x)] An ELISA for detection of C3(x) (24) was used, employing mAb 4SD17.3 against a neo-epitope in C3a for capture and a polyclonal anti-C3d antibody for detection (Figures 2A,B). Since the epitope for mAb 4SD17.3 is exposed both in high-molecular-weight C3(x) and in low-molecular-weight C3a, a mix-up between C3(x) and C3a is possible. Although this interference occurs only to a small extent, an initial PEG precipitation step was included prior to analysis. In this study, the assay was transferred to the MAGPIX platform using the same pair of antibodies.
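The nonparametric tests named under Statistics can be sketched with standard tools; the data below are synthetic and purely illustrative:

```python
# Illustrative sketch of the nonparametric tests listed under Statistics.
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(0)
elisa = rng.normal(100.0, 15.0, 40)              # synthetic readouts, platform 1
magpix = 0.9 * elisa + rng.normal(0.0, 5.0, 40)  # correlated readouts, platform 2

rho, p = spearmanr(elisa, magpix)                # correlation between techniques
print(f"Spearman rs = {rho:.3f}, p = {p:.2g}")

group_a = rng.normal(12.0, 3.0, 30)              # e.g., patients (synthetic)
group_b = rng.normal(9.0, 3.0, 30)               # e.g., controls (synthetic)
u, p2 = mannwhitneyu(group_a, group_b)           # two-group comparison
print(f"Mann-Whitney U = {u:.0f}, p = {p2:.3f}")
```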
Good correlation was found between the results obtained with the two techniques (rs = 0.976, p < 0.0001), albeit with higher nominal values found for the ELISA. When fully optimized, the intra-assay CV was 2.7% and the inter-assay CV 6.0%. Next, we tested whether the choice of detection antibody (anti-C3d or anti-C3c, depicted in Figure 2B) would affect the levels of detection. The rationale is that bound C3b and C3(x) are susceptible to cleavage by Factor I in plasma, which ultimately may cleave off C3c; this can potentially result in lower nominal values if anti-C3d is used for detection. This was not the case: the values correlated closely (rs = 0.918, p < 0.0001), with only slightly higher values obtained when anti-C3c was used for detection (Figure 2C). The ELISA was then used to monitor the spontaneous formation of C3(x) in plasma. The formation of C3(x) is easily affected by external factors such as activating surfaces, e.g., those presented by the walls of the reaction tubes or at the air-liquid interface. We therefore carefully selected low-adsorbing polypropylene tubing and removed all air bubbles before the onset of the experiment, to ensure that it actually was the spontaneous formation of C3(x) that was analyzed, with minimal influence from other factors. This may explain the slower rate of C3(x) formation obtained in our experiments compared with previously published values (10). C3(x) generation was initially faster and tended to reach a slower, stationary stage within 60 min. Overall, it showed a slow but continuous C3(x) formation of ∼3 nM per hour (Figure 2D). Characterization of C3(x) Preparations Nucleophilic and chaotropic agents have been proven to accelerate C3 conversion to C3(x) in buffers of selected composition (9, 10). C3(x) was prepared by treatment with either the chaotropic agent KSCN [C3(KSCN)] or the nucleophilic agent methylamine [C3(met)]. In addition, C3(x) prepared by repeated F/T, and C3(met) also exposed to repeated F/T, were included in the study. Size Exclusion Chromatography In order to characterize the C3(x) preparations, C3(x) prepared by methylamine treatment and repeated F/T was allowed to bind to mAb 4SD17.3. This antibody is directed toward a neo-epitope of C3a that is exposed on C3(x) but not in native C3. The proteins before and after complex formation were analyzed on a SEC column. Fibrinogen (molecular weight 300 kDa) was used as a molecular weight marker, as it eluted at approximately the same elution volume as the formed complexes. As seen in Figure 3A (upper panel), C3(x), with a molecular weight of 180 kDa, eluted slightly before the antibody (150 kDa). However, after incubating C3(x) with mAb 4SD17.3, complexes were formed that appeared as a distinct peak near the 300 kDa molecular weight marker, proving that the C3a domain was exposed and available for binding. It was confirmed by ELISA detection of both C3 and IgG in the fractions collected from the SEC analysis that the 300 kDa peak contained complexes between C3(x) and mAb 4SD17.3 (Supplementary Figures 1B-E). C3b, lacking the C3a fragment, was used as a negative control (Figure 3A, lower panel) and, as expected, no complexes were formed after incubation with mAb 4SD17.3. Similarly, complexes were allowed to form between C3(x)/C3b and mAb 7D84.1, which is directed against a neo-epitope in C3d,g that is only available in the denatured form of C3.
Separation on the SEC column showed no traces of complexes at the 300 kDa elution point, indicated by fibrinogen (red trace). Instead, C3 (blue trace), mAb 7D84.1 (green trace) and the mixture of the two after incubation (dark blue trace) were all eluted at the same position. The same pattern was seen for the corresponding analysis of C3b and mAb 7D84.1 (Supplementary Figure 1A). These results show that the C3a epitope is available in the C3(met) preparation, but the molecule has not undergone a conformational change that exposes the neo-epitope in C3d,g. Ion Exchange Chromatography The elution profile of native C3 (Figure 3B, red line) separated on a MonoS cation-exchange chromatography column shows one distinct peak (peak 1) that contains native C3, followed by two small peaks (peaks 2 and 3) that contain C3(x), since a small part of the C3 population usually undergoes spontaneous hydrolysis. In the corresponding analysis of a sample where C3(x) had been formed by repeated F/T (Figure 3B, blue line), the vast majority of the sample population was instead eluted in peak 3; a small part still appeared in the second peak, while only a minor part remained in the first peak. Separation profiles of the different C3(x) preparations, i.e., C3(met), F/T C3(met), and C3(KSCN), on the MonoS column are shown in Figure 3C. The methylamine-treated C3 was mainly eluted in peaks 2 and 3, but repeated F/T of this sample caused most of the population in peak 2 to be transferred into peak 3. In addition, there is a small peak between peaks 1 and 2. This peak increases slightly after freezing/thawing cycles, while peak 1, which contains native C3, disappears (Figure 3C, green line). Although this peak has not been characterized, due to the small amount, it is assumed to contain C3(x) or a derivative thereof. A similar pattern was detected for KSCN-treated C3, with the difference that this preparation also contained a fourth peak at the end of the elution profile. However, further analysis showed that this peak contained C3(KSCN) with functional properties similar to those of peak 3 in the same preparation (Supplementary Figure 3C). Since this analysis of C3(x) indicates that there are two different populations of C3(x) formed initially, separated as peak 2 and peak 3 on the MonoS column, these two fractions were collected for further characterization and will in the following text be named C3(x)1 and C3(x)2, respectively. Identification of C3(x) Forms in Human Serum In addition to studying C3(x) formation in a purified system, C3(x) generation in human serum was also analyzed after incubation at 37 °C overnight, followed by PEG 4000 precipitation, before separation on the cation-exchange (MonoS) chromatography column (Supplementary Figure 2A). All collected fractions were analyzed for the total amount of C3 using a C3c ELISA. The complete analysis of all the fractions from the MonoS separation is found in Supplementary Figure 2B. As illustrated in this figure, most of the C3 was eluted in fractions 2-9. In order to identify the amount of C3(x) in these fractions, the C3(x) assay was used. It turned out that C3(x) was mainly found in fractions 6-8, and to some extent also in fraction 9, but only traces were found in later fractions (Figure 3D). This indicates that mainly the first form of C3(x), i.e., C3(x)1, is present in the serum sample, and only minute amounts, if any, of C3(x)2.
One reason for this may be that, in contrast to C3(x)1, C3(x)2 is easily cleaved by Factor I in the presence of Factor H, and the split products will both be PEG-precipitated and separated differently on MonoS columns compared with the intact C3(x)2 molecules. Generation of Bb in the Presence of Different C3(x) Preparations To get an idea of the properties of the two forms of C3(x) [i.e., C3(x)1 and C3(x)2] identified by the cation-exchange (MonoS) chromatography separation, their ability to bind FB, allowing its subsequent cleavage by FD into Bb and Ba and thereby forming a C3 convertase, was evaluated using a new method, similar to that employed by Pangburn previously (9) but modified in the detection stage. In our method, the cleavage of FB and the ensuing generation of Bb were measured with Wes immunoassay after 1, 5, 15, 30, 60, and 120 min. For comparison, native C3 and C3b were also included in the study.

[FIGURE 4 | (A) Wes immunoassay monitoring the consumption of FB due to cleavage to Bb and Ba after the addition of FD to native C3, C3b and peaks 2 and 3 isolated after separation of methylamine-treated C3 on a MonoS chromatography column, at time points from 1 to 120 min. The decrease in FB due to cleavage to Bb and Ba was measured by Wes immunoassay using a specific mAb for Bb. The data are presented as mean ± SEM. (B) Cleavage by Factor I in the presence of Factor H of native C3, C3b and peaks 2 and 3 isolated after separation of methylamine-treated C3 on a MonoS chromatography column, analyzed by SDS-PAGE under reducing conditions. The samples before incubation are marked with "1" and the same samples after incubation with Factor I and Factor H are marked with "2" below each lane in the panel.]

The results are summarized in Figure 4A. As expected, C3b readily binds FB, which in the presence of FD is cleaved to Bb and Ba. Already after 1 min the amount of FB was reduced to 90%, after 5 min to ca. 60%, after 15 min only 25% remained, and after 30 min almost all FB was consumed. The same scenario was seen in the native C3 sample, which can be explained by C3b contamination of the C3 preparation. Immediately after the addition of FB and FD, a C3 convertase forms that cleaves C3 and generates more C3b, leading to an efficient positive feedback cleavage of FB, demonstrating the accelerating nature of the AP. However, much slower FB cleavage profiles were observed with both C3(x)1 and C3(x)2 from the methylamine-treated C3 samples separated on the MonoS column. With C3(x)1, practically all FB remains after 1 min, 85% after 5 min, and 60% after 15 min, and 30-40% of FB is still present in the sample after 60-120 min. An even slower FB reduction was obtained in the sample with C3(x)2: in this case, all FB remains after 1-5 min, 90% after 15 min, and as much as 60% still has not been consumed after 120 min. It should be noted that the ability of the different C3, C3b, and C3(x) preparations to bind and cleave FB is easily affected by different factors that change the conformation of the C3 preparations. This is particularly true if the preparations are stored (e.g., at +4 °C or frozen/thawed), which is illustrated in Supplementary Figure 3A, which presents the corresponding blot views of Wes immunoassays in four serially repeated experiments performed over a week. The FB cleavage was increasingly slower for all C3 preparations, but the relative difference between native C3/C3b and C3(x)1/C3(x)2 was similar in all cases, with C3(x)2 by far the least effective.

[FIGURE 6 | C3(x) (A), C3a (B), and sC5b-9 (C) levels were examined. Data were collected from 3 donors and presented as mean ± SEM. Significant differences compared with control plasma without addition are indicated as *P < 0.05 and ****P < 0.0001.]

KSCN treatment of C3 did not result in any C3(x)1 but in a large population of C3(x)2, which was collected from the MonoS chromatography separation and tested for its capacity to bind FB, which subsequently may be cleaved to Bb and Ba in the presence of FD (Supplementary Figure 3B). As expected, a very slow FB consumption was obtained with C3(x)2 from C3(KSCN), almost identical to that observed with C3(x)2 from C3(met), which confirms that this population of C3(x) is a poor initiator of the AP. Susceptibility of C3(x) to Factor I and Factor H Cleavage To further evaluate the specific properties of C3(x), the inactivation by Factor I in the presence of Factor H was analyzed by SDS-PAGE electrophoresis under reducing conditions and staining with Coomassie Brilliant Blue. Figure 4B shows the results from Factor I + Factor H cleavage of native C3, C3b, and C3(x)1 and C3(x)2 collected as peaks 2 and 3 on the MonoS column. Native C3 was used as a control, since it should not be cleaved by Factor I in the presence of Factor H; despite this, a weak 40 kDa band was visible after incubation. This indicated that the native C3 preparation contains a small amount of C3(x), which was clearly visible in the MonoS chromatogram: here, in addition to the first large peak with native C3, there were also several succeeding small peaks containing C3(x) (Figure 3B). An almost complete cleavage of the α-chain to a 76 kDa and a 40 kDa fragment was seen in the C3(x)2 sample, as well as with C3b (67 and 40 kDa). On the other hand, C3(x)1 was only partly cleaved, since some intact α-chains remained after the incubation with Factor I and Factor H, although a weak 40 kDa band also started to appear. The same was observed in the methylamine-treated C3 sample before F/T, whereas C3(KSCN) and C3(met) preparations that had been exposed to repeated F/T showed a complete cleavage of the α-chain to a 76 kDa and a 40 kDa fragment after incubation with Factor I and Factor H (Supplementary Figure 3C). Moreover, the addition of Factor I and Factor H to native C3 and C3b, followed by the addition of FB and FD, completely inhibited the conversion of FB to Bb (Supplementary Figure 3D). Hemolytic Activity In the AP hemolytic assay, native C3 added at fixed concentrations ranging from 75 to 365 µg/mL to C3-depleted serum showed a linear relationship between increasing C3 concentration and an upturn in activity, where the highest added C3 concentration reached 95% activity. However, the C3(x) prepared either by treatment with methylamine [C3(met)] or by repeated F/T [C3(x)] was found to be devoid of activity, confirming that the thioester bond had been disrupted. The same was found for C3b. The results from the hemolytic assay are summarized in Table 2. C3(x) Formation as a Function of pH To evaluate the sensitivity of the C3 thioester integrity to changes in pH, the generation of C3(x) was measured after incubating C3 in phosphate buffers with pH ranging from 4.3 to 7.3, selected to cover a range from an acidic intracellular milieu (e.g., lysosomes) to the neutral pH of blood plasma. The lowest pH values (4.3 and 4.6) induced significant levels of C3(x) formation, while pH 4.9 and above did not seem to affect the C3 conformation to any measurable extent (Figure 5A).
Also, the ability of these C3 preparations to form a C3 convertase and cleave FB to Bb after addition of FD within the same pH intervals was evaluated using the Wes immunoassay (Figure 5B). As described above, a few molecules of C3b in the native C3 preparation are sufficient to start convertase formation, which rapidly leads to the formation of more C3b by cleavage of C3 and to fast, continued convertase formation, measured as the conversion of FB to Bb. If there is a large proportion of C3(x) in the sample, it cannot be cleaved by the convertase to generate more C3b, which results in a slower conversion of FB to Bb. Interestingly, the most efficient generation of Bb was found at pH between 6.3 and 6.8. The slightly higher pH (7.3) still generated Bb, but at a slower rate, whereas pH 5.8 and below seemed to prevent Bb formation. The lack of activity at low pH may be due either to a large proportion of C3(x) being formed or to convertase activity and complex formation not functioning in that environment. In addition, low pH may affect the enzymatic properties of the serine proteases Bb and FD, since their active sites are likely to exhibit pH sensitivity. Therefore, the same experiment was performed after first neutralizing the C3 preparations, so that only the correlation between C3(x) formation and the efficiency of FB cleavage/Bb generation was studied (Figure 5C). After incubation at the lowest pH (4.3), more C3 was transformed into C3(x), which resulted in slower FB cleavage/Bb generation. When the pH increased, less C3(x) was formed and the rate of FB reduction became faster. No measurable difference was seen in the C3 samples after treatment at pH 5.8, 6.3, 6.8, and 7.3, and in these cases most of the FB had been converted to Bb after 5 min.

Generation of C3(x) in Plasma During Incubation With Nucleophilic Agents

As our initial experiments pointed to C3(x) in solution being a poor initiator of the AP in a pure system, we decided to further investigate its functionality in plasma. This was tested by the addition of physiologically relevant amounts of ammonia (0-3.2 mM) in our model with human lepirudin plasma, followed by immediate measurement of C3(x) generation. C3(x) generation was induced by the presence of ammonia in a dose-dependent manner, and a distinct increase in C3(x) was already detected at 0.2 mM ammonia (Figure 6A), a clinically relevant level that can be found in patients with acute liver failure (26). However, only a small increase in the level of the complement activation marker C3a was induced (p = 0.0324) by the highest amount of ammonia (3.2 mM) (Figure 6B), while no effect was detected on sC5b-9 (Figure 6C), suggesting that C3(x) formed by ammonia is a poor trigger of complement activation.

Surface-Induced Complement Activation in Plasma

Initiation of complement activation in human plasma after contact with a number of different types of surfaces was studied by measuring the generation of C3a at time points from 0 to 60 min. In order to evaluate how much the AP contributes to this activation, the C3a levels in lepirudin plasma after surface contact were compared with corresponding plasma samples in which the CP and LP had been turned off by the addition of EGTA. As seen in Figure 7, the C3a levels generated after incubation with solid surfaces were generally higher at all time points in the samples where activation was allowed via the CP and LP, compared with those where activation occurred only via the AP (+EGTA).
As expected, the highest C3a levels were elicited by the LPS surface, a well-known complement trigger. Even though the values were significantly elevated in the EGTA-plasma, they were twice as high in plasma without EGTA. The other three tested solid surfaces, i.e., glass, PS, and PP, resulted in an increase in C3a generation over time, and the measured concentrations were slightly lower in all cases when activation occurred via the AP alone.

DISCUSSION

In the present study, we investigated C3(x) both in purified systems and in serum and plasma, and we confirm that C3(x) exists in several forms (13). The AP convertase-forming properties of C3(x) and its sensitivity to inactivation by Factor I are much more sluggish than the corresponding activities of C3b, and vary between these forms, both in purified systems and in plasma. These observations support the idea that formation of C3(x) in the fluid phase is not the main mechanism by which C3b is made available during AP activation (2,25). A crucial question at issue was how "C3b-like" C3(H2O) [i.e., C3(x)] is. A caveat in the early studies is that, in all the presented experiments, the reaction was amplified either by C3 nephritic factor (C3Nef) or properdin, or by using Ni2+ instead of Mg2+ (3). All these measures were natural in order to be able to characterize the C3(x)Bb convertase, but they make assessment of the C3b-like activity difficult. The conclusion from these early studies was that C3(x) obtains "C3b-like" properties and that this molecule has similar activity to C3b (3,9,10,27,28), giving the impression that it is a sufficient mechanism for generating C3b and initiating AP activation. There is clear evidence that C3(x) forms a complex with FB, but except for the original observations using C3Nef or purified properdin, only a few examples exist showing that it alone actually forms an efficient, active convertase (3,9,10,27,28). In the initial studies, the rate at which the thioester was hydrolyzed was estimated to be 0.2-0.4% per hour (10), which has been confirmed in later studies, including in this paper, using specific ELISAs (24,29). However, we found a lower rate of C3(x) formation in plasma (ca 3 nM/h), most likely by minimizing the influence of interfering surfaces such as the walls of the reaction tubes and the air-liquid interface. (For reference, at a plasma C3 concentration of roughly 5-7 µM, a tick-over rate of 0.2-0.4% per hour would correspond to roughly 10-30 nM/h, so the measured 3 nM/h is severalfold lower.) In the present study we used two western blot-like assays: first, capillary electrophoresis, in order to test the ability of C3(x) to bind FB, which subsequently can be cleaved by FD into Bb and Ba and thereby form an AP convertase; and second, SDS-PAGE, in order to test its inactivation by Factor I in the presence of Factor H. We found that C3b and native C3 (being converted to C3b by the formed AP convertases) were both able to almost completely convert Factor B to Bb within 1 min in a fluid-phase assay. By contrast, using fully converted C3(x) (i.e., with no hemolytic activity), only sluggish convertase activity (without the presence of C3Nef or properdin) and Factor I inactivation were obtained with methylamine-, F/T-, and KSCN-treated C3 compared with C3b. Another issue in the previous literature is that "C3(x)" was generated by incubating native C3 in the presence of 0.1 M methylamine, pH 7.4, at 37 °C for 60 min, which may be too mild a condition for full conversion to C3(x) (9,10).
Supporting the hypothesis that native C3 may remain in the mixture, the experiments showed a large proportion of intact C3 α-chain remaining after cleavage with Factor I in the presence of Factor H; cleavage of the α-chain by Factor I is a way to differentiate native C3 from C3(x). In the present paper, we confirm that C3 treated with methylamine according to older protocols only leads to partial cleavage of the α-chain by Factor I. Similarly, KSCN treatment of native C3 generates a similar preparation. This implies that previous protocols do not allow full conversion of native C3 to C3(x), although this was contradicted by the low hemolytic function of the preparations. This issue was clarified in later studies by Pangburn et al. (13), where they showed that C3(NH3) preparations contain two major populations of C3(x): one which is able to return to native C3 [C3(x)1], and another that remains irreversibly in a C3(x) state [C3(x)2]. These forms have been demonstrated to have the anaphylatoxin domain (ANA) at different positions (14). The first has properties that make it elute close to the native C3 position in the cation exchange (MonoS) chromatogram, while the latter, with the ANA domain having slid through a gap formed by the macroglobulin domains, elutes much later (13). In our investigation we find similar populations in all C3(x) preparations, in addition to smaller, widely distributed populations and to native C3. The fact that C3(x)1 can be converted back into native C3 makes it possible that C3b is present or formed in the C3(x) preparations, which could link the C3b-like property to C3b instead of C3(x). In order to clarify whether this was the case, we made a similar separation on MonoS and tested the activity of the C3(x)1 and C3(x)2 populations derived from the methylamine-treated C3. This revealed that C3(x)1 had AP convertase-forming capacity, although to a lower extent than C3b, while being fairly resistant to cleavage by Factor I and Factor H. C3(x)2 had even slower AP convertase activity but was much easier to cleave by Factor I and Factor H. These experiments imply that C3(x) has C3b-like activity but is less active. This was confirmed in plasma during forced C3(x) formation by adding ammonia, where only a small amount of C3a was generated. These results indicate that fluid-phase C3(x) is not a very efficient partner in the AP convertase and therefore may be only a slow contributor of C3b activity. AP activation is a surface-oriented reaction, and the tick-over of C3 to C3(x) in the fluid phase would theoretically only be able to provide minute amounts of C3b to the surface, since the deposition is highly dependent on the distance over which nascent C3b is active for covalent binding. However, in order to avoid a long lag phase before the activation takes off, a large number of initial C3b molecules bound to the target surface is needed to speed up the activation. Without available C3b molecules, the lag phase would be very extended, since C3b generation is exponential, starting from one potential molecule (30). This is very well illustrated on biomaterial surfaces, both in vitro and in vivo, where this lag phase may last up to 5-10 min (31-33). It is also important to take the target surface into consideration. The surface needs to allow AP activation, a function that is regulated by Factor H, Factor H-related (FHR) proteins, properdin, etc. If it does not, there will be no AP activation (34).
But if these molecules bind to the surface in a proportion that favors activation, disrupting the homeostasis between activation and regulation, AP activation may be triggered, which is the case in PNH, aHUS, AMD, etc. In these conditions, a long lag phase has no importance for the pathology. CP and LP activation is part of the specific attack initiated by antibodies, pentraxins, collectins, and ficolins. These are able to provide the necessary C3b molecules, which allows AP activation to start immediately as part of our defense against foreign substances such as bacteria and viruses. They also provide specificity to the AP, which is governed by the specificity of the CP and the LP and by the properties of the target surface. This also explains why the vast majority of C3b molecules, particularly in inflammatory reactions, are generated by the AP convertases, even if complement activation is initiated by the CP or LP. This indicates that the AP is mainly an amplification loop, which is its essence, and to a lesser extent an activation pathway per se (15). This notion is illustrated in Figure 7 and has recently been discussed in Ekdahl et al. (25).

DATA AVAILABILITY STATEMENT

The datasets generated for this study are available on request from the corresponding author.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Regional Ethics Board in Uppsala under diary number 2008/264. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
A cellular basis of human intelligence

Introduction

A fundamental question in neuroscience is which properties of neurons lie at the heart of human intelligence and underlie individual differences in mental ability. Thus far, experimental research on the neurobiological basis of intelligence has ignored the neuronal level and has not directly tested what role human neurons play in cognitive ability, mainly due to the inaccessibility of human neurons. Instead, research has either focused on finding genetic loci that can explain part of the variance in intelligence (Spearman's g) in large cohorts (Lam et al., 2017; Sniekers et al., 2017; Trampush et al., 2017) or on identifying brain regions in whole-brain imaging studies whose structure or function correlates with IQ scores (Choi et al., 2008; Deary et al., 2010; Karama et al., 2009a; McDaniel, 2005; Narr et al., 2007). Some studies have highlighted that variability in brain volume and intelligence may share a common genetic origin (Hulshoff Pol et al., 2006; Posthuma et al., 2002; Sniekers et al., 2017), and individual genes that were identified as associated with IQ scores might aid intelligence by facilitating neuron growth (Sniekers et al., 2017) and directly influencing neuronal firing (Lam et al., 2017). Intelligence is a distributed function that depends on the activity of multiple brain regions (Deary et al., 2010). Structural and functional magnetic resonance imaging studies in hundreds of healthy subjects have revealed that the cortical volume and function of specific areas correlate with g (Choi et al., 2008; Karama et al., 2009a; Narr et al., 2007). In particular, areas located in the frontal and temporal cortex show a strong correlation of grey matter thickness and functional activation with IQ scores: individuals with high IQ show larger grey matter volume of, for instance, Brodmann areas 21 and 38 (Choi et al., 2008; Deary et al., 2010; Karama et al., 2009b; McDaniel, 2005; Narr et al., 2007). A substantial part of cortical grey matter consists of dendrites (Chklovskii et al., 2002; Ikari and Hayashi, 1981), which receive and integrate synaptic information and strongly affect the functional properties of neurons (Bekkers and Häusser, 2007; Eyal et al., 2014; Vetter et al., 2001). Especially the high-order association areas in the temporal and frontal lobes of humans harbour pyramidal neurons of extraordinary dendritic size and complexity (Elston, 2003; Mohan et al., 2015) that may underlie variation in cortical thickness, neuronal function, and ultimately IQ.
These neurons and their connections form the principal building blocks for the coding, processing, and storage of information in the brain and ultimately give rise to cognition (Salinas and Sejnowski, 2001). Given their vast number in the human neocortex, even the slightest changes in the efficiency of information transfer by neurons may translate into large differences in mental ability. However, whether and how the activity and structure of single human neurons support human intelligence has not been tested. To investigate whether structural and functional properties of neurons of the human temporal cortex associate with general intelligence, we collected a unique multimodal data set from human subjects containing single-cell physiology, neuronal morphology, pre-surgical MRI scans, and IQ test scores (Fig 1). We recorded action potentials (APs) from human pyramidal neurons in superficial layers of temporal cortical tissue resected during neurosurgery (Brodmann areas 38 and 21) and digitally reconstructed their complete dendritic structures. We tested the hypothesis that variation in neuronal morphology and function can explain variation in IQ scores and used computational modelling to understand the underlying principles of efficient information transfer in cortical neurons and networks.

IQ scores positively correlate with cortical thickness of the temporal lobe

Cortical thickness of the temporal lobe has been associated with IQ scores in hundreds of subjects (Choi et al., 2008; Deary et al., 2010; Hulshoff Pol et al., 2006; Karama et al., 2009a; Narr et al., 2007), and we first asked whether this applies to the subjects in our study as well. From T1-weighted MRI scans obtained prior to surgery, we determined temporal cortical thickness in 35 subjects using voxel-based morphometry. We selected temporal cortical areas corresponding to the surgical resections (Fig 2A) and collapsed the measurements for the temporal lobes to one mean value of cortical thickness for each subject. In line with previous studies (Choi et al., 2008; Deary et al., 2010; Hulshoff Pol et al., 2006; Karama et al., 2009b; Narr et al., 2007), mean cortical thickness in the temporal lobes positively correlated with the IQ scores of the subjects (Fig 2B).

IQ scores positively correlate with dendritic structure of temporal cortical pyramidal cells

Cortical association areas in the temporal lobes play a key role in high-level integrative neuronal processes, and their superficial layers harbour neurons of increased neuronal complexity (DeFelipe et al., 2002; Elston, 2003; Scholtens et al., 2014; van den Heuvel et al., 2015). In rodents, over 30% of the neuropil of cortical association areas consists of dendritic structures (Ikari and Hayashi, 1981). To test whether human temporal cortical thickness is associated with dendrite size, we used 68 full reconstructions of biocytin-labelled temporal cortical pyramidal neurons from human layers 2, 3, and 4 reported previously (Mohan et al., 2015). Surgically obtained cortical tissue was non-pathological and was resected to gain access to deeper structures containing the disease focus (Mohan et al., 2015; Testa-Silva et al., 2010; Verhoog et al., 2013, 2016) (typically medial temporal sclerosis or hippocampal tumour; Table S1). In line with the non-pathological status of the tissue, we observed no correlations of cellular parameters or IQ scores with the subjects' disease history and age (Figs S1 and S2).
We calculated the total dendritic length (TDL) for each neuron and the mean TDL from multiple cells for each subject, and correlated these mean TDL values (n = 23 subjects) with the mean temporal cortical thickness of the same subject. We found that dendritic length positively correlated with mean temporal lobe cortical thickness, indicating that the dendritic structure of individual neurons contributes to the overall cytoarchitecture of the temporal cortex (Fig 3A). TDL is in part determined by the soma location within the cortical layers: cell bodies of pyramidal neurons with larger dendrites typically lie deeper, at a larger distance from the pia (Mohan et al., 2015). Thus, differences in the soma locations of targeted neurons from subject to subject may result in biased sampling. To exclude a systematic bias in sampling, we determined the cortical depth of each recorded neuron relative to the subject's temporal cortical thickness in the same hemisphere. There was no correlation between IQ score and the relative cortical depth of the pyramidal neurons used in this study, indicating that TDL was determined from neurons at similar depths across subjects (Fig 3B). Finally, we tested whether the mean TDL and complexity of pyramidal neurons relate to the subjects' IQ scores. We found a strong positive correlation between an individual's pyramidal neuron TDL and IQ scores (Fig 3C), as well as between the number of dendritic branch points and IQ scores (Fig 3D), thus revealing a significant association between human intelligence and dendritic length and complexity.

FIGURE 3 | (a) Average total dendritic length in pyramidal cells in superficial layers of temporal cortex positively correlates with cortical thickness in the temporal lobe from the same hemisphere (area highlighted in A; n subjects = 20, n cells = 58). Inset shows a scheme of cortical tissue with a digitally reconstructed neuron and the brain area for cortical thickness estimation (red). (b) Cortical depth of pyramidal neurons, relative to cortical thickness in temporal cortex from the same hemisphere, does not correlate with IQ score (n subjects = 21). Inset represents the cortical tissue; blue lines indicate the depth of the neuron and the cortical thickness. (c) Pyramidal cell TDL and (d) number of dendritic branch points positively correlate with IQ scores from the same individuals (n subjects = 23, n cells = 68).

Larger dendrites lead to faster AP onset and improved encoding properties

Dendrites not only receive most synapses in neurons; dendritic morphology and conductances also act in concert to regulate neuronal excitability (Bekkers and Häusser, 2007; Eyal et al., 2014; Vetter et al., 2001). Increasing the size of the dendritic compartment in silico was shown to accelerate APs and improve the encoding capability of simplified model neurons (Eyal et al., 2014). Further, human neocortical pyramidal neurons, which are three times larger than rodent pyramidal neurons (Mohan et al., 2015), have faster AP onsets compared with rodent neurons and are able to track fast-varying inputs with high temporal precision (Testa-Silva et al., 2014). We asked whether the observed differences in TDL between human pyramidal neurons affected their encoding properties. To this end, we incorporated the 3-dimensional dendritic reconstructions of the human pyramidal neurons into in silico models and equipped them with excitable properties (see the Supplemental Methods). Larger TDL led to faster AP onsets in the model neurons (Fig 4a,b).
Faster APs would imply that neurons can respond faster to synaptic inputs and can thereby translate higher frequencies of synaptic membrane potential fluctuations into action potential timing, ultimately encoding more information. We tested this by simulating current inputs of increasing frequencies into the modelled neurons and studied how the firing of the modelled neurons followed input changes. Human neurons with larger TDL could reliably transfer high frequency ranges, with cut-off frequencies up to 400-500 Hz, while smaller neurons had their cut-off frequencies already at 200 Hz (Fig 4c,d). Furthermore, there was a significant positive correlation between dendritic length and cut-off frequency (Fig 4d). Finally, given the same input, composed of the sum of three sinusoids of increasing frequencies, larger neurons were able to better encode rapidly changing temporal information in their firing output compared with smaller cells (Fig 4e). Thus, we find that, in silico, structural differences in the dendritic length of reconstructed human neurons lead to faster APs and thereby to higher frequency bandwidths of encoding synaptic inputs in AP output.

Higher IQ scores associate with faster AP kinetics and lower firing threshold

We next asked whether human cortical pyramidal neurons from individuals with higher IQ scores show faster AP kinetics during repeated AP firing. To test this, we made whole-cell recordings from pyramidal cells in acute slices of temporal cortex (26 subjects, 101 cells, 10,538 APs; Fig 5A) and recorded APs at different firing frequencies in response to depolarising current steps. We analysed the kinetics of all APs and grouped them based on the instantaneous firing frequency of each AP to investigate the changes in AP waveforms during increasing neuronal activity. Next, we split all AP data into two groups based on IQ score (above and below 100) and observed more pronounced changes in AP kinetics in individuals with lower IQ scores: their APs showed a substantial slowing of kinetics starting at instantaneous firing frequencies as low as 10 Hz (Fig 5B). To statistically test the differences in AP kinetics between the IQ groups, we extracted key AP parameters (peak voltage, amplitude, threshold, afterhyperpolarization, maximum rise speed, maximum fall speed, and half-width; Table S2) and normalized these parameters for each AP to the first AP in the trace. Since the slowing of APs was already prominent at low firing frequencies, we grouped all APs at instantaneous frequencies of 11-50 Hz and ran a MANOVA with the seven AP parameters. The MANOVA confirmed a significant difference in AP properties between the higher and lower IQ groups (F(7,18) = 3.037, p = 0.027), with univariate post hoc tests narrowing the significant results to AP threshold, rise speed, fall speed, and half-width (Fig 5C). These AP parameters were more affected by higher-frequency firing in subjects with lower IQ than in subjects with higher IQ. Specifically, the AP threshold increased significantly more than in subjects with higher IQ, indicating that it is progressively more difficult to evoke APs during repeated neuronal activity in these neurons. In addition, APs from subjects with lower IQ slowed significantly more: their rise and fall speeds decreased while AP duration (half-width) increased (Fig 5C). We further investigated whether these differences at the group level reflected correlations between individual IQ scores and AP kinetics.
We correlated IQ scores with the mean relative AP thresholds, rise and fall speeds, and half-widths at 11-50 Hz from all neurons of the same subject. All four parameters showed strong correlations with IQ (Fig 5D). During repeated firing, AP threshold and half-width relative to the first AP negatively correlated with IQ scores, while AP rise and fall speeds showed strong positive correlations (Fig 5D). These findings indicate that higher IQ scores are linked to low AP firing thresholds and fast kinetics during repeated AP firing, while lower IQ scores associate with increased AP fatigue during elevated neuronal activity. Our results indicate that neurons from individuals with higher IQ scores are better equipped to process synaptic signals at high rates and at faster time scales, which is necessary to encode large amounts of information accurately and efficiently.

FIGURE 5 | (partial caption) Average relative AP threshold, rise and fall speeds, and half-width in neurons from subjects with lower IQ (black; n subjects = 11, n cells = 49) and subjects with higher IQ (blue; n subjects = 15, n cells = 52) displayed against instantaneous firing frequency. In subjects with higher IQ, AP threshold and AP kinetics were less affected by higher frequencies: there was less increase in AP threshold, less slowing of rise and fall speeds, and less elongation of AP half-width.

Discussion

Our findings provide a first insight into the cellular nature of human intelligence and explain individual variation in IQ scores on the basis of neuronal properties: faster AP kinetics during neuronal activity, lower firing thresholds, and more complex, extended dendrites associate with higher intelligence. AP kinetics have profound consequences for information processing. In vivo, neurons are constantly bombarded by high-frequency synaptic inputs, and the capacity of neurons to keep track of and phase-lock to these inputs determines how much of this synaptic information can be transferred (Testa-Silva et al., 2014). The brain operates at a millisecond time scale, and even sub-millisecond details of spike trains contain behaviourally relevant information that can steer behavioural responses (Nemenman et al., 2008). Indeed, one of the most robust and replicable findings in behavioural psychology is the association of intelligence scores with measures of cognitive information-processing speed (Barrett et al., 1986). Specifically, reaction times (RT) in simple RT tasks provide a better prediction of IQ than other speed-of-processing tests, with a regression coefficient of 0.447 (Vernon, 1983). In addition, high positive correlations between RT and other speed-of-processing tests suggest the existence of a common mental speed factor (Vernon, 1983). Our results provide a biological, cellular explanation for such a mental speed factor: in conditions of increased mental activity, neurons of individuals with higher IQ are able to sustain fast action potentials and can transfer more cellular information content from synaptic input to AP output. Pyramidal cells are integrators and accumulators of synaptic information. Larger dendrites can physically contain more synaptic contacts and integrate more information. Indeed, human pyramidal neuron dendrites receive twice as many synapses as those in rodents (DeFelipe et al., 2002), and cortico-cortical whole-brain connectivity positively correlates with the size of the dendrites in these cells (Scholtens et al., 2014; van den Heuvel et al., 2015).
A gradient in the complexity of pyramidal cells in cortical superficial layers accompanies the increasing integration capacity of cortical areas, indicating that larger dendrites are required for higher-order cortical processing (Elston, 2003; van den Heuvel et al., 2015). Our results align well with these findings, suggesting that the neuronal complexity gradient also exists from individual to individual and could explain differences in mental ability. Larger dendrites have an impact on the excitability of cells (Bekkers and Häusser, 2007; Vetter et al., 2001) and determine the shape and rapidity of APs (Eyal et al., 2014). Increasing the size of dendritic compartments in silico led to acceleration of AP onset and increased encoding capability of neurons (Eyal et al., 2014). In the present study, by modelling detailed morphological reconstructions of neurons from human subjects, we showed that individuals with larger dendrites are better equipped to transfer synaptic information at higher frequencies. Remarkably, dendritic morphology, AP kinetics, and firing threshold are also parameters that we have previously identified as showing pronounced differences between humans and other species (Mohan et al., 2015; Testa-Silva et al., 2014). Human pyramidal cells in layers 2/3 have 3-fold larger and more complex dendrites than those in macaque or mouse (Mohan et al., 2015). Moreover, human APs have a lower firing threshold and faster AP onset kinetics, both in single APs and during repeated firing (Testa-Silva et al., 2014). These differences across species may suggest evolutionary pressure on both dendritic structure and AP waveform and emphasize adaptations of human pyramidal cells in association areas for cognitive functions. Recent genome-wide association studies (GWAS) have pinpointed genes associated with intelligence that provide potential biological links to neuron development and neuronal activity (Lam et al., 2017; Sniekers et al., 2017; Trampush et al., 2017). For example, GWAS-based pathway analysis identified two gene targets of drugs that affect voltage-gated ion channels as associated with general cognitive ability: a T-type calcium channel and a potassium channel (Lam et al., 2017). Both ion channel types play a critical role in determining AP shape and kinetics. T-type calcium channels are involved in action potential initiation and switching between distinct modes of firing (Cain and Snutch, 2010), while potassium channels are responsible for rapid repolarization during AP generation (Hodgkin and Huxley, 1952). The strongest emerging association of genes with intelligence is an intronic region of the FOXO3 gene and its promoter (Sniekers et al., 2017), involved in the insulin-like growth factor 1 (IGF-1) signalling pathway (Costales and Kolevzon, 2016). Low IGF-1 levels have been associated with poor cognitive function during aging (Tumati et al., 2016) and a less integrated functional network of connected brain areas (Sorrentino et al., 2017). Notably, one of the effects of IGF-1 on cortical pyramidal cells is increased branching and total size of dendrites (Niblock et al., 2000). Thus, individual differences in gene polymorphisms involved in neuronal development and associated with intelligence could result in larger and more complex dendrites contributing to faster firing of cortical neurons.
Ultimately, these genes may provide a genetic disposition for a higher encoding bandwidth and information transfer of pyramidal neurons in association areas such as the temporal cortex, conferring a speed advantage in mental processing that leads to faster reaction times and higher IQ scores.

Human subjects and brain tissue

All procedures were performed with the approval of the Medical Ethical Committee of the VU University Medical Centre, and in accordance with Dutch licence procedures and the Declaration of Helsinki. Written informed consent was provided by all subjects for data and tissue use for scientific research. All data were anonymized. Human cortical brain tissue was removed as a part of the surgical treatment of the subject in order to gain access to a disease focus in deeper brain structures (hippocampus or amygdala) and typically originated from the gyrus temporalis medium (Brodmann areas 21 or 38, occasionally gyrus temporalis inferior or gyrus temporalis superior). Speech areas were avoided during resection surgery through functional mapping. We obtained neocortical tissue from 37 patients (19 females, 18 males; age range 18-66 years; Supplementary Table 1) treated for mesial temporal sclerosis, removal of a hippocampal tumour, low-grade hippocampal lesion, cavernoma, or other unspecified temporal lobe pathology. From 35 of these patients we also obtained pre-surgical MRI scans; from 26 patients we recorded action potentials from 101 cells (10,538 APs); and from 23 patients we had fully reconstructed dendritic morphologies from 68 cells. In all patients, the resected neocortical tissue was not part of the epileptic focus or tumour and displayed no structural/functional abnormalities in preoperative MRI investigation, electrophysiological whole-cell recordings, or microscopic investigation of stained tissue.

IQ scores

Total IQ scores were obtained using the Dutch version of the Wechsler Adult Intelligence Scale-III (WAIS-III) and in some cases the WAIS-IV. The tests were performed as a part of the neuropsychological examination shortly before surgery, typically within one week.

MRI data and cortical thickness estimation

T1-weighted brain images (1 mm thickness) were acquired with a 3T MR system (Signa HDXt, General Electric, Milwaukee, Wisconsin) as a part of the pre-surgical assessment (number of slices = 170-180). Cortical reconstruction and volumetric segmentation were performed with the FreeSurfer image analysis suite (http://freesurfer.net) (Fischl and Dale, 2000). The processing included motion correction and transformation to the Talairach frame. Cortical thickness was calculated as the closest distance from the grey/white boundary to the grey/CSF boundary at each vertex and was based on both intensity and continuity information from the entire three-dimensional MR volume (Fischl and Dale, 2000). Neuroanatomical labels were automatically assigned to brain areas based on the Destrieux cortical atlas parcellation, as described in Fischl (2004). For averaging, the regions in the temporal lobes were selected based on the Destrieux cortical atlas parcellation in each subject.

Electrophysiological recordings

Cortical slices were visualized using infrared differential interference contrast (IR-DIC) microscopy. After the whole-cell configuration was established, membrane potential responses to steps of current injection (step size 30-50 pA) were recorded. None of the neurons showed spontaneous epileptiform spiking activity.
Recordings were made using Multiclamp 700A/B amplifiers (Axon Instruments), sampling at frequencies of 10 to 50 kHz and low-pass filtered at […].

Morphological analysis

During electrophysiological recordings, cells were loaded with biocytin through the recording pipette. After the recordings, the slices were fixed in 4% paraformaldehyde and the recorded cells were revealed with the chromogen 3,3-diaminobenzidine (DAB) tetrahydrochloride using the avidin-biotin-peroxidase method (Horikawa and Armstrong, 1988). Slices were mounted on slides and embedded in mowiol (Clariant GmbH, Frankfurt am Main, Germany). Neurons without apparent slicing artifacts and with uniform biocytin signal were digitally reconstructed using Neurolucida software (Microbrightfield, Williston, VT, USA) with a ×100 oil objective. After reconstruction, morphologies were checked for accurate reconstruction in the x/y/z planes, dendritic diameter, and continuity of dendrites. Finally, reconstructions were checked using an overlay in Adobe Illustrator between the Neurolucida reconstruction and a Z-stack projection image from Surveyor Software (Chromaphor, Oberhausen, Germany). Layer 2/3 pyramidal neurons were identified based on morphological and electrophysiological criteria at cortical depths within 250-1200 µm from the cortical surface, which we previously found to correspond to cortical layers 2/3 in humans (Mohan et al., 2015). For each neuron, we extracted the total dendritic length (TDL) and the number of branch points, and computed the average TDL and average number of branch points for each subject by pooling data from all cells from that subject (1 to 8 cells per subject).

Neuronal modelling

We constructed multicompartmental spiking neuronal models of human L2/3 pyramidal neurons for each of the digitally reconstructed 3-dimensional morphologies considered in this study. Models were simulated using NEURON (Carnevale and Hines, 2006). As in the experiments of Köndgen et al. (2008) and Testa-Silva et al. (2014), we probed the dynamical transfer properties of each model neuron. We injected a sinusoidally oscillating input current into the soma, allowing us to temporally modulate the instantaneous output firing rate of each model neuron and quantify its output 'transfer gain' (Fig 4a). When studied in this way, the transfer properties of model neurons resemble those of electronic filters, whose low-pass performance in the Fourier domain defines how fast they can follow input changes (for more detailed information, see the Supplementary Methods).

Action Potential waveform analysis

Action potential (AP) waveforms were extracted from voltage traces recorded in response to intracellular current injections and sorted according to their instantaneous firing frequency. Instantaneous frequency was determined as 1/time to the previous AP. Subsequently, all APs were binned in 10 Hz bins, while the first AP in each trace was isolated in a separate bin. The following AP parameters were calculated for each AP in a train: the AP threshold was calculated as the membrane potential at the point of maximum acceleration of the AP (peak of the second derivative; Sekerli et al., 2004). The AP peak voltage was determined as the absolute membrane potential measured at the peak of the AP waveform. The AP amplitude was calculated as the difference in membrane potential between the AP peak voltage and the afterhyperpolarization (AHP). The AHP was estimated as the lowest membrane potential between the AP peak and the initiation of the consecutive AP.
The maximum rise speed was defined as the peak of the AP derivative (dV/dt) and the maximum fall speed as the trough of the AP derivative. Half-width was estimated as the duration of the AP between the voltage points at its half-amplitude (the exact voltage at half-amplitude was extrapolated from neighbouring sampling points). For each analysed cell, representative APs with all parameters were plotted for a visual check to avoid errors in the analysis. For each neuron, the mean values of AP parameters in a given frequency bin were obtained by averaging all APs within that frequency bin. Relative AP parameters were calculated by dividing the mean AP parameter in each frequency bin by the mean first-AP parameter. For the relative threshold estimation, this formula was adjusted to (1 − AP threshold for a given frequency bin / first AP threshold) + 1 to compensate for negative values. To obtain AP values for each subject, AP parameters within each frequency bin were averaged over all neurons from that subject.

Statistical analysis

Statistical significance of all correlations between parameters was determined using Pearson correlation coefficients in Matlab (version R2017a, Mathworks). For statistical analysis of the AP data, we divided all subjects according to their IQ into two groups: a group with IQ > 100 and a group with IQ < 100. Differences between the two IQ groups in individual AP parameters at different instantaneous frequencies were statistically tested using repeated-measures ANOVA (in Matlab). Since the adaptation of AP parameters at different instantaneous frequencies relative to the first AP already started at 10 Hz, we pooled all relative AP parameters for the 11-50 Hz bins to obtain one measure of each AP parameter per subject. We further performed a MANOVA (using IBM SPSS Statistics) with the seven calculated AP parameters as dependent variables and IQ group as the independent variable. We ran subsequent univariate post hoc tests and used Bonferroni-corrected p-values to account for multiple comparisons.
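For concreteness, a minimal sketch of the AP-parameter extraction and instantaneous-frequency binning described above is given below. This is an illustration, not the authors' Matlab code: the function names, the 2 ms search window, and the use of NumPy are assumptions, and only a subset of the parameters is shown.

```python
import numpy as np

def ap_parameters(v, dt_ms, peak_idx, pre_window_ms=2.0):
    """Extract basic AP parameters around one spike peak.

    v: membrane potential trace (mV); dt_ms: sampling step (ms).
    The threshold is taken at the peak of the second derivative before the AP
    peak, as described in the text (Sekerli et al., 2004). Assumes the peak is
    not at the very start of the trace.
    """
    dv = np.gradient(v, dt_ms)          # dV/dt in mV/ms
    d2v = np.gradient(dv, dt_ms)        # second derivative
    w0 = max(0, peak_idx - int(pre_window_ms / dt_ms))
    thr_idx = w0 + int(np.argmax(d2v[w0:peak_idx]))
    return {
        "threshold_mV": float(v[thr_idx]),
        "peak_mV": float(v[peak_idx]),
        "max_rise_mV_per_ms": float(dv[w0:peak_idx + 1].max()),
    }

def bin_by_instantaneous_frequency(spike_times_ms, bin_width_hz=10):
    """Assign each AP (2nd onward) to a 10-Hz bin; inst. freq = 1 / time to previous AP."""
    isi_ms = np.diff(np.asarray(spike_times_ms, dtype=float))
    inst_freq_hz = 1000.0 / isi_ms
    return (inst_freq_hz // bin_width_hz).astype(int)

# Relative parameters per bin would then be: mean(parameter within bin) divided
# by the first-AP value, with the sign adjustment for thresholds noted above.
```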
Maximizing Use of Pelagic Capture Fisheries for Global Protein Supply: Potential and Caveats Associated with Fish and Co-Product Conversion into Value-Add Ingredients

Abstract

Globally, capture fisheries contribute significantly to protein supply and the food security of a third of the world's population. Although capture fisheries production has not significantly increased in tonnes landed per annum during the last two decades (since 1990), it still produced a greater tonnage of protein than aquaculture in 2018. Policy in the European Union and other locations favors production of fish through aquaculture to preserve existing fish stocks and prevent extinction of species from overfishing. However, aquaculture production of fish would need to increase from 82 087 kT in 2018 to 129 000 kT by 2050 in order to feed the growing global population. The Food and Agriculture Organization states that global production of aquatic animals was 178 million tonnes in 2020. Capture fisheries contributed 90 million tonnes (51%) of this. For capture fisheries to be a sustainable practice in alignment with UN sustainability goals, ocean conservation measures must be followed, and processing of capture fisheries may need to adapt food-processing strategies already used extensively in the processing of dairy, meat, and soy. These are required to add value to reduced fish landings and sustain profitability.

Introduction

Approximately 7135 kT of crude protein results from the harvest of capture fisheries every year [1], used for food, fishmeal, and animal feed production. Indeed, the Food and Agriculture Organization (FAO) estimated that capture fisheries contributed 17% of global food protein for human consumption, and nearly 20% for two fifths of the global population. [2] More recently, the FAO stated that global production of aquatic animals was 178 million tonnes (figure for 2020), of which capture fisheries contributed 90 million tonnes (51%). 112 million tonnes were harvested in marine waters, 70% of which came from capture fisheries (FAO, 2022).

Pelagic fish, including blue whiting (M. poutassou), mackerel (S. scombrus), herring (C. harengus), and horse mackerel or scad (Trachurus trachurus), are small in size and known to be rich in lipids and poly-unsaturated fatty acids (PUFAs), including eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). [9] They are also a source of digestible protein, with protein contents ranging from 16% to 20% on a wet weight basis. Blue whiting is normally sold whole frozen. Mackerel and herring may be processed into fillets (butterfly fillets or flaps), and co-products or side streams from this process step include backbones, heads, viscera, belly flaps, and tails. Co-products from mackerel and herring processing were previously reported to contain between 37 and 51 g protein/100 g co-product (head or backbone). [10] These co-products are a significant source of nutritious protein and represent a starting material for the generation of bioactive peptide hydrolysates using different proteolytic enzymes, lactic acid bacteria (LAB), or acids. Other methods for improved utilization of pelagic fish by-products as human foods include the production of lower-cost fast foods from trimmings and other by-products. Traditional ways to utilize by-products and by-catch involve isolation of meat and the development of secondary products such as surimi and surimi-based products, sausages, and fermented products. [11]
Stevens and colleagues [12] recently covered this topic in relation to the aquaculture of selected species, and Albrektsen and colleagues [13] recently reviewed the use of marine by-products for the production of fishmeal and feed for use in the aquaculture of salmonids. Regardless of the processing method used, by-products must be handled in a food-grade manner to produce hydrolysates, surimi, or other ingredients suitable for human application. This paper focuses on the use of fish protein hydrolysate (FPH) generation as a strategy to add value to pelagic fishery by-catch/by-products. Potential products that result from hydrolysate generation include protein hydrolysates containing bioactive peptides with health benefits, concentrates, oils, and calcium- and mineral-rich bone fractions that may find application in the food, animal, and companion animal fields and in functional food markets. These markets are lucrative. For example, the global functional food market was valued at $180.58 billion in 2021 and was projected to grow to $191.68 billion in 2022, [14] and the fish protein hydrolysate market was worth $244.2 million in 2021. Although lucrative, these markets are competitive. Within the FPH market targeted at humans, by-products from salmon processing are the most established, with several key products and players. Table 1 highlights existing hydrolysate, oil, and other fish-derived products sold globally, made from different fish species and co-products. Bioactive peptides and PUFAs are the building blocks of the fish protein hydrolysate/marine-derived functional ingredient markets. They are the bioactives responsible for the observed health benefits of the commercial fish protein hydrolysates currently available (Table 1) and consist of sequences of 2-30 amino acids. Their potential health benefits depend on the inherent amino acid sequences and the location of amino acids within the peptide sequence. Most bioactive peptides are known to be bioavailable (i.e., they cause a health benefit in the body once consumed), as they exist as pro-peptides which, following gastrointestinal digestion, may be cleaved into shorter, active bioactive peptide sequences, sometimes referred to as "cryptides." Peptides less than five amino acids in length can pass through the gut axis to reach their target sites. In addition, the targets for some bioactive peptides may be outside the gut. Caveats associated with the use of hydrolysate products include compliance with different legislation and regulations regarding their marketing and sale in different markets, including the EU, USA, Japan, China, and Asia-Pacific. The European Food Safety Authority (EFSA) in Europe and the Food and Drug Administration (FDA) in America govern human functional foods. Companion animal feed ingredient claims are governed by EU legislation, but organizations such as the European Pet Food Industry Federation (FEDIAF) provide nutritional guidelines for complete and complementary pet food for cats and dogs. The FEDIAF also works closely with the Association of American Feed Control Officials (AAFCO) in order to keep up to date with the latest recommendations for nutrition and health claims in pets. This paper discusses the potential of pelagic fisheries as a biomass source for protein hydrolysate ingredient generation. It looks at their use in different markets, and it collates details concerning processes and companies currently operating in this sphere.
In addition, potential directions that could be pursued to help companies transition from production of commodity products to added-value functional food ingredients for use in different markets are discussed.

Raw Material

Protein hydrolysates can be made from any protein source using enzymatic or acid hydrolysis, high-pressure processing (HPP), or fermentation with LAB. However, the dry weight (DW) yield of hydrolysate generated from the starting material depends largely on the protein and moisture content of the starting material. Pelagic fish include blue whiting (M. poutassou), mackerel (S. scombrus), herring (C. harengus), and horse mackerel or scad (T. trachurus). These fish species have varying contents of protein, lipid, moisture, and ash. Blue whiting is harvested from February to March in the North East Atlantic using both trawl and purse seine methods and can contain up to 18% protein and 5% lipids, depending on the month of harvest (https://pelagia.com/cm/products/blue-whiting/). It also contains vitamin B12 and selenium. Mackerel contains up to 2670 mg of omega-3 per 100 g fillet and up to 20% protein containing all essential amino acids. Horse mackerel is harvested in October and November and can contain up to 24% lipid rich in omega-3, and herring is an excellent source of protein, containing about 15.2%. It is also rich in omega-3. The aims of generating FPHs are, first, to enhance the nutritional value and consumer acceptance of fish protein and, second, to increase the health benefits and thereby the unique selling points (USPs) of fish species. Bioactive peptides are the building blocks of proteins and are the bioactives present in FPHs. They range in size from 2 to 30 amino acids and have a myriad of health benefits, as discussed in the following sections. Many are commercialized but not sold for the human functional food market in Europe due to a lack of evidence to substantiate health claims.

Method of Production of FPH

FPHs are made by solubilizing the protein present in whole fish or fish co-products such as skins, heads, tails, fins, or viscera in water using either acids, alkali, or proteolytic enzymes like Alcalase, Papain, Bromelain, Protamex, or Endocut, or a combination of two or more enzymes. This process takes place in a bioreactor or hydrolyser, which is a tank equipped with stirrers, a source of heat, and pH control. The temperature, pH, and stirring (revolutions per minute, RPM) conditions are optimized in accordance with the agent used to generate the hydrolysate, in other words, the acid, base, or enzyme source. Proteins present in the tissue are solubilized and broken down into smaller proteins and peptides that can range in size from 2 to 30 amino acids in length. The enzyme or acid/alkali is then deactivated using either heating or an increase or decrease in pH, and the degree of hydrolysis (DH) is calculated (a worked example of the DH calculation is sketched below). The liquid and soluble fractions are separated by filtration, and water is removed using evaporation and drying processes, including spray- and/or freeze-drying. The lipid fraction can be separated before or after hydrolysis using centrifugation, and the desired lipid content of any FPH is usually less than 0.5%. Plate-and-frame filtration processes can be used in combination with micro-, ultra-, or nanofiltration. It is important to keep in mind that, over time, filtration columns may clog, and effective cleaning-in-place (CIP) protocols should be part of any production process.
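As referenced above, the DH is commonly estimated by the pH-stat method of Adler-Nissen. The sketch below shows that calculation; it is illustrative only: the numerical inputs are invented, and h_tot ≈ 8.6 meqv/g, a value often quoted for fish protein, is an assumption rather than a figure from this paper.

```python
def degree_of_hydrolysis(base_volume_l, base_normality, alpha, protein_mass_g,
                         h_tot=8.6):
    """pH-stat estimate of the degree of hydrolysis (Adler-Nissen).

    DH% = (B * Nb) / (alpha * MP * h_tot) * 100, where
    B: base consumed to hold pH constant (L); Nb: base normality (eq/L);
    alpha: average dissociation of the alpha-NH2 groups at the working
    pH/temperature; MP: mass of protein in the reactor (g);
    h_tot: total number of peptide bonds in the substrate (meqv/g protein).
    """
    # meqv of peptide bonds cleaved per gram of protein
    h = (base_volume_l * base_normality * 1000.0) / (alpha * protein_mass_g)
    return 100.0 * h / h_tot

# Illustrative run: 0.40 L of 1 N NaOH consumed, alpha = 0.88, 500 g protein
print(degree_of_hydrolysis(0.40, 1.0, 0.88, 500.0))  # ~ 10.6 % DH
```

In practice, alpha depends on the pK of the liberated amino groups at the chosen hydrolysis pH and temperature, which is why the pH-stat method is run under tightly controlled conditions.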
Clogging can result from the use of filters with inappropriate pore sizes, or from unsuitable pressure, flow rate, pH, or salt concentration of the solution (Petrova et al., 2018). [15] Figure 1 shows an overall scheme for the production of FPHs. It should be noted that use of co-products of fish processing, such as skins, can result in a cleaner end hydrolysate product with uniform peptides, due to the lack of contamination from other proteins present in whole fish muscle. Acid and alkali hydrolysis have several disadvantages compared with the use of proteolytic enzymes. Acid hydrolysis is performed using HCl or sulfuric acid and results in degradation of protein to small peptides and amino acids, and in the destruction of tryptophan. Alkali hydrolysis results in the degradation of serine and threonine. Both methods use high temperatures, and disposal of acid/waste by-products can cause problems. Enzymatic hydrolysis is viewed as a gentle process, and a number of groups have to date looked at the use of different enzymes applied to blue whiting, mackerel, or herring raw biomass (Table 2).

Enzyme Choice

Enzyme selection is important, as it dictates which peptides will be produced and has an impact on the end FPH in terms of bitterness and sensory characteristics. Table 3 highlights commonly used microbial and plant-derived enzymes used to generate hydrolysates, their optimum pH, temperature, and time, and examples of where they were applied previously to whole fish or fish by-products. Several companies also have agreements with enzyme suppliers, and, additionally, by-products from fisheries, including viscera, are being explored as a source of enzymes for use in FPH generation.

Concentration and Drying of FPH

Following hydrolysis, the soluble-protein-containing material is separated from bone and insoluble material, which may include a fat fraction. The fat fraction may be separated before hydrolysis occurs, which allows for the recovery of a high-value oil product rich in omega-3 PUFAs. A secondary oil fraction may be removed following hydrolysis using centrifugation. FPHs are dried using spray driers following the concentration phase. Prior to drying, hydrolysates can be concentrated using microfiltration and nanofiltration processes or molecular weight cut-off (MWCO) filtration using ceramic membranes, to recover proteins/peptides, enhance the protein concentration of the hydrolysate, and remove salt. Bones can also be dewatered and are a valuable source of collagen/gelatine peptides. The moisture content of bones varies, and they can be dried using specialist equipment such as the Coctio Ltd. bone dryer.

Caveats Regarding Enzyme Use in Pelagic FPH Generation

There are several limitations associated with the use of enzymes to generate FPHs. The first is the cost of enzymes, in terms of both the purchase price and the associated deactivation costs, which relate to the amount of energy required to stop the enzyme and the duration of hydrolysis a given enzyme requires. Use of proteolytic enzymes in the hydrolysis of fish biomass enables retention of the nutritional value of the source protein and allows for precise hydrolysate production (Zamora-Sillero et al., 2018). [28] At the end of an enzymatic hydrolysis process, there are no organic solvents or chemicals to dispose of. Plant and microbial enzymes often used in hydrolysate production are shown in Table 3.
Other issues include process control, which depends on the enzyme used, and bitterness in the resulting hydrolysate product. [29] Previously, Slizyte and colleagues assessed the bitterness of peptides produced using enzymatic treatment and found that the use of the enzymes Bromelain and Papain resulted in a less bitter FPH from herring. [30] The bitterness of an FPH is often due to hydrophobicity, degree of hydrolysis, molecular weight, proline residues, the type of enzymes used, and the amino acid sequences. Peptides with bulky hydrophobic groups toward the C-terminus may be responsible for bitterness. [31] Bitterness may be reduced by extraction of bitter peptides with alcohol, activated carbon treatment, the Maillard reaction, use of cyclodextrin, chromatographic separation, enzymatic hydrolysis with exopeptidases, and plastein reactions, all at extra cost. These methods reduce bitterness and may improve FPH taste, but some bioactivities may be lost, depending on which peptides are removed by the individual processes. [32] Results in terms of mass balance, content and molecular weight of peptides, and bitterness are often omitted from papers. However, bioactivities in terms of health benefits are often reported, albeit these studies usually relate to potential health benefits identified using in vitro bioassays or small-scale trials in mouse models.

Methods to Improve Production of FPHs: Use of In Silico Strategies

In silico methodologies can be used to select enzymes for use in the FPH process, to aid in vitro bioactivity screening of the resultant bioactive peptides, and to select hydrolysates for in vivo screening based on peptides identified using mass spectrometry in the discovery stage of functional food ingredient development. Recently, several studies have highlighted the benefits of using in silico methodologies to generate FPHs from shellfish and fish. [39,40] Figure 2 highlights useful in silico databases and methods that can be applied prior to the initiation of FPH generation at lab scale, for the selection of biomass or enzymes. It also shows how, after bioactive peptide amino acid sequences have been identified using mass spectrometry, in silico methods can be used to identify/assign potential health benefits to the fish hydrolysate. In addition to these methods, multivariate correlation of infrared fingerprints and molecular weight distributions of protein hydrolysates could be used in real time to determine potential bioactivities of hydrolysates, as detailed previously by Nofima for milk and chicken hydrolysates. [41]

Bioactivities and Nutritional Quality of FPHs: Model Selection Matters

The topic of food-derived bioactive peptides for health is of great interest to industry due to evolving drivers in food product innovation, including health and wellness for the elderly, infant nutrition, and optimum nutrition for sports athletes, as well as associated benefits for companion animals/pets. The bioactivities of fish protein hydrolysates generated from pelagic fish species have to date largely been characterized using in vitro bioassays. To date, antioxidant activity, angiotensin 1-converting enzyme (ACE-1; EC 3.4.15.1) inhibitory activity, dipeptidyl peptidase IV (DPP-IV; EC 3.4.14.5) inhibition, and glucagon-like peptide 1 inhibition indicate that blue whiting hydrolysates may have a myriad of health benefits (Table 2). Health benefits may include a positive impact on heart health and hypertension, prevention of type-2 diabetes (T2D), and satiety.
However, none of these bioactivities has to date been confirmed in humans or in larger mammals beyond mouse models. This is limiting in relation to companies making novel health claims, especially in Europe, related to blue whiting, herring, and mackerel hydrolysates for use as functional foods. In terms of their use in companion animals, fish protein hydrolysates could provide an excellent source of amino acids and omega fatty acids. Previously, Folador and colleagues identified that Pollock milt, red salmon hydrolysate, and smallmouth bass provided the best sources of PUFAs. [42] Pollock liver and viscera had high total fatty acid contents and were highly palatable when assessed in a dog feeding trial, suggesting that they could make effective palatants in pet foods. It is important to have chemical composition, protein quality, and palatability tests concluded for any FPH, as fish substrates differ significantly and are affected by the fish part used. However, FPHs have potential for use in dry extruded pet products as well as canned foods. The nutritional quality of FPHs depends on several functional attributes of the hydrolysate, including solubility, digestibility, and bioavailability. These are linked to the amino acid profile of the hydrolysate, which depends largely on the source fish protein material used to make the hydrolysate, and to the DH, which depends on enzyme efficiency. In terms of use for human foods, the protein digestibility corrected amino acid score (PDCAAS) value dictates protein quality. This method was recently replaced by the FAO with the digestible indispensable amino acid score (DIAAS) method. The nutritional quality of FPHs depends on the amino acid content and the bioavailability of these amino acids once consumed by a human. Protein quality is expressed on a scale of 0-1; the closer the score is to 1, the better the nutritional value of the protein source. The PDCAAS value of an FPH can be determined in vitro using the Megazyme assay kit, as reported recently; a worked example of the underlying calculation is sketched below. [43] Several human-simulated digestion models exist that claim to be able to determine bioavailability and bioaccessibility of proteins, including hydrolysates, but how closely these methods or the in vitro Megazyme assay kit mimic the real situation in the human gastrointestinal tract is not known. However, a recent study by Mulet-Cabero and colleagues found that, using the INFOGEST (International Network of Excellence on the Fate of Food in the Gastrointestinal Tract) digestion method, scores obtained for proteins passed through this simulated model in terms of bioavailability were similar to those obtained for proteins trialed using the PDCAAS method in costly rat and mouse models. [44] In terms of FPH use in animal, fish, or companion animal models, use of animal models to trial the nutritional benefits of the hydrolysate is preferred. However, for ethical reasons, it is better to assess FPH quality initially with in vitro methods; these include the International Fishmeal and Oil Manufacturers Association pepsin digestibility assay (used to determine the quality of fishmeal) and in vitro static simulated gastrointestinal digestion models adapted to the dog's system. Recently, the promising effects of a tilapia by-product hydrolysate on the regulation of food intake and glucose metabolism were determined using a simulated dog gastrointestinal digestion model, and new bioactive peptides with antidiabetes/antiobesity benefits were identified. [45]
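As a worked example of the PDCAAS logic described above: the score is the amino acid ratio of the limiting indispensable amino acid (test protein versus a reference pattern), multiplied by true digestibility and truncated at 1.0. The composition and digestibility figures below are illustrative placeholders, not measured FPH data.

```python
# Sketch of a PDCAAS calculation. All input values are illustrative.

# mg of indispensable amino acid per g of protein (subset shown)
REFERENCE = {"Lys": 58, "Thr": 34, "Trp": 11, "Met+Cys": 25}   # reference pattern
test_fph  = {"Lys": 70, "Thr": 40, "Trp": 9,  "Met+Cys": 30}   # hypothetical FPH

true_digestibility = 0.92   # hypothetical faecal true digestibility

# Amino acid score for each residue; the limiting one sets the PDCAAS
aa_scores = {aa: test_fph[aa] / REFERENCE[aa] for aa in REFERENCE}
limiting_aa = min(aa_scores, key=aa_scores.get)

# PDCAAS is truncated at 1.0 by definition
pdcaas = min(1.0, aa_scores[limiting_aa] * true_digestibility)
print(f"limiting amino acid: {limiting_aa}, PDCAAS = {pdcaas:.2f}")
```

Here tryptophan is limiting (score 9/11), so the hypothetical hydrolysate scores about 0.75 despite its lysine surplus, which illustrates why the full amino acid profile, not total protein, dictates quality.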
3. Legislation and Regulations

Human

If sold in the EU27, FPHs for use as nutritional or health-beneficial ingredients must comply with relevant regulations, including the Novel Food Regulation. The European Union regulation (EU 2017/2470) maintains the Novel Food Catalogue, which includes all authorized novel foods. The list is non-exhaustive and contains information collected from the EU27. Items commonly consumed prior to 15 May 1997 are not subject to the novel food regulation. For example, collagen, which can be derived from pelagic fish skins and bones, is listed in the catalogue with the description "The request concerns the use of hydrolyzed collagen of animal origin in beverages. Such use is not novel" (https://webgate.ec.europa.eu/fip/novel_food_catalogue/#). Omega-3 fatty acid lysine salt is also listed in the catalogue with the description "The request concerns whether Omega-3 fatty acid-lysine salt for use as an ingredient in food supplements falls within the scope of the novel food regulation. The conclusion is that the product is not a novel food. This is on the basis that the food is not a new molecular structure within the meaning of Article 3(2) (2) (a) (i) of Regulation (EU) 2015/2283." Ethyl esters (EE) concentrated from fish oils are also listed, with the description "The long-chain n-3 polyunsaturated fatty acids (PUFA) are characteristic of marine fat and commonly occur in triacylglycerols and phospholipids of fish. Effects of marine fat are known and almost exclusively attributed to the most ubiquitous of the n-3 fatty acids in fish which are EPA (C20:5n-3) and DHA (C22:6n-3) both originating from the polyunsaturated α-linolenic fatty acid (ALA 18:3n-3)." The novel status of ethyl esters is listed as FS, meaning that ethyl esters were used prior to 15 May 1997 as or in a food supplement, and any other food use of this product has to be authorized pursuant to the Novel Food Regulation. Interestingly, another listed ingredient, a peptide extract from hydrolyzed parts of Pacific cod (Gadus macrocephalus), is considered a novel food. This means that before this product is placed on the market in the EU, a safety assessment under the Novel Food Regulation is required. In addition to the novel food regulation, the health benefits of FPHs for potential sale in the EU27 are governed by the nutrition and health claims for foods regulation, which includes the use of food supplements. The EFSA decides, using a weight-of-evidence approach, whether there is enough scientific evidence available to support and grant a nutrition and health claim for a product. Nutrition claims are permitted if they are listed in the Annex. [46] Claims such as "contains a source of vitamins or minerals" can be used if the FPH contains 15% of the recommended daily allowance (RDA) per 100 g/100 mL or per portion; a simple eligibility check is sketched below. RDAs are defined in Annex XIII of the Nutrition Information Regulation (EU) 1169/2011. [46] Often, FPH occupies the legislative area between foods and cosmetics. In the EU, the Cosmetics Directive defines a cosmetic as "any substance or preparation intended to be placed in contact with the various external parts of the human body or with the teeth and the mucous membranes of the oral cavity with a view exclusively or mainly to cleaning them, perfuming them, changing their appearance and/or correcting body odours and/or protecting them or keeping them in good condition".
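Returning to the nutrition-claim rule above, a minimal eligibility check might look as follows. The RDA values follow Annex XIII of Regulation (EU) 1169/2011; the FPH composition figures are hypothetical.

```python
# Sketch of the "source of vitamins or minerals" claim check: the
# nutrient must reach 15% of the RDA per 100 g/100 mL (or per portion).
# RDA values per Annex XIII of (EU) 1169/2011; FPH figures are made up.

RDA = {"vitamin B12 (ug)": 2.5, "iodine (ug)": 150, "selenium (ug)": 55}

fph_per_100g = {"vitamin B12 (ug)": 1.1, "iodine (ug)": 12, "selenium (ug)": 20}

for nutrient, rda in RDA.items():
    pct = 100 * fph_per_100g[nutrient] / rda
    verdict = "allowed" if pct >= 15 else "not allowed"
    print(f"{nutrient}: {pct:.0f}% of RDA -> claim {verdict}")
```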
In contrast to the cosmetics definition above, nutricosmetics are food ingredients or supplements that have a beneficial effect on the external parts of the body by changing their appearance. They are usually consumed as capsules but are claimed to positively affect the skin. Examples are the products Collactive and Hydro MN, produced through hydrolysis of fish-derived collagen and elastin proteins, which are claimed to have "antiwrinkle action". [47] At present, no pelagic-derived FPHs have obtained health claims in the EU27, and FPH ingredients developed by companies such as those listed in Table 1 are usually sold outside of the EU27. There is a particularly strong market for FPH in Canada and Asia, for example. In the USA, Canada, Japan, and Asia, legislation and regulation also apply to the sale of any FPH as a nutritional, health, or novel food. The Japanese Ministry of Health, Labour and Welfare (MHLW) introduced the Foods for Specified Health Uses (FoSHU) system as the approval system for the regulation of all health claims on packages of food products launched in Japan. [47] In the USA, procedures are governed by the FDA and the Nutritional Labelling and Education Act (NLEA), which regulates health claims and food labeling.

Companion Animal

Within the EU, use of functional foods for companion animals could see growth in the future due to the limitations regarding prescription drugs for animals under the EU 2019/6 and 2019/4 Veterinary Medicinal Products Regulations that came into effect on 28 January 2022. The FEDIAF represents the European pet food industry. The FEDIAF has a Pet Food Labelling Code that is approved by the EU Standing Committee for Animal Nutrition. This code can guide utilization of FPHs as pet food ingredients. Several claims regarding ingredients for pets now include an environmental dimension, such as "Zero Paw prints by 2050." Claims such as this must provide evidence of clean-label ingredients and sustainability certification from marine bodies such as Marin Trust or the Aquaculture Stewardship Council (ASC). In the USA, it is estimated that 10-33% of dogs and cats are given supplements, a very lucrative market. The FDA, specifically the Centre for Veterinary Medicine (CVM), is responsible for the regulation of animal food products and has procedures to approve food additives for use, which apply to any product unless it is generally recognized as safe for the intended use (e.g., forages, grains, and most minerals and vitamins). [48] If an FPH claims to cure, treat, prevent, or mitigate disease, the product should be considered a "new animal drug". Pet food, including pet treats but not pet supplements, falls under the Association of American Feed Control Officials (AAFCO), and these products are regulated at federal and state levels. [48] FPHs marketed as nutritional supplements must adhere to set guidelines. Firstly, they must state a known need for each nutrient ingredient represented to be in the product. Secondly, the product can only be used in supplementation and not as a substitute for good daily rations. Thirdly, the product must provide a meaningful but not excessive amount of each of the nutrients that it is supposed to contain. Fourthly, the labelling should bear no disease-prevention or therapeutic, including growth-promotional, representations. Finally, the label should not be otherwise false or misleading in any particular, and the product must not be either over-potent or under-potent or pose a hazard to the health of the target animal.
[48] If producing and marketing a product to improve the health of a companion animal, however, companies can state the word "health" on the label of the product.

Conclusion

The application of pelagic FPHs in functional foods for nutritional and health benefits, as well as in feed and treat products for companion animals, offers many opportunities and challenges and may help to supply an excellent source of protein to the growing global population. A sustainable supply of pelagic fish is critical to the development of these food and feed ingredients, and hydrolysis offers a method to ensure total utilization of the pelagic fish catch and the generation of ingredients that consumers are likely to accept. The use of mesopelagic fish species for FPH production and PUFA oil production is also a possibility and is currently being researched as part of the ecologically and economically sustainable mesopelagic fisheries (MEESO) project (https://www.meeso.org/). However, the economic cost in terms of capital expenditure for plant establishment to generate pelagic FPH can currently be considered a limiting factor for some processors. The science regarding identification of health benefits of FPHs is growing, but there is a need to educate processors regarding the full potential of their pelagic catch. There is also a need to educate consumers regarding the health and nutritional benefits of pelagic fish and to continue to create high-value functional food products for health. Importantly, fish stocks must be maintained, and pelagic biomass used in the generation of FPHs must have certification by bodies like ASC or Marin Trust. As described previously by Dragøy-Whitaker and colleagues, [49] the use of accessible demonstration plants is a valuable resource for companies/processors to demonstrate what is possible before committing to investment in FPH equipment and plants. A useful resource for companies interested in pursuing commercial production of pelagic FPH is Pilots4U (2020), available at https://biopilots4u.eu/. This website lists available pilot plants where FPH production can be trialled prior to proceeding to commercial scale. Pelagic FPHs have the potential to supply high-quality, nutritious protein to the global population and to enhance the health of communities due to their marine bioactive peptide contents and nutritional benefits. They may form part of a health maintenance/preventative healthcare strategy that could reduce medicine costs for governments and society.

Maria Hayes is a senior scientific research officer at Teagasc, Dublin, Ireland. Her research interests include utilization of marine processing by-product/co-product streams for the development of functional foods for health and nutrition. She works on several marine projects including IDEA+, ALGAE4IBD, the BIM/EMFF-funded projects BRAVO, MUSSELS, and Pet Aging and Wellness (PAW), and the EU-funded MEESO project. Her specific area of interest is the isolation, characterization, and discovery of novel proteins and peptides with potential to impact positively on human, companion animal, and ruminant health.
v3-fos-license
2021-07-03T06:17:04.414Z
2021-06-29T00:00:00.000
235708065
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-4409/10/7/1637/pdf", "pdf_hash": "fb497172e6d9f070a95133aa50c85b69c3c663ec", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42741", "s2fieldsofstudy": [ "Biology" ], "sha1": "53d2f04e751b1d3a4cafd36903bc4606781c4bd9", "year": 2021 }
pes2o/s2orc
Molecular Biology of the WWOX Gene That Spans Chromosomal Fragile Site FRA16D

It is now more than 20 years since the FRA16D common chromosomal fragile site was characterised and the WWOX gene spanning this site was identified. In this time, much information has been discovered about its contribution to disease; however, the normal biological role of WWOX is not yet clear. Experiments leading to the identification of the WWOX gene are recounted, revealing enigmatic relationships between the fragile site, its gene and the encoded protein. We also highlight research, mainly using the genetically tractable model organism Drosophila melanogaster, that has shed light on the integral role of WWOX in metabolism. In addition to this role, there are some particularly outstanding questions that remain regarding WWOX, its gene and its chromosomal location. This review, therefore, also aims to highlight two unanswered questions. Firstly, what is the biological relationship between the WWOX gene and the FRA16D common chromosomal fragile site that is located within one of its very large introns? Secondly, what is the actual substrate and product of the WWOX enzyme activity? It is likely that understanding the normal role of WWOX and its relationship to chromosomal fragility is necessary in order to understand how the perturbation of these normal roles results in disease.

Chromosomal Fragile Site Genes - The Precedent of FRA3B/FHIT

Chromosomal fragile sites are of interest for a number and variety of reasons [1]. They are non-staining gaps in chromosomes that can be induced to appear by specific chemicals in cell culture medium, and they differ in their frequency in the population. Rare fragile sites are only found in some individuals. The rare fragile sites are due to expanded DNA repeats, with those individuals expressing the fragile site having a copy number above the threshold for cytogenetic appearance. A relationship of some sort exists between the chemistry of induction and the DNA sequence composition: AT-binding/substituting chemicals have AT-rich expanded repeats. Common fragile sites can be induced to appear in everyone's chromosomes; however, they vary in their sensitivity to induction. Inhibitors of DNA polymerase induce most common fragile sites, and their appearance is, therefore, related to replication. The common fragile sites vary in the frequency with which they respond to induction, the FRA3B site on human chromosome 3 being most readily observed, followed by FRA16D on chromosome 16, then others [1]. A correlation between cancer cell DNA instability and chromosomal fragile sites had been noted long ago [2], although the notion that such a relationship was causal had been greeted with some scepticism [3,4]. A relationship of some sort received a boost of interest with the finding that the most readily observed common chromosomal fragile site, FRA3B, was located within a region on human chromosome 3 that exhibited DNA instability in cancer [5]. Furthermore, the FHIT gene was found to span the FRA3B common chromosomal fragile site, and aberrant transcripts of the FHIT gene were found in cancer cells [6].
In one study using neoplastic cells that had FRA3B deletions and, therefore, were deficient in FHIT protein, "replacement" with stable, over-expressed FHIT protein did not alter the in vitro or in vivo properties of these cells [9]. In another study [10], replacement of FHIT protein in cancer cells suppressed their tumorigenicity. To further add to the mystery, "enzyme inactive" mutant FHIT was just as effective as the normal active FHIT at suppressing tumourigenicity [10], implying that its dinucleoside 5′,5‴-P1,P3-triphosphate hydrolase activity is not required for all of its functions. The review by Glover et al. [11] details recently reported mechanisms of the cytogenetic expression of common chromosomal fragile sites and some of the controversy around whether or not WWOX and FHIT are actually tumour suppressor genes.

Chromosomal Fragile Site FRA16D, Cancer and the WWOX Gene

From the outset, the WWOX gene was unusual. Indeed, the experiments leading to the identification of the WWOX gene are noteworthy as they reveal a number of significant (and unexpected) characteristics, which have an impact on the encoded protein and its function [12-16]. Some of these are yet to be explained. A major reason for interest in this region of the genome was based upon the consequences of the Knudson hypothesis [17]: that inherited cases of cancer were more frequent than sporadic cases because the latter required two somatic mutations in a tumour suppressor gene while the familial cases only needed one. One of the forms of second mutation that validated Knudson's hypothesis was loss-of-heterozygosity [18], and so the search for regions of the genome that exhibited loss-of-heterozygosity in cancer was thought to be a means of tracking down novel tumour suppressor genes. The FRA16D region had been found to be within overlapping regions of loss-of-heterozygosity in breast [19] and prostate [20] cancers, suggesting the presence of a tumour suppressor. The presence of FRA16D, the second most readily observed common chromosomal fragile site in the human genome, contributed to speculation of a causal relationship between DNA fragility and instability and the presence of a tumour suppressor gene. Indeed, both Mangelsdorf et al. [12] and Paige et al. [13] identified homozygous deletions within several cancer cell lines that coincided with the location of FRA16D. The gene that has come to be known as WWOX (WW domain-containing Oxidoreductase, Bednarek et al. [14]) was first located in the FRA16D region by Paige et al. [13] as HHCMA56, an oxidoreductase-encoding sequence that had been deposited in GenBank in 1994 by Gmerek, R.E. and Medford, J.I. Paige et al. [13] had excluded HHCMA56 from contention on the basis of a PCR (D16S432E) that located the final exon of this gene hundreds of kilobases away from the "minimal homozygously deleted region in cancer cells" identified by Mangelsdorf et al. [12] and Paige et al. [13]. Oxidoreductases were not amongst known tumour suppressors at the time and HHCMA56, therefore, appeared to have unlikely credentials as such, although some members of this protein family have since been found to have modifying roles in cancer [21,22]. The FRA16D minimal deletion region was in due course sequenced by Ried et al. [15] (GenBank accession number AF217490), as it was expected to contain one or more exons of a tumour suppressor gene. Instead, the gene responsible for HHCMA56 was found to be huge and indeed to span FRA16D.
With hindsight, such a possibility might have been considered, given that the huge FHIT gene spanned the FRA3B fragile site. HHCMA56 was a partial cDNA sequence from one of a number of alternatively spliced RNA transcripts. D16S432E is located in its unique 3′ exon (corresponding to exon 9 of WWOX). Exon 8 of WWOX is shared between two alternatively spliced transcripts (named FOR I and FOR II by Ried et al. [15]). The common exon 8 sequences were found at the very beginning of the AF217490 sequence (indeed, prior to the minimally deleted region) and the alternative exon 9 from the FOR I transcript at the other end. Finnis et al. [23] subsequently found that some homozygous deletions in cancer cells are, indeed, only intronic, which appears at odds with such deletions knocking out a tumour suppressor gene. This intron is 260 kb in length in the minor FOR I transcript and a massive 780 kb in length in the major FOR II (WWOX) transcript. Furthermore, both transcripts share intron 5, which is also a massive 222,570 bases in length (Figure 1). The gene was subsequently named WWOX (WW domain-containing Oxido-reductase) by Bednarek et al. [14], while Ried et al. [15] had given the gene the name of FOR (Fragile site FRA16D Oxido-Reductase). Chang et al. [16] identified the gene by virtue of its induction by hyaluronidase and named it WOX1. The WWOX gene has, therefore, accumulated multiple alternate names (FOR, WOX1, DEE28, EIEE28, FRA16D, SCAR12, HHCMA56, PRO0128, SDR41C1, and D16S432E). The relationship between long genes and chromosome fragility is noteworthy, as the FHIT gene spanning FRA3B is also very large (at ~1.5 Mb). The orthologous mouse Fhit gene also spans a common chromosomal fragile site [24], as does the mouse Wwox gene [25]. The biological pressure during evolution to maintain chromosomal susceptibility to environmental agents within very long genes is, therefore, intriguing. Indeed, the unusual length of the WWOX gene has been retained through evolution, even amongst species such as Fugu and Drosophila that typically have very much shorter introns in their gene orthologues (Figure 1). Given the risk that common chromosomal fragile sites confer as target sites for DNA instability, the conservation of WWOX gene/intron length suggests some form of biologically advantageous relationship; however, the basis for this, and what (if any) benefit it may confer, are not yet apparent. The WWOX gene is of remarkable length for a protein of only 414 amino acids. Its primary transcript is over 1.1 Mb in length, of which >98% is intron. For comparison, another member of the SDR family to which WWOX belongs is hydroxysteroid 17-beta dehydrogenase 1, which is encoded by the gene HSD17B1. This protein of 328 amino acids is translated from a mature mRNA of 1274 nucleotides, having been spliced from a primary gene transcript of 2292 nucleotides in length. Not only does the human WWOX gene have vastly larger introns than another member of the same enzyme-encoding gene family, but this extreme length of introns has been conserved through evolution, even in organisms that typically have short introns, i.e., Fugu and Drosophila (Figure 1). The parallels with the FHIT gene that spans the FRA3B common chromosomal fragile site are striking and of relevance to further properties of WWOX, as discussed later in this review and elsewhere [26]. Steady-state protein abundance is determined by four rates: transcription, translation, mRNA decay and protein decay [27]; back-of-the-envelope estimates of these points are sketched below.
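Two quick calculations make these points concrete: the time to transcribe a ~1.1 Mb primary transcript at typical RNA polymerase II elongation rates, and the steady-state mRNA and protein levels implied by the four rates. The elongation rates and kinetic constants below are generic literature-scale assumptions, not WWOX-specific measurements.

```python
# Back-of-the-envelope: (i) transcription time for a ~1.1 Mb gene,
# (ii) steady-state abundance from the four rates. All constants are
# generic assumptions for illustration only.

gene_length_kb = 1100                      # ~1.1 Mb primary transcript

for rate_kb_per_min in (1.0, 2.0, 4.0):    # typical Pol II elongation rates
    hours = gene_length_kb / rate_kb_per_min / 60
    print(f"elongation at {rate_kb_per_min} kb/min -> ~{hours:.1f} h per transcript")

# Steady state of dM/dt = k_tx - d_M*M and dP/dt = k_tl*M - d_P*P:
#   M* = k_tx / d_M,   P* = k_tx * k_tl / (d_M * d_P)
k_tx, k_tl = 2.0, 40.0     # mRNAs produced per h; proteins per mRNA per h
d_M, d_P = 0.5, 1.5        # decay rates per h (fast protein turnover)
print(f"steady state: M* = {k_tx/d_M:.1f} mRNAs, P* = {k_tx*k_tl/(d_M*d_P):.0f} proteins")
```

At these assumed rates a single full-length transcript takes roughly 5-18 hours, which is indeed comparable to the cell cycle of dividing cells.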
The ability of a gene to produce a protein product is determined by transcription, which takes time. Transcription rates vary widely; however, it is safe to assume that a primary transcript of the WWOX gene takes several, if not many, hours to complete, which is significant in relation to the time necessary for the cell cycle in dividing cells. FRA16D-associated intronic deletions might be expected to hasten the process; however, introns and their splicing can enhance gene expression [28]. Adding to the enigma, WWOX primary transcripts undergo alternative splicing, with only one form encoding the full-length protein. Typically, alternatives to the full-length transcript are subject to nonsense-mediated decay and contribute to a reduction in the steady-state level of mRNA for the full-length protein. Driouch et al. [29], however, report a substantially elevated level of an alternatively spliced transcript (designated FOR III by Ried et al. [15]) in ~50% of breast cancer tissues and cell lines. This perturbed splicing occurs in the absence of detectable DNA deletions within the WWOX gene, further contributing to the enigma. Its relevance to cancer cell biology is, as yet, unknown. The enormous length of the WWOX gene and its alternative splicing would appear to be two of the contributing factors to WWOX protein having a low steady-state level. Indeed, Drosophila goes one step further with the presence of an intron in the 3′ untranslated region (UTR) in some WWOX transcripts (see Wwox-RB versus Wwox-RA transcripts in FlyBase, https://flybase.org/reports/FBtr0343384; date last reviewed: 15 November 2018). Bignell et al. [30] report that such mRNAs with 3′ UTR introns are subject to nonsense-mediated decay after only a single round of translation, indicating another mechanism for keeping WWOX protein levels low. Whether human WWOX RNA transcripts also have such 3′ UTR introns may warrant further investigation as, according to Bignell et al. [30], it is often assumed that such sequences are non-functional. The properties of the mutation that gives rise to homozygous deletion at the FRA16D fragile site in cancer cells have been explored and are noteworthy [23]. First, the early timing of the deletion event in the neoplastic process, as it is assumed that early events are more likely to be causal rather than consequential. Secondly, the lack of a relationship between FRA16D-associated deletion and another form of deletion (loss-of-heterozygosity, LOH) known to occur at high frequency in certain cancers in the 16q23.2 region [19,20]. Thirdly, the nature of the deletion endpoints, which suggests a specific form of DNA deletion repair mechanism [31]. Fourthly, the extent of "genome-wide" instability that occurs in FRA16D-deleted cell lines. Finally, the lack of impact of FRA16D-associated DNA deletions on the ability to cytogenetically express the FRA16D fragile site. The relationship between FRA16D homozygous deletion and loss-of-heterozygosity is particularly noteworthy. Finnis et al. [23] detail the experimental basis for the conclusion that the FRA16D-associated homozygous deletion events observed in cancer cell lines in this manuscript are distinct from the loss-of-heterozygosity observed by others in breast [19] and prostate [20] cancers. In brief, the polymorphic genetic markers D16S518 and D16S504, which define the boundaries of the loss-of-heterozygosity regions identified in cancer [19,20], were found by Finnis et al.
[23] to be heterozygous in all of the cancer cell lines that exhibited FRA16D homozygous deletion. This indicates that the homozygous deletions observed, at least in the particular cancer cells under investigation (i.e., AGS, HCT116, CO-115, KM12C and KM12SM), cannot be the boundaries of any loss-of-heterozygosity that may have occurred in the vicinity in these cells. Whether this nexus is broken in other instances is yet to be determined. It is perhaps noteworthy that the KM12 lines had a common origin (being derived from the primary tumour, KM12C, and metastasis, KM12SM, of the same cancer) and had identical homozygous deletions at FRA16D, yet exhibited different DNA instabilities elsewhere (including a chromosomal translocation). Fragile site DNA instability is, therefore, not always tied to other instances of DNA instability, which presumably have different causes. Common chromosomal fragile sites exhibit a hierarchy of cytogenetic expression, with FRA3B being more readily observed than FRA16D. DNA instability in cancer cells at FRA3B is also more frequent than that at FRA16D. This finding contributes to a growing body of evidence suggesting that common fragile sites are regions of particular sensitivity to DNA instability and that there is a correlation between the level of in vitro chromosomal fragility and in vivo DNA instability in cancer cells [26,32,33]. The localised multiple-hit nature of the homozygous mutation, together with its subsequent (relative) stability, suggests that it is most likely that a transient interaction with environmental factors plays a determining role in the common fragile site-associated mutation mechanism.

WWOX in Metabolism

Despite more than twenty years of research on the WWOX protein, the substrate and product of the enzyme reaction that it catalyses are yet to be discovered. A growing body of evidence in various model systems supports a role for WWOX in metabolism (see [32,33] for extensive reviews). Drosophila deficient in WWOX display no phenotypic consequences [34] and, therefore, might be considered a poor model for those species (including humans) for which WWOX is necessary. On the contrary, the ability of Drosophila to compensate for the lack of WWOX indicates that pathology caused by deficiency of WWOX is likely to be treatable, given identification and targeting of the compensating pathway(s). A combination of Drosophila genetics and biochemical approaches was utilised to discover the normal function of the WWOX gene [34-37]. Genetically altered levels of WWOX resulted in the identification, by proteomics and microarray analyses, of multiple components of aerobic metabolism. Functional relationships between WWOX and two of these, isocitrate dehydrogenase and Cu-Zn superoxide dismutase, were confirmed by genetic interactions. In addition, altered levels of WWOX resulted in altered levels of endogenous reactive oxygen species. Similarly to FHIT, WWOX contributes to pathways involving aerobic metabolism and oxidative stress, providing an explanation for the "non-classical tumour suppressor" behaviour of WWOX. Fragile sites, and the genes that span them, are therefore part of a protective response mechanism to oxidative stress and likely contributors to the differences seen in aerobic glycolysis (Warburg effect) in cancer cells [32,34].
In support of these findings in Drosophila, experiments in human HEK293T cells have demonstrated that WWOX has an interrelationship with metabolism: WWOX is both a regulator of metabolism and is regulated by metabolism [35]. Alteration of growth conditions from oxidative phosphorylation to glycolysis alters the expression levels of WWOX. Under hypoxic conditions, where metabolism is steered towards glycolysis, the expression of the WWOX transcript is markedly decreased, whereas a switch to oxidative phosphorylation has the opposite effect. WWOX not only contributes to the regulation of homeostasis; its steady-state levels are linked to the state of cellular metabolism. An insight into the contribution WWOX makes in cancer was revealed by competition experiments that showed a role for WWOX in the elimination of tumourigenic cells [36]. WWOX was first shown to modify TNF-mediated cell death phenotypes, which was reflected in changes to Caspase 3 staining and provided evidence for WWOX in the promotion of cell death. These TNF-mediated cell death phenotypes were shown to correspond to increased levels of reactive oxygen species (ROS), which have also previously been shown to be regulated by WWOX [36]. Together, these data suggested a protective role for WWOX in the promotion of cell death in response to increased ROS levels, which could correlate with the altered metabolism that is observed in cancer. Indeed, decreased levels of WWOX within clones of tumourigenic cells resulted in fewer of them being eliminated by the surrounding wild-type cells, and worse outcomes at later stages [36]. These studies provided a molecular basis for WWOX acting as a suppressor of tumour growth by mediating cell death pathways. Together, these results provide a molecular basis for the non-classical tumour suppressor functions of WWOX and the better prognosis observed in cancer patients with higher levels of WWOX activity. Furthermore, WWOX acts to moderate the mitochondrial respiratory system, a likely contribution to the Warburg effect [37]. An in vivo genetic study using Drosophila melanogaster revealed a role for WWOX in a mitochondrial-mediated pathway dependent on its SDR enzyme function. Reduced levels of WWOX were found to result in further perturbation of cellular dysfunction caused by mitochondrial deficiencies, leading to an increased frequency of phenotypes such as loss of tissue, cellular outgrowths and the presence of ectopic structures. Conversely, the tissue disruption phenotypes were suppressed by increasing WWOX levels, with the SDR enzymatic active site required for the suppression. Amino acid Y288 in Drosophila is an essential component of the catalytic active site in the SDR region, with the Y288F mutation abolishing its function [38], and similar mutations have been shown to completely abolish the enzymatic activity of other SDR proteins [38-40]. The orthologous tyrosine is at position 293 in human WWOX; the WWOX proteins of different species vary in length due to the presence/absence of additional amino acids. Drosophila experiments utilising the Y288F mutation therefore demonstrated that the catalytic activity of WWOX is required for its cellular response to mitochondrial defects. These experiments indicate the participation of WWOX, through its SDR enzyme activity, in the maintenance of cellular homeostasis in response to mitochondrial defects.
Reduction in WWOX levels leads to a lessened cellular response to the metabolic perturbation of normal cell growth caused by mitochondrial damage-induced glycolysis (Warburg effect). Experiments from Aqeilan et al. assign contributions of WWOX to various components of general metabolism [33]. WWOX regulates glucose metabolism via HIF1α modulation [41], loss of WWOX activates aerobic glycolysis [42], and somatic ablation of WWOX in skeletal muscles alters glucose metabolism [43]. Furthermore, the WWOX gene modulates high-density lipoprotein and lipid metabolism [44]. Pathway analysis of WWOX interactors by Lee et al. [45] identified a significant enrichment of metabolic pathways associated with the breakdown of proteins, carbohydrates, and lipids.

WWOX Genomic Region Is a Risk Factor in Metabolic Disorders

Metabolic dysfunction is a defining feature of chronic human diseases including Type 2 Diabetes, hypertension, heart disease, chronic obstructive pulmonary disease, obesity and numerous forms of cancer. It is clear that there are genetic risk factors that predispose individuals to such diseases and/or affect the course of disease progression. Genetic risk is indicative of roles for specific proteins and the pathways in which they are rate-limiting determinants. The genomic region containing the WWOX gene has been identified as a genetic risk factor in each of these metabolic diseases [46-57]. The maintenance of metabolic homeostasis is vital to health, and its disruption is central to many of the most costly human diseases. Cellular metabolism is highly integrated, with multiple mechanisms in place to monitor and restore homeostasis. A key element of intracellular metabolism is the balance between glycolysis and oxidative phosphorylation in the generation of ATP from carbohydrate. WWOX is both regulated by perturbation of the oxidative phosphorylation/glycolysis balance and a regulator of this balance [34-37]. This integral role of WWOX in homeostasis therefore provides a plausible explanation for metabolic disorders in which genetic variation of WWOX is a risk factor.

Why Is the Chromosomal Fragile Site FRA16D Located within the WWOX Gene?

While a great deal of focus has understandably been on the contribution of WWOX to disease [46-61], the normal role of the WWOX gene and its encoded protein remains mysterious. The WWOX gene has not one but two massive introns, the larger of which contains the common chromosomal fragile site, a region of sensitivity to environmental agents (Figure 1). The presence of large introns is conserved through evolution. Indeed, organisms that typically have much shorter introns than mammals (e.g., Fugu rubripes and Drosophila melanogaster) also have uncharacteristically large introns in their WWOX genes (Figure 1). This relationship between fragile site-containing genes and the very large length of their primary transcripts suggests a role for transcription timing in the regulation of WWOX expression. Genes of 1 Mb in length will take many hours to produce a full-length transcript, raising a curious relationship to the cell cycle during replication. The location of regions of greater environmental sensitivity within certain genes is also curious. DNA damage occurs at greater frequency at common chromosomal fragile sites and is a hallmark of cancer.
The common fragile site genes are responsive to changes in metabolism, and at least WWOX and FHIT contribute to homeostasis [16,34-37]. Therefore, the possibility exists that the evolutionarily conserved presence within certain genes of DNA sequences that are sensitive to DNA damage is part of a biologically advantageous response mechanism to environmental damage, not merely a conferred risk of cancer.

What Does WWOX the Enzyme Normally Do?

Given the rate-limiting role of WWOX in metabolism and its high degree of conservation during evolution, it is surprising that the enzymatic reaction catalysed by WWOX is unknown. By sequence homology, WWOX protein encodes a short-chain dehydrogenase/reductase (SDR) enzyme with a requisite NAD(P)[H] co-factor binding site [38-40]. SDR enzymes typically have small-molecule substrates and catalyse the interconversion of C-O-H with C=O and the resultant generation of NAD(P)+ or NAD(P)[H]. The WWOX orthologues of all species show significant, and as yet unexplained, homology in their C-terminal sequences (see Figure 2). Whilst conserved during evolution, these sequences do not exhibit any detectable homology to known protein motifs. They appear to be a unique property of the WWOX protein and, therefore, somehow related to its unique biological function. Kavanagh et al. [62], in their review entitled "The SDR superfamily: functional and structural diversity within a family of metabolic and regulatory enzymes", state the following: "The common mechanism is an underlying hydride and proton transfer involving the nicotinamide and typically an active site tyrosine residue, whereas substrate specificity is determined by a variable C-terminal segment". Whatever the biological basis of the very high level of conservation during evolution in the C-terminal region of WWOX, it is reasonable to speculate that sequences outside of the immediate catalytic motif (and C-terminal to it) have a role to play in contributing to the conformation of the catalytic site and/or the access of molecules to that catalytic site and, therefore, the specificity of the enzyme.

[Figure 2: alignment of WWOX orthologues indicating, in addition to the WW domains and SDR canonical sequences, the putative substrate specificity sequences (green shading). Not shown but noted, the WWOX orthologue in the evolutionarily distant sea sponge (Amphimedon queenslandica) has WW domains and substrate binding domain homology, whereas the closest SDR family members in the Opisthokont Capsaspora owczarzaki and in Caenorhabditis elegans (NP_503155.4) lack WW domains.]

Of relevance, homozygous mutations that diminish WWOX function are found in families with recessive spinocerebellar ataxia 12 (SCAR12). The G372R mutation, located within the putative substrate specificity region, indicates that this highly conserved C-terminal segment is vital for WWOX function, having the same clinical consequences as the P47T mutation located in the first WW domain of WWOX [57]. Given that these conserved C-terminal sequences do contribute to WWOX substrate specificity, they are potential targets in the identification of WWOX-based therapeutics. The presence of two WW domains that are known to act as protein-protein interaction sites with PPxY-containing proteins has focussed much attention on the identity of the protein binding partners of WWOX, in an effort to ascertain which biological pathways and processes WWOX contributes to [63]; a simple motif-scanning sketch follows.
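As a minimal sketch of this kind of motif-based reasoning, the snippet below scans hypothetical sequences for the canonical Group I WW-domain ligand motif (PPxY) and for the YxxxK spacing of the SDR catalytic tyrosine and lysine discussed above. The sequences are placeholders, not real WWOX partners, and real analyses use profile-based tools rather than plain regular expressions.

```python
# Toy motif scan: PPxY (WW-domain ligand) and YxxxK (SDR catalytic
# Tyr/Lys spacing). Sequences are hypothetical placeholders.

import re

PPXY = re.compile(r"PP.Y")       # Group I WW-domain binding motif
SDR_SITE = re.compile(r"Y...K")  # catalytic Tyr ... Lys spacing in SDRs

candidates = {
    "partnerA": "MASPPPYQVLK",   # contains a PPxY motif
    "partnerB": "MKLVEEAGSGSR",  # no motif
}

for name, seq in candidates.items():
    m = PPXY.search(seq)
    print(name, "PPxY at position", m.start() if m else None)

enzyme = "GASSGIGYATAKALA"       # hypothetical SDR-like fragment
m = SDR_SITE.search(enzyme)
print("YxxxK catalytic motif at", m.start() if m else "not found")
```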
In addition to the intriguing findings revealed by phylogenetic analysis of the WWOX gene, a similar analysis of the WWOX protein also adds to the mystery. Most organisms have a single clear orthologue of WWOX with the characteristic two WW domains. However, Caenorhabditis elegans and the Opisthokont Capsaspora owczarzaki are notable exceptions in that their closest WWOX orthologues are devoid of WW domains. A host of PPxY-containing proteins have been identified by various means; however, the extent to which these contribute to functional interactions in vivo is not yet clear [63]. Another striking feature of the comparison of WWOX proteins from different species is the conservation of 15 amino acids that include part of the first WW domain (Figure 2). This sequence has the defining characteristics of a PEST domain [64]. Proteins containing PEST domains are rapidly degraded, and PEST sequences are found in proteins associated with the cell cycle [65]. PEST domains are well-known regulators of enzyme activity [66] and are typically found in metabolically unstable proteins. These characteristics, along with that of an exceptionally long primary transcript described above, are consistent with a dynamic role for WWOX enzyme activity in metabolic processes integral to the cell cycle. These factors likely combine with alternative splicing and nonsense-mediated decay of WWOX transcripts (as described above) to produce a low steady-state level of WWOX protein. Notably, similarly to its FRA3B/FHIT counterpart, the role of FRA16D/WWOX in cancer has been controversial [16], resulting in their categorisation by some as "non-classical" tumour suppressor genes. Loss of a single allele (LOH, loss-of-heterozygosity) is typical in cancer, resulting in reduced WWOX levels rather than its absence due to a second-hit mutation. Therefore, altered abundance of WWOX metabolites appears sufficient for biological consequences, including poorer prognosis for cancers with reduced WWOX [32,58]. Reduced WWOX enzyme activity suggests a build-up of substrate on one side of the reaction, leading to elevated levels of a metabolite and, therefore, to biological consequences. The identity of this metabolite, together with targeted methods to reduce its abundance, represents a plausible target for treating metabolic dysfunction due to perturbation of WWOX. Alternatively, if the product(s) of WWOX act as negative regulators or rate-limiting determinants of a metabolic process, then the activation of compensatory pathways, such as those acting in Drosophila [34], may provide a means of reducing the clinical impact of WWOX deficiency.

Author Contributions: All co-authors contributed to the concept of this review, R.I.R. assembled the first draft and all others contributed detail and correction. R.I.R. drafted responses to reviewers' comments and all co-authors moderated responses and amendments in the final revised manuscript. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2020-10-11T11:11:08.489Z
2020-01-01T00:00:00.000
204743614
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevC.101.044907", "pdf_hash": "49b4e4e3e40142a761f166d86e9bb88de53ef589", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42745", "s2fieldsofstudy": [ "Physics" ], "sha1": "2879ab5a3b5e98e9aa58b5b28afa5931e2475424", "year": 2020 }
pes2o/s2orc
Production of charged pions, kaons, and (anti-)protons in Pb-Pb and inelastic pp collisions at √sNN = 5.02 TeV

Midrapidity production of π±, K±, and (p̄)p measured by the ALICE experiment at the CERN Large Hadron Collider, in Pb-Pb and inelastic pp collisions at √sNN = 5.02 TeV, is presented. The invariant yields are measured over a wide transverse momentum (pT) range from hundreds of MeV/c up to 20 GeV/c. The results in Pb-Pb collisions are presented as a function of the collision centrality, in the range 0-90%. The comparison of the pT-integrated particle ratios, i.e., proton-to-pion (p/π) and kaon-to-pion (K/π) ratios, with similar measurements in Pb-Pb collisions at √sNN = 2.76 TeV shows no significant energy dependence. Blast-wave fits of the pT spectra indicate that in the most central collisions radial flow is slightly larger at 5.02 TeV with respect to 2.76 TeV. Particle ratios (p/π, K/π) as a function of pT show pronounced maxima at pT ≈ 3 GeV/c in central Pb-Pb collisions. At high pT, particle ratios at 5.02 TeV are similar to those measured in pp collisions at the same energy and in Pb-Pb collisions at √sNN = 2.76 TeV. Using the pp reference spectra measured at the same collision energy of 5.02 TeV, the nuclear modification factors for the different particle species are derived. Within uncertainties, the nuclear modification factor is particle-species independent at high pT and compatible with measurements at √sNN = 2.76 TeV. The results are compared to state-of-the-art model calculations, which are found to describe the observed trends satisfactorily.

I. INTRODUCTION

Previous observations at the Relativistic Heavy-Ion Collider (RHIC) and at the CERN Large Hadron Collider (LHC) demonstrated that in high-energy heavy-ion (A-A) collisions, a strongly interacting quark-gluon plasma (sQGP) [1-5] is formed. It behaves as a strongly coupled near-perfect liquid with a small viscosity-to-entropy ratio η/s [6]. The experimental results have led to the development and adoption of a standard theoretical framework for describing the bulk properties of the QGP in these collisions [7]. In this paradigm, the beam energy dependence is mainly encoded in the initial energy density (temperature) of the QGP. After formation, the QGP expands hydrodynamically as a near-perfect liquid before it undergoes a chemical freeze-out. The chemical freeze-out temperature is nearly beam-energy independent for center-of-mass energies per nucleon pair larger than 10 GeV [7,8]. The hadronic system continues to interact (elastically) until kinetic freeze-out. We report in this paper a comprehensive study of bulk particle production at the highest beam energy for A-A collisions available at the LHC. We probe the highest QGP temperature, to further study this paradigm and address its open questions. Transverse momentum distributions of identified particles in Pb-Pb collisions provide information on the transverse expansion of the QGP and the freeze-out properties of the ensuing hadronic phase. By analyzing the pT-integrated yields in Pb-Pb collisions it has been shown that hadron yields in high-energy nuclear interactions can be described assuming their production at thermal and chemical equilibrium [9-12], with a single chemical freeze-out temperature, Tch ≈ 156 MeV, close to the one predicted by lattice QCD calculations for the QGP-hadronic phase transition, Tc = (154 ± 9) MeV [13].
Indeed, the Pb-Pb data from LHC Run 1 [14] showed an excellent agreement with the statistical hadronization model, with the exception of the proton and antiproton, K* (K̄*) and multistrange particle yields [9,12]. The deviation could be in part due to interactions in the hadronic phase, which result in baryon-antibaryon annihilation that is most significant for (anti-)protons [15-18]. Proposed explanations for the observed discrepancy with respect to the thermal model predictions can be found in Refs. [18-22]. Moreover, at √sNN = 2.76 TeV the proton-to-pion [(p + p̄)/(π+ + π−) ≡ p/π] ratio exhibits a slight decrease with centrality and a slightly lower value than measured at RHIC. New measurements at √sNN = 5.02 TeV, which exploit the currently highest medium density, could provide an improved understanding of the particle production mechanisms [22]. The spectral shapes at low pT (pT < 2 GeV/c) in central Pb-Pb collisions at √sNN = 2.76 TeV showed a stronger radial flow than that measured at RHIC energies, in agreement with the expectation based on hydrodynamic models [14,23]. The results for identified particle production at low pT and higher √sNN are useful to further test hydrodynamic predictions. At intermediate pT (2-10 GeV/c), the particle ratios experimentally show the largest variation, and in particular for the baryon-to-meson enhancement several new hadronization mechanisms have been proposed [24-26]. In the most central Pb-Pb collisions at √sNN = 2.76 TeV, the p/π ratio reaches values larger than 0.8 for pT ≈ 3 GeV/c, which surpass those for inelastic pp collisions at the same energy [27,28]. An intermediate-pT enhancement of heavier hadrons over lighter hadrons is expected from the collective hydrodynamic expansion of the system alone [29-31]. In coalescence models [32-34], which require radial flow as well, baryon-to-meson ratios are further enhanced at intermediate pT by the coalescence of lower-pT quarks, which leads to a production of baryons (3 quarks) with larger pT than for mesons (2 quarks). The baryon-to-meson ratio decreases at high pT and reaches the values observed in pp collisions as a consequence of the increasing importance of parton fragmentation. The observation of a qualitatively similar enhancement of the kaon-to-pion [(K+ + K−)/(π+ + π−) ≡ K/π] ratio in central Pb-Pb collisions with respect to inelastic pp collisions [28,35] supports an interpretation based on the collective radial expansion of the system that affects heavier particles more. For high pT (pT > 10 GeV/c), measurements of the production of identified particles in Pb-Pb collisions relative to inelastic pp collisions contribute to the study of hard probes propagating through the medium. This offers the possibility to determine properties of the QGP like the transport coefficient q̂ [36] and the space-time profile of the bulk medium in terms of local temperature and fluid velocity [37]. The modification of particle production is quantified with the nuclear modification factor, R_AA, defined as

R_AA(pT) = [d²N_AA/(dy dpT)] / [⟨T_AA⟩ d²σ_pp/(dy dpT)],

where d²N_AA/(dy dpT) is the particle yield in nucleus-nucleus collisions and σ_pp is the production cross section in pp collisions. The average nuclear overlap function is represented by ⟨T_AA⟩ and is obtained from a Glauber model calculation [38].
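A minimal numerical sketch of the R_AA construction defined above: given binned Pb-Pb yields, a pp reference cross section, and ⟨T_AA⟩, the ratio is formed bin by bin. All numbers below are illustrative placeholders (the ⟨T_AA⟩ value is only of the order quoted for central collisions), not ALICE data.

```python
# Toy R_AA: Pb-Pb yields and a pp reference cross section per pT bin.
import numpy as np

pt = np.array([1.0, 3.0, 6.0, 10.0, 15.0])                      # GeV/c
d2N_AA = np.array([2.0e1, 2.8e-1, 4.7e-3, 3.1e-4, 5.0e-5])      # Pb-Pb yield
d2sigma_pp = np.array([1.1e0, 3.1e-2, 9.0e-4, 4.0e-5, 4.3e-6])  # pp, mb

T_AA = 26.0   # mb^-1, order of magnitude for a central class (illustrative)

R_AA = d2N_AA / (T_AA * d2sigma_pp)
for x, r in zip(pt, R_AA):
    print(f"pT = {x:5.1f} GeV/c   R_AA = {r:.2f}")
```

With these placeholder inputs the ratio dips at intermediate pT and partially recovers at high pT, mimicking the suppression pattern discussed in the text.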
The average nuclear overlap function ⟨T_AA⟩ is related to the average number of binary nucleon-nucleon collisions, ⟨N_coll⟩, and the total inelastic nucleon-nucleon cross section, σ_INEL^NN = (67.6 ± 0.6) mb at √sNN = 5.02 TeV [39], as ⟨T_AA⟩ = ⟨N_coll⟩/σ_INEL^NN. Several measurements of R_AA at high pT for different √sNN [40-46] support the formation of a dense partonic medium in heavy-ion collisions where hard partons lose energy via a combination of elastic and inelastic collisions with the constituents of the QGP [47]. Results from Pb-Pb collisions at √sNN = 2.76 TeV showed that, within uncertainties, the suppression is the same for pions, kaons and (anti-)protons [28]. Moreover, the inclusive charged-particle nuclear modification factor measured in Pb-Pb collisions at 5.02 TeV shows that the suppression continues to diminish for pT above 100 GeV/c [48], while the suppression of jets saturates at a value of 0.5 [49]. Particle production at high transverse momentum has also been studied as a function of the Bjorken energy density [50] and path length [51-53]. The results show interesting scaling properties which can be further tested using LHC data at higher energies. In this paper, the measurement of pT spectra of π±, K± and (p̄)p in inelastic pp and Pb-Pb collisions at √sNN = 5.02 TeV over a wide pT range, from 100 MeV/c for pions, 200 MeV/c for kaons, and 300 MeV/c for (anti-)protons, up to 20 GeV/c for all species, is presented. Particles are identified by combining several particle identification (PID) techniques based on specific ionization energy loss (dE/dx) and time-of-flight measurements, Cherenkov radiation detection and the identification of the weak decays of charged kaons via their kink topology. The article is organized as follows: Sec. II outlines the analysis details including the track and event selections as well as the particle identification strategies. The obtained results are discussed in Sec. III. Section IV presents the comparison of data with model predictions. Finally, Sec. V contains a summary of the main results.

II. DATA ANALYSIS

In this paper the measurements obtained with the central barrel of the ALICE detector, which has full azimuthal coverage around midrapidity, |η| < 0.8 [54], are presented. A detailed description of the ALICE detector can be found in Ref. [55]. The pp results were obtained from the analysis of ≈1.2 × 10^8 minimum bias pp collisions, collected in 2015. The Pb-Pb analysis with ITS and TOF uses ≈5 × 10^6 minimum bias Pb-Pb collisions, collected in 2015. The Pb-Pb analysis where PID is provided by the TPC, the high-momentum particle identification (HMPID) detector and the kink decay topology requires more statistics and uses the full data sample collected in 2015, corresponding to ≈6.5 × 10^7 Pb-Pb collisions. Both in pp and Pb-Pb collisions, the interaction trigger is provided by a pair of forward scintillator hodoscopes, the V0 detectors, which cover the pseudorapidity ranges 2.8 < η < 5.1 (V0A) and −3.7 < η < −1.7 (V0C) [56]. The minimum bias trigger is defined as a coincidence between the V0A and the V0C trigger signals. The V0 detector signals, which are proportional to the charged-particle multiplicities, are used to divide the Pb-Pb event sample into centrality classes, defined in terms of percentiles of the hadronic cross section [38]. A Glauber Monte Carlo model is fitted to the V0 amplitude distribution to compute the fraction of the hadronic cross section corresponding to any given range of V0 amplitudes; the sketch below illustrates the percentile slicing.
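The percentile slicing itself is straightforward once a minimum-bias amplitude distribution is in hand, as the toy sketch below shows. The "V0 amplitude" here is generated from an arbitrary toy distribution; in the real analysis the anchoring to the hadronic cross section comes from the Glauber MC fit.

```python
# Toy centrality classification as percentiles of a multiplicity
# ("V0 amplitude") distribution. The amplitude model is a placeholder.
import numpy as np

rng = np.random.default_rng(42)
amplitude = rng.gamma(shape=1.2, scale=300.0, size=1_000_000)

# 0-5% = the 5% of events with the LARGEST amplitudes, and so on.
edges_percent = [0, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90]
cuts = np.percentile(amplitude, [100 - e for e in edges_percent])

for lo, hi, a_hi, a_lo in zip(edges_percent[:-1], edges_percent[1:],
                              cuts[:-1], cuts[1:]):
    print(f"{lo:2d}-{hi:2d}%: V0 amplitude in [{a_lo:7.1f}, {a_hi:7.1f})")
```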
The 90-100% centrality class has substantial contributions from QED processes (≈20%) [38] and its low track multiplicity presents difficulties in the extraction of the trigger inefficiency; it is therefore not included in the results presented here. Also, an offline event selection is used to remove beam background events. It employs the information from two zero-degree calorimeters (ZDCs) positioned at 112.5 m on either side of the nominal interaction point. Beam background events are removed by using the V0 timing information and the correlation between the sum and the difference of times measured in each of the ZDCs [55]. The central barrel detectors are located inside a solenoidal magnet providing a magnetic field of 0.5 T and are used for tracking and particle identification. The innermost barrel detector is the inner tracking system (ITS) [57], which consists of six layers of silicon devices grouped in three detector systems (from the innermost outwards): the silicon pixel detector (SPD), the silicon drift detector (SDD), and the silicon strip detector (SSD). The time projection chamber (TPC), the main central-barrel tracking device, follows outwards. The results are presented for primary particles, defined as particles with a mean proper lifetime τ > 1 cm/c which are either produced directly in the interaction or from decays of particles with τ < 1 cm/c, restricted to decay chains leading to the interaction [58]. To limit the contamination due to secondary particles and tracks with wrongly associated hits and to ensure high tracking efficiency, tracks are required to cross at least 70 TPC readout rows with a χ² normalized to the number of TPC space points ("clusters"), χ²/NDF, lower than 2. Tracks are also required to have at least two hits reconstructed in the ITS, out of which at least one is in the SPD layers, and to have a distance of closest approach (DCA) to the interaction vertex in the direction parallel to the beam axis (z), |DCA_z| < 2 cm. A pT-dependent selection on the DCA in the transverse plane (DCA_xy) of the selected tracks to the primary vertex is also applied [59]. Furthermore, the tracks associated with the decay products of weakly decaying kaons ("kinks") are rejected. The latter selection is not applied in the study of kaon production from the kink decay topology. The primary vertex position is determined from tracks, including short track segments reconstructed in the SPD [60]. The position of the primary vertex along the beam axis is required to be within 10 cm from the nominal interaction point. The positions along z of the SPD and track vertices are required to be compatible within 0.5 cm. This ensures a uniform acceptance and reconstruction efficiency in the pseudorapidity region |η| < 0.8 and rejects pileup events in pp collisions. Different PID detectors are used for the identification of the different particle species. Ordering by pT, from lowest to highest, the results are obtained using the dE/dx measured in the ITS and the TPC [61], the time of flight measured in the time-of-flight (TOF) detector [62], the Cherenkov angle measured in the high-momentum particle identification detector (HMPID) [63] and the TPC dE/dx in the relativistic rise region of the Bethe-Bloch curve. The performance of these devices is reported in Ref. [55].

A. Particle identification strategy

For the analysis presented here, pions, kaons, and (anti-)protons have been identified following the same analysis techniques as in the previous ALICE measurements.
A. Particle identification strategy

For the analysis presented here, pions, kaons, and (anti-)protons have been identified following the same analysis techniques as in the previous ALICE measurements. The ITS, TPC (low p T ) and TOF analyses are described in Refs. [14,64,65], while the HMPID and TPC (high p T ) analyses are documented in Refs. [28,35,66]. The kink analysis is described in Ref. [59]. In this paper, only the most relevant aspects of each specific analysis are described. In most analyses, the yield is extracted from the number-of-sigma (N σ ) distribution. This quantity is defined as

N σ i = (signal − ⟨signal⟩ i )/σ i ,

where i refers to a given particle species (i = π , K, p), signal is the detector PID signal (e.g., dE/dx), and ⟨signal⟩ i and σ i are the expected average PID signal of species i in a specific detector and its standard deviation, respectively. Figure 1 shows the pion-kaon and kaon-proton separation power as a function of p T for ITS, TPC, TOF, and HMPID. The separation power between two species i and j is defined as the distance between their expected average PID signals divided by the arithmetic mean of the corresponding resolutions,

S ij = |⟨signal⟩ i − ⟨signal⟩ j | / [(σ i + σ j )/2].

Note that the response for the individual detectors is momentum (p) dependent. However, since results are reported in transverse momentum bins, the separation power as a function of p T has been evaluated, averaging the momentum-dependent response over the pseudorapidity range |η| < 0.5. The p T ranges covered by the different analyses are summarized in Table I.

a. ITS analysis. The four outer layers of the ITS provide specific energy-loss measurements. The dynamic range of the analog readout of the detector is large enough [67] to provide dE/dx measurements for highly ionizing particles. Therefore, the ITS can be used as a standalone low-p T PID detector in the nonrelativistic region, where the dE/dx is proportional to 1/β 2 . For each track, the energy loss fluctuation effects are reduced by using a truncated mean: the average of the lowest two dE/dx values in case four values are measured, or a weighted sum of the lowest (weight 1) and the second lowest (weight 1/2), in case only three values are available. The plane (p; dE/dx) is divided into identification regions where each point is assigned a unique particle identity. The identity of a track is assigned based on which dE/dx curve the track is closest to, removing in this way the sensitivity to the dE/dx resolution. To reject electrons, a selection |N σ π | < 2 is applied. Using this strategy, it is possible to identify π and K with an efficiency of about 96-97% above p T = 0.3 GeV/c, and (p)p with an efficiency of 91-95% in the entire p T range of interest. In the lowest p T bin, the PID efficiency reaches ≈ 60%, ≈ 80%, and ≈ 91% for pions, kaons, and (anti-)protons, respectively. By means of this technique it is possible to identify π ± , K ± , and (p)p in Pb-Pb (pp) collisions in the p T ranges 0.1-0.7 GeV/c, 0.2-0.5 (0.6) GeV/c, and 0.3-0.6 (0.65) GeV/c, respectively.
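The N σ variable and the ITS electron rejection described above can be illustrated with a few lines of Python; the dE/dx numbers below are hypothetical placeholders in arbitrary units.

```python
import numpy as np

def n_sigma(signal, expected_signal, sigma):
    """Number-of-sigma variable: N_sigma^i = (signal - <signal>_i) / sigma_i."""
    return (signal - expected_signal) / sigma

# Illustrative use: keep tracks within 2 sigma of the pion dE/dx expectation
# (all values are hypothetical placeholders):
dEdx_measured = np.array([82.0, 95.0, 75.0])
dEdx_pion_expected = np.array([80.0, 81.0, 79.0])
sigma_pion = np.array([6.0, 6.1, 5.9])

keep = np.abs(n_sigma(dEdx_measured, dEdx_pion_expected, sigma_pion)) < 2.0
print(keep)   # -> [ True False  True]
```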
b. TOF analysis. The analysis with the TOF detector uses the subsample of tracks for which a time measurement with TOF is available. The time of flight t TOF is the difference between the measured particle arrival time τ TOF and the event time t 0 , namely t TOF = τ TOF − t 0 . In the ALICE experiment, the t 0 value can be obtained with different techniques [68]. The best precision on the t 0 evaluation is obtained by using the TOF detector itself. In this case, the t 0 is obtained on an event-by-event basis by using a combinatorial algorithm that compares the measured τ TOF with the expected one under different mass hypotheses. The procedure to evaluate t 0 with the TOF detector is fully efficient if enough reconstructed tracks are available, which is the case for 0-80% Pb-Pb collisions. The resolution on the t 0 evaluated with the TOF detector is better than 20 ps if more than 50 tracks are used for its determination. This improvement with respect to the Run 1 performance [68] is due to improved calibration procedures carried out during Run 2. Overall, the TOF signal resolution is about 60 ps in central Pb-Pb collisions. In pp and 80-90% Pb-Pb collisions the measurement of the event time relies on the T0 detector, with a resolution of ≈ 50 ps [68], or, in case it is not available, on the bunch crossing time, which has the worst resolution (≈ 200 ps). The PID procedure is based on a statistical unfolding of the time-of-flight N σ distribution. For each p T bin, the expected shapes for π ± , K ± , and (p)p are fitted to the t TOF distributions, allowing the three particles to be distinguished when the separation is as low as ≈ 2σ . An additional template is needed to account for the tracks that are wrongly associated with a hit in the TOF. The templates are built from data as described in Ref. [14]. For this purpose the length of measured tracks is used to compute a realistic distribution of the expected time of arrival for each mass hypothesis, and the signal shape is reproduced by sampling the parametrized TOF response function (described by a Gaussian with an exponential tail) obtained from data. Since the rapidity of a track depends on the particle mass, the fit is repeated for each mass hypothesis. The TOF analysis makes the identification of π ± , K ± , and (p)p in Pb-Pb (pp) collisions possible in the p T ranges 0.60-3.50 GeV/c, 1.00 (0.65)-3.50 GeV/c and 0.80-4.50 GeV/c, respectively.

c. TPC analysis. The TPC provides information for particle identification over a wide momentum range via the specific energy loss [55]. Up to 159 space-points per trajectory can be measured. A truncated mean, utilizing 60% of the available clusters, is employed in the dE/dx determination [61]. The dE/dx resolution for the Minimum Ionizing Particle (MIP) is ≈ 5.5% in peripheral and ≈ 6.5% in central Pb-Pb collisions. Particle identification on a track-by-track basis is possible in the region of momentum where particles are well separated by more than 3σ . This allows the identification of pions, kaons and (anti-)protons within the transverse momentum ranges 0.25-0.70 GeV/c, 0.25-0.45 GeV/c, and 0.45-0.90 GeV/c, respectively. The TPC dE/dx signal in the relativistic rise region (3 ≲ βγ ≲ 1000), where the average energy loss increases as ln(βγ ), allows the identification of charged pions, kaons, and (anti-)protons from p T ≈ 2-3 GeV/c up to p T = 20 GeV/c. The first step of the TPC high-p T analysis is the calibration of the PID signal; a detailed description of the dE/dx calibration procedure can be found in Refs. [28,35]. Particle identification requires precise knowledge of the dE/dx response and resolution σ . This is achieved using the PID signals of pure samples of secondary pions and protons originating from K 0 S and Λ decays, as well as a sample of tracks selected with TOF. In addition, measured K 0 S spectra are used to further constrain the TPC charged kaon response [28]. For different momentum intervals, a sum of four Gaussian functions associated with the pion, kaon, proton and electron signals is fitted to the dE/dx distribution.
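As an illustration of the truncated-mean estimator used for the TPC dE/dx, the sketch below averages the lowest 60% of simulated per-cluster charges; the gamma distribution merely mimics a Landau-like tail and is an assumption for illustration.

```python
import numpy as np

def truncated_mean_dedx(cluster_charges, fraction=0.6):
    """Truncated-mean dE/dx estimator: average the lowest `fraction` of the
    per-cluster charge measurements to suppress the high-charge tail (the
    TPC analysis uses 60% of the available clusters)."""
    q = np.sort(np.asarray(cluster_charges))
    n_keep = max(1, int(fraction * len(q)))
    return q[:n_keep].mean()

# Hypothetical cluster charges for a single track (arbitrary units):
rng = np.random.default_rng(1)
clusters = rng.gamma(shape=4.0, scale=20.0, size=120)  # long-tailed sample
print(truncated_mean_dedx(clusters))
```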
d. HMPID analysis. The HMPID performs the identification of charged hadrons based on the measurement of the emission angle of Cherenkov radiation. Starting from the association of a track with the MIP cluster centroid, the photon emission angle is reconstructed. The background, due to other tracks, secondaries and electronic noise, is discriminated by exploiting the Hough Transform Method (HTM) [69]. Particle identification with the HMPID is based on statistical unfolding. In pp collisions, a negligible background allows for the extraction of the particle yields from a three-Gaussian fit to the Cherenkov angle distributions in a narrow transverse momentum range. In the case of Pb-Pb collisions, the Cherenkov angle distribution for a narrow transverse momentum bin is described by the sum of three Gaussian distributions for the π ± , K ± , and (p)p signals and a sixth-order polynomial function for the background [28]. This background is due to misidentification in the high-occupancy events: the larger the angle, the larger the probability to find background clusters arising from other tracks or photons in the same event. This background is uniformly distributed on the chamber plane. The resolution in Pb-Pb events is the same as in pp collisions (≈ 4 mrad at β ≈ 1). In this analysis, the HMPID provides results in pp and Pb-Pb collisions in the transverse momentum ranges 1.5-4.0 GeV/c for π ± and K ± , and 1.5-6.0 GeV/c for (p)p.

e. Kink analysis. In addition to the particle identification techniques mentioned above, charged kaons can also be identified in the TPC using the kink topology of their two-body decay mode (e.g., K → μ + ν μ ) [59]. With the available statistics, this technique extends the PID of charged kaons up to 4 GeV/c in pp collisions and up to 6 GeV/c in Pb-Pb collisions. The kink analysis reported here is applied for the first time to Pb-Pb data. For the reconstruction of kaon kink decays, the algorithm is implemented within the fiducial volume of the TPC detector (130 < R < 200 cm), to ensure that an adequate number of clusters is found to reconstruct the tracks of both the mother and the daughter with the necessary precision to be able to identify the particles. The mother tracks of the kinks are selected using similar criteria as for other primary tracks, except that the minimum number of TPC clusters required is 30 instead of 70, because these tracks are shorter than the primary ones. Assuming the neutrino to be massless, the invariant mass of the decayed particle (M μν ) is estimated from the charged decay product track and the momentum of the neutrino, as reported in Ref. [59]. The main background is from charged pion decays, π → μ + ν μ (B.R. = 99.99%), which also give rise to a kink topology. A proper q T selection, where q T is the transverse momentum of the daughter track with respect to the mother's direction at the kink, can separate most of the pion kink background from the kaon kinks. Since the upper limits of the q T values for the decay channels π → μ + ν μ and K → μ + ν μ are 30 MeV/c and 236 MeV/c, respectively, a selection of q T > 120 MeV/c rejects more than 80% (85% in pp collisions) of the pion background. For further removal of the contamination from pion decays, an additional selection on the kink opening angle, as reported in Ref. [59], has been implemented. Finally, the TPC dE/dx of the mother tracks is required to satisfy |N σ K | < 3, which improves the purity of the sample. After these selections, the purity ranges from 99% at low p T to 92% (96% in pp collisions) at high p T according to Monte Carlo studies. The remaining very low background comes from random associations of charged tracks reconstructed as fake kinks. After applying all these topological selection criteria, the invariant mass of kaons (M μν ), obtained from the reconstruction of their decay products and integrated over the above-mentioned mother momentum ranges for pp and Pb-Pb collisions, is shown in Fig. 2.
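The kink kinematics described above (the q T variable and the invariant mass computed under the massless-neutrino assumption) can be sketched as follows; the momenta are hypothetical but chosen to reproduce a K → μ + ν μ topology, so the printed mass comes out near 0.49 GeV/c 2 and q T near its kinematic endpoint of 236 MeV/c.

```python
import numpy as np

M_MU = 0.1057  # GeV/c^2, muon mass

def kink_qt(p_mother, p_daughter):
    """Transverse momentum of the daughter w.r.t. the mother direction."""
    u = p_mother / np.linalg.norm(p_mother)
    return np.linalg.norm(np.cross(p_daughter, u))

def kink_invariant_mass(p_mother, p_daughter, m_daughter=M_MU):
    """Invariant mass of the decayed particle assuming a massless neutrino:
    p_nu = p_mother - p_daughter and E_nu = |p_nu|."""
    p_nu = p_mother - p_daughter
    e_nu = np.linalg.norm(p_nu)
    e_d = np.hypot(np.linalg.norm(p_daughter), m_daughter)
    e_tot = e_d + e_nu
    return np.sqrt(e_tot**2 - np.dot(p_mother, p_mother))

# Hypothetical K -> mu + nu kink candidate (momenta in GeV/c):
p_K = np.array([0.0, 0.0, 2.0])
p_mu = np.array([0.236, 0.0, 1.047])
print(kink_qt(p_K, p_mu), kink_invariant_mass(p_K, p_mu))
# -> qT = 0.236 (passes qT > 0.120), mass ~ 0.495 GeV/c^2
```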
B. Correction of raw spectra

To obtain the p T distributions of primary π ± , K ± , and (p)p, the raw spectra are corrected for the PID efficiency, misidentification probability, acceptance, and tracking efficiencies, following the procedures described in Ref. [14] for the ITS, TPC (low p T ) and TOF, in Ref. [28] for the HMPID and TPC (high p T ) and in Ref. [59] for the kink analysis. The acceptance, reconstruction, and tracking efficiencies are obtained from Monte Carlo simulated events generated with PYTHIA 8.1 (Monash 2013 tune) [70] for pp collisions and with HIJING [71] for Pb-Pb collisions. The particles are propagated through the detector using the GEANT 3 transport code [72], where the detector geometry and response, as well as the data taking conditions, are reproduced in detail. Since GEANT 3 does not describe well the interaction of low-momentum p and K − with the material, a correction to the efficiencies is estimated using GEANT 4 and FLUKA, respectively, which are known to describe such processes better [14,[73][74][75]. The PID efficiency and the misidentification probability are evaluated by performing the analysis on the Monte Carlo simulation, which requires that the simulated data are first tuned to reproduce the real PID response for each PID technique. The contamination due to weak decays of light flavor hadrons (mainly K 0 S affecting the π ± spectra, and Λ and Σ + affecting the (p)p spectra) and due to interactions with the material has to be computed and subtracted from the raw spectra. Since strangeness production is underestimated in the event generators and the interactions of low-p T particles with the material are not properly modeled in the transport codes, the secondary-particle contribution is evaluated with a data-driven approach. For each PID technique and species, the contribution of feed-down in a given p T interval is extracted by fitting the measured distributions of the DCA xy of the tracks identified as the given hadron species. The DCA xy distributions are modeled with three contributions: primary particles, secondary particles from weak decays of strange hadrons, and secondary particles produced in the interactions with the detector material. Their shapes are extracted for each p T interval and particle species from the Monte Carlo simulation described above. The contribution of secondaries is different for each PID analysis, due to the different track and PID selections, and is more important at low p T .

(FIG. 2 caption: Invariant mass M μν of kaon kink candidates before (upper) and after (lower) the topological selection. The peak centered at M μν = 0.49 GeV/c 2 corresponds to the decay channel K → μ + ν μ (B.R. = 63.55%), whereas the peak centered at M μν = 0.43 GeV/c 2 corresponds to the decay channel K → π + π 0 (B.R. = 20.66%), whose invariant mass is calculated with the wrong mass hypothesis.)

The measured Pb-Pb spectra are then normalized to the number of events in each centrality class. The spectra measured in pp collisions are also normalized to the number of inelastic collisions, obtained from the number of analyzed minimum bias events corrected with an inelastic normalization factor of 0.757 (± 2.51%), defined as the ratio between the V0 visible cross section and the inelastic pp cross section at √ s = 5.02 TeV [39].
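Schematically, the correction chain described in this subsection amounts to the following Python sketch; the efficiencies, feed-down fractions and event count are hypothetical placeholders, and in the real analysis the feed-down fractions come from the DCA xy template fits rather than being fixed numbers.

```python
import numpy as np

def corrected_spectrum(raw_counts, pid_efficiency, tracking_eff_acc,
                       feeddown_fraction, n_events, dpt, dy=1.0):
    """Schematic correction chain for a pT spectrum: subtract the
    data-driven secondary fraction, divide by the PID and tracking x
    acceptance efficiencies, and normalize per event and per bin width,
    giving d^2N/(dpT dy)."""
    primaries = raw_counts * (1.0 - feeddown_fraction)
    corrected = primaries / (pid_efficiency * tracking_eff_acc)
    return corrected / (n_events * dpt * dy)

# Hypothetical numbers for three pT bins of a proton analysis:
raw = np.array([5.0e4, 2.0e4, 4.0e3])
eff_pid = np.array([0.95, 0.92, 0.90])
eff_trk = np.array([0.75, 0.80, 0.82])
f_sec = np.array([0.25, 0.10, 0.04])   # largest at low pT, as in the text
print(corrected_spectrum(raw, eff_pid, eff_trk, f_sec, n_events=5e6, dpt=0.1))
```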
C. Systematic uncertainties

The evaluation of systematic uncertainties follows the procedures described in Ref. [14] for the ITS, TPC (low p T ), and TOF analyses, in Ref. [28] for the HMPID and TPC (high p T ) analyses and in Ref. [59] for the kink analysis. The main sources of systematic uncertainties for each analysis are summarized in Tables II and III for the Pb-Pb and pp analyses, respectively. Sources of systematic effects such as the different PID techniques, the feed-down correction, the imperfect description of the material budget in the Monte Carlo simulation, the knowledge of the hadronic interaction cross section in the detector material, the TPC-TOF and ITS-TPC matching efficiency, and the track selection have been taken into account. The systematic uncertainties related to the track selection were evaluated by varying the criteria used to select single tracks (number of reconstructed crossed rows in the TPC, number of available clusters in the ITS, DCA xy and DCA z , χ 2 /NDF of the reconstructed track). The ratio of the corrected spectra with modified selection criteria to the default case is computed to estimate the systematic uncertainty for a given source. A similar approach is used for the evaluation of the systematic uncertainties related to the PID procedure. The uncertainty due to the imperfect description of the material budget in the Monte Carlo simulation is estimated by varying the material budget in the simulation by ±7%. To account for the effect related to the imperfect knowledge of the hadronic interaction cross section in the detector material, different transport codes (GEANT3, GEANT4, and FLUKA) are compared. Finally, the uncertainties due to the feed-down correction procedure are estimated for all analyses by varying the range of the DCA xy fit, by using different track selections, by applying different cuts on the (longitudinal) DCA z , and by varying the particle composition of the Monte Carlo templates used in the fit. For the ITS analysis, the standard N σ method is compared with the yields obtained with a Bayesian PID technique [76]. Moreover, the Lorentz force causes shifts of the cluster position in the ITS, pushing the charge in opposite directions when switching the polarity of the magnetic field of the experiment (E × B effect) [14]. This effect is not fully reproduced in the Monte Carlo simulation and has been estimated by analyzing data samples collected with different magnetic field polarities. To estimate possible systematic effects deriving from the signal extraction in the low-p T TPC analysis, the selection on the number of TPC crossed rows was varied from 70 to 90, and the yield was alternatively computed from the sum of the bin contents of the N σ distribution in the range [−3, 3] instead of from a fit. The systematic uncertainty was obtained from the comparison to the nominal yield. Regarding the TPC analysis at high p T , the imprecise knowledge of both the Bethe-Bloch response and the dE/dx resolution is taken into account as a source of systematic uncertainty.
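The cut-variation prescription described above ("ratio of the corrected spectra with modified selection criteria to the default case") can be sketched as follows; taking the maximum deviation per p T bin is one possible convention, assumed here for illustration.

```python
import numpy as np

def cut_variation_systematic(default_spectrum, varied_spectra):
    """Relative systematic uncertainty from cut variations: the deviation
    of each varied, fully corrected spectrum from the default one. Here
    the maximum deviation per pT bin is taken (an RMS could be used
    instead)."""
    ratios = np.array([v / default_spectrum for v in varied_spectra])
    return np.max(np.abs(ratios - 1.0), axis=0)

default = np.array([10.0, 5.0, 1.0])
varied = [np.array([10.2, 4.9, 1.03]),   # e.g. 90 crossed rows instead of 70
          np.array([9.9, 5.1, 0.98])]    # e.g. tighter DCA_z selection
print(cut_variation_systematic(default, varied))  # per-bin relative syst.
```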
(TABLE II caption: Main sources and values of the relative systematic uncertainties (expressed in %) of the p T -differential yields of π ± , K ± , and (p)p obtained in the analysis of Pb-Pb collisions. When two values are reported, these correspond to the lowest and highest p T bin of the corresponding analysis, respectively. If only one value is reported, the systematic uncertainty is not p T -dependent. If not specified, the uncertainty is not centrality-dependent. The first three systematic uncertainties are common to all PID techniques. The maximum (among centrality classes) total systematic uncertainties and the centrality-independent ones are also shown.)

(TABLE III caption: Main sources and values of the relative systematic uncertainties (expressed in %) of the p T -differential yields of π ± , K ± , and (p)p obtained in the analysis of pp collisions. When two values are reported, these correspond to the lowest and highest p T bin of the corresponding analysis, respectively. If only one value is reported, the systematic uncertainty is not p T -dependent. The first three systematic uncertainties are common to all PID techniques. In the last row, the total systematic uncertainty is reported. Footnote a: TOF time response function with varied parameters.)

For the HMPID analysis, the selection on the distance between the extrapolated track point at the HMPID chamber planes and the corresponding MIP cluster centroid, d MIP−trk , is varied by ±1 cm to check its systematic effect on the matching of tracks with HMPID signals. Moreover, the systematic bias due to the background fitting, which represents the largest source, is estimated by changing the fitting function from a sixth-order polynomial to a power law of the tangent of the Cherenkov angle. This function is derived from geometrical considerations [77]. For the kink analysis, the systematic uncertainties are estimated by comparing the standard spectra with the ones obtained by varying the selection on the decay-product transverse momentum, the minimum number of TPC clusters, the kink radius, and the TPC N σ values of the mother tracks. By using the same methods as for the spectra, the systematic uncertainties for the p T -dependent particle ratios were computed, taking into account the correlated sources of uncertainty (mainly due to PID and tracking efficiency). Finally, for both the p T -dependent spectra and the ratios, the particle-multiplicity-dependent systematic uncertainties, i.e., those that are uncorrelated across different centrality bins, were determined. The improved reconstruction and track selection in the analysis of pp and Pb-Pb data at √ s NN = 5.02 TeV lead to reduced systematic uncertainties as compared to previously published results at √ s NN = 2.76 TeV.

III. RESULTS AND DISCUSSION

The measured p T spectra of π ± , K ± , and (p)p from the independent analyses are combined in the overlapping ranges using a weighted average, with the systematic and statistical uncertainties as weights. All the systematic uncertainties are considered to be uncorrelated across the different PID techniques, apart from those related to the ITS-TPC matching efficiency and the event selection. The correlated systematic uncertainties have been added in quadrature after the spectra have been combined. For a given hadron species, the spectra of particles and antiparticles are found to be compatible, and therefore all spectra reported in this section are shown for summed charges. Figure 3 shows the combined p T spectra of π ± , K ± , and (p)p measured in 0-90% Pb-Pb and inelastic pp collisions at √ s NN = 5.02 TeV. Results for Pb-Pb collisions are presented for different centrality classes. Scaling is applied in the plots to improve the visibility of the spectra. In the low-p T region, the maximum of the spectra is pushed toward higher momenta in going from peripheral to central Pb-Pb events. This effect is mass dependent and can be interpreted as a signature of radial flow [14]. At high p T , the spectra follow a power-law shape, as expected from perturbative QCD (pQCD) calculations [78].
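The combination of the overlapping measurements described at the beginning of this section can be illustrated with a standard inverse-variance weighted average; the three input values below are hypothetical, and the treatment of the correlated uncertainties (added in quadrature after combining) is omitted for brevity.

```python
import numpy as np

def combine_spectra(values, uncertainties):
    """Weighted average of overlapping measurements of the same pT bin from
    independent PID techniques, using the (uncorrelated) uncertainties as
    weights; returns the combined value and its uncertainty."""
    w = 1.0 / np.asarray(uncertainties)**2
    combined = np.sum(w * values) / np.sum(w)
    return combined, 1.0 / np.sqrt(np.sum(w))

# One overlapping bin measured by TOF, HMPID and TPC (hypothetical numbers):
vals = np.array([2.10, 2.04, 2.15])
errs = np.array([0.05, 0.08, 0.10])
print(combine_spectra(vals, errs))
```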
The p T -integrated yields, dN/dy, and the average transverse momentum, ⟨p T ⟩, are determined for the different centrality classes using an extrapolation to p T = 0. The extrapolation procedure is performed after fitting the measured spectra with the Boltzmann-Gibbs blast-wave [79] (for Pb-Pb) or the Lévy-Tsallis [80,81] (for pp) functions. In the most central Pb-Pb collisions (0-5%), the extrapolated fractions of the total yields are 5.84%, 5.20%, and 3.72% for pions, kaons, and (anti-)protons, respectively. The fractions increase as centrality decreases, reaching 8.63%, 9.36%, and 10.73% in the most peripheral collisions (80-90%). In pp collisions the fractions are 8.59%, 9.98%, and 12.61% for pions, kaons, and (anti-)protons, respectively. The systematic uncertainties are then propagated to the p T -integrated yields and the mean transverse momentum. For the uncertainty on dN/dy, the fit is performed with all data points shifted up by their full systematic uncertainties. To estimate the uncertainty on ⟨p T ⟩, the points in the 0-3 GeV/c range are shifted up and down within their systematic uncertainty to obtain the softest and hardest spectra. The maximum difference (in absolute value) between the integrated quantities obtained with the standard and modified spectra is included as part of the systematic uncertainty. Additionally, different functions (see footnote 1) were used to perform the extrapolation, and the largest differences were added to the previous contributions. The statistical uncertainties on the dN/dy and ⟨p T ⟩ values are evaluated by propagating the uncertainties on the fit parameters obtained directly from the fit procedure. The procedure described above is repeated using the systematic uncertainties uncorrelated across different centrality bins to extract the centrality-uncorrelated part of the systematic uncertainties on the p T -integrated particle yields and the average transverse momenta. In Table IV, the dN/dy and ⟨p T ⟩ are shown for Pb-Pb and pp collisions. For Pb-Pb collisions the values are given for different centrality ranges.

1 Lévy-Tsallis (Pb-Pb only); Boltzmann-Gibbs blast-wave (pp only); m T -exponential: A x exp(−√(x 2 + m 2 )/T ), where A is a normalization constant, T the temperature, and m the mass; Fermi-Dirac.

A. Particle production at low transverse momentum

The Boltzmann-Gibbs blast-wave function is a three-parameter simplified hydrodynamic model in which particle production is given by [79]

dN/(p T dp T ) ∝ ∫ 0 R r dr m T I 0 ( p T sinh ρ / T kin ) K 1 ( m T cosh ρ / T kin ).

The velocity profile ρ is given by

ρ = tanh −1 β T (r) = tanh −1 [ β s (r/R) n ],

where β T is the radial expansion velocity, m T the transverse mass (m T = √(m 2 + p T 2 )), T kin the temperature at the kinetic freeze-out, I 0 and K 1 the modified Bessel functions, r the radial distance in the transverse plane, R the radius of the fireball, β s the transverse expansion velocity at the surface, and n the exponent of the velocity profile. To quantify the centrality dependence of the spectral shapes at low p T , the Boltzmann-Gibbs blast-wave function has been simultaneously fitted to the charged pion, kaon and (anti-)proton p T spectra, using a common set of parameters but different normalization factors and masses. Although the absolute values of the parameters have a strong dependence on the p T range used for the fit [14], the evolution of the parameters with √ s NN can still be compared across different collision energies by using the same fitting ranges.
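The blast-wave shape given above is straightforward to evaluate numerically. The Python sketch below integrates the spectrum for protons using SciPy Bessel functions; the parameter values are only indicative of the 0-5% fits (for this profile the surface velocity relates to the average one as ⟨β T ⟩ = 2β s /(2 + n)), and the overall normalization is arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, k1

def blast_wave_spectrum(pt, mass, T_kin, beta_s, n, R=1.0):
    """Boltzmann-Gibbs blast-wave shape:
    dN/(pT dpT) ~ int_0^R r dr m_T I0(pT sinh(rho)/T) K1(m_T cosh(rho)/T),
    with rho(r) = atanh(beta_s (r/R)**n). Normalization is arbitrary."""
    mt = np.hypot(pt, mass)
    def integrand(r):
        rho = np.arctanh(beta_s * (r / R)**n)
        return (r * mt * i0(pt * np.sinh(rho) / T_kin)
                * k1(mt * np.cosh(rho) / T_kin))
    val, _ = quad(integrand, 0.0, R)
    return val

# Illustrative evaluation for protons with fit-like parameters:
for pt in (0.5, 1.0, 2.0):   # GeV/c
    print(pt, blast_wave_spectrum(pt, mass=0.938, T_kin=0.090,
                                  beta_s=0.9, n=0.74))
```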
The present analysis uses the same p T intervals for fitting as in a previous publication [14], namely 0.5-1 GeV/c, 0.2-1.5 GeV/c, and 0.3-3 GeV/c for charged pions, kaons, and (anti-)protons, respectively. Figure 4 shows the ratios of the spectra to the results of the fits for different centrality classes and particle species. If the shape of the p T distributions over the full measured p T range were purely driven by the collective radial expansion of the system, then the functions determined by fitting the data in a limited p T range would be expected to describe the spectral shapes in the full p T range. Within uncertainties, this is only observed for the proton p T spectra (up to 4 GeV/c) in 0-20% Pb-Pb collisions. A different situation is observed for pions where, due to their small mass and the large centrality-dependent feed-down contribution from resonance decays, the agreement with the model is worse than that observed for kaons and (anti-)protons. The p T interval where the model describes the data within uncertainties gets wider going from peripheral to central Pb-Pb collisions.

(FIG. 5 caption: Average expansion velocity (⟨β T ⟩) and kinetic freeze-out temperature (T kin ) progression from the simultaneous Boltzmann-Gibbs blast-wave fit to π ± , K ± , and p(p) spectra measured in Pb-Pb collisions at √ s NN = 5.02 and 2.76 TeV [14]. The correlated uncertainties from the global fit are shown as ellipses. The elliptic contours correspond to 1σ uncertainties, with statistical and systematic uncertainties added in quadrature.)

In Table V the blast-wave fit parameters ⟨β T ⟩, T kin and n in Pb-Pb collisions at √ s NN = 5.02 TeV, for different centrality classes, are listed. Figure 5 shows the correlation between ⟨β T ⟩ and T kin , both obtained from the simultaneous fit, for Pb-Pb collisions at √ s NN = 2.76 TeV and 5.02 TeV. For Pb-Pb collisions at √ s NN = 5.02 TeV, ⟨β T ⟩ increases from peripheral to central collisions, reaching ⟨β T ⟩ = 0.663 ± 0.003 in 0-5% central collisions, while T kin decreases from T kin = (0.161 ± 0.006) GeV to T kin = (0.090 ± 0.003) GeV, similarly to what was observed at lower energies. This can be interpreted as a possible indication of a more rapid expansion with increasing centrality [4,14]. In peripheral collisions this is consistent with the expectation of a shorter-lived fireball with stronger radial pressure gradients [82]. The value of the exponent of the velocity profile of the expansion, n, is about 0.74 in central collisions and increases up to 2.52 in peripheral collisions (see Table V). The values of n in peripheral collisions increase with respect to those in central collisions to reproduce the power-law tail of the p T spectra.

(TABLE V caption: Results of the combined Boltzmann-Gibbs blast-wave fits to the particle spectra measured in Pb-Pb collisions at √ s NN = 5.02 TeV, in the p T ranges 0.5-1 GeV/c, 0.2-1.5 GeV/c, and 0.3-3.0 GeV/c for π ± , K ± , and (p)p, respectively. Values in parentheses refer to the ratios to the values in Pb-Pb collisions at √ s NN = 2.76 TeV [14]. The charged-particle multiplicity values are taken from Refs. [84,85].)

Finally, in the most central (0-5%) Pb-Pb collisions the difference of the average transverse velocity between the two collision energies is ≈ 2.4 standard deviations. The value at 5.02 TeV is ≈ 2% larger than that measured at 2.76 TeV, whereas the kinetic freeze-out temperature is slightly smaller at the larger collision energy, but the difference is not significant.
Only for the most peripheral collisions is the kinetic freeze-out temperature slightly higher at 5.02 TeV than at 2.76 TeV. This is in contrast with the interpretation for central collisions, where the larger volume reaches kinetic freeze-out later, allowing the kinetic temperature to decrease further. It is worth questioning whether the blast-wave formalism is also applicable to these smaller systems, and it will be interesting to see whether models that can also describe small systems can explain this changing pattern. Moreover, we note that event and geometry biases may also play a role in peripheral Pb-Pb collisions [83]. Figure 6 shows the ⟨p T ⟩ for charged pions, kaons, and (anti-)protons as a function of the charged-particle multiplicity density ⟨dN ch /dη⟩ at midrapidity in Pb-Pb collisions at √ s NN = 5.02 and 2.76 TeV. Going from inelastic pp collisions to peripheral and central Pb-Pb collisions, the ⟨p T ⟩ increases with ⟨dN ch /dη⟩. The rise of the average p T gets steeper with increasing hadron mass; this effect is consistent with the presence of radial flow. Within uncertainties and for comparable charged-particle multiplicity densities, the results for both energies are consistent for 20-90% Pb-Pb collisions. For 0-20% Pb-Pb collisions, ⟨p T ⟩ is slightly higher at 5.02 TeV than at 2.76 TeV. The increase originates from the low-p T part of the spectra. Again, this is an effect consistent with a stronger radial flow in Pb-Pb collisions at the highest collision energy. Figure 7 shows the p T -integrated particle ratios, K/π and p/π, as a function of ⟨dN ch /dη⟩ in Pb-Pb collisions at √ s NN = 5.02 and 2.76 TeV. The systematic uncertainties on the integrated ratios have been evaluated using the uncertainties on the p T -dependent ratios, taking into account the part of the uncertainties correlated among the different particle species. No significant energy dependence is observed, indicating that there is little or no dependence of the hadrochemistry on the collision energy. The K/π ratio hints at a small increase with centrality. The effect is consistent with the observed increase of strange to nonstrange hadron production in heavy-ion collisions compared to inelastic pp collisions [86]. The p/π ratio suggests a small decrease with centrality. Using the centrality-uncorrelated uncertainties, the difference between the ratio in the most central (0-5%) and peripheral (80-90%) collisions is ≈ 4.7 standard deviations, thus the difference is significant. The decreasing ratio is therefore consistent with the hypothesis of antibaryon-baryon annihilation in the hadronic phase [16][17][18][19]87,88]. The effect is expected to be less important for the more dilute system created in peripheral collisions. Recently, a new procedure has been implemented to quantitatively estimate properties of the quark-gluon plasma created in ultrarelativistic heavy-ion collisions, utilizing Bayesian statistics and a multiparameter model-to-data comparison [89]. The study is performed using a recently developed parametric initial-condition model, reduced-thickness event-by-event nuclear topology (TRENTo) [90], which interpolates among a general class of energy-momentum distributions in the initial condition, and a modern hybrid model. The average transverse momentum as a function of ⟨dN ch /dη⟩ is quite well reproduced by the model. The model predicts that the kaon-to-pion ratio should decrease with increasing charged-particle multiplicity density, while the data show an increase with ⟨dN ch /dη⟩.
Within uncertainties, the model agrees with the data for the most central Pb-Pb collisions. The trend of the proton-to-pion ratio is qualitatively well captured by the model, but the values of the centrality-dependent ratios are overestimated. Figure 8 shows the K/π and p/π ratios as a function of p T for Pb-Pb collisions at √ s NN = 2.76 and 5.02 TeV. The results are also compared with inelastic pp collisions at √ s = 5.02 TeV. Within uncertainties, in the K/π ratio, no significant energy dependence is observed in heavy-ion data over the full p T interval. The ratios measured in 60-80% Pb-Pb collisions at both √ s NN agree within systematic uncertainties with that for inelastic pp collisions over the full p T range. Given that in pp collisions at LHC energies the ratio as a function of p T does not change with √ s [66], and given the similarity between pp and peripheral Pb-Pb collisions, the large difference observed is likely a systematic effect of the measurement and not a physics effect.

B. Intermediate transverse momentum

In general, the particle ratios exhibit a steep increase with p T going from 0 to 3 GeV/c, while for p T larger than 10 GeV/c little or no p T dependence is observed. Going from peripheral to the most central Pb-Pb collisions, the ratios in the region around p T ≈ 3 GeV/c grow continuously. A hint of an enhancement with respect to inelastic pp collisions is observed at p T ≈ 3 GeV/c. As pointed out in previous publications [14,28], the effect could be a consequence of radial flow, which affects kaons more than pions. The p/π ratios measured in heavy-ion collisions exhibit a pronounced enhancement with respect to inelastic pp collisions, reaching a value of about 0.8 at p T = 3 GeV/c. This is reminiscent of the increase in the baryon-to-meson ratio observed at RHIC in the intermediate-p T region [45,91]. Such an increase with p T is due to the mass ordering induced by the radial flow (heavier particles are boosted to higher p T by the collective motion) and is an intrinsic feature of hydrodynamical models. It should be noted that this is also suggestive of the interplay of the hydrodynamic expansion of the system with the recombination picture, as discussed in the introduction. However, since recombination mainly affects baryon-to-meson ratios, it would not explain the bump which is also observed in the kaon-to-pion ratio. The shift of the peak towards higher p T in the proton-to-pion ratio is consistent with the larger radial flow measured in Pb-Pb at √ s NN = 5.02 TeV compared to the one measured at √ s NN = 2.76 TeV. The mass dependence of the radial flow also explains the observation that the maximum of the p/π ratio is located at a larger p T as compared to the K/π ratio. The radial flow is expected to be stronger in the most central collisions; this explains the slight shift in the location of the maximum when central and peripheral data are compared. Finally, particle ratios at high p T in Pb-Pb collisions at both energies become similar to those in pp collisions, suggesting that vacuumlike fragmentation processes dominate there [35]. For p T < 10 GeV/c, protons appear to be less suppressed than kaons and pions, consistent with the particle ratios shown in Fig. 8. The large difference between the suppression of different species is consistent with a mass ordering related to the radial flow.
It is worth noting that measurements at 2.76 TeV [92] showed that the mesons, including the φ(1020), have a smaller R AA than protons, indicating a baryon-meson ordering; so, while there is a strong radial-flow component, other effects also influence R AA in this p T region. At larger p T , all particle species are equally suppressed. Despite the strong energy loss observed in the most central heavy-ion collisions, the particle composition and ratios at high p T are similar to those in vacuum. This suggests that jet quenching does not affect the particle composition significantly.

C. Particle production at high transverse momentum

In the identified-particle R AA for peripheral Pb-Pb collisions, an apparent jet-quenching signal is observed (R AA < 1), although for similar particle densities in smaller systems (like p-Pb collisions) no jet-quenching signatures have been reported [93]. It has been argued that peripheral A-A collisions can be significantly affected by event-selection and geometry biases [83], leading to an apparent suppression in R AA even if jet quenching and shadowing are absent. The presence of biases on the R AA measurement in peripheral Pb-Pb collisions has been confirmed in Ref. [94]: the geometry bias sets in at mid-central collisions, reaching about 15% for 70-80% Pb-Pb collisions. The additional effect of the selection bias becomes noticeable above the 60% percentile and is significant above the 80% percentile, where it is larger than 20%. All hard probes should be similarly affected [83], in particular the leading pions, kaons and (anti-)protons reported in the present paper. Figure 10 shows the identified-particle nuclear modification factors at the two collision energies. No significant dependence on the collision energy is observed, as was also observed for unidentified charged particles [95].

IV. COMPARISON TO MODELS

The results for identified particle production have been compared with the latest hydrodynamic model calculations based on the widely accepted "standard" picture of heavy-ion collisions [96]. These models all have similar ingredients: an initial-state model provides the starting point for a viscous hydrodynamic calculation; chemical freeze-out occurs on a constant-temperature hypersurface, where local particle production is modeled with a statistical thermal model; and, finally, the hadronic system is allowed to reinteract. The models used are the iEBE-VISHNU hybrid model [29,30], McGill [31], and EPOS [97]. In the following, specific features of each of them are described: (i) The iEBE-VISHNU model is an event-by-event version of the VISHNU hybrid model [98], which combines (2+1)-dimensional viscous hydrodynamics (VISH2+1) [99,100], to describe the expansion of the sQGP fireball, with a hadron cascade model (UrQMD) [101,102] to simulate the evolution of the system in the hadronic phase. The predictions of iEBE-VISHNU using either TRENTo (Sec. III A) or a multiphase transport model (AMPT) [103] as initial conditions give a good description of flow measurements in √ s NN = 2.76 TeV Pb-Pb collisions. TRENTo parametrizes the initial entropy density via the reduced thickness function; AMPT constructs the initial energy density profiles using the energy decomposition of individual partons. Predictions by the iEBE-VISHNU hybrid model are available for p T up to 3 GeV/c. (ii) The McGill model initial conditions rely on a new formulation of the IP-Glasma model [104], which provides realistic event-by-event fluctuations and nonzero pre-equilibrium flow at the early stage of heavy-ion collisions.
Individual collision systems are evolved using relativistic hydrodynamics with nonzero shear and bulk viscosities [105]. As the density of the system drops, fluid cells are converted into hadrons and further propagated microscopically using a hadronic cascade model [101,102]. The McGill predictions are available for p T up to 4 GeV/c and centralities 0-60%. (iii) The EPOS model, in the version EPOS3, is a phenomenological parton-based model that aims at modeling the full p T range. EPOS is based on Gribov-Regge multiple-scattering theory, perturbative QCD, and string fragmentation [105]. However, the dense regions of the system created in the collisions, the so-called core, are treated as a QGP and modeled with a hydrodynamic evolution followed by statistical hadronization. EPOS3 implements saturation in the initial state as predicted by the Color Glass Condensate model [106], a full viscous hydrodynamic simulation of the core, and a hadronic cascade, not present in the previous version of the model. EPOS3 also implements a new physics process that accounts for hydrodynamically expanding bulk matter, jets, and the interaction between the two, important for particle production at intermediate p T [107] and reminiscent of the recombination mechanism [32,33]. Figure 11 shows the ratios of the p T spectra in Pb-Pb collisions at √ s NN = 5.02 TeV to the models described above for p T < 4 GeV/c. In the low-p T regime, one expects bulk particle production to dominate, so the absence of hard physics processes in the iEBE-VISHNU-TRENTo, iEBE-VISHNU-AMPT, and McGill calculations is a minor issue. One observes that all models, in general, describe the spectra and the centrality dependence around p T ≈ 1 GeV/c within 20%. For p T < 3 GeV/c the agreement with data is within 30%. The models agree with the proton (kaon) data over a broader p T range than for kaons (pions). This mass hierarchy is expected from the hydrodynamic expansion, which introduces a mass dependence via the flow velocity: the larger the mass, the larger the p T boost. Similarly, it can be noticed that for the most central collisions the models describe the data over a broader p T range than in peripheral ones. This is as expected from simple considerations. In central collisions, the system is larger and so the hydrodynamic expansion lasts longer, resulting in a stronger flow. At the same time, the fraction of the system involved in this expansion, the so-called core (e.g., the fraction of participant partons experiencing two or more binary collisions), is larger for the most central collisions. One can conclude that all four model calculations qualitatively describe the centrality dependence of radial flow and how it is imprinted on the different particle species. Like the simplified blast-wave fits in Fig. 4, the two iEBE-VISHNU calculations also have difficulties in describing the very low p T (p T < 0.5 GeV/c) pion spectra. Figure 12 shows the ratios of the p T spectra in Pb-Pb collisions at √ s NN = 5.02 TeV to the EPOS3 model up to 10 GeV/c in p T . EPOS3 includes both soft and hard physics processes, which should give a better description of the data at high p T and in peripheral collisions. However, its agreement with data is not significantly better than for the other models in the same p T interval (p T < 3 GeV/c) and, at high p T , it is about a factor of 2 off with respect to the data. For completeness, Figs. 13, 14, and 15 show the comparison of the models with the p T -dependent particle ratios.
The larger proton-to-pion ratio in EPOS3 than observed in the data can be understood as due to the underestimated pion yield in the model (see Fig. 12). To compare the energy evolution of the spectra between data and models, Fig. 16 shows the ratio of the π ± , K ± , and (p)p p T spectra measured at √ s NN = 5.02 TeV to those measured at √ s NN = 2.76 TeV, together with the same ratios obtained from model predictions. For the McGill model, predictions at √ s NN = 2.76 TeV are currently not available. For central collisions, the agreement of the energy evolution in data and predictions is very good for both VISHNU initial-state models, while for peripheral collisions the AMPT initial conditions perform better. For EPOS3, instead, a good agreement with data can be observed for both central and peripheral collisions. The comparison of model predictions to the ALICE measurements of anisotropic flow [108][109][110] can be useful to obtain tighter constraints on them.

V. CONCLUSIONS

In this paper, a comprehensive measurement of π ± , K ± and (p)p production in inelastic pp and 0-90% central Pb-Pb collisions at √ s NN = 5.02 TeV at the LHC is presented. A clear evolution of the spectra with centrality is observed, with a power-law-like behavior at high p T and a flattening of the spectra at low p T , confirming previous results obtained in Pb-Pb collisions at √ s NN = 2.76 TeV. These features are compatible with the development of a strong collective flow with centrality, which dominates the spectral shapes up to relatively high p T in central collisions. The p T -integrated particle ratios as a function of ⟨dN ch /dη⟩ in Pb-Pb collisions at √ s NN = 5.02 TeV are consistent with those measured at √ s NN = 2.76 TeV, showing no significant dependence on the collision energy. A blast-wave analysis of the p T spectra gives an average transverse expansion velocity of ⟨β T ⟩ = 0.663 ± 0.004 in the most central (0-5%) Pb-Pb collisions, which is ≈ 2% larger than at √ s NN = 2.76 TeV, with a difference of ≈ 2.4 standard deviations between the two energies. The p T -dependent particle ratios (p/π , K/π ) show distinctive peaks at p T ≈ 3 GeV/c in central Pb-Pb collisions, more pronounced for the proton-to-pion ratio. Such an increase with p T is due to the mass ordering induced by the radial flow, which affects heavier particles more than lighter ones. The p T of the peak position increases slightly with energy, in particular for the proton-to-pion ratio, indicating that the initially hotter system is longer lived, so that the radial flow is stronger. At high p T , both particle ratios at √ s NN = 5.02 TeV are similar to those measured at √ s NN = 2.76 TeV and in pp collisions, suggesting that vacuumlike fragmentation processes dominate there. No significant evolution of the nuclear modification at high p T with the center-of-mass energy is observed. At high p T , pions, kaons, and (anti-)protons are equally suppressed, as observed at √ s NN = 2.76 TeV. This suggests that the large energy loss leading to the suppression is not associated with a strong mass ordering or large fragmentation differences between baryons and mesons. Transverse momentum spectra and particle ratios in Pb-Pb collisions are compared to different model calculations based on the standard QGP picture, which are found to describe the observed trends satisfactorily. For p T < 3 GeV/c, all models agree with the data within 30%; at p T ≈ 1 GeV/c they describe the spectra and their centrality dependence within 20%.
ACKNOWLEDGMENTS

The ALICE Collaboration thanks all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the outstanding performance of the LHC complex. The ALICE Collaboration gratefully acknowledges the resources and support provided by all Grid centres and the Worldwide LHC Computing Grid (WLCG) collaboration. The ALICE Collaboration acknowledges the funding agencies for their support in building and running the ALICE detector.
Instability of vibrations of an oscillator moving at high speed through a tunnel embedded in soft soil

Keywords: Tunnel embedded in soft soil; High-speed oscillator; Vibration instability; Indirect BEM; D-decomposition method; Critical velocity for instability

Abstract

This paper investigates the instability of vertical vibrations of an object moving uniformly through a tunnel embedded in soft soil. Using the indirect Boundary Element Method in the frequency domain, the equivalent dynamic stiffness of the tunnel-soil system at the point of contact with the moving object, modelled as a mass-spring system or as the limiting case of a single mass, is computed numerically. Using the equivalent stiffness, the original 2.5D model is reduced to an equivalent discrete model, whose parameters depend on the vibration frequency and the object's velocity. The critical velocity beyond which the instability of the object vibration may occur is found, and it is the same for both the oscillator and the single mass. This critical velocity turns out to be much larger than the operational velocity of high-speed trains and ultra-high-speed transportation vehicles. This means that the model adopted in this paper does not predict the vibrations of Maglev and Hyperloop vehicles to become unstable. Furthermore, the critical velocity for resonance of the system is found to be slightly smaller than the velocity of Rayleigh waves, which is very similar to that for the model of a half-space with a regular track placed on top (with damping). However, for that model, the critical velocity for instability is only slightly larger than the critical velocity for resonance (of the undamped system), while for the current model the critical velocity for instability is much larger than the critical velocity for resonance due to the large stiffness of the tunnel and the radiation damping of the waves excited in the tunnel. A parametric study shows that the thickness and material damping ratio of the tunnel, the stiffness of the soil and the burial depth have a stabilising effect, while the damping of the soil may have a slightly destabilising effect (i.e., lower critical velocity for instability). In order to investigate the instability of the moving object for velocities larger than the identified critical velocity for instability, we employ the D-decomposition method and find instability domains in the space of system parameters. In addition, the dependency of the critical mass and stiffness on the velocity is found. We conclude that the higher the velocity, the smaller the mass of the object should be to ensure stability (single mass case); moreover, the higher the velocity, the larger the stiffness of the spring should be when a spring is added (oscillator case). Finally, in view of the stability assessment of Maglev and Hyperloop vehicles, the approach presented in this paper
Introduction

Dynamic effects are of significance for modern high-speed trains, as propagating waves may be generated in the railway track and subsoil. The study of dynamic train-track-soil interactions has been of interest to researchers for decades. Popp et al. [1] gave a comprehensive review of the existing models that can be used to study the dynamic train-track-soil interaction. In general, studies on moving trains fall into two categories. The first category concerns environmental vibrations induced by moving trains, studied to assess vibration hindrance and to ensure the safety of nearby structures. The second category concerns the instability of vibrations of moving trains, studied to ensure the safety and comfort of the passengers. For the former category, the steady-state regime is assumed when investigating the dynamic amplification due to resonance [e.g., [2][3][4]]. Other studies in the first category are devoted to transition radiation, which occurs when a train passes an inhomogeneity [5,6]. For studies falling into the second category, the train is usually modelled as a single- or multi-degree-of-freedom system [7]. When instability occurs, the free vibration (i.e., the vibration in the absence of an external force) of the train grows exponentially, resulting in an infinite displacement when time goes to infinity, which implies that a steady-state solution does not exist.
This is very different from resonance, which happens when the steady-state response induced by an external moving load is extreme (either bounded or not, depending on the presence of damping). Both phenomena come with a certain critical velocity. The critical velocity for resonance is defined as the velocity at which the steady-state response induced by a moving load is extreme (i.e., resonance takes place at certain specific velocities) [8,9], while the critical velocity for instability is defined as the velocity beyond which instability can occur (i.e., instability occurs in a range of velocities). Another crucial difference between resonance and instability is that resonance can be totally removed by increasing damping, while damping mostly shifts the instability domain, for example, to a region of larger velocities [10,11]. The first study on instability of vibrations is that of a mass that moves uniformly along an elastically supported beam [10]. The physical explanation of instability was given by Metrikine [12], who argued that the instability is caused by the radiated anomalous Doppler waves [13], which increase the energy of the vibrating object. In addition, the physical mechanism of instability was discussed using the laws of conservation of energy and momentum [12,14]. After the pioneering works on the instability phenomenon [10,15], several aspects that influence the stability of a moving object have been discussed. For example, the effect of thermal stresses in the structure was studied considering different models of moving oscillators [11,16,17]. Other papers considered the effect of more than one contact point between the object and the structure [17], and of contact nonlinearities [18]. Moreover, a more accurate beam model to represent the rail was considered, and a comparison between the Timoshenko and Euler-Bernoulli beam models was given [19]. Four different beam and plate models were considered in [14]. Furthermore, Verichev et al. introduced other complexities in their model, i.e., a bogie model which consists of a rigid bar of finite length on two identical supports [20]. Recently, Mazilu studied the instability of a train of oscillators moving along an infinite Euler-Bernoulli beam on a viscoelastic foundation [21]. Another work focused on the stability of a moving mass in contact with a system of two parallel elastically connected beams, with one of them being axially compressed [22]. Later, the stability of vibrations of a railway vehicle moving along an infinite three-beam/foundation system was considered, with an emphasis on the effect of the damping and stiffness of the secondary suspension of the railway vehicle [23]. For a simpler model, Dimitrovová presented a semi-analytical solution for the evolution of the beam deflection shapes and oscillator vibrations [24]. In that paper, not only the onset of instability but also its severity is addressed. All the above works used a one-dimensional or two-dimensional model of the railway track, which may be less accurate than three-dimensional models [25,26] for predicting the instability of moving trains. However, they all convey the very important message that, in the presence of damping, the instability of moving trains may happen at speeds that exceed the critical velocity for resonance of the undamped system (which is equal to the minimum phase velocity of waves in the structure); that is, the critical velocity for instability is larger than the critical velocity for resonance.
The few existing works related to instability analysis employing three-dimensional models of the railway track consider trains moving on a track founded on the ground surface [25,26]. It has been shown that the critical velocity for instability of the moving object is close to the Rayleigh wave speed in the soil. The instability of trains moving through an underground tunnel has not been analysed yet. In this paper, we therefore aim to conduct the instability analysis for an oscillator, and for the limiting case of a single mass, moving through a tunnel embedded in soft soil. We will investigate whether the critical velocity for instability of the moving object is also close to the Rayleigh wave speed in the soil, for both a shallow and a deep tunnel. The results are of practical relevance especially for contemporary high-speed railway tracks as well as for upcoming ultra-high-speed transportation systems such as Maglev and Hyperloop [27][28][29]. The paper is organised as follows. We present the model and a framework to conduct the instability analysis in Section 2. Section 3 discusses the 2.5D Green's functions of a full-space and a half-space [30,31], presents the Green's functions of the shell and the formulation of the indirect Boundary Element Method (BEM). Validations of the proposed indirect BEM are given in Section 4. In Section 5, we conduct the instability analysis of the single mass and the mass-spring oscillator. To this end, we study the equivalent dynamic stiffness to find the critical mass and stiffness, and analyse the effect of the tunnel thickness, the material damping ratios in the tunnel-soil system, the Lamé parameters of the soil and the burial depth of the tunnel on the critical velocity for instability. Moreover, the dependency of the critical mass and stiffness on the velocity is investigated. Conclusions are given in Section 6.

Model description

In this paper, we study the vibrations of an object moving through a tunnel embedded in soft soil using a so-called 2.5D model. The soil is modelled as an elastic continuum, whereas the tunnel is modelled by the Flügge shell theory [32]. Both the soil and the tunnel are assumed to be linear, elastic, homogeneous and isotropic. The soil is characterised by density ρ 1 , Poisson's ratio ν 1 , and complex Lamé parameters λ 1 * = λ 1 (1 + 2i sgn(ω) ξ 1 ) and μ 1 * = μ 1 (1 + 2i sgn(ω) ξ 1 ), where i is the imaginary unit, ω is the frequency and ξ 1 the material damping ratio of the soil related to the adopted hysteretic damping model. The parameters of the tunnel are density ρ 2 , Poisson's ratio ν 2 , and complex Lamé parameters λ 2 * = λ 2 (1 + 2i sgn(ω) ξ 2 ) and μ 2 * = μ 2 (1 + 2i sgn(ω) ξ 2 ), with ξ 2 being the hysteretic material damping ratio of the tunnel. The burial depth of the tunnel is H, and its inner and outer radii are R i and R o . The object is modelled by a mass-spring oscillator (see Fig. 1), which is characterised by its mass M and spring stiffness K, and moves through the tunnel with a constant velocity V. Note that there is no vertical external force acting on the mass, because the presence of such a force is irrelevant for the dynamic-instability analysis. Shallow and deep embedded tunnels are considered in this paper. For the shallow tunnel, the soil medium is modelled as a half-space, while for the deep tunnel, the soil is modelled as a full-space. Fig. 1 only shows the configuration of the shallow tunnel. If H → ∞, it essentially becomes a deep tunnel.
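For reference, the hysteretic damping model above translates into complex wave speeds of the continuum as sketched below in Python; the soil parameters are hypothetical placeholders, not the values used later in the paper.

```python
import numpy as np

def complex_lame(lam, mu, xi, omega):
    """Complex Lame parameters of the hysteretic damping model:
    lam* = lam (1 + 2 i sgn(omega) xi), and likewise for mu*."""
    factor = 1.0 + 2.0j * np.sign(omega) * xi
    return lam * factor, mu * factor

def body_wave_speeds(lam_c, mu_c, rho):
    """Complex compressional (cp) and shear (cs) wave speeds."""
    cp = np.sqrt((lam_c + 2.0 * mu_c) / rho)
    cs = np.sqrt(mu_c / rho)
    return cp, cs

# Hypothetical soft-soil parameters (SI units), for illustration only:
lam1, mu1, rho1, xi1 = 5.0e7, 3.0e7, 1800.0, 0.02
lam_c, mu_c = complex_lame(lam1, mu1, xi1, omega=2.0 * np.pi * 10.0)
print(body_wave_speeds(lam_c, mu_c, rho1))  # a soft soil: cs ~ 130 m/s
```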
The governing equations of the shell are presented later, in Section 3. The current section only presents the framework to conduct the instability analysis.

Method of solution

To analyse instability of vibrations of the moving object, the concept of the equivalent stiffness (also referred to as dynamic stiffness) is employed [33,34]. The procedure is illustrated in Fig. 2 and goes as follows. First, we compute the steady-state response of the system shown in Fig. 2(a), which is subject to a uniformly moving oscillatory point load applied at the tunnel invert. The oscillatory load has the form $P(t) = P_0 \exp(\mathrm{i}\Omega t)$, in which $P_0$ is the amplitude and $\Omega = 2\pi f_0$ is the angular frequency; $f_0$ is the load frequency in Hz. $P(t)$ essentially represents a harmonic interaction force between the moving object and the tunnel-soil system. The steady-state radial displacement at the loading point can be expressed as $U_0 \exp(\mathrm{i}\Omega t)$, where $U_0$ is the complex amplitude of this harmonic vibration. The indirect Boundary Element Method is employed to compute the response of the system, which is presented in detail in Section 3. From the result, we obtain the equivalent stiffness of the tunnel-soil system at the loading point using the following relation:

$$K_{\mathrm{eq}}(\Omega, V) = \frac{P_0}{U_0(\Omega, V)}. \qquad (1)$$

By doing so, the original 2.5D model can be reduced to an equivalent discrete model, shown in Fig. 2(b), consisting of a mass-spring system resting on an equivalent spring with a complex-valued stiffness $K_{\mathrm{eq}}(\Omega, V)$, which depends on the frequency and velocity of the oscillator. To study the instability of a moving oscillator, we apply, in accordance with previous dynamic-instability studies [11,24], the Laplace integral transform with respect to time $t$ ($s$ denotes the Laplace parameter) to the well-known governing equation of the vertical motion of the oscillator. Assuming zero initial conditions (which can be done as they do not influence the stability [11,24]), the following characteristic equation for the free vibration of the oscillator is obtained:

$$Ms^2 + \frac{K\,K_{\mathrm{eq}}(s, V)}{K + K_{\mathrm{eq}}(s, V)} = 0. \qquad (3)$$

The roots of Eq. (3) determine the (complex) eigenfrequencies, $\Omega = -\mathrm{i}s$, of the vertical motion of the oscillator as it interacts with the tunnel-soil system. If one of the roots $s$ of the characteristic equation has a positive real part, the response will grow exponentially, which implies that the vertical vibration of the oscillator is unstable [11,19,25,26,35]. Obviously, the equivalent stiffness must be single-valued for all $\Omega$ in order for Eq. (3) to be meaningful; see also Section 2.3. It has been shown in [11] that the instability of a moving object may occur if and only if the imaginary part of the equivalent stiffness $K_{\mathrm{eq}}$ is negative in a frequency band. In [11], a single moving mass is considered, whose motion is necessarily unstable as soon as Im($K_{\mathrm{eq}}$) < 0 in some frequency band. The imaginary part of the equivalent stiffness can be considered to be the damping coefficient of the dashpot in the equivalent mass-spring system. A negative imaginary part of the equivalent stiffness indicates negative damping, which makes the vibration of the moving mass unstable. It would be very laborious to determine all the roots of the characteristic equation and check whether one of these roots has a positive real part. Alternatively, we follow a convenient method of root analysis, namely the D-decomposition method, to determine the number of 'unstable roots'. This method has been used in several papers [11,19,25,26,35].
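As a concrete illustration of the necessary instability condition Im($K_{\mathrm{eq}}$) < 0, the following minimal sketch (Python/NumPy; an illustrative language choice, not used in the paper) scans the low-frequency band for a given velocity. The routine `K_eq(Omega, V)` is assumed to be supplied by the tunnel-soil solver of Section 3; its name and the grid limits are assumptions for illustration.

```python
import numpy as np

def violates_positive_damping(K_eq, V, f_max=40.0, n=2001):
    """Check the necessary condition for instability: Im(K_eq) < 0 somewhere.

    K_eq  : callable K_eq(Omega, V) -> complex equivalent stiffness
            (assumed to be provided by the indirect BEM computation).
    V     : velocity of the moving object [m/s].
    f_max : upper bound of the scanned band [Hz]; instability is decided
            by the low-frequency behaviour (see Section 5.1).
    """
    Omega = 2.0 * np.pi * np.linspace(1e-3, f_max, n)  # skip Omega = 0
    im = np.array([K_eq(w, V).imag for w in Omega])
    return bool(np.any(im < 0.0)), Omega[im < 0.0]     # flag + offending band
```

For a single moving mass this condition is also sufficient [11]; for the oscillator it only marks candidate velocities, and the D-decomposition method described next resolves the actual stability domains.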
The idea of this method is to map the imaginary axis of the complex $s$ plane (i.e., the border between stability and instability) onto the plane of a system parameter, $M$ or $K$, which is allowed to be complex. The mapped line divides the $M$ or $K$ plane into domains with different numbers of unstable roots. It is noted that the imaginary part of the complex system parameter has no physical meaning. Only the positive real part of the system parameter is physical, and the crucial question is whether one of the so-called instability domains overlays the positive real axis. The procedure is as follows. Consider $s = \mathrm{i}\Omega$, where $\Omega$ serves as the parameter of the mapping, is real-valued and has the meaning of frequency (the same $\Omega$ as introduced above), and has to be varied from minus to plus infinity. We discuss the following two cases in this paper. The first one is the limit case of a single mass moving through the tunnel, thus assuming $K \to \infty$. The characteristic equation for a single mass is reduced from Eq. (3) to

$$Ms^2 + K_{\mathrm{eq}}(s, V) = 0. \qquad (4)$$

Substituting $s = \mathrm{i}\Omega$ into Eq. (4) gives the following rule for the mapping:

$$M = \frac{K_{\mathrm{eq}}(\mathrm{i}\Omega, V)}{\Omega^2}. \qquad (5)$$

The second case we consider is the more general one of the moving oscillator, taking into account both the mass and the spring of the oscillator. In this case, the stiffness $K$ will be used as the parameter for the D-decomposition, assuming $M$ to be constant, because this is of practical relevance. Taking the limit case of the moving mass as the starting point, it is interesting to know what the added stiffness of the spring should be to render the oscillator vibration unstable (see also Section 5.2). Substituting $s = \mathrm{i}\Omega$ into Eq. (3), we get the following mapping rule for the complex $K$ plane:

$$K = \frac{M\Omega^2\,K_{\mathrm{eq}}(\mathrm{i}\Omega, V)}{K_{\mathrm{eq}}(\mathrm{i}\Omega, V) - M\Omega^2}. \qquad (6)$$

By replacing $s$ by $\mathrm{i}\Omega$ in $K_{\mathrm{eq}}(s, V)$, which essentially entails considering the limit case of $s \to \mathrm{i}\Omega$, one can use the equivalent stiffness (Eq. (1)), which is determined based on the steady-state response to the harmonic loading. Employing Eq. (5) or (6) as the mapping rule, one can plot the D-decomposition curve, for example, Im($M$) versus Re($M$) (as shown in Fig. 11, for example), where $\Omega$ is the running parameter along this curve. One side of the D-decomposition curve is shaded, and this side is related to the right-hand side of the imaginary axis in the $s$ plane. Crossing the curve in the direction of the shading once indicates that there is an additional unstable root. Thus, one can find information on the relative number of unstable roots in domains of the complex $M$ or $K$ planes. The number of unstable roots in all the domains can be determined if the absolute number of those is known for any arbitrary value of the considered system parameter. By doing so, the instability domains can be found in the $M$ or $K$ plane, which generally allows one to identify the critical velocity for instability (defined as the velocity at which an instability domain first overlays the positive real axis when increasing the velocity). The instability analysis is conducted in Section 5.

Response to a moving oscillatory point load and derivation of equivalent stiffness

As shown in Section 2.2, it is customary to employ the equivalent stiffness $K_{\mathrm{eq}}$ to conduct the instability analysis. In the current section, we aim to derive the expression for the equivalent stiffness. To this end, we first derive the steady-state response to a moving oscillatory point load at the loading point. Additionally, the response at a fixed observation point is derived, which is needed for validating the indirect BEM in Section 4.
All the responses can be computed using the indirect BEM presented in Section 3. Here we summarise the important steps and the outcome in view of the specified aim. The shear stresses $\sigma_{r_1\theta_1}$ and $\sigma_{r_1 x}$ at the inner surface of the tunnel wall induced by the moving oscillatory point load (see Fig. 2(a)) are zero. The non-zero normal stress can be expressed as

$$\sigma_{r_1 r_1}(R_i, \theta_1, x, t) = \frac{P_0 \exp(\mathrm{i}\Omega t)}{R_i}\,\delta\!\left(\theta_1 + \frac{\pi}{2}\right)\delta(x - Vt), \qquad (7)$$

where $\delta(\cdot)$ is the Dirac delta function. As the considered problem is linear, we apply the Fourier Transform to derive the response of the system subject to the uniformly moving oscillatory point load in the wavenumber-frequency ($k_x$, $\omega$) domain. The Fourier Transform applied with respect to time $t$ and spatial coordinate $x$ is defined in the following form (for an arbitrary function $g(r_1, \theta_1, x, t)$):

$$\tilde{\tilde{g}}(r_1, \theta_1, k_x, \omega) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(r_1, \theta_1, x, t)\, \mathrm{e}^{-\mathrm{i}(\omega t - k_x x)}\, \mathrm{d}x\, \mathrm{d}t, \qquad (8)$$

with the inverse Fourier Transform given by

$$g(r_1, \theta_1, x, t) = \frac{1}{4\pi^2}\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \tilde{\tilde{g}}(r_1, \theta_1, k_x, \omega)\, \mathrm{e}^{\mathrm{i}(\omega t - k_x x)}\, \mathrm{d}k_x\, \mathrm{d}\omega. \qquad (9)$$

The Fourier series, which is used to derive the response in the ($k_x$, $\omega$) domain, of a general response quantity $f(\theta_1)$ reads

$$f(\theta_1) = \sum_{n=-\infty}^{\infty} f_n\, \mathrm{e}^{\mathrm{i}n\theta_1}, \qquad f_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(\theta_1)\, \mathrm{e}^{-\mathrm{i}n\theta_1}\, \mathrm{d}\theta_1. \qquad (10)$$

Expanding the term $\delta(\theta_1 + \frac{\pi}{2})$ in Eq. (7) into a Fourier series, the normal stress can be rewritten as

$$\sigma_{r_1 r_1}(R_i, \theta_1, x, t) = \frac{P_0 \exp(\mathrm{i}\Omega t)}{2\pi R_i}\,\delta(x - Vt) \sum_{n=-\infty}^{\infty} \mathrm{e}^{\mathrm{i}n(\theta_1 + \pi/2)}. \qquad (11)$$

Applying the Fourier Transform defined by Eq. (8) to Eq. (11), the normal stress in the wavenumber-frequency domain is obtained as:

$$\tilde{\tilde{\sigma}}_{r_1 r_1}(R_i, \theta_1, k_x, \omega) = \frac{P_0}{R_i}\,\delta(\omega - \Omega - k_x V) \sum_{n=-\infty}^{\infty} \mathrm{e}^{\mathrm{i}n(\theta_1 + \pi/2)}. \qquad (12)$$

The response induced by the auxiliary stress $\tilde{\tilde{\sigma}}_{\mathrm{aux}}(R_i, \theta_1, k_x, \omega)$, which relates to a radial stress in the form of $\delta(\theta_1 + \frac{\pi}{2}) \cdot \delta(x)\delta(t)$, can be computed using the indirect BEM (Section 3) and is denoted as $\tilde{\tilde{\mathbf{U}}}_{1,\mathrm{aux}}(r_1, \theta_1, k_x, \omega)$. Thereafter, we get the expression of the actual displacement vector excited by the stress $\tilde{\tilde{\sigma}}_{r_1 r_1}(R_i, \theta_1, k_x, \omega)$ shown in Eq. (12):

$$\tilde{\tilde{\mathbf{U}}}_1(r_1, \theta_1, k_x, \omega) = 2\pi P_0\, \delta(\omega - \Omega - k_x V)\, \tilde{\tilde{\mathbf{U}}}_{1,\mathrm{aux}}(r_1, \theta_1, k_x, \omega). \qquad (13)$$

We obtain the space-time domain response by applying the inverse Fourier Transform over wavenumber $k_x$ and frequency $\omega$ to Eq. (13):

$$\mathbf{U}_1(r_1, \theta_1, x, t) = \frac{P_0}{2\pi V}\int_{-\infty}^{\infty} \tilde{\tilde{\mathbf{U}}}_{1,\mathrm{aux}}\!\left(r_1, \theta_1, \frac{\omega - \Omega}{V}, \omega\right) \mathrm{e}^{\mathrm{i}\left(\omega t - \frac{\omega - \Omega}{V}x\right)} \mathrm{d}\omega. \qquad (14)$$

In Eq. (14), the inverse Fourier Transform over wavenumber $k_x$ has been evaluated analytically (the Dirac function in Eq. (13) enforces $k_x = (\omega - \Omega)/V$), whereas the inverse Fourier Transform over frequency $\omega$ needs to be evaluated numerically. The radial displacement component of the steady-state response at the loading point can be obtained by substituting $x = Vt$ into Eq. (14):

$$U_{r_1}(R_i, -\tfrac{\pi}{2}, Vt, t) = U_0(\Omega, V)\,\mathrm{e}^{\mathrm{i}\Omega t}. \qquad (15)$$

In accordance with Eq. (15), the complex amplitude of this harmonic response, which is relevant for the computation of the equivalent stiffness, is given as

$$U_0(\Omega, V) = \frac{P_0}{2\pi V}\int_{-\infty}^{\infty} \tilde{\tilde{U}}_{r_1,\mathrm{aux}}\!\left(R_i, -\tfrac{\pi}{2}, \frac{\omega - \Omega}{V}, \omega\right) \mathrm{d}\omega. \qquad (16)$$

Using Eq. (16), the equivalent stiffness $K_{\mathrm{eq}}$ defined in Eq. (1) is obtained:

$$K_{\mathrm{eq}}(\Omega, V) = \frac{P_0}{U_0(\Omega, V)}. \qquad (17)$$

We note that this result is single-valued for all $\Omega$, as the Green's functions (see Section 3) used in the indirect BEM computations are uniquely defined. We also consider the steady-state response at a fixed observation point $x = 0$, which is needed for the validation of the indirect BEM. Substituting $x = 0$ into Eq. (14) gives the corresponding displacement vector:

$$\mathbf{U}_1(r_1, \theta_1, 0, t) = \frac{P_0}{2\pi V}\int_{-\infty}^{\infty} \tilde{\tilde{\mathbf{U}}}_{1,\mathrm{aux}}\!\left(r_1, \theta_1, \frac{\omega - \Omega}{V}, \omega\right) \mathrm{e}^{\mathrm{i}\omega t}\, \mathrm{d}\omega. \qquad (18)$$

Eq. (18) contains the responses observed at $x = 0$; for an observation point at the tunnel invert, $t < 0$ indicates that $x = 0 > Vt$, which means that the moving load has not reached the observation point yet; $t = 0$ indicates that the moving load is at the observation point; $t > 0$ indicates that $x = 0 < Vt$, which means that the moving load has passed the observation point. For the case of a stationary (i.e., non-moving) harmonic point load, which is also used in Section 4 for validation, an expression for the induced displacements is given in Appendix A.

Indirect boundary element method

In this paper, the indirect BEM is employed to compute the response of the tunnel-soil system in the wavenumber-frequency ($r_1$, $\theta_1$, $k_x$, $\omega$) domain.
To this end, the Green's functions of the soil and tunnel are needed; note that the indirect BEM uses the Green's functions of the soil without cavity (full-space or half-space). In Sections 3.1 and 3.2, the Green's functions of the soil and tunnel are presented. The indirect BEM is formulated in Section 3.3.

Green's functions of the soil

The so-called two-and-a-half dimensional Green's functions of an elastodynamic full-space [30] and a half-space [31] are used in our work. The source considered in the mentioned papers is a spatially varying line load in the longitudinal direction; subscript $j$ indicates the direction of the load, subscript "s" indicates the coordinates of the source point, and $F_j$ is the amplitude of the source (Green's functions can be obtained by setting $F_j = 1$). The Green's functions of the half-space consist of source terms, which are the same as those of the full-space, and of surface terms, which are necessary to satisfy the stress-free boundary conditions at the surface of the half-space [31]. However, we found that the stress-free conditions are not satisfied using the Green's functions presented in [31], while they are satisfied when the source terms are replaced by the ones presented in [30], which contains the full-space Green's functions. Therefore, the Green's functions of the half-space used in the current paper consist of the source terms presented in [30] and of the surface terms presented in [31]. Because the reference frames in [31] are different from that in the current paper, we have to transform the Green's functions for displacements and stresses given in [31] by means of a transformation matrix; subscripts "1" and "ref" denote the responses defined in the coordinate systems of the current and reference papers, respectively. $\tilde{\tilde{\mathbf{G}}}_{u,1}$ and $\tilde{\tilde{\mathbf{G}}}_{\sigma,1}$ are the Green's functions for displacements and stresses of the soil without tunnel/cavity (i.e., of the full-space or half-space), and are 3 × 3 matrices. In matrix $\tilde{\tilde{\mathbf{G}}}_{u,1}$, the first, second and third rows represent the displacement components $\tilde{\tilde{u}}_{r_1}$, $\tilde{\tilde{u}}_{\theta_1}$ and $\tilde{\tilde{u}}_x$, while the first, second and third columns correspond to the spatially varying unit line loads acting in the $y$, $z$ and $x$ directions, respectively. In matrix $\tilde{\tilde{\mathbf{G}}}_{\sigma,1}$, the first, second and third rows represent the stress components $\tilde{\tilde{\sigma}}_{r_1 r_1}$, $\tilde{\tilde{\sigma}}_{r_1\theta_1}$ and $\tilde{\tilde{\sigma}}_{r_1 x}$, while the columns also correspond to the loads in the different directions.

Green's functions and responses of a cylindrical shell

The tunnel is modelled by an infinitely long cylindrical Flügge shell. The associated coordinate system is shown in Fig. 3. The equations of motion of the shell are those of the Flügge theory [32]; in these equations, $\bar{u}$, $\bar{v}$ and $\bar{w}$ are the mid-surface displacements in directions $r_2$, $\theta_2$ and $x_2$, respectively, $E_2$ is the Young's modulus of the shell and $h$ its thickness. The radii of the inner and outer surfaces of the shell can be expressed as $R_i = R - h/2$ and $R_o = R + h/2$, respectively. $q_{r_2}$, $q_{\theta_2}$ and $q_{x_2}$ are the net external stresses acting on the shell, namely the difference between the stresses acting at the inner and outer surfaces. The governing equations of the shell can be rewritten in matrix form (Eq. (23)), where $\bar{\mathbf{u}}_2 = (\bar{u}, \bar{v}, \bar{w})$ is the displacement vector, $\bar{\mathbf{q}}_2 = (q_{r_2}, q_{\theta_2}, q_{x_2})$ is the net stress vector corresponding to the mid-surface of the shell, and $\mathbf{A}$ is an operator matrix given in Appendix B.
The net stress vector $\bar{\mathbf{q}}_2$ is related to the stress vectors $\bar{\mathbf{q}}_2^{\,o}$ and $\bar{\mathbf{q}}_2^{\,i}$ corresponding to the outer and inner surfaces of the shell, respectively, through relations adopted from [36] (Eq. (24)). According to Love's simplification in the shell theory [37], the longitudinal and tangential displacements vary linearly across the shell's thickness, whereas the radial displacement is independent of the radial coordinate. Therefore, the mid-surface displacement vector $\bar{\mathbf{u}}_2$ is related to the displacement vector $\mathbf{u}_2^{\,o}$ corresponding to the outer surface of the shell through the corresponding linear relation (Eq. (25)). After applying the Fourier Transform over time $t$ and spatial coordinate $x_2$ to Eq. (23), computing the Fourier coefficients of the circumferential harmonics (i.e., the second relation in Eq. (10)), and considering Eqs. (24) and (25), the governing equations of the shell can be written as a set of algebraic equations (Eq. (26)); $n$ denotes the number of the circumferential harmonic, and the matrices involved follow from the Flügge operator given in Appendix B. The associated Green's functions of the shell can be derived by solving Eq. (26) for each of the load components, and subsequently adding the solutions for all components in the Fourier series (see Eq. (10)); this yields Eq. (27), where $\tilde{\tilde{\mathbf{g}}}^{\,o}_{n}$ interrelates $\tilde{\tilde{\mathbf{u}}}^{\,o}_{2,n}$ and $\tilde{\tilde{\mathbf{q}}}^{\,o}_{2,n}$, $\tilde{\tilde{\mathbf{g}}}^{\,i}_{n}$ interrelates $\tilde{\tilde{\mathbf{u}}}^{\,o}_{2,n}$ and $\tilde{\tilde{\mathbf{q}}}^{\,i}_{2,n}$, and $\tilde{\tilde{\mathbf{g}}}^{\,o}$, $\tilde{\tilde{\mathbf{g}}}^{\,i}$, $\tilde{\tilde{\mathbf{g}}}^{\,o}_{n}$ and $\tilde{\tilde{\mathbf{g}}}^{\,i}_{n}$ are 3 × 3 matrices. The positive directions of the longitudinal axes in the global coordinate system (see Fig. 1) and the local coordinate system for the shell (see Fig. 3) are opposite to each other (i.e., $x_2 = -x$). Therefore, the relation between the longitudinal wavenumbers $k_{x_2}$ and $k_x$ (which, for the moving oscillatory point load considered in Section 2.3, is defined as $k_x = (\omega - \Omega)/V$) is as follows: $k_{x_2} = -k_x$; this relation is used below. Using the convolution rule, the displacement vector of the shell under an arbitrary load can be obtained as in Eq. (28). The displacements in Eq. (28) are defined in the local coordinate system of the shell ($r_2$, $\theta_2$, $x_2$, see Fig. 3). To satisfy the continuity of displacements and stresses at the shell-soil interface, the displacement and stress vectors $\tilde{\tilde{\mathbf{u}}}^{\,o}_2$, $\tilde{\tilde{\mathbf{q}}}^{\,o}_2$ and $\tilde{\tilde{\mathbf{q}}}^{\,i}_2$ defined in the local coordinate system of the shell have to be transformed to $\tilde{\tilde{\mathbf{U}}}^{\,o}_2$, $\tilde{\tilde{\mathbf{Q}}}^{\,o}_2$ and $\tilde{\tilde{\mathbf{Q}}}^{\,i}_2$, respectively, defined in the global cylindrical coordinate system of the soil ($r_1$, $\theta_1$, $x$), which has its origin at the centre of the tunnel (see Fig. 1), in which $\theta_1 = \theta_2$ and $x = -x_2$ (Eq. (29)). Substituting Eq. (29) into Eq. (28), we obtain the displacements of the shell in the global cylindrical coordinate system (Eq. (31)).

Formulation of the indirect boundary element method

In this section, the formulation of the employed indirect BEM is presented. The boundary conditions at the tunnel-soil interface require continuity of displacements and stresses (Eq. (32)), where $\tilde{\tilde{\mathbf{U}}}_1$ and $\tilde{\tilde{\mathbf{U}}}_2$ are the displacement vectors of the soil and tunnel, respectively, and $\tilde{\tilde{\mathbf{\Sigma}}}_1$ and $\tilde{\tilde{\mathbf{\Sigma}}}_2$ their stress vectors. The displacement and stress vectors of the soil and shell at the tunnel-soil interface are expressed as in Eqs. (33)-(37); note that all these displacements and stresses are defined in the global cylindrical coordinate system ($r_1$, $\theta_1$, $x$). According to the indirect BEM, the displacement and stress vectors in the soil are given as superpositions of the soil Green's functions weighted by source amplitudes [36], where $\mathbf{F}(\mathbf{x}_s)$ is the yet unknown vector of the source amplitudes placed inside the fictitious cavity, which is commonly used for the indirect BEM (see Fig. 4). Vectors $\mathbf{x}_r = [x_r, y_r, z_r]$ and $\mathbf{x}_s = [x_s, y_s, z_s]$ are the coordinates of the receiver and source points, respectively.
$L_s$ is the surface at which the source points are located; the radius of the surface $L_s$ is chosen according to a rule that involves the number of receiver points $N_r$, with $N_r \geq 20$ as suggested in [36]. The surface $L_r$ at which the receiver points are located lies at $r_1 = R_o$, which is the outer surface of the actual tunnel. An expression for the displacement vector of the shell has been obtained in Eq. (31) and can be rewritten in terms of a shell Green's function matrix (Eq. (36)).

Table 1. Displacement components ($20\log_{10}|U_i|$) at different locations ($r_1$, $\theta_1$, $x$) for a tunnel embedded in an elastic full-space subjected to a stationary harmonic point load with excitation frequency $f_0 = 10$ Hz, obtained using different numbers of source and receiver points ($N_s$, $N_r$). In each row, the displacements are normalised by the corresponding response obtained using ($N_s$, $N_r$) = (20, 40).

Substituting Eqs. (34) and (36) into Eq. (32), and considering Eqs. (35) and (37), we derive the boundary integral equation in terms of the unknown source amplitude vector (Eq. (38)). In order to compute the source vector for every $k_x$ and $\omega$ combination, Eq. (38) is discretised (i.e., the surfaces $L_r$ and $L_s$), as indicated above. Note that the size of the resulting system matrix is ($3N_r \times 3N_s$), where $N_s$ denotes the number of source points; this implies that it is a small matrix, and the inversion of the matrix does not take much time. The most time-consuming part (even in the entire procedure of the instability analysis) is the evaluation of the integrals over the horizontal wavenumber $k_y$ in the Green's functions of the half-space, for which we use the "quadv" routine in Matlab.

Validations

To validate the accuracy of the presented indirect BEM, we compare the results obtained by the proposed method with those calculated by Yuan et al. [38]. The first validation is performed for the case of a tunnel embedded in an elastic full-space subject to a stationary harmonic point load at the tunnel invert. The excitation for the indirect BEM computation in this case is $\tilde{\sigma}_{\mathrm{aux}}$ (Eq. (A.3)), with $P_0 = 1$ N, and the steady-state response is given in Eq. (A.5). The elastic full-space is characterised by its longitudinal wave speed $C_{P,1} = 944$ m/s, shear wave speed $C_{S,1} = 309$ m/s, density $\rho_1 = 2000$ kg/m³ and material damping ratio $\xi_1 = 0.03$. The elastic parameters of the tunnel are the Young's modulus $E_2 = 50$ GPa, Poisson's ratio $\nu_2 = 0.3$, density $\rho_2 = 2500$ kg/m³ and material damping ratio $\xi_2 = 0$. The inner and outer radii of the tunnel are taken as in the reference case [38]. Before showing the results, we first present a convergence test for the proposed method for the considered loading case. As shown in Eq. (A.5), the inverse Fourier Transform over the longitudinal wavenumber $k_x$ has to be evaluated to get the harmonic response in the space-time domain. The integral was computed numerically using an inverse fast Fourier Transform algorithm in Matlab. The convergence was tested regarding the discretisation of $k_x$ (i.e., $\Delta k_x$ and $k_x^{\max}$), the maximum number of circumferential modes of the shell $N_{\mathrm{shell}}^{\max}$ in Eq. (31), the maximum number of Fourier components $N_{\mathrm{load}}^{\max}$ in Eq. (A.3), and the number of source and receiver points ($N_s$, $N_r$). We found that it is sufficient to use $k_x^{\max} = 2\pi$, $\Delta k_x = 2\pi/1023$, $N_{\mathrm{shell}}^{\max} = 20$ and $N_{\mathrm{load}}^{\max} = 20$. The convergence test for the number of source and receiver points ($N_s$, $N_r$) at different locations ($r_1$, $\theta_1$, $x$) is given in Table 1.
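To indicate how the discretised system is handled numerically, the sketch below (Python/NumPy) solves for the source amplitude vector F once the two transfer matrices have been assembled. The matrix names, their assembly and the least-squares solve are illustrative assumptions; the paper only fixes the system size (3N_r × 3N_s) and the fact that its inversion is cheap.

```python
import numpy as np

def solve_source_amplitudes(G_soil, G_shell, rhs):
    """Solve the discretised boundary integral equation (38) for F.

    G_soil  : (3*Nr, 3*Ns) complex matrix of soil Green's functions from the
              Ns fictitious sources on L_s to the Nr receivers on L_r.
    G_shell : (3*Nr, 3*Ns) complex matrix of the shell contribution entering
              the continuity conditions at r_1 = R_o (assumed assembled
              elsewhere from Eqs. (31)-(37)).
    rhs     : (3*Nr,) complex right-hand side from the auxiliary stress.

    With Nr > Ns the system is overdetermined, so a least-squares solve is
    used; this has to be repeated for every (k_x, omega) combination.
    """
    A = G_soil - G_shell
    F, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return F
```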
Responses at the tunnel invert ($R_o$, $-\pi/2$, 0), tunnel apex ($R_o$, $\pi/2$, 0), tunnel side ($R_o$, $\pi$, 0) and at a point far from the load (20 m, $\pi$, 20 m) are presented; the load is characterised by $P_0 = 1$ N and $f_0 = \Omega/2\pi = 10$ Hz. It is clear that converged results can indeed be obtained using ($N_s$, $N_r$) = (20, 40). Fig. 5 shows the converged vertical displacements at the tunnel invert, tunnel apex and tunnel side as a function of frequency for the first validation case. A good agreement can be observed between the results obtained by the different methods, which validates the proposed method and its implementation. The second validation case is that of a shallow tunnel embedded in an elastic half-space subject to a stationary harmonic load. The soil is characterised by its longitudinal wave speed $C_{P,1} = 400$ m/s, shear wave speed $C_{S,1} = 200$ m/s, density $\rho_1 = 1800$ kg/m³ and material damping ratio $\xi_1 = 0.02$.

Fig. 5. Vertical displacements ($20\log_{10}|U_{z_1}|$) at the locations of the tunnel invert, the tunnel apex and the tunnel side ($r_1 = R_o$, $\theta_1 = \pi$, $x = 0$) for the case of a tunnel embedded in an elastic full-space subject to a stationary harmonic point load.

Table 2. Velocities ($V_i(y, z, x)$) and displacements ($U_{r_1}(r_1, \theta_1, x)$) at different locations for a tunnel embedded in an elastic half-space.

The parameters of the tunnel are the same as in the previous case, except that $\xi_2 = 0.015$, and the burial depth of the tunnel is $H = 5$ m. It is noted that in the reference paper [38] the quantity referred to as the material damping ratio should be the loss factor (there is a difference of a factor of 2), as indicated in paper [39]; this also holds for the next validation case. The displacement components (again obtained using Eq. (A.5)) at a point on the ground surface ($y = -20$ m, $z = 0$, $x = 20$ m) are presented in Fig. 6, where again a good match between the results is observed. The minor differences can be attributed to the use of a continuum to model the tunnel in [38], instead of a shell. The third validation comprises the case of a tunnel embedded in an elastic half-space subject to a uniformly moving constant point load. The excitation for the indirect BEM computation in this case is $\tilde{\tilde{\sigma}}_{\mathrm{aux}}$ (Eq. (12)) and the steady-state response is given in Eq. (14). The parameters for the soil include $\mu_1 = 1.154 \times 10^7$ N/m² and $\lambda_1 = 1.731 \times 10^7$ N/m². The convergence for the moving point load case was tested regarding the discretisation of $\omega$ (i.e., $\Delta\omega$ and $\omega_{\max}$), $N_{\mathrm{shell}}^{\max}$ in Eq. (31), $N_{\mathrm{load}}^{\max}$ in Eq. (12), and the number of source and receiver points ($N_s$, $N_r$). Numerical results related to two points on the ground surface and one at the tunnel invert are presented in Table 2 for the considered case of the moving load, using different numbers of source and receiver points ($N_s$, $N_r$). We found that converged results can be obtained using $f_{\max} = \omega_{\max}/2\pi = 15$ Hz, $\Delta f = \Delta\omega/2\pi = 0.05$ Hz, $N_{\mathrm{shell}}^{\max} = 20$, $N_{\mathrm{load}}^{\max} = 20$ and ($N_s$, $N_r$) = (20, 40). This is clear from Table 2, which presents the responses observed at $x = 0$ for varying time moments: $t = 0$ means that the load is right below the observation point, whereas $t = 1$ s indicates that the load has passed that point. Fig. 7 presents the comparison between the results obtained by the proposed method and those shown in the literature. The good agreement gives confidence in the accuracy of the proposed method.
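The convergence checks above all follow the same pattern: rerun the BEM with refined discretisation parameters and normalise each result by a reference run. A minimal sketch of that loop is given below (Python); `response` is an assumed wrapper around one full BEM computation at a fixed observation point.

```python
import numpy as np

def convergence_rows(response, grids, ref=(20, 40)):
    """Reproduce the structure of the convergence test of Table 1.

    response : callable response(Ns, Nr) -> complex displacement at one
               observation point (one indirect-BEM run; assumed available).
    grids    : iterable of (Ns, Nr) pairs to be tested.
    Each entry is reported as 20*log10|U| and normalised by the reference
    run (Ns, Nr) = (20, 40), as done in the paper.
    """
    u_ref = abs(response(*ref))
    rows = []
    for Ns, Nr in grids:
        u = abs(response(Ns, Nr))
        rows.append((Ns, Nr, 20.0 * np.log10(u), u / u_ref))
    return rows
```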
The convergence requirements for the computation of the equivalent dynamic stiffness for different velocities are presented in Appendix C.

Instability of vibrations

The main framework to conduct the instability analysis has been given in Section 2.2. In the current section, we present the results for both the full-space and the half-space. The base-case parameters of the tunnel-soil system are listed in Table 3; it is noted that the base case assumes that the burial depth $H \to \infty$. The parameters presented in Table 3 represent a soft soil and a concrete tunnel. We chose the full-space case as the base case simply because the results for the half-space are very similar, and the computation of the two-and-a-half dimensional Green's functions of the full-space [30] is less expensive than that of the half-space Green's functions [31]. The Green's functions of the full-space are available analytically and can be evaluated very fast; however, the surface-terms part of the Green's functions of the half-space (see Section 3.1) is not available analytically, and integrals over the horizontal wavenumber $k_y$ need to be evaluated numerically.

Table 3. Base-case parameters of the tunnel-soil system.

Critical velocity for instability of the moving object

As has been discussed in Section 2.2, the imaginary part of the equivalent stiffness being negative indicates that the vibration of the object (i.e., moving mass or oscillator) can become unstable. Therefore, we first study the equivalent stiffness to find the critical velocity for instability of the moving object (here defined as the velocity at which Im($K_{\mathrm{eq}}$) < 0 first takes place). Note that the critical velocity for instability generally differs from the classical critical velocity at which the steady-state response induced by a moving load is extreme (i.e., resonance). In the general case, where the oscillator has dissipative components, the critical velocity for instability should be identified from the D-decomposition curve in the complex $M$ or $K$ plane (see Sections 2.2 and 5.2), not from the analysis of Im($K_{\mathrm{eq}}$) alone [19,26]; Im($K_{\mathrm{eq}}$) < 0 is only a necessary condition for instability. As the moving oscillator and moving mass considered in this paper do not have intrinsic dissipative components, their critical velocities for instability are the same. In previous studies [11,19], where beam-on-elastic-foundation models (without damping) are considered, it is shown that the critical velocity for instability $V_{\mathrm{cr}}^{\mathrm{inst}}$ of the moving object is equal to the critical velocity for resonance (of the undamped system), which in turn is equal to the minimum phase velocity $V_{\mathrm{ph}}^{\min}$ of waves in the system. For a half-space model with a regular track on top [25,26], the critical velocity for instability in the presence of damping is slightly larger than $V_{\mathrm{ph}}^{\min}$. Additionally, $V_{\mathrm{ph}}^{\min}$, which is close to and smaller than the velocity of Rayleigh waves, is easily found from the dispersion relation of the system [12,40]. However, for the tunnel-soil system considered in this paper, it is very difficult to get the dispersion curves, as the dispersion characteristics of the system are considerably more complicated. Therefore, the minimum phase velocity cannot be easily computed.
We can, however, compute the steady-state response of the tunnel-soil system subject to a uniformly moving non-oscillatory load and check the features of the responses for different velocities to determine the critical velocity for resonance (for the system with damping, strictly speaking, but the influence of the damping on $V_{\mathrm{cr}}^{\mathrm{res}}$ is small). This analysis is presented in Appendix C, and it shows that $V_{\mathrm{cr}}^{\mathrm{res}} \approx 70$ m/s for the current tunnel-soil system, which is also close to and smaller than the velocity of Rayleigh waves, as for the above-mentioned half-space model. For the system with the base-case parameters, we find that the imaginary part of the equivalent stiffness starts having a negative sign for at least a small frequency range at a velocity of $V_{\mathrm{cr}}^{\mathrm{inst}} = 942$ m/s. Based on this critical velocity for instability, we study the behaviour of $K_{\mathrm{eq}}(\Omega, V)$ for different velocities in the range of $(0.9 - 1.2)V_{\mathrm{cr}}^{\mathrm{inst}}$. The real and imaginary parts are shown in Figs. 8 and 9, respectively. Nine different velocities were chosen to show the features of Re($K_{\mathrm{eq}}$) and Im($K_{\mathrm{eq}}$) as a function of the load frequency $\Omega$. Note that, if its mass is relatively small, the vibration of the object can still be stable when it moves faster than $V_{\mathrm{cr}}^{\mathrm{inst}}$, as will be demonstrated in the next section (see also Fig. 11). In Fig. 8, we observe that the real part of the equivalent stiffness is positive, and the decaying trend of Re($K_{\mathrm{eq}}$) with frequency is similar for each velocity. The decaying trend may be related to the effect of inertia. The trough can probably be interpreted as a quasi-resonance which takes place at low frequencies and is related to the wave resonance which occurs if the velocity of the moving load is the same as the group velocity of a wave excited by the load [25].

Fig. 9 (caption): results for the base-case parameters of Table 3, with $V_{\mathrm{cr}}^{\mathrm{inst}} = 942$ m/s; the red dots indicate the crossings.

The imaginary part of the equivalent stiffness is shown in Fig. 9 for each of the chosen velocities. For the 'sub-critical' ($V < V_{\mathrm{cr}}^{\mathrm{inst}}$) case shown in Fig. 9(a), $V = 0.9V_{\mathrm{cr}}^{\mathrm{inst}}$, Im($K_{\mathrm{eq}}$) is positive for all frequencies $\Omega$, which indicates that the damping coefficient of the equivalent mass-spring system is positive and, thus, that the system is always stable (see Section 2.2). The frequency band considered in this study of the dynamic stiffness is limited to (0-40) Hz, because instability is determined by the behaviour at low frequencies [25,26]; see also the explanation given at the end of this section. For the 'critical' and 'super-critical' ($V \geq V_{\mathrm{cr}}^{\mathrm{inst}}$) cases, shown in Figs. 9(b)-(i), the imaginary part of the equivalent stiffness is negative at low frequencies and becomes positive at higher frequencies. We can verify that the curves of Im($K_{\mathrm{eq}}$) in the high-frequency band have the same trend as that in the sub-critical case; they are not shown here since we focus on the features of Im($K_{\mathrm{eq}}$) in the low-frequency band. There are peaks and troughs in the curves of Im($K_{\mathrm{eq}}$), and these are suppressed or enlarged as the velocity increases. We observe that the Im($K_{\mathrm{eq}}$) curve crosses the real axis 0, 4, 3, 7, 5, 3, 3, 1 and 1 times for $V = (0.9, 1.00, 1.01, 1.02, 1.03, 1.04, 1.05, 1.06, 1.20)V_{\mathrm{cr}}^{\mathrm{inst}}$, respectively. We can verify that for velocities above $1.06V_{\mathrm{cr}}^{\mathrm{inst}}$, similar features of Im($K_{\mathrm{eq}}$) are observed (i.e., the Im($K_{\mathrm{eq}}$) curve crosses the real axis only once); the only difference is that the crossing occurs at a higher frequency as the velocity increases (see Figs. 9(h) and (i)).
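The search for $V_{\mathrm{cr}}^{\mathrm{inst}}$ described above amounts to finding the lowest velocity at which the minimum of Im($K_{\mathrm{eq}}$) over the low-frequency band turns negative. A hedged sketch of that search (Python, reusing the assumed `K_eq` interface of the earlier sketch) is:

```python
import numpy as np

def critical_velocity(K_eq, V_lo, V_hi, f_max=40.0, n=1001, tol=1.0):
    """Bisection for the critical velocity for instability: the smallest V
    at which Im(K_eq) first becomes negative in the low-frequency band.

    Assumes the bracket [V_lo, V_hi] is valid, i.e. min_Omega Im(K_eq) is
    positive at V_lo and negative at V_hi.
    """
    Omega = 2.0 * np.pi * np.linspace(1e-3, f_max, n)

    def min_im(V):
        return min(K_eq(w, V).imag for w in Omega)

    assert min_im(V_lo) > 0.0 and min_im(V_hi) < 0.0
    while V_hi - V_lo > tol:
        V_mid = 0.5 * (V_lo + V_hi)
        if min_im(V_mid) < 0.0:
            V_hi = V_mid
        else:
            V_lo = V_mid
    return 0.5 * (V_lo + V_hi)
```

For the base case this bracket would converge towards the reported 942 m/s; the same routine applies unchanged to the parametric variations of Section 5.3.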
The different features of Im($K_{\mathrm{eq}}$) observed in the entire considered velocity range imply that in the complex $M$ or $K$ plane different numbers of separated domains, each having a specific number of 'unstable roots/eigenvalues', are expected for different velocities (see Section 5.2). At this point, it is concluded that the equivalent dynamic stiffness, especially its imaginary part, strongly depends on $V$. In order to trace similarities and differences, let us now compare two different instability problems: the above-mentioned model of an object moving on a track placed on the ground surface [25,26], and the current model of an object moving through a tunnel embedded in a half-space. The critical velocity for instability for the current model with all parameters in accordance with the base case, except the burial depth $H$, which is taken as 15 m, is found to be 891 m/s (see also Section 5.3.4). Clearly, $V_{\mathrm{cr}}^{\mathrm{inst}}$ is much larger than the critical velocity for resonance ($V_{\mathrm{cr}}^{\mathrm{res}} \approx 70$ m/s) in the model with an embedded tunnel, while $V_{\mathrm{cr}}^{\mathrm{inst}}$ is just slightly larger than $V_{\mathrm{cr}}^{\mathrm{res}}$ in the model with a track directly placed on the ground [25,26] ($V_{\mathrm{cr}}^{\mathrm{res}}$ is related to the undamped system in these studies, but the influence of the damping on $V_{\mathrm{cr}}^{\mathrm{res}}$ is small). The difference is due to the large stiffness of the tunnel and the radiation damping/leaky character of the waves excited in the tunnel. However, there are similarities regarding ground vibrations in these two models. In the regime of $V < V_{\mathrm{cr}}^{\mathrm{res}}$ ($\approx V_{\mathrm{ph}}^{\min}$), for both models mostly the medium in the vicinity of the load is disturbed by the eigenfield excited by the moving non-oscillating object, while in the regime of $V > V_{\mathrm{cr}}^{\mathrm{res}}$, both the vicinity of the moving source and the field far from the source are disturbed because waves are generated (see Appendix C). As mentioned above, the critical velocity for resonance of the current tunnel-soil system is close to and smaller than the velocity of Rayleigh waves, like that of the other model. Therefore, from the ambient-vibration point of view, there is a clear similarity between both problems. However, instability happens only far beyond the critical velocity for resonance for the model with the tunnel, which is clearly different from the finding for the half-space with a track placed on top. As shown in [12], an external source has to supply a vibrating object with energy in order to maintain its uniform motion. In the case of unstable vibrations, the work done by the source is partially transferred to vibration energy of the object by the so-called anomalous Doppler waves [13], which are waves of negative frequency. Typical dispersion curves of the current tunnel-soil system, which are similar to the ones of the beam-on-elastic-foundation model [35], are shown in Fig. 10 and can be used to explain why the instability only happens in the low-frequency band, as stated above. Fig. 10 also shows the so-called kinematic invariant $\omega = k_x V + \Omega$, which is essentially found in the argument of the Dirac function in the response to a moving oscillatory load (see Eqs. (13) and (14)). The kinematic invariant is a straight line indicating the relation between the load frequency $\Omega$, and the frequency $\omega$ and wavenumber $k_x$ of the waves that are potentially excited by the moving object; different realizations (i.e., $\Omega$ being zero and nonzero, together with two different velocities) are shown in Fig. 10. Intersections of the kinematic invariant with the dispersion curves represent the excited waves.
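Because the excited waves follow from intersecting the kinematic invariant with the dispersion curves, a small root-finding sketch makes the anomalous-Doppler criterion explicit. The dispersion branch is an assumed callable; obtaining it for the tunnel-soil system is exactly the hard part noted in Section 5.1, so this sketch is illustrative only.

```python
import numpy as np

def doppler_intersections(branch, V, Omega, kx_min=-10.0, kx_max=10.0, n=20001):
    """Intersections of one dispersion branch omega = branch(k_x) with the
    kinematic invariant omega = k_x * V + Omega (cf. Fig. 10).

    branch : callable k_x -> omega for one dispersion curve of the system
             (assumed available from a separate dispersion analysis).
    Returns tuples (k_x*, omega*, is_anomalous); intersections with
    omega* < 0 correspond to the anomalous Doppler waves that drive
    instability.
    """
    kx = np.linspace(kx_min, kx_max, n)
    g = np.array([branch(k) for k in kx]) - (kx * V + Omega)
    hits = []
    for i in range(n - 1):
        if g[i] * g[i + 1] < 0.0:                      # sign change -> root
            t = g[i] / (g[i] - g[i + 1])
            k_star = kx[i] + t * (kx[i + 1] - kx[i])
            w_star = k_star * V + Omega
            hits.append((k_star, w_star, w_star < 0.0))
    return hits
```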
We observe that intersections with negative frequency $\omega$ (i.e., anomalous Doppler waves) are only possible when the load frequency $\Omega$ is relatively small. If the load frequency is large, the kinematic invariant will practically never intersect the dispersion curves at negative frequency, which explains why the vibration of the moving object is always stable in the high-frequency band (i.e., Im($K_{\mathrm{eq}}$) > 0, see Fig. 9); this also justifies that we restricted the analysis of $K_{\mathrm{eq}}$ to the low-frequency band in Figs. 8 and 9.

D-decomposition: complex M and K planes

In order to investigate the instability of the object vibrations for velocities larger than the corresponding critical velocity for instability identified in the previous section, we apply the D-decomposition method. We first investigate the limit case of the single mass moving through the tunnel. Considering the base case, the D-decomposition curve can be plotted in the complex $M$ plane (i.e., Im($M$) versus Re($M$)) using the mapping rule shown in Eq. (5), and is presented in Fig. 11. For most of the considered velocities, the D-decomposition curve crosses the positive real axis, that is, one or more crossing points $M^*$ are obtained. It can be verified that the frequency at which the curve crosses the real axis corresponds to the frequency at which the imaginary part of the dynamic stiffness changes sign (see Fig. 9). A crossing point lying on the positive real axis can be explained by the fact that Re($K_{\mathrm{eq}}$) is positive when Im($K_{\mathrm{eq}}$) changes its sign (see Fig. 8). As clearly shown in Fig. 11, the crucial difference between the D-decomposition curves in the super-critical and sub-critical cases (compare Figs. 12(b) and (a), for example) is that there are crossing points $M^*_1$, $M^*_2$, $M^*_3$ and $M^*_4$ on the positive axis of Re($M$) for the super-critical case. The existence of such crossing points means that the number $N$ of unstable roots differs between the domains adjacent to these points. The procedure to determine $N$ is as follows. The relative number of unstable roots in domains of the complex $M$ plane can be calculated by counting the number of times that one crosses the D-decomposition curve in the direction of the shading, which has been explained in Section 2.2. To get the absolute number in all domains, the number of unstable roots for $M = 0$ has to be determined. $M = 0$ means that there is essentially no moving mass, which implies that the vibration of the mass cannot be unstable (i.e., the number of unstable roots $N = 0$). Thereafter, the absolute number of unstable roots in each domain can be determined, and the result is shown in Fig. 11. The number of unstable roots has also been validated using the Argument Principle, but this is not shown in the paper. In Fig. 11, we observe that the vibration of the moving mass is stable for all values of $M$ for $V = 0.9V_{\mathrm{cr}}^{\mathrm{inst}}$; for $V = 1.00V_{\mathrm{cr}}^{\mathrm{inst}}$, the vibration of the moving mass is unstable for values of $M$ inside the instability domains indicated in the figure. Note that the vibration of a mass which moves faster than the critical velocity is not necessarily unstable. For relatively small values of the mass, for example, the vibration is stable even for super-critical velocities (as illustrated in Section 5.3). The question of practical relevance when studying the instability of the moving mass is whether adding flexibility (by creating a spring between the mass and the tunnel) may destabilize the system.
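A compact sketch of this construction for the single-mass case (Python, again assuming the `K_eq` interface) maps s = iΩ to the M plane via Eq. (5) and extracts the crossing points M* on the positive real axis; counting shading-direction crossings, or alternatively a numerical Argument Principle check, then gives the number of unstable roots per domain.

```python
import numpy as np

def m_plane_curve_and_crossings(K_eq, V, f_max=40.0, n=4001):
    """D-decomposition curve in the complex M plane (Eq. (5)) and its
    crossings M* with the positive real axis.

    Crossings are located by sign changes of Im(M) along the curve and
    refined by linear interpolation; they correspond to the frequencies at
    which Im(K_eq) changes sign (compare Figs. 9 and 11).
    """
    Omega = 2.0 * np.pi * np.linspace(1e-3, f_max, n)
    M = np.array([K_eq(w, V) for w in Omega]) / Omega**2
    crossings = []
    for k in range(n - 1):
        a, b = M[k], M[k + 1]
        if a.imag * b.imag < 0.0:                    # Im(M) changes sign
            t = a.imag / (a.imag - b.imag)
            m_star = (a + t * (b - a)).real
            if m_star > 0.0:
                crossings.append(m_star)
    return Omega, M, sorted(crossings)
```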
For the mass-spring oscillator with the mass being constant, the D-decomposition curve can be plotted in the complex $K$ plane (i.e., Im($K$) versus Re($K$)) using the mapping rule shown in Eq. (6), and is presented in Fig. 12. The mass of the moving oscillator is taken as $M = 2 \times 10^4$ kg, which is a realistic value for a train wagon. In order to get the absolute number of unstable roots in the complex $K$ plane with this mass, we have to connect the instability analysis of the moving oscillator to that of the single moving mass shown in Fig. 11. From that figure, we find that $N = 0$ for $M = 2 \times 10^4$ kg, which implies that the system is stable for this value of the mass for all the considered velocity cases. The single-mass case corresponds to the oscillator case with $K \to \infty$. Therefore, knowing that the number of unstable roots at $K \to \infty$ is zero and following the direction of the shading, the absolute number of unstable roots in domains of the complex $K$ plane can be determined. The following can be observed from the D-decomposition curve in the complex $K$ plane shown in Fig. 12. For $V = 0.9V_{\mathrm{cr}}^{\mathrm{inst}}$, the vibration of the moving oscillator is stable for all values of the stiffness; for $V = 1.00V_{\mathrm{cr}}^{\mathrm{inst}}$, the vibration of the moving oscillator is destabilized by the added spring when $K^*_1 < K < K^*_2$ and $K^*_3 < K < K^*_4$ ($N = 2$); for $V = 1.01V_{\mathrm{cr}}^{\mathrm{inst}}$, that happens when $K < K^*_1$ and $K^*_2 < K < K^*_3$, etc. Using these findings, it can readily be concluded that for the sub-critical case the vibration of the oscillator is stable independently of the oscillator's stiffness, while for the super-critical cases the stability of the oscillator depends on the stiffness of the added spring.

Parametric study

In Section 5.1, we found the critical velocity beyond which instability of the moving object may occur. As the critical velocity for instability is the most important outcome of the instability analysis, the effects of the tunnel thickness, the material damping ratios in the tunnel-soil system, the Lamé parameters of the soil and the burial depth of the tunnel on the critical velocity are studied here. In addition, the dependency of the critical mass and stiffness (identified in Section 5.2) of the corresponding moving mass and moving oscillator on the velocity is considered; this is only done for full-space cases because of the computational demand of the calculations for the half-space.

The effect of the thickness of the tunnel

Four different thicknesses of the tunnel are considered, and the corresponding critical velocities for instability are shown in Table 4, where the subscript "B" indicates the parameters of the base case shown in Table 3. Table 4 shows that the critical velocity for instability decreases as the tunnel thickness decreases. The reason for this reduction is the reduced stiffness of the tunnel. Moreover, we observe that even for the thinnest tunnel considered, the critical velocity for instability remains much larger than the critical velocity for resonance.

The effect of the material damping ratios in the tunnel-soil system

We consider four different combinations of the material damping ratios of the soil and tunnel, as shown in Table 5. It demonstrates that the critical velocity for instability increases as the damping ratio of the tunnel increases and as that of the soil decreases. Therefore, we can conclude that the material damping of the tunnel stabilises the vibration of the moving object, while the material damping of the soil may have a destabilising effect, which is similar to the finding in [26].
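A parametric study of this kind can be driven by a thin loop around the critical-velocity search sketched earlier. The factory function below is hypothetical (it stands for rebuilding the BEM model with one parameter changed), and the thickness values in the example call are placeholders, not the ones of Table 4.

```python
# Hypothetical driver for the tunnel-thickness sweep (cf. Table 4),
# reusing critical_velocity() from the earlier sketch.
def thickness_sweep(make_K_eq, thicknesses, V_lo=500.0, V_hi=1500.0):
    """make_K_eq(h) -> K_eq callable for a tunnel of wall thickness h [m]."""
    return {h: critical_velocity(make_K_eq(h), V_lo, V_hi)
            for h in thicknesses}

# Example call with placeholder thicknesses:
# results = thickness_sweep(make_K_eq, [0.15, 0.20, 0.25, 0.30])
```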
Table 5. Critical velocities for instability for different material damping ratios of the soil and tunnel.

The effect of the Lamé parameters of the soil

Three sets of the Lamé parameters of the soil are considered; see Table 6. It shows that the critical velocity for instability of the moving object increases as the Lamé parameters of the soil increase. Thus, the stiffness of the soil has a stabilising effect on the vibration of the moving object, which is in line with the literature finding [26].

Table 6. Critical velocities for instability for different Lamé parameters of the soil.

The effect of the burial depth of the tunnel

It is interesting to compare the critical velocity for instability for the full-space and half-space cases. Table 7 shows that $V_{\mathrm{cr}}^{\mathrm{inst}}$ decreases as the burial depth of the tunnel decreases. This reduction is most probably because the Rayleigh wave, which is slower than the body waves in the soil, also starts to play a role, although its influence is not very large.

Dependency of the critical mass and stiffness on velocity

In this section, three cases (see Table 8) are considered to investigate the dependency of the critical mass and stiffness of the moving mass and moving oscillator, respectively, on the velocity in the range of $V = (1.00 - 1.20)V_{\mathrm{cr}}^{\mathrm{inst}}$. We chose three full-space cases to investigate this dependency. As observed in Figs. 11 and 12, there are many critical masses and stiffnesses for some velocities. For these velocities, we only consider the smallest critical mass $M^*_1$ and the largest critical stiffness $K^*_{\max}$, because we are interested in finding the regions where the vibration of the moving object is stable.

Fig. 13 (caption): parameters as in Table 8; three small velocity intervals are considered to clearly show the trend in each interval. The region below the line relates to stable vibrations of the single mass.

Fig. 14 (caption): parameters as in Table 8; three small velocity intervals are considered to clearly show the trend in each interval. The region above the line relates to stable vibrations of the oscillator.

The reason for distinguishing three sub-intervals is that the critical mass and stiffness can decrease and increase dramatically as the velocity increases; the three sub-intervals clearly show the trend in each interval. In Fig. 13, we observe that the critical mass in all three cases decreases as the velocity increases, which is in line with [11]. Fig. 13 also shows that the critical mass of the moving object (single-mass case) in case III is the largest and the one in case I is the smallest when $V = V_{\mathrm{cr}}^{\mathrm{inst}}$, and that the difference between the three critical masses is very large. However, the critical mass in case III becomes the smallest and the one in case I the largest when $V = 1.20V_{\mathrm{cr}}^{\mathrm{inst}}$, and the difference between the three critical masses becomes much smaller. Fig. 14 shows that the critical stiffness of the oscillator in the three cases increases as the velocity increases, which is again in line with the literature finding in [25]. Another similarity between our result and the literature is that the instability of the moving oscillator occurs when its stiffness is of the order of $10^6$ kg/s², which is approximately the same as the stiffness of the springs used for conventional trains. However, the critical stiffness increases dramatically up to the order of $10^8$ kg/s² for our model, while $K^*$ stays of the same order of magnitude as the velocity increases for the model considered in the literature [25].
Fig. 14 also shows that the critical stiffness in case III is the smallest and the one in case I is the largest when $V = V_{\mathrm{cr}}^{\mathrm{inst}}$, and that the difference between the three critical stiffnesses is very small. However, when $V = 1.20V_{\mathrm{cr}}^{\mathrm{inst}}$, the critical stiffness in case III becomes the largest and the one in case I the smallest, and the difference between the three critical stiffnesses becomes much larger. Finally, Figs. 13 and 14 show that the dependency of the critical mass and stiffness on the velocity is similar in the considered velocity range for the three cases; however, as is clear from the comparison of cases II and III, the Lamé parameters of the soil have a larger effect on the curves than the damping ratio of the tunnel. In Fig. 13 (14), the region below (above) the line relates to stable vibrations of the object. In the region above (below) the line, the object vibration can be either purely unstable, or alternately stable or unstable, which is the case when Im($K_{\mathrm{eq}}$) has many zero crossings (see Figs. 11 and 12). In case I, for velocities $V = (1.06 - 1.20)V_{\mathrm{cr}}^{\mathrm{inst}}$, the vibration of the moving mass in the region above the line shown in Fig. 13 is unstable, while the vibration of the oscillator is unstable in the sub-regions $K^*_1 < K < K^*_2$ and $K^*_3 < K < K^*_4$, but stable in the sub-regions $K^*_2 < K < K^*_3$ and $K < K^*_1$. In case II, for velocities $V = 1.00V_{\mathrm{cr}}^{\mathrm{inst}}$ and $V = 1.02V_{\mathrm{cr}}^{\mathrm{inst}}$, the number of critical masses and stiffnesses can be verified to be 2 and 9, respectively; for velocities $V = (1.04 - 1.20)V_{\mathrm{cr}}^{\mathrm{inst}}$, however, the number is 1. In case III, in the velocity range of $V = (1.00 - 1.08)V_{\mathrm{cr}}^{\mathrm{inst}}$, the number of critical masses and stiffnesses varies significantly and jumps from 1 to 11 and then back to 3; for higher velocities, the number is again 1. Clearly, the precise number of sub-regions highly depends on the stiffness of the soil, and on the damping ratios of the soil and shell.

Conclusions

In this paper, instability of the vibration of an object moving through a tunnel embedded in soft soil has been studied. We employed the concept of the equivalent dynamic stiffness, which reduces the original 2.5D model to an equivalent discrete model, whose parameters depend on the vibration frequency and the object's velocity. The frequency-domain indirect Boundary Element Method was used to obtain the equivalent stiffness of the tunnel-soil system at the point of contact with the moving object (i.e., the mass-spring system and the limit case of a single mass). Prior to that, the indirect BEM was validated for specific problems: the response of the system to a stationary harmonic point load and to a moving non-oscillatory load acting at the invert of a tunnel. Using the equivalent stiffness, the critical velocity beyond which the instability of the object may occur was found (it is the same for both the moving mass and the moving oscillator). The critical velocity for instability is the most important result of the instability analysis. We found that the critical velocity for instability turns out to be much larger than the operational velocity of high-speed trains and ultra-high-speed Hyperloop pods, which implies that the model adopted in this paper predicts the vibrations of these objects moving through a tunnel embedded in soft soil to be stable.
For the model of a track founded on top of the elastic half-space, considered for comparison, the critical velocity for instability in the presence of damping is just slightly larger than the critical velocity for resonance of the undamped system (which is equal to the minimum phase velocity of the system). However, for the current model, the critical velocity for instability is much larger than the critical velocity for resonance (of the damped system, strictly speaking, but the influence of the damping on the resonance velocity is small). For both models, the critical velocity for resonance is slightly smaller than the velocity of Rayleigh waves, and the fact that the critical velocity for instability is so much larger in the model with the embedded tunnel is due to the large stiffness of the tunnel and the radiation damping of the waves excited in the tunnel. Other parameters affect the instability as well. A parametric study shows that the thickness of the tunnel, the material damping ratio of the tunnel, the stiffness of the soil and the burial depth have a stabilising effect, while the damping of the soil may have a slightly destabilising effect. In order to investigate the instability of the moving object in case the velocity exceeds the identified critical velocity for instability, we employed the D-decomposition method and found the instability domains in the space of system parameters. For a deep tunnel, the dependency of the critical mass and stiffness on the velocity was investigated. We conclude that the higher the velocity, the smaller the mass of the object (single-mass case) should be to ensure its stability. Furthermore, the higher the velocity, the larger the stiffness of the spring should be when a spring is added (oscillator case). Our findings regarding the velocity dependency of the critical mass and stiffness are aligned with the conclusions obtained by Metrikine et al. [11,25] for other models. The fact that the critical velocity for instability for the current model is much higher than the operational velocity of contemporary and future vehicles is promising for the Maglev and Hyperloop transportation systems. Furthermore, the approach presented in this paper can be applied to more advanced models with more points of contact between the moving object and the tunnel, which would resemble reality more closely. Finally, as the dynamic stiffness is very important for the instability analysis of the tunnel-soil system, a refined model of the tunnel, which can potentially increase the accuracy of the response at its interior, can be considered in future work.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix C

In this appendix, our first aim is to find the critical velocity for resonance $V_{\mathrm{cr}}^{\mathrm{res}}$ of the current tunnel-soil system. To this end, we analyse the response of the tunnel-soil system induced by a uniformly moving non-oscillatory ($f_0 = \Omega/2\pi = 0$) point load, observed at the tunnel invert at $x = 0$, as derived in Section 2.3 (see Eq. (18)); the parameters are those of Table 3. Fig. C.1(a) shows the amplitude spectra of the radial displacements observed at the tunnel invert, at the observation point $x = 0$, for one sub-Rayleigh, one super-Rayleigh and one supersonic case (the others look similar). The Fourier-transformed displacement, $\tilde{U}_{r_1}(r_1, \theta_1, x, \omega)$, is defined as the integrand of Eq. (18) except for the term $\exp(\mathrm{i}\omega t)$.
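To connect the spectra of Fig. C.1(a) to the time histories of Fig. C.1(b), the remaining transform over omega can be evaluated directly. A minimal sketch (Python; `U_spec` is an assumed callable returning the integrand of Eq. (18) without the exp(i*omega*t) factor, and the grid and time window are illustrative) is:

```python
import numpy as np

def time_domain_response(U_spec, f_max=15.0, df=0.05):
    """Evaluate the remaining inverse transform over omega (Eq. (18)) to
    obtain the time-domain response at the fixed observation point x = 0.

    Uses simple trapezoidal summation; the paper itself uses an inverse
    FFT algorithm in Matlab for the same purpose.
    """
    omega = 2.0 * np.pi * np.arange(-f_max, f_max + df, df)
    U = np.array([U_spec(w) for w in omega])
    t = np.linspace(-2.0, 2.0, 801)            # window around the passage
    # u(t) = integral of U_spec(omega) * exp(i*omega*t) d(omega)
    u = np.trapz(U[None, :] * np.exp(1j * omega[None, :] * t[:, None]),
                 omega, axis=1)
    return t, u.real
```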
It is shown that the spectra are spread around $f = 0$. The time-domain responses are shown in Fig. C.1(b). Clearly, for $V = 30$ m/s and $V = 69$ m/s, the disturbance is localized around the moving load, and there is no significant wave radiation in these cases. For $V = 70$ m/s, a wave pattern emerges, which comes with a significant asymmetry of the profile. For $V = 75$ m/s and $V = 150$ m/s, a clearer wave pattern can be observed; in these two cases, Rayleigh waves, and Rayleigh, shear and compressional waves are generated, respectively. Furthermore, the response is extreme for $V = 70$ m/s, which indicates resonance. Therefore, we conclude that for the current tunnel-soil system, $V_{\mathrm{cr}}^{\mathrm{res}} \approx 70$ m/s. Clearly, a constant load moving faster than this critical speed will radiate waves. As explained in Section 2.2, the radial displacement at the loading point, excited by a uniformly moving oscillatory load (i.e., $f_0 \neq 0$), is a key element of the instability analysis (see Eqs. (16) and (17)). The second aim of this appendix is therefore to find the requirements that need to be met to get converged steady-state responses. Essentially, these requirements need to be defined based on $U_0(\Omega, V)$ in Eq. (16), as that quantity is used to obtain the equivalent stiffness (Eq. (17)). For illustration purposes, we consider the three cases of $V = 30$ m/s, $V = 75$ m/s and $V = 150$ m/s with a load frequency $f_0 = 5$ Hz, shown in Fig. C.2. In this figure, small frequency and time windows are shown in order to present the features of the spectra and time-domain responses clearly. One observes that the amplitude spectra of the displacements become wider compared with the case of $f_0 = 0$ (compare Figs. C.1(a) and C.2(a)) and are spread around $f = f_0$. In the time-domain responses, oscillatory patterns are observed even for the sub-critical case due to the oscillation of the load. In addition, the Doppler effect is observed in Fig. C.2(b) [10,40], which implies that waves are generated at frequencies different from that of the load; these frequencies are usually found from the intersections of the kinematic invariant (e.g., line 3 or 4 in Fig. 10) and the dispersion curves. Based on the amplitude spectra and time-domain responses for different velocities and loading frequencies, we found the requirements (in terms of $\Delta f$, $f_{\max}$ and ($N_s$, $N_r$)) to obtain converged results, and they are shown in Table C.1. Note that $N_{\mathrm{shell}}^{\max} = 20$ and $N_{\mathrm{load}}^{\max} = 20$ were sufficient for all the computed cases.
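The quantity whose convergence is controlled here is the truncated frequency integral for $U_0(\Omega, V)$ in Eq. (16). A direct-quadrature sketch of it is given below (Python); `U_aux` is an assumed interface to the BEM auxiliary response at the loading point, and the default truncation values mirror the settings reported for the moving-load validation case.

```python
import numpy as np

def U0(U_aux, Omega, V, P0=1.0, f_max=15.0, df=0.05):
    """Complex amplitude U_0(Omega, V) of Eq. (16) by direct quadrature.

    U_aux : callable U_aux(k_x, omega) -> complex radial displacement at the
            loading point (R_i, -pi/2) due to the unit auxiliary stress,
            i.e. the indirect-BEM result of Section 3 (assumed available).
    The Dirac delta has already fixed k_x = (omega - Omega)/V, so only the
    omega integral remains; it is truncated at +/- 2*pi*f_max and sampled
    with step 2*pi*df, which is exactly what Table C.1 prescribes per case.
    """
    omega = 2.0 * np.pi * np.arange(-f_max, f_max + df, df)
    kx = (omega - Omega) / V
    integrand = np.array([U_aux(k, w) for k, w in zip(kx, omega)])
    return P0 / (2.0 * np.pi * V) * np.trapz(integrand, omega)

# Equivalent stiffness, Eqs. (1)/(17): K_eq = P0 / U0(Omega, V).
```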
The Synthesis and Initial Evaluation of MerTK Targeted PET Agents

MerTK (Mer tyrosine kinase), a receptor tyrosine kinase, is ectopically or aberrantly expressed in numerous human hematologic and solid malignancies. Although a variety of MerTK-targeting therapies are being developed to enhance outcomes for patients with various cancers, the sensitivity of tumors to MerTK suppression may not be uniform due to the heterogeneity of solid tumors and different tumor stages. In this report, we develop a series of radiolabeled agents as potential MerTK PET (positron emission tomography) agents. In our initial in vivo evaluation, [18F]-MerTK-6 showed a prominent uptake (4.79 ± 0.24 %ID/g) in B16F10 tumor-bearing mice. The tumor-to-muscle ratio reached 1.86 and 3.09 at 0.5 and 2 h post-injection, respectively. In summary, [18F]-MerTK-6 is a promising PET agent for MerTK imaging and is worth further evaluation in future studies.

Introduction

MerTK, a receptor tyrosine kinase of the TAM (TYRO3, AXL, and MERTK) family, is over-expressed or ectopically expressed in a wide variety of cancers [1,2], including acute lymphoblastic leukemia (ALL) [3], non-small cell lung cancer (NSCLC) [4], melanoma [5], prostate cancer [6], glioblastoma [7], etc. In fact, MerTK mediates the activation of several canonical oncogenic signaling pathways in cancer cells [8,9]. In addition, due to the important physiological role of MerTK in the innate immune system, MerTK inhibitors may potentially reduce tumor growth by changing the immunosuppressive environment and stimulating antitumor immunity [10,11]. Indeed, based on the important functions of MerTK, many MerTK-targeted therapies are in development to enhance outcomes for patients with a variety of types of cancers, and a few are in clinical trials [12]. Despite the enthusiasm, tumor sensitivity to MerTK suppression may not be uniform due to the heterogeneity of solid tumors and different disease stages (for example, primary versus metastatic disease) [13,14]. Clearly, there is an urgent need to better predict which cancer patients are likely to respond to such novel interventions, as well as to monitor the therapeutic responses. Although drug metabolism studies based on mass spectrometry can provide information on the biodistribution and metabolism of small pharmaceutical molecules in vivo [15], PET is a non-invasive imaging technology that can quantitatively evaluate biological targets or biochemical processes in vivo [16-19]. Nevertheless, research on MerTK-targeted PET agents is very limited [20]. Therefore, the aim of this research is to develop radiolabeled agents that will allow us to directly measure MerTK expression and distribution during different disease stages, non-invasively and repetitively. We have been committed to the development of novel therapeutics against MerTK for an extended period and have developed several small-molecule MerTK inhibitors with great potency and different selectivity profiles [21-25]. UNC5293 is a new MerTK-specific inhibitor developed recently at UNC, which is extremely potent against MerTK (Ki = 0.19 nM) and very selective against the kinome (Ambit selectivity score S50 = 0.041 at 100 nM) [25]. Since target specificity is one of the key requirements of PET agents, the discovery of UNC5293 provides us with a solid foundation for developing MerTK PET ligands. In this research, we developed a series of potential MerTK PET agents based on the core of UNC5293 (UNC6429/UNC5650) and evaluated their use in B16F10 tumor-bearing mice.

Chemistry

As shown in Scheme 1, UNC6429 and UNC5650 were synthesized using a three-step sequence. Generally, the starting material 1 was heated with an appropriate primary amine (commercially available and enantiomerically pure) in a sealed tube under basic conditions for 3 days to complete the SNAr replacement reaction. After purification, the resulting intermediate 2 underwent a Suzuki coupling reaction, followed by deprotection of the Boc group with hydrogen chloride, to afford intermediate 3. Finally, UNC6429 and UNC5650 were prepared by hydrogenation of the double bond using palladium on carbon, with overall yields of 33% and 62%, respectively.
In this research, we developed a series of potential MerTK PET agents based on the core of UNC5293 (UNC6429/UNC5650) and evaluated their use in B16F10 tumor-bearing mice.

Chemistry

As shown in Scheme 1, UNC6429 and UNC5650 were synthesized using a three-step sequence. Generally, the starting material 1 was heated with an appropriate primary amine (commercially available and enantiomerically pure) in a sealed tube under basic conditions for 3 days to complete the SNAr replacement reaction. After purification, the resulting intermediate 2 underwent a Suzuki coupling reaction, followed by deprotection of the Boc group with hydrogen chloride to afford intermediate 3. Finally, UNC6429 and UNC5650 were prepared by hydrogenation of the double bond using palladium on carbon, with overall yields of 33% and 62%, respectively.

The inhibitory activities of the standard compounds towards MerTK, Axl, Tyro3 and Flt3 were determined in our in-house microcapillary electrophoresis (MCE) assays [25]. As presented in Table 1, the primary target of all of these compounds is MerTK (values in Table 1 are the mean of two or more independent assays).
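To put the Table 1 selectivity numbers in perspective, the fold-selectivity of each tracer candidate over the other TAM-family kinases (and Flt3) follows directly from the IC50 values; the short sketch below computes it for MerTK-5 and MerTK-6 using the values recovered from Table 1 (the code itself is only an illustrative calculation, not part of the original work).

```python
# Fold-selectivity = IC50(off-target) / IC50(MerTK); larger means more selective.
# IC50 values (nM) as quoted in Table 1 for MerTK-5 and MerTK-6.
ic50 = {
    "MerTK-5": {"MerTK": 15, "Axl": 1100, "Tyro3": 190, "Flt3": 1000},
    "MerTK-6": {"MerTK": 37, "Axl": 2100, "Tyro3": 120, "Flt3": 5500},
}

for compound, values in ic50.items():
    on_target = values["MerTK"]
    folds = {k: v / on_target for k, v in values.items() if k != "MerTK"}
    summary = ", ".join(f"{k}: {fold:.0f}x" for k, fold in folds.items())
    print(f"{compound} (MerTK IC50 = {on_target} nM) -> {summary}")
```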
Radiochemistry

With the precursors and standards in hand, we explored their radiolabeling with readily available positron-emitting nuclides: carbon-11, gallium-68, and fluorine-18. The 11C-labeled MerTK-1 and MerTK-2 were obtained in lower yields because of the difficulty of the HPLC purification (the precursor and the product had close retention times). The short half-life of 11C (t1/2 = 20.4 min) added further challenges: only one HPLC purification could be performed for each reaction. The IC50 values of MerTK-1 and MerTK-2 against MerTK were determined to be 4.2 nM and 61 nM, respectively (Table 1), and good selectivity over Axl, Tyro3 and Flt3 was observed. 68Ga (half-life of 67.6 min and positron energy up to 1.89 MeV) labeled MerTK-3 and MerTK-4 efficiently; however, an initial pilot study in mice did not provide promising results (<1 %ID/g tumor uptake was observed). We therefore did not measure their binding affinities and focused on developing fluorine-18 labeled PET agents for MerTK imaging, given the relatively long half-life of 18F (109.8 min) and the high PET imaging resolution it affords (positron energy up to 0.64 MeV). As shown in Scheme 2, the fluorine-18 labeling of UNC5650 and UNC6429 was carried out using a two-step sequence. The labeled products were purified by radio-HPLC, followed by reformulation, and the identity of each final product was confirmed by co-injection with the corresponding standard compound on HPLC. The IC50 values of MerTK-5 and MerTK-6 against MerTK were determined to be 15 nM and 37 nM, respectively, with good selectivity over Axl (1100 and 2100 nM), Tyro3 (190 and 120 nM) and Flt3 (1000 and 5500 nM) (Table 1). Although MerTK-5 had a higher binding affinity towards MerTK, the initial PET study suggested that MerTK-6 had more prominent tumor uptake and contrast. Therefore, we focused on MerTK-6 in the initial evaluation. The HPLC spectra in Figure 1 illustrate the purification and quality control of [18F]-MerTK-6.
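The practical consequence of the three half-lives quoted above is easy to quantify: the fraction of activity surviving a synthesis-plus-purification workflow follows directly from the decay law. The sketch below is an illustrative calculation only (the 30 min workflow time is an assumed figure, not one reported here); it shows why a single HPLC purification already costs a large fraction of a 11C batch, while 18F is far more forgiving.

```python
import math

HALF_LIFE_MIN = {"C-11": 20.4, "Ga-68": 67.6, "F-18": 109.8}  # from the text

def surviving_fraction(nuclide, elapsed_min):
    """Fraction of radioactivity remaining after elapsed_min minutes."""
    return 0.5 ** (elapsed_min / HALF_LIFE_MIN[nuclide])

elapsed = 30.0  # assumed duration of one HPLC purification step (illustrative)
for nuclide in HALF_LIFE_MIN:
    frac = surviving_fraction(nuclide, elapsed)
    print(f"{nuclide}: {frac:.1%} of the activity left after {elapsed:.0f} min")
```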
Evaluation of the LogP

In order to evaluate the hydrophilicity and lipophilicity of the fluorine-18 labeled agent [18F]-MerTK-6, we measured its 1-octanol/water partition coefficient (LogP). The resulting fractions were counted using a gamma counter, and the measurement was repeated three times. The LogP value of [18F]-MerTK-6 (1.56 ± 0.02) showed that it is moderately lipophilic, indicating good cell membrane permeability and tumor cell uptake potential.
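The shake-flask LogP measurement described above (and detailed in the methods section below) reduces to simple arithmetic once the gamma counts of the two phases are in hand. The following sketch shows that arithmetic with hypothetical count values; the counts are placeholders, and only the 1.56 ± 0.02 result is from this work.

```python
import math
import statistics

# Hypothetical triplicate gamma counts for equal-volume aliquots of each phase.
octanol_counts = [36500, 36900, 36200]
aqueous_counts = [1010, 995, 1025]

logp_values = [
    math.log10(oct_c / aq_c)
    for oct_c, aq_c in zip(octanol_counts, aqueous_counts)
]
mean, sd = statistics.mean(logp_values), statistics.stdev(logp_values)
print(f"LogP = {mean:.2f} +/- {sd:.2f}")  # compare: 1.56 +/- 0.02 reported
```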
Chemistry

Microwave reactions were carried out using a CEM Discover-S reactor with a vertically focused IR external temperature sensor and an Explorer 72 autosampler. The dynamic mode was used to set up the desired temperature and hold time with the following fixed parameters: PreStirring, 1 min; Pressure, 200 psi; Power, 200 W; PowerMax, off; Stirring, high. Flash chromatography was carried out on a Teledyne ISCO CombiFlash® Rf 200 with pre-packed silica gel disposable columns. Preparative HPLC (Agilent Technologies 1260 Infinity, Santa Clara, CA, USA) was performed with UV detection at 220 or 254 nm. Samples were injected onto a 75 × 30 mm, 5 µm, C18(2) column at room temperature. The flow rate was 30 mL/min. Various linear gradients were used with solvent A (0.1% TFA in water) and solvent B (0.1% TFA in acetonitrile). Analytical HPLC was performed with a Prominence diode array detector (Shimadzu SPD-M20A, Kyoto, Japan). Samples were injected onto a 3.6 µm PEPTIDE XB-C18 100 Å, 150 × 4.6 mm LC column at room temperature. The flow rate was 1.0 mL/min. Analytical thin-layer chromatography (TLC) was performed with silica gel 60 F254, 0.25 mm pre-coated TLC plates. The TLC plates were visualized using UV254 and phosphomolybdic acid with charring. All 1H NMR spectra were obtained with a 400 MHz spectrometer (Agilent VnmrJ, Santa Clara, CA, USA) using CDCl3 (7.26 ppm) or CD3OD (2.05 ppm) as an internal reference. Signals are reported as m (multiplet), s (singlet), d (doublet), t (triplet), q (quartet), p (pentet), and bs (broad singlet); coupling constants are reported in hertz (Hz). The 13C NMR spectra were obtained with a 100 MHz spectrometer (Agilent VnmrJ, Santa Clara, CA, USA) using CDCl3 (77.2 ppm) or CD3OD (49.0 ppm) as the internal standard. Representative NMR spectra are provided in the Supplementary Material. LC/MS (Agilent Technologies 1260 Infinity II, Santa Clara, CA, USA) was performed using an analytical instrument with the UV detector set to 220 nm, 254 nm, and 280 nm, and a single quadrupole mass spectrometer using an electrospray ionization (ESI) source. Samples were injected (2 µL) onto a 4.6 × 50 mm, 1.8 µm, C18 column at room temperature. A linear gradient from 10% to 100% B (0.1% acetic acid in MeOH) in 5.0 min was followed by pumping 100% B for another 2 or 4 min, with A being H2O + 0.1% acetic acid. The flow rate was 1.0 mL/min. The purity of all final compounds (>95%) was determined by LC-MS.

Synthesis of UNC5650

General procedure A [25]. A mixture of 1 (3.30 g, 10.0 mmol), (S)-pentan-2-amine (3.48 g, 40.0 mmol), potassium carbonate (5.52 g, 40.0 mmol), and N,N-diisopropylethylamine (7.0 mL, 40.0 mmol) in iPrOH (80 mL) was heated at 120 °C for 3 d. The reaction mixture was extracted between EtOAc (3 × 80 mL) and H2O (80 mL). The combined organic layers were washed with brine (50 mL), dried (Na2SO4), filtered, and concentrated under reduced pressure. The residue was purified by an ISCO silica gel column to afford the desired product 2 as a pale-yellow solid (2.82 g, 74%). A suspension of 3a (383 mg, 1.0 mmol) and palladium on carbon (10% Pd, 380 mg) in MeOH (20 mL) was stirred at rt under a hydrogen atmosphere overnight. The resulting mixture was filtered through a pad of Celite and the solvent was removed under reduced pressure. The residue was purified by an ISCO silica gel column to afford the desired product 4 (UNC5650) as a yellow solid (240 mg, 62%).

Synthesis of UNC6429

The title compound UNC6429 was synthesized according to the general procedure A as a yellow solid (240 mg, 0.523 mmol).

General procedure C. The synthesis of MerTK-5 was modified from a literature method [25].
To a solution of UNC5650 (10.0 mg, 21.8 µmol) and 2-fluoroethyl 4-toluenesulfonate (3.7 µL, 22 µmol) in acetonitrile (2.2 mL) were added sodium iodide (1.6 mg, 11 µmol) and sodium carbonate (10.4 mg, 98.2 µmol). The reaction mixture was heated at 65 °C for 18 h and concentrated in vacuo. The residue was purified by normal phase chromatography (dichloromethane/methanol gradient) to afford the desired compound MerTK-5 as a pale-yellow oil, which was freeze-dried to give an orange solid (4.0 mg, 9.3 µmol) in 43% yield.

The MCE assays were run with the corresponding substrate peptides (Table 2) and ATP at the Km for each enzyme (Table 2). All reactions were terminated by the addition of 20 µL of 70 mM EDTA. After a 180 min incubation, phosphorylated and unphosphorylated substrate peptides (Table 2) were separated in buffer supplemented with 1× CR-8 on a LabChip EZ Reader equipped with a 12-sipper chip. Data were analyzed using the EZ Reader software.

Radiochemistry

General procedure D.

Evaluation of LogP

The LogP value of [18F]-MerTK-6 was calculated from the gamma counts of samples in the aqueous phase and the 1-octanol phase, measured on an Automatic Gamma Counter 2480-0010 (PerkinElmer Instruments Inc., Waltham, MA, USA). The [18F]-MerTK-6 was collected after HPLC purification. After reformulation (pH around 7.4), a 20 µL sample of [18F]-MerTK-6 in saline was added to a mixture of 1 mL Milli-Q® water and 1 mL 1-octanol in a 5 mL Eppendorf tube. The tube was shaken thoroughly and then left to stand for 5 min. Then 100 µL of the 1-octanol phase and 100 µL of the aqueous phase were counted separately in the gamma counter and the counts were recorded (n = 3). The LogP value was then calculated and expressed as a mean value ± standard deviation.

Mouse Model

All animal studies were reviewed and approved by The University of North Carolina at Chapel Hill Institutional Animal Care and Use Committee. The B16F10 tumor cell line was obtained from the LCCC tissue culture facility (the University of North Carolina at Chapel Hill, Chapel Hill, NC, USA). The B16F10 tumor-bearing mouse model was prepared as described previously [28]. Briefly, B16F10 cells were subcutaneously injected in the right flank of C57BL/6 female mice (Jackson Laboratory). The tumor volume was measured daily. When the tumor size reached 100 mm3, the mice were used for PET imaging studies.

PET Imaging

B16F10 tumor-bearing mice (n = 3/group) were intravenously injected via the tail vein with the tracers. At 30 min and 120 min post-injection, a 10-min static emission scan was acquired with a SuperArgus small-animal PET/CT scanner. Regions of interest (ROIs) were drawn over the tumor and other organs and the uptake was calculated as %ID/g (a sketch of this arithmetic is given at the end of this report). The mean uptake and standard deviation were calculated.

Conclusions

In this study, we synthesized several MerTK targeted PET agents based on the core structure of the MerTK-specific inhibitor UNC5293. Of them, [18F]-MerTK-6 showed a significant uptake (4.79 ± 0.24 %ID/g) in B16F10 tumor-bearing mice. At 0.5 and 2 h after injection, the tumor-to-muscle ratio reached 1.86 and 3.09, respectively. In summary, [18F]-MerTK-6 is a promising PET agent for MerTK imaging and worthy of further evaluation in future studies. A few MerTK inhibitors have recently entered clinical trials, such as MRX-2843 [3], INCB081776 [29], and RXDX-106 [30]. A MerTK-targeted PET imaging tracer would potentially help evaluate target engagement and adjust the treatment plan for individual patients.

Supplementary Materials: The following are available online.
Figures S1-S7: 1H NMR spectra for the standard compounds.
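For completeness, the %ID/g and tumor-to-muscle numbers quoted in the abstract and conclusions follow from the ROI analysis described in the PET imaging section. Below is a minimal sketch of that arithmetic; the ROI activities, masses and injected dose are hypothetical placeholders chosen only to illustrate the bookkeeping, not measured values from this study.

```python
def percent_id_per_g(roi_activity_mbq, roi_mass_g, injected_dose_mbq):
    """ROI activity expressed as percent of injected dose per gram of tissue."""
    return 100.0 * roi_activity_mbq / (injected_dose_mbq * roi_mass_g)

injected = 5.0                                      # MBq, hypothetical dose
tumor = percent_id_per_g(0.024, 0.10, injected)     # hypothetical ROI values
muscle = percent_id_per_g(0.0078, 0.10, injected)
print(f"tumor  = {tumor:.2f} %ID/g")
print(f"muscle = {muscle:.2f} %ID/g")
print(f"tumor-to-muscle ratio = {tumor / muscle:.2f}")
```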
v3-fos-license
2016-07-23T11:32:38.000Z
2016-02-02T00:00:00.000
54883704
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP06(2016)057.pdf", "pdf_hash": "d43f735242a0284f6ed281430bf9fe52acdad59d", "pdf_src": "ArXiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42749", "s2fieldsofstudy": [ "Physics" ], "sha1": "d43f735242a0284f6ed281430bf9fe52acdad59d", "year": 2016 }
pes2o/s2orc
Inclusive jet spectrum for small-radius jets

Following on our earlier work on leading-logarithmic (LL_R) resummations for the properties of jets with a small radius, R, we here examine the phenomenological considerations for the inclusive jet spectrum. We discuss how to match the NLO predictions with small-R resummation. As part of the study we propose a new, physically-inspired prescription for fixed-order predictions and their uncertainties. We investigate the R-dependent part of the next-to-next-to-leading order (NNLO) corrections, which is found to be substantial, and comment on the implications for scale choices in inclusive jet calculations. We also examine hadronisation corrections, identifying potential limitations of earlier analytical work with regards to their p_t-dependence. Finally we assemble these different elements in order to compare matched (N)NLO+LL_R predictions to data from ALICE and ATLAS, finding improved consistency for the R-dependence of the results relative to NLO predictions.

1 Introduction

Jets are used in a broad range of physics analyses at colliders and notably at CERN's Large Hadron Collider (LHC). The study of jets requires the use of a jet definition and for many of the algorithms in use today that jet definition involves a "radius" parameter, R, which determines how far in angle a jet clusters radiation. A limit of particular interest is that where R is taken small. One reason is that contamination from multiple simultaneous pp interactions ("pileup") is minimised, as is the contribution from the large underlying event when studying heavy-ion collisions. Furthermore, small-R subjets are often a component of jet substructure analyses used to reconstruct hadronic decays of highly boosted weak bosons and top quarks. Finally, a powerful systematic cross-check in precision jet studies comes from the variation of R, e.g. to verify that conclusions about data-theory agreement remain stable. Such a variation will often bring one into the small-R limit.

In the small-R limit the perturbative series involves terms α_s^n ln^n(1/R^2), where the smallness of α_s^n is partially compensated by the large logarithms ln^n(1/R^2). The first-order α_s ln(1/R^2) terms (together with the constant) were first calculated long ago [1-3] and have been examined also more recently [4-6]. About a year ago, the whole tower of leading-logarithmic (LL_R) terms was determined [7], i.e. α_s^n ln^n(1/R^2) for all n, for a range of observables (for related work, see also Refs. [8,9]). Work is also ongoing towards next-to-leading logarithmic accuracy, NLL_R [10,11], however the concrete results do not yet apply to hadron-collider jet algorithms.

From the point of view of phenomenological studies, there has so far been one investigation of the impact of small-R resummation, in the context of jet vetoes in Higgs production [12]. Though small-R contributions have a significant impact on the results, most of their effect (at the phenomenologically relevant R value of 0.4) is already accounted for in fixed-order and p_t-resummed calculations. Accordingly the resummation brings only small additional changes.
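Written out, the tower of terms being discussed is, schematically (a sketch only; the coefficients c_n are observable-dependent and are not specified here),

```latex
\sigma(p_t, R) \;\simeq\; \sigma_0(p_t)\,\Big[\, 1 \;+\; c_1\, \alpha_s \ln\tfrac{1}{R^2}
  \;+\; c_2\, \alpha_s^2 \ln^2\tfrac{1}{R^2} \;+\; \dots \Big],
```

with the LL_R resummation of Ref. [7] accounting for all terms of the form (α_s ln 1/R²)^n.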
In this article, we examine the phenomenological impact of small-R terms for the archetypal hadron-collider jet observable, namely the inclusive jet spectrum. This observable probes the highest scales that are accessible at colliders and is used for constraining parton distribution functions (PDFs), determining the strong coupling and also in studies of hard probes in heavy-ion collisions. Two factors can contribute to enhance small-R effects in the inclusive jet spectrum: firstly, it falls steeply as a function of p_t, which magnifies the impact of any physical effect that modifies the jet's energy. Secondly, a wide range of R values has been explored for this observable, going down as far as R = 0.2 in measurements in proton-proton collisions by the ALICE collaboration [13]. That wide range of R has been exploited also in studies of ratios of cross sections at different R values [13-17]. For R = 0.2, LL_R small-R effects can be responsible for up to 40% modifications of the jet spectrum.

A first part of our study (section 2) will be to establish the region of R where the small-R approximation is valid and to examine the potential impact of effects beyond the LL_R approximation. This will point to the need to include at the least the subleading R-enhanced terms that arise at next-to-next-to-leading order (NNLO) and motivate us to devise schemes to match LL_R resummation with both NLO and NNLO calculations (sections 3 and 4 respectively). At NLO we will see indications of spuriously small scale dependence and discuss how to resolve the issue. Concerning NNLO, since the full calculation is work in progress [18], to move forwards we will introduce an approximation that we refer to as "NNLO_R". It contains the full R-dependence of the NNLO prediction but misses an R-independent, though p_t-dependent, constant term. By default we will take it to be zero at some reference radius R_m, but we will also examine the impact of other choices.

In addition to the perturbative contributions at small R, one must take into account non-perturbative effects, which are enhanced roughly as 1/R at small R and grow large especially at smaller values of p_t. Two approaches exist for incorporating them, one based on analytic calculations [19], the other based on the differences between parton and hadron-level results in Monte Carlo event generators such as Pythia [20,21] and Herwig [22-25]. Based on our studies in section 5, we adopt the Monte Carlo approach for our comparisons to data. These are the subject of section 6, where we examine data from the ALICE collaboration [13] at R = 0.2 and 0.4, and from the ATLAS collaboration [26] at R = 0.4 and 0.6.

A broad range of dynamically-generated plots comparing different theory predictions across a range of rapidities, transverse momenta and R values can be viewed online [27]. Furthermore, some of the plots included in the arXiv source for this paper contain additional information in the form of extra pages not displayed in the manuscript.
2 Small-R resummation for the inclusive jet spectrum

2.1 Recall of the small-R resummation formalism at LL_R accuracy

As for the inclusive hadron spectrum [28], the small-R inclusive "microjet" spectrum can be obtained [7] from the convolution of the leading-order inclusive spectrum of partons of flavour k and transverse momentum p_t, dσ^(k)/dp_t, with the inclusive microjet fragmentation function f^incl_jet/k(z, t), schematically

σ^{LL_R}(p_t, R) = Σ_k f^incl_jet/k(z, t) ⊗ dσ^(k)/dp_t .   (2.1)

To keep the notation compact, we use σ^{LL_R}(p_t, R) to denote either a differential cross section, or the cross section in a given p_t bin, depending on the context. At LL_R accuracy, the small-R effects are entirely contained in the fragmentation function, which depends on R through the evolution variable t, defined as

t(R, R_0, p_t) = ∫_{R^2}^{R_0^2} (dθ^2/θ^2) α_s(p_t θ)/(2π) .   (2.2)

Here, R_0 is the angular scale, of order 1, at which the small-radius approximation starts to become valid. For R = R_0, or equivalently t = 0, the fragmentation function has the initial condition f^incl_jet/k(z, 0) = δ(1 − z). It can be determined for other t values by solving a DGLAP-like evolution equation in t [7]. These results are identical for any standard hadron-collider jet algorithm of the generalised-k_t [29-33] and SISCone [34] families, with differences between them appearing only at subleading logarithmic order. In this work we will restrict our attention to the anti-k_t algorithm (as implemented in FastJet v3.1.3 [35]). The LL_R resummation is implemented with the aid of HOPPET [36]. The phenomenological relevance of the small-R terms is illustrated in Fig. 1, which shows the ratio of the jet spectrum with small-R resummation effects to the LO jet spectrum. For this and a number of later plots, the p_t and rapidity ranges and the collider energy choice correspond to those of ATLAS measurements [26], to which we will later compare our results. We show the impact of resummation for a range of R values from 0.1 to 1.0 and two R_0 choices. The smallest R values typically in use experimentally are in the range 0.2-0.4 and one sees that the fragmentation of partons into jets brings up to a 40% reduction in the cross section for the smaller of these radii. The fact that the small-R effects are substantial is one of the motivations for our work here.

From the point of view of phenomenological applications, the question that perhaps matters more is the impact of corrections beyond NLO (or forthcoming NNLO), since fixed-order results are what are most commonly used in comparisons to data. This will be most easily quantifiable when we discuss matched results in sections 3 and 4. Note that there was already some level of discussion of effects beyond fixed order in Ref. [7], in terms of an expansion in powers of t. However comparisons to standard fixed order refer to an expansion in α_s, which is what we will be using throughout this article. A brief discussion of the different features of t and α_s expansions is given in Appendix A.

2.2 Range of validity of the small-R approximation and effects beyond LL_R

In order to carry out a reliable phenomenological study of small-R effects it is useful to ask two questions about the validity of our LL_R small-R approach. Firstly, we wish to know from what radii the underlying small-angle approximation starts to work. Secondly, we want to determine the potential size of small-R terms beyond LL_R accuracy.
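To give a feel for the magnitudes involved, the evolution variable t of Eq. (2.2) is easily evaluated numerically; the sketch below does so with a one-loop running coupling (the values Λ = 0.2 GeV and n_f = 5 are illustrative assumptions of this sketch, not parameters quoted in the text).

```python
import math

N_F = 5
B0 = (33 - 2 * N_F) / (12 * math.pi)   # one-loop beta-function coefficient
LAMBDA = 0.2                            # GeV, illustrative one-loop Lambda_QCD

def alpha_s(mu):
    """One-loop running coupling at scale mu (GeV)."""
    return 1.0 / (B0 * math.log(mu**2 / LAMBDA**2))

def t_evol(pt, R, R0=1.0):
    """t(R, R0) = int_{R^2}^{R0^2} dtheta^2/theta^2 alpha_s(pt*theta)/(2*pi),
    which at one loop integrates to ln[alpha_s(pt*R)/alpha_s(pt*R0)]/(2*pi*b0)."""
    return math.log(alpha_s(pt * R) / alpha_s(pt * R0)) / (2 * math.pi * B0)

for R in (0.1, 0.2, 0.4, 0.6, 1.0):
    print(f"R = {R:3.1f}:  t = {t_evol(100.0, R):.3f}")   # pt = 100 GeV
```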
To investigate the first question we take the full next-to-leading-order (NLO) calculation for the inclusive jet spectrum from the NLOJet++ program [37], and look at the quantity

∆_1(p_t, R, R_ref) = [σ_1(p_t, R) − σ_1(p_t, R_ref)] / σ_0(p_t) .   (2.3)

Here σ_i(p_t) corresponds to the order α_s^{2+i} contribution to the inclusive jet cross section in a given bin of p_t. This can be compared to a similar ratio, ∆^{LL_R}_1(p_t, R, R_ref), obtained from the NLO expansion of Eq. (2.1) instead of the exact NLO result. The quantity R_ref here is some small reference radius at which one expects the small-R approximation to be valid; we choose R_ref = 0.1. Fig. 2 (left) shows the comparison of ∆_1 (filled squares) and ∆^{LL_R}_1 (crosses) as a function of p_t for several different R values. One sees very good agreement between ∆_1 and ∆^{LL_R}_1 for the smaller R values, while the agreement starts to break down for R in the vicinity of 1-1.5. This provides grounds for using the small-R approximation for R values ≲ 0.6 and motivates a choice of R_0 in the range 1-1.5. We will take R_0 = 1 as our default, and use R_0 = 1.5 as a probe of resummation uncertainties.

Next let us examine the effects of subleading small-R logarithms, terms that come with a factor α_s^n ln^{n−1} R relative to the Born cross section. While there has been some work investigating such classes of terms in Refs. [10,11], those results do not apply to hadron-collider jet algorithms. Instead, here we examine the R dependence in the NNLO part of the inclusive jet cross section to evaluate the size of these terms. Because the R dependence starts only at order α_s^3, we can use the NLO 3-jet component of the NLOJet++ program to determine these terms. More explicitly, we use the fact that the difference

σ_2(p_t, R) − σ_2(p_t, R_ref)   (2.4)

can be obtained from an NLO 3-jet calculation. To determine this difference in practice, for each event in the NLOJet++ 3-jet run we apply the following procedure: we cluster the event with radius R and for each resulting jet add the event weight to the jet's corresponding p_t bin; we then recluster the particles with radius R_ref and for each jet subtract the event weight from the corresponding p_t bin (a schematic code rendering of this procedure is given below). For this procedure to give a correct answer, it is crucial not to have any 3-jet phase-space cut in the NLO 3-jet calculation (i.e. there is no explicit requirement of a 3rd jet).

(Figure 2 caption, fragment: … Eq. (2.5).) In both plots CT10 NLO PDFs [38] are used, while the renormalisation and factorisation scales are set equal to the p_t of the highest-p_t R = 1 jet in the event (this same scale is used for all R choices in the final jet finding).

Hence, we can then examine

∆_{1+2}(p_t, R, R_ref) = [σ_1(p_t, R) − σ_1(p_t, R_ref) + σ_2(p_t, R) − σ_2(p_t, R_ref)] / σ_0(p_t)   (2.5)

and its corresponding LL_R approximation, ∆^{LL_R}_{1+2}(p_t, R, R_ref). The reason for including both NLO and NNLO terms is to facilitate comparison of the size of the results with that of the pure NLO piece. The results for ∆_{1+2} (filled squares) and ∆^{LL_R}_{1+2} (crosses) are shown in Fig. 2 (right). The difference between the crosses in the left-hand and right-hand plots is indicative of the size of the NNLO LL_R contribution. At small R, the difference between the crosses and solid squares in the right-hand plot gives the size of the NLL_R contribution at NNLO. It is clear that this is a substantial contribution, of the same order of magnitude as the LL_R contribution itself, but with the opposite sign. Ideally one would therefore carry out a full NLL_R calculation. That, however, is beyond the scope of this article.
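In pseudocode, the weight-difference procedure described above is compact. The sketch below is a schematic rendering under stated assumptions: the `cluster` callable is a placeholder for an actual anti-k_t clustering (e.g. via FastJet) returning jet transverse momenta, and the binning is illustrative.

```python
import collections

def sigma2_difference(events, cluster, R, R_ref=0.1, bin_width=50.0):
    """Histogram of sigma_2(R) - sigma_2(R_ref), per pt bin.

    `events` yields (particles, weight) pairs from the NLO 3-jet run;
    `cluster(particles, R)` is a placeholder for anti-kt clustering
    returning the list of jet pts.
    """
    hist = collections.defaultdict(float)  # pt-bin index -> summed weight
    for particles, weight in events:
        for pt in cluster(particles, R):        # add at radius R
            hist[int(pt // bin_width)] += weight
        for pt in cluster(particles, R_ref):    # subtract at radius R_ref
            hist[int(pt // bin_width)] -= weight
    return hist
```

Configurations whose jets are identical at the two radii contribute with equal and opposite weights and cancel bin by bin, so only the genuinely R-dependent part survives; this is also why no explicit third-jet cut may be applied.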
… is equal to the ratio of ∫_{p_{t,min}}^{p_{t,max}} dp_t ∆_{1+2}(p_t, R, R_ref), as obtained from a bin-wise unweighted combination (with removal of a few percent of outlying runs in each bin) and a bin-wise weighted combination (an alternative approach to a similar issue was recently discussed in Ref. [39]). We believe that the systematics associated with this procedure are at the level of a couple of percent.

Instead we will include a subset of the subleading ln R terms by combining the LL_R resummation with the exact R dependence up to NNLO fixed order, i.e. the terms explicitly included in the solid squares in Fig. 2.

3 Matching NLO and LL_R

For phenomenological predictions, it is necessary to combine the LL_R resummation with results from fixed-order calculations. In this section we will first examine how to combine LL_R and NLO results, and then proceed with a discussion of NNLO corrections.

3.1 Matching prescriptions

One potential approach for combining LL_R and NLO results would be to use an additive type matching,

σ^{NLO+LL_R} = σ^{LL_R}(R) + σ_1(R) − σ^{LL_R}_1(R) ,   (3.1)

where σ_1(R) denotes the pure NLO contribution to the inclusive jet spectrum (without the LO part, as in section 2.2) and σ^{LL_R}_1(R) refers to the pure NLO contribution within the LL_R resummation. For compactness, the p_t argument in the cross sections has been left implicit.

A simple, physical condition that the matching must satisfy is that in the limit R → 0, the ratio of the matched result to the LO result should tend to zero perturbatively,

lim_{R→0} σ^{matched}(R) / σ_0 = 0 .   (3.2)

Eq. (3.1) does not satisfy this property: while σ^{LL_R}/σ_0 does tend to zero, the quantity (σ_1 − σ^{LL_R}_1)/σ_0 instead tends to a constant for small R. We will therefore not use additive matching.

Another class of matching procedure is multiplicative matching. One simple version of multiplicative matching is given by

σ^{NLO+LL_R} = σ^{LL_R}(R) × [σ_0 + σ_1(R) − σ^{LL_R}_1(R)] / σ_0 .   (3.3)

Because σ^{LL_R}_1(R) contains the same logs as those in σ_1(R), the right-hand bracket tends to a constant for small R and all the R dependence comes from the σ^{LL_R}(R) factor. Since σ^{LL_R}(R) tends to zero for R → 0, Eq. (3.3) satisfies the condition in Eq. (3.2).

The matching formula that we actually use is

σ^{NLO+LL_R} = (σ_0 + σ_1(R_0)) × [ (σ^{LL_R}(R)/σ_0) (1 + (σ_1(R) − σ_1(R_0) − σ^{LL_R}_1(R))/σ_0) ] ,   (3.4)

where R_0 is taken to be the same arbitrary radius of order 1 that appears in σ^{LL_R}(R) as defined in Eq. (2.1). Compared to Eq. (3.3), we have explicitly separated out a factor (σ_0 + σ_1(R_0)). As with Eq. (3.3), at small R the entire R dependence comes from the σ^{LL_R}(R) factor, thus ensuring that Eq. (3.2) is satisfied. Eq. (3.4) has the advantage over Eq. (3.3) that the matching will be simpler to extend to NNLO+LL_R, which is why we make it our default choice. Eq. (3.4) has a simple physical interpretation: the left-hand factor is the cross section for producing a jet of radius R_0 and is effectively a stand-in for the normalisation of the (ill-defined) partonic scattering cross section, i.e. we equate partons with jets of radius R_0 ∼ 1. The right-hand factor (in square brackets) then accounts for the effect of fragmentation on the cross section, including both the LL_R contribution and an exact NLO remainder for the difference between the cross sections at radii R_0 and R.
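Since Eqs. (3.3) and (3.4) above are our reconstructions of the matching formulas from the surrounding discussion, it may help to see the bookkeeping spelled out. The sketch below simply transcribes them; it is a schematic illustration only, in which sigma0, sigma1, sigma1_llr and sigma_llr stand for per-bin numbers supplied by the fixed-order and resummed calculations, and the function names are ours.

```python
def nlo_llr_simple(R, sigma0, sigma1, sigma1_llr, sigma_llr):
    """Eq. (3.3): sigma_llr(R) carries all the small-R dependence."""
    return sigma_llr(R) * (sigma0 + sigma1(R) - sigma1_llr(R)) / sigma0

def nlo_llr(R, R0, sigma0, sigma1, sigma1_llr, sigma_llr):
    """Eq. (3.4): R0-jet normalisation times a matched fragmentation factor."""
    norm = sigma0 + sigma1(R0)                  # jet production at radius R0
    frag = (sigma_llr(R) / sigma0) * (
        1.0 + (sigma1(R) - sigma1(R0) - sigma1_llr(R)) / sigma0
    )
    return norm * frag
```

As R → 0 the factor sigma_llr(R) drives both expressions to zero, which is the condition of Eq. (3.2).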
Even without a small-R resummation, one can argue that the physical separation that is embodied in Eq. (3.4) is one that should be applied to normal NLO calculations. This gives us the following alternative expression for the NLO cross section,

σ^{NLO-mult.}(R) = (σ_0 + σ_1(R_0)) × (1 + (σ_1(R) − σ_1(R_0))/σ_0) ,   (3.5)

i.e. the cross section for producing a small-radius jet should be thought of as the cross section for the initial partonic scattering, followed by the fragmentation of the parton to a jet. As in Eq. (3.4), we introduce a radius R_0 ∼ 1 to give meaning to the concept of a "parton" beyond leading order. It is straightforward to see that Eq. (3.5) differs from standard NLO only in terms of corrections at order α_s^2 relative to LO.

3.2 Unphysical cancellations in scale dependence

Let us now return to the resummed matched prediction, Eq. (3.4). The left- and right-hand factors in that formula are shown separately in Fig. 3. The left-hand factor, corresponding to an overall normalisation for hard partonic scattering, is shown in the left-hand plot (divided by the LO to ease visualisation), while the small-R fragmentation (i.e. right-hand) factor, which includes the resummation and matching contributions, is shown on the right. One sees that the two terms bring K-factors going in opposite directions. The overall normalisation has a K-factor that is larger than one and grows with p_t. Meanwhile the fragmentation effects generate a K-factor that is substantially below one for smaller R values, with a relatively weak p_t dependence. The p_t dependence of the two factors involves an interplay between two effects: on one hand, the fraction of gluons decreases at large p_t, as does α_s; on the other hand the PDFs fall off more steeply at higher p_t, which enhances (positive) threshold logarithms in the normalisation factor and also increases the effect of small-R logarithms in the fragmentation factor (i.e. reduces the fragmentation factor). We believe that the gentle increase of the fragmentation factor is due to the decrease in gluon fraction, partially counteracted by the increasing steepness of the PDFs. A similar cancellation is probably responsible for the flatness of the normalisation factor at low and moderate p_t's, with threshold logarithms induced by the PDFs' steepness driving the increase at the highest p_t's.

We note also that both factors in Eq. (3.4) depend significantly on the choice of R_0, with two values being shown in Fig. 3, R_0 = 1 (solid) and R_0 = 1.5 (dashed). However in the full results, Eqs. (3.4) and (3.5), the R_0 dependence cancels up to NLO, leaving a residual R_0 dependence that corresponds only to uncontrolled higher-order terms.

The partial cancellation between higher-order effects that takes place between the small-R effects and the residual matching correction is somewhat reminiscent of the situation for jet vetoes in Higgs-boson production. There it has been argued that such a cancellation can be dangerous when it comes to estimating scale uncertainties. As a result, different schemes have been proposed to obtain a more reliable and conservative estimate, notably the Stewart-Tackmann [40] and jet-veto-efficiency [41] methods. Here we will take an approach that is similar in spirit to those suggestions (though somewhat closer to the jet-veto-efficiency method) and argue that for a reliable estimate of uncertainties, scale dependence should be evaluated independently for the left- and right-hand factors in Eqs. (3.4) and (3.5) (and also in Eq. (3.3)), and the resulting relative uncertainties on those two factors should be added in quadrature. We will always verify that the R_0 dependence (for just the central scale choice) is within our scale uncertainty band.

3.3 NLO+LL_R matched results

Fig. 4 shows the inclusive jet cross section integrated from 100 GeV to 1992 GeV (the full range covered by the ATLAS data [26]), as a function of R, normalised to the leading-order result. The left-hand plot shows the standard NLO result (light blue band), the "NLO-mult." result of Eq. (3.5) (green band) and the NLO+LL_R matched result of Eq. (3.4) (orange band).
To illustrate the issue of cancellation of scale dependence discussed in section 3.2, the scale uncertainty here has been obtained within a prescription in which the scale variations of the two factors in Eqs. (3.4) and (3.5) are kept correlated. We adopt the standard convention of independent µ_R = {1/2, 1, 2}µ_0 and µ_F = {1/2, 1, 2}µ_0 variations around a central scale µ_0, with the additional condition 1/2 ≤ µ_R/µ_F ≤ 2. Our choice for µ_0 is discussed below.

One sees that in each of the 3 bands, there is an R value for which the scale uncertainty comes close to vanishing: roughly R = 0.5 for NLO, R = 0.3 for "NLO-mult." and R = 0.1−0.2 for NLO+LL_R. We believe that this near-vanishing is unphysical, an artefact of a cancellation in the scale dependence between small-R and overall normalisation factors, as discussed in the previous paragraph. One clear sign that the scale dependence is unreasonably small is that the NLO-mult. and NLO+LL_R bands differ by substantially more than their widths.

The right-hand plot of Fig. 4 instead shows uncertainty bands when one separately determines the scale-variation uncertainty in the left-hand (normalisation) and the right-hand (small-R matching) factors and then adds those two uncertainties in quadrature ("uncorrelated scale choice"; note that the NLO band is unchanged). Now the uncertainties remain fairly uniform over the whole range of R and, if anything, increase towards small R, as one might expect. This uncorrelated scale variation is the prescription that we will adopt for the rest of this article.

Intriguingly, the NLO+LL_R result is rather close to the plain NLO prediction. Given the large difference between the NLO and NLO-mult. results, this is, we believe, largely a coincidence: if one examines yet smaller R, one finds that the NLO and NLO+LL_R results separate from each other, with the NLO and NLO-mult. results going negative for sufficiently small R, while the NLO+LL_R result maintains a physical behaviour.

Fig. 4 (right) also shows the impact of increasing R_0 to 1.5. One sees a 5−10% reduction in the cross section, however this reduction is within the uncertainty band that comes from the uncorrelated scale variation.

A comment is due concerning our choice of central scale, µ_0. At NLO, for each event, we take µ_0 to be the p_t of the hardest jet in the event, p_t,max. In NLO-like configurations, with at most 3 final-state partons, this hardest-jet p_t is independent of the jet radius and so we have a unique scale choice that applies to all jet radii. An alternative, widely used prescription is to use a separate scale for each jet, equal to that jet's p_t. We disfavour this alternative because it leads to a spurious additional R dependence, induced by inconsistent scale choices in real and virtual terms. Further details are given in Appendix B.

4 Matching NNLO and LL_R

4.1 Matching prescription

Given that full NNLO results for the inclusive jet cross section are likely to be available soon [18], here we propose matching schemes for combining our small-R resummed results with a full NNLO result. The direct analogue of Eq. (3.4), Eq. (4.1), involves the functions ∆_1 and ∆_{1+2} defined in Eqs. (2.3) and (2.5), and we recall that σ^{LL_R} and its expansion are functions both of R and R_0. As with our NLO+LL_R formula, Eq. (3.4), Eq. (4.1) is written in terms of two factors: an overall normalisation for producing R_0-jets, together with a matched fixed-order and resummed result for the correction coming from fragmentation of the R_0 jet into small-R jets.
One comment here is that in Eq. (3.4) the matching part (big round brackets inside the square brackets) gave a finite result for R → 0. The situation is different at NNLO because the LL_R resummation does not capture the α_s^2 ln(1/R^2) (NLL_R) term that is present at fixed order, and so the matching term has a residual α_s^2 ln(1/R^2) dependence. This means that for sufficiently small R, Eq. (4.1) can become unphysical. We have not seen any evidence of this occurring in practice, but one should keep in mind that for future work one might aim to resolve this in-principle problem either by incorporating NLL_R resummation or by choosing a different form of matching, for example one where the O(α_s^2) parts of the matching correction are exponentiated, ensuring a positive-definite result. Note that with NLL_R resummation one could also use a formula analogous to Eq. (3.3), where each of the terms is evaluated at radius R.

As well as a matched result, it can be instructive to study a modification of the plain NNLO result, "NNLO-mult.", in analogy with Eq. (3.5). This remains a fixed-order result, but it factorises the production of large-R_0 jets from the fragmentation to small-R jets (Eqs. (4.2)-(4.3)). It differs from σ^NNLO only by terms beyond NNLO.

As in section 3.1, in Eqs. (4.1)-(4.3) we advocate varying scales separately in the normalisation and fragmentation factors, and also studying the R_0 dependence of the final result.

4.2 A stand-in for NNLO: NNLO_R

We have seen in section 2.2 that NNLO terms of the form α_s^2 ln(1/R^2) that are not accounted for in our LL_R calculation can be large. Insofar as they are known, they should however be included in phenomenological studies. This specific class of terms can be taken into account in the context of a stand-in for the full NNLO calculation which contains the exact NNLO R dependence and that we refer to as NNLO_R. It is constructed as follows:

σ^{NNLO_R}(R, R_m) = σ_0 + σ_1(R) + [σ_2(R) − σ_2(R_m)] ,   (4.4)

which depends on an arbitrary angular scale R_m. Though neither σ_2(R) nor σ_2(R_m) can be fully determined currently, their difference can be obtained from the same NLO 3-jet calculation that was used to examine ∆_{1+2}(p_t, R, R_ref) in Fig. 2 (right). Since the full NNLO result has the property

σ^{NNLO}(R) = σ^{NNLO_R}(R, R_m) + σ_2(R_m) ,   (4.5)

the two coincide for the value of R_m at which σ_2(R_m) vanishes. In practice we will take R_m = 1, independently of p_t. One point to be aware of is that σ^{NNLO_R}(R, R_m) and σ^{NNLO}(R) have parametrically different scale dependence. On one hand, the σ_2(R) term in σ^{NNLO}(R) fully cancels the (relative) O(α_s^2) scale variation that is left over from σ_0 and σ_1, leaving just O(α_s^3) dependence. On the other hand, in σ^{NNLO_R}(R, R_m) this cancellation is incomplete and a relative O(α_s^2) scale dependence is left over. In particular, for R = R_m the scale dependence is identical to that at NLO. Accordingly, when estimating higher-order uncertainties in studies that use NNLO_R results, we do not explicitly need to vary R_m, since the O(α_s^2) uncertainty that it brings should already be accounted for in the scale variation.
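Both properties of NNLO_R just described follow in one line from the reconstructed Eq. (4.4); writing them out (under that reconstruction):

```latex
\sigma^{\mathrm{NNLO}}(R) - \sigma^{\mathrm{NNLO_R}}(R, R_m) = \sigma_2(R_m)
\quad (\text{an } R\text{-independent constant}),
\qquad
\sigma^{\mathrm{NNLO_R}}(R_m, R_m) = \sigma_0 + \sigma_1(R_m) = \sigma^{\mathrm{NLO}}(R_m).
```

The first relation is why NNLO_R reproduces the full R dependence of NNLO; the second is why, at R = R_m, both the central value and the scale dependence revert to NLO.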
Our central scale choice for any given event will be µ_0 = p^{R=1}_{t,max}, the transverse momentum of the hardest jet in the event as clustered with R = 1. This is analogous to the choice of p_t,max used at NLO, except that at NNLO one needs to explicitly specify R since p_t,max can depend on the jet clustering. The logic for taking p_t,max at a fixed jet radius of 1, independently of the R used in the clustering for the final jet spectrum, is that one obtains a unique scale for the event as a whole and avoids mixing scale-variation effects with R dependence. Another potential choice that we did not investigate is to take the averaged p_t of the two hardest jets. As long as the jets are obtained with a clustering radius ∼ 1, such a choice is expected to be good at minimising the impact both of initial-state and final-state radiation, whereas our p_t,max choice has some sensitivity to initial-state radiation.

4.3 Results at NNLO_R and NNLO_R+LL_R

Let us start by examining the NNLO_R result, shown versus R as the purple band in Fig. 5 (left), together with the NNLO_R-mult. results using Eq. (4.3) and the NLO band. One sees that the R dependence of the NNLO_R result is steeper than in the NLO result, especially for R ≲ 0.2. This pattern is qualitatively in line with one's expectations from Fig. 2 (right) and will hold also for the full NNLO calculation, which differs from NNLO_R only by an R-independent (but p_t- and scale-dependent) additive constant. The point of intersection between the NLO and NNLO_R results, at R = 1, is instead purely a consequence of our choice of R_m = 1 in Eq. (4.4). Thus at R = 1, both the central value and scale dependence are by construction identical to those from the NLO calculation.

The left-hand plot of Fig. 5 also shows the NNLO_R-mult. result. Relative to what we saw when comparing NLO and NLO-mult., the most striking difference here is the much better agreement between NNLO_R and NNLO_R-mult., with the two generally coinciding to within a few percent. For R ≳ 0.4, this good agreement between different approaches carries through also to the comparison between NNLO_R and NNLO_R+LL_R. However, for yet smaller values of R, the NNLO_R+LL_R result starts to be substantially above the NNLO_R and NNLO_R-mult. ones. This is because the NNLO_R and NNLO_R-mult. results have unresummed logarithms that, for very small R, cause the cross section to go negative, whereas the resummation ensures that the cross section remains positive (modulo the potential issue with unresummed NLL_R terms that remain after matching).

Comparing the NNLO_R+LL_R result to the NLO+LL_R one of Fig. 4 (right), one finds that the central value of the NNLO_R+LL_R prediction starts to lie outside the NLO+LL_R uncertainty band for R ≲ 0.5. This highlights the importance of the NNLO corrections, and in particular of terms with subleading ln(1/R^2) enhancements. Finally, the dependence on the choice of R_0 is slightly reduced at NNLO_R+LL_R compared to NLO+LL_R and it remains within the scale-variation uncertainty band.

To help complete the picture, we also show results as a function of R in a high-p_t bin, 1530 < p_t < 1992 GeV, in Fig. 6. Most of the qualitative observations that we discussed above remain true also for high p_t. The main difference relative to the p_t > 100 GeV results is that the scale uncertainty bands generally grow larger. This is perhaps due to threshold effects and might call for the inclusion of threshold resummation, see e.g. Ref. [42] and references therein.
Figs. 7 and 8 show the jet spectrum as a function of p_t, normalised to the LO result, for R = 0.2 and two rapidity bins. Again, the conclusions are similar.

All of the predictions shown here have been obtained with the choice R_m = 1 in Eq. (4.4), equivalent to the assumption that σ_2(R_m = 1) = 0 in Eq. (4.5). For a discussion of how the predictions change if σ_2(R_m = 1) is non-zero, the reader is referred to section 4.4.

To conclude this section, our main observation is that LL_R and NNLO terms both have a significant impact on the R dependence of the inclusive jet spectrum, with the inclusion of both appearing to be necessary in order to obtain reliable predictions for R ≲ 0.4. In particular, if NNLO and NLO coincide for R = 1, then for R = 0.4 the NNLO results will be about 20% below the NLO ones. Going down to R = 0.2, one sees that even with NNLO corrections the resummation of small-R logarithms is important, having a further 10% effect.

4.4 Impact of finite two-loop corrections

In our NNLO_R-based predictions, we have all elements of the full NNLO correction except for those associated with 2-loop and squared 1-loop diagrams (and corresponding counterterms). Here, we examine how our results depend on the size of those missing contributions. We introduce a factor K that corresponds to the NNLO/NLO ratio for a jet radius of R_m:

K = σ^{NNLO}(R_m) / σ^{NLO}(R_m) = (σ_0 + σ_1(R_m) + σ_2(R_m)) / (σ_0 + σ_1(R_m)) .   (4.6)

(Figure 9 caption:) The same as Fig. 5, but applying a K-factor of 1.10 to the NNLO_R prediction, as an estimate of the potential impact of the full NNLO calculation.

For other values of the jet radius, we then have

σ^{NNLO_{R,K}}(R, R_m) = σ^{NNLO_R}(R, R_m) + (K − 1)(σ_0 + σ_1(R_m)) .   (4.7)

As before, we will take R_m = 1.0. One could attempt to estimate K from the partial NNLO calculation of Ref. [18], however given that this calculation is not yet complete, we prefer instead to leave K as a free parameter and simply examine the impact of varying it.

In Fig. 9, we show the impact of taking K = 1.10, to be compared to Fig. 5, which corresponds to K = 1. As K is increased, one sees that NNLO_{R,K} and NNLO_{R,K}+LL_R start to agree over a wider range of R. This behaviour can be understood by observing that there are two effects that cause NNLO_{R,K} and NNLO_{R,K}+LL_R to differ: on one hand the small-R resummation prevents the cross section from going negative at very small radii, raising the prediction in that region relative to NNLO_R and reducing the overall R dependence. On the other hand, the normalisation (first) factor in Eq. (4.1), which is larger than 1, multiplies the full NNLO R dependence that is present in the fragmentation (second) factor, thus leading to a steeper R dependence than in pure NNLO_{R,K}. With K = 1, the first effect appears to dominate. However as K is increased, the second effect is enhanced and then the two effects cancel over a relatively broad range of R values.

To put it another way, in the NNLO_{R,K} result the K factor acts additively, shifting the cross section by the same amount independently of R. In the NNLO_{R,K}+LL_R result, the K factor acts multiplicatively, multiplying the cross section by a constant factor independently of R. By construction, the two always agree for R = R_0 = 1. With K = 1, NNLO_{R,K} is below NNLO_{R,K}+LL_R at small R, but the additive shift for K > 1 brings about a larger relative increase of NNLO_{R,K} than the multiplicative factor does for NNLO_{R,K}+LL_R, because σ/σ(R_0) is smaller than one.
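A minimal sketch of the additive-versus-multiplicative distinction just drawn (the function names are ours; `nnlo_r` and `nnlo_r_llr` stand for the K = 1 predictions as functions of R, and `sigma_nlo_Rm` for σ_0 + σ_1(R_m)):

```python
def nnlo_rk(R, K, nnlo_r, sigma_nlo_Rm):
    """K acts additively: an R-independent shift of the NNLO_R curve."""
    return nnlo_r(R) + (K - 1.0) * sigma_nlo_Rm

def nnlo_rk_llr(R, K, nnlo_r_llr):
    """K acts multiplicatively: an R-independent rescaling of NNLO_R+LL_R."""
    return K * nnlo_r_llr(R)
```

Both constructions agree at R = R_m = 1 by design; away from that point the additive shift raises the (smaller) small-R cross section by relatively more, which is the mechanism behind the improved agreement seen at K = 1.10.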
Another point to note is that while in Fig. 5 the NNLO_R-mult. and NNLO_R results agreed over the full range of R, that is no longer the case with K = 1.1: this is because NNLO_R-mult. acquires a multiplicative correction, as compared to the additive correction for NNLO_{R,K}. Therefore one strong conclusion from our study is that, independently of the size of the NNLO K-factor, plain fixed-order calculations at NNLO are likely to be insufficient for R ≲ 0.4.

4.5 Comparison to POWHEG

One widely used tool to study the inclusive jet spectrum is POWHEG's dijet implementation [43]. Insofar as parton showers should provide LL_R accuracy and POWHEG guarantees NLO accuracy, POWHEG together with a shower should provide NLO+LL_R accuracy. It is therefore interesting to compare our results to those from the POWHEG BOX V2's dijet process (v3132), which are obtained here using a generation cut bornktmin of 50 GeV and a suppression factor bornsuppfact of 500 GeV. We have used it with Pythia 8 (v8.186 with tune 4C [44]) for the parton shower, with Pythia 6 (v6.428 with the Perugia 2011 tune [45]) and with Herwig 6 (v6.521 with the AUET2-CTEQ6L1 tune [46]). We examine the results at parton level, with multiple-parton interaction (MPI) effects turned off. Since the Pythia 6 and Pythia 8 results are very similar, we will show only the latter. In the case of Pythia 8, we include an uncertainty band from the variation of scales in the generation of the POWHEG events.

In Fig. 10, we show the p_t-integrated cross section as a function of R. The dark blue band (or line) shows the predictions obtained from POWHEG. In the top left-hand plot one sees a comparison with POWHEG+Herwig 6, which agrees with the NNLO_R+LL_R result to within the latter's uncertainty band, albeit with a slightly steeper R dependence at large R values. In the top right-hand plot, one sees a comparison with POWHEG+Pythia 8. There is reasonable agreement for small radii, however the POWHEG+Pythia 8 prediction has a much steeper R dependence and is substantially above the NNLO_R+LL_R result for R = 1. Differences between Herwig and Pythia results with POWHEG have been observed before [47], though those are at hadron level, including underlying-event effects, which can introduce further sources of difference between generators. One difference between the NNLO_R+LL_R results and those from POWHEG with a shower generator is an additional resummation of running scales and Sudakov effects for initial-state radiation (ISR). To illustrate the impact of ISR, the dark-blue dashed curve shows how the POWHEG+Pythia 8 prediction is modified if one switches off ISR in the shower. Though not necessarily a legitimate thing to do (and the part of the ISR included in the POWHEG-generated emission has not been switched off), it is intriguing that this shows remarkably good agreement with the NNLO_R+LL_R results over the full R range. This might motivate a more detailed future study of the interplay between ISR and the jet spectrum. Note that, as shown in [43], nearly all the R dependence of the POWHEG+parton-shower result comes from the parton shower component. It is not so straightforward to examine Herwig with ISR turned off, so we have not included this in our study. Given the differences between POWHEG+Pythia 8 and our NNLO_R+LL_R results, it is also of interest to examine what happens for K ≠ 1. We can tune K so as to produce reasonable agreement between NNLO_{R,K}+LL_R and POWHEG+Pythia 8 for R = 1, and this yields K ≃ 1.15, which we have used in the bottom-right plot.
Note that the patterns of agreement observed between different predictions depend also on p_t and rapidity. For a more complete picture we refer the reader to our online tool [27].

Hadronisation

Before considering comparisons to data, it is important to examine also the impact of non-perturbative effects. There are two main effects: hadronisation, namely the effect of the transition from parton level to hadron level; and the underlying event (UE), generally associated with multiple interactions between partons in the colliding protons. Hadronisation is enhanced for small radii, so we discuss it in some detail.

One way of understanding the effect of hadronisation and the underlying event is to observe that they bring about a shift in p_t. This can to some extent be calculated analytically and applied to the spectrum [19]. An alternative, more widespread approach is to use a Monte Carlo parton shower program to evaluate the ratio of hadron-level to parton-level jet spectra and multiply the perturbative prediction by that ratio. One of the advantages of the analytical hadronisation approaches is that they can be matched with the perturbative calculation, e.g. as originally proposed in Ref. [48]. In contrast, a drawback of the Monte Carlo hadronisation estimates is that the definition of parton level in a MC simulation is quite different from the definition of parton level that enters a perturbative calculation: in particular, showers always include a transverse momentum cutoff at parton level, while perturbative calculations integrate transverse momenta down to zero.

To help guide our choice of method, we shall first compare the p_t shift as determined in Ref. [19] with what is found in modern Monte Carlo tunes. We first recall that the average shift should scale as 1/R (see also Refs. [49,50]) for hadronisation and as R² for the underlying event (see also Ref. [51]). For small-R jets, hadronisation should therefore become a large effect, while the underlying event should vanish. By relating the hadronisation in jets to event-shape measurements in DIS and e+e− collisions in a dispersive-type model [48,52], Ref. [19] argued that the average p_t shift should be roughly

⟨Δp_t⟩ ≃ −(0.5 GeV) · C/(C_F R),    (5.1)

where C is the colour factor of the parton initiating the jet, C_F = 4/3 for a quark and C_A = 3 for a gluon. Those expectations were borne out by Monte Carlo simulations at the time, with a remarkably small O(1) term. Eq. (5.1) translates to a −6 GeV shift for R = 0.2 gluon-initiated jets. On a steeply falling spectrum, such a shift can modify the spectrum significantly.
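To see why, consider a toy power-law spectrum dσ/dp_t ∝ p_t^(−n): a constant downward shift δ means the hadron-level spectrum at a given p_t is the parton-level one evaluated at p_t + δ. A short illustration, where the spectral index n = 5 is an assumption chosen purely for illustration:

```python
# Effect of a constant downward hadronisation shift on a steeply falling toy
# spectrum dsigma/dp_t ~ p_t^(-n).  n = 5 is an assumed spectral index; the
# 6 GeV shift mimics the R = 0.2 gluon-jet estimate quoted in the text.
n, delta = 5.0, 6.0

def spectrum(pt):
    return pt ** (-n)          # toy parton-level spectrum, arbitrary units

for pt in (50.0, 100.0, 500.0):
    ratio = spectrum(pt + delta) / spectrum(pt)   # hadron/parton level at fixed p_t
    print(f"p_t = {pt:5.0f} GeV: yield suppressed by {100 * (1 - ratio):4.1f}%")
```

With these assumptions the suppression is roughly 43% at p_t = 50 GeV, 25% at 100 GeV and 6% at 500 GeV, showing how a few-GeV shift translates into a sizeable change of the low-p_t spectrum.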
Fig. 11 shows the shift in p_t in going from parton-level jets to hadron-level jets, as a function of the jet p_t. Four modern Monte Carlo generator tunes are shown [20-23, 45, 46, 53, 55], two in each plot. For each generator tune (corresponding to a given colour), there are four curves, corresponding to two values of R, 0.2 and 0.4, and to both quark and gluon jets. The shifts have been rescaled by a factor R C_F/C. This means that if the radius and colour-factor dependence in Eq. (5.1) were exact, then all lines of a given colour would be superposed. This is not exactly the case; however, lines of any given colour do tend to be quite close, giving reasonable confirmation of the expected trend of C/R scaling.

A further expectation of Eq. (5.1) is that the lines should cluster around 0.5 GeV and be p_t independent. This, however, is not the case. Firstly, there is almost a factor of two difference between different generators and tunes, with Pythia 6 Perugia 2011 and Pythia 8 Monash 2013 both having somewhat smaller than expected hadronisation corrections. Secondly, there is a strong dependence of the shift on the initial jet p_t, with a variation of roughly a factor of two between p_t = 100 GeV and p_t = 1 TeV. Such a p_t dependence is not predicted within simple approaches to hadronisation such as Refs. [19,48,49,52]. It was not observed in Ref. [19] because the Monte Carlo study there restricted its attention to a limited range of jet p_t, 55−70 GeV. The event-shape studies that provided support for the analytical hadronisation were also limited in the range of scales they probed, specifically centre-of-mass energies in the range 40−200 GeV (and comparable photon virtualities in DIS). Note, however, that scale dependence of the hadronisation has been observed at least once before, in a Monte Carlo study shown in Fig. 8 of Ref. [56]: effects found there to be associated with hadron masses generated precisely the trend seen here in Fig. 11. The p_t dependence of those effects can be understood analytically; however, we leave their detailed study in a hadron-collider context to future work. Experimental insight into the p_t dependence of hadronisation might be possible by examining jet-shape measurements [58,59] over a range of p_t, however such a study is also beyond the scope of this work.

In addition to the issues of p_t dependence, one further concern regarding the analytical approach is that it has limited predictive power for the fluctuations of the hadronisation corrections from jet to jet. Given that the jet spectrum falls steeply, these fluctuations can have a significant impact on the final normalisation of the jet spectrum. One might address this with an extension of our analytical approach to include shape functions, e.g. as discussed in Ref. [60].

In light of the above discussion, for evaluating hadronisation effects here we will resort to the standard approach of rescaling spectra by the ratio of hadron to parton levels derived from Monte Carlo simulations.
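A minimal sketch of this standard procedure, assuming per-tune binned Monte Carlo spectra are available as arrays (the tune names and all numbers below are placeholders, not output of the generators discussed here):

```python
import numpy as np

# Hadron- and parton-level spectra per tune; numbers are invented placeholders
# standing in for binned Monte Carlo results (one entry per p_t bin).
parton = {"tuneA": np.array([120.0, 30.0, 6.0]),
          "tuneB": np.array([118.0, 29.0, 6.1])}
hadron = {"tuneA": np.array([ 95.0, 26.5, 5.8]),
          "tuneB": np.array([100.0, 27.0, 5.9])}

ratios = np.array([hadron[t] / parton[t] for t in parton])  # one row per tune
central = ratios.mean(axis=0)          # average correction factor per bin
lo, hi = ratios.min(axis=0), ratios.max(axis=0)   # envelope across tunes

perturbative = np.array([110.0, 28.0, 6.0])       # toy perturbative spectrum
corrected = perturbative * central                # non-perturbatively corrected
np_uncert = perturbative * (hi - lo) / 2.0        # symmetrised half-envelope
print(corrected, np_uncert)
```

In this study the corresponding average is taken over six tunes, with the envelope providing the non-perturbative uncertainty, as described below.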
Fig. 12 shows, as a function of R, the ratio of hadron level without UE to parton level (left) and the ratio of hadron level with UE to hadron level without UE (right), for a range of Monte Carlo tunes. The results are shown for p_t > 100 GeV in the upper row and p_t > 1 TeV in the lower row. A wide range of R values is shown, extending well below experimentally accessible values. Beyond the tunes shown in Fig. 11, here we also include the UE-EE-4 tune [61] of Herwig++ 2.71 [24,25] and tune 4C [44] of Pythia 8.186 [21]. To investigate the issue of a possible mismatch between our analytic parton-level calculations and parton level as defined in Monte Carlo simulations, we have considered a modification of Monte Carlo parton level in which the transverse momentum cutoff is taken to zero (an effective cutoff still remains, because of the use of finite parton masses and Λ_QCD in the shower; however, this method can arguably still give a rough estimate of the size of the effect one is dealing with). One finds that taking the cutoff to zero changes the parton-level spectrum by a few percent. As this is somewhat smaller than the differences that we will shortly observe between tunes, it seems that for the time being it may not be too unreasonable to neglect it.

While there is a substantial spread in results between the different tunes in Fig. 12, the observed behaviours are mostly as expected, with hadronisation reducing the jet spectrum, especially at the smallest R values, while the UE increases it, especially at large R values. The magnitude of these effects is strongly p_t dependent, with (roughly) a factor of ten reduction at not-too-small R values when going from p_t > 100 GeV to p_t > 1 TeV. Such a scaling is consistent with a rough 1/(R p_t) behaviour for hadronisation and R²/p_t behaviour for the UE (ignoring the slow changes in quark/gluon fraction and in the steepness of the spectrum as p_t increases).

One surprising feature concerns the behaviour of the UE corrections at very small radii: firstly, in a number of the tunes the corrections tend to be smaller than 1, suggesting that the multi-parton interactions (MPI) that are responsible for the UE remove energy from the core of the jet. For R values in the range 0.4−1, the effect of MPI is instead to add energy to the jet, as expected. Secondly, this loss of energy from the jet is not particularly suppressed at high p_t. The most striking example is the Z2 tune, where there …

Figure 13. Non-perturbative corrections to the inclusive jet spectrum for the p_t range, rapidity and centre-of-mass energy corresponding to the ALICE data [13], for R = 0.2 (left) and R = 0.4 (right). The results are shown separately for hadronisation, UE and the product of the two, and in each case include the average and envelope of the corrections from the six tunes discussed in section 5.

As explained in section 3, this is intended to avoid spuriously small scale uncertainties associated with cancellations between different physical contributions.

Non-perturbative corrections are taken as the average of the parton-to-hadron Monte Carlo correction factors (including hadronisation and UE) as obtained with the six different tunes discussed in section 5. The envelope of that set of six corrections provides our estimate of the uncertainty on the non-perturbative corrections, which is added in quadrature to the perturbative uncertainty.

In the case of the ATLAS data we will explore transverse momenta well above the electroweak (EW) scale, where EW corrections become substantial. The ATLAS collaboration accounted for these using the calculation of tree-level (O(α_s α_EW)) and loop (O(α_s² α_EW)) EW effects from Ref. [70]. Here, since we concentrate on QCD effects, when showing the data we divide it by the EW corrections quoted by ATLAS.
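Before turning to data, the rough non-perturbative scalings noted above can be folded into a two-parameter toy model for the net p_t shift; the coefficients A and B below are assumptions chosen only to illustrate the sign change with R and the approximate factor-of-ten reduction between p_t = 100 GeV and 1 TeV, not fitted values:

```python
# Two-parameter toy model for the net non-perturbative shift of the jet p_t:
# hadronisation ~ -A/R, underlying event ~ +B*R^2.  A and B are assumed values.
A, B = 0.5, 1.0   # GeV

def relative_np_effect(R, pt, n=5.0):
    shift = -A / R + B * R * R     # net p_t shift in GeV
    return n * shift / pt          # leading relative effect on a p_t^(-n) spectrum

for pt in (100.0, 1000.0):
    for R in (0.1, 0.4, 1.0):
        print(f"p_t = {pt:6.0f} GeV, R = {R:3.1f}: "
              f"{100 * relative_np_effect(R, pt):+6.2f}%")
```

In this toy picture hadronisation dominates (and suppresses the spectrum) at small R, the UE dominates (and enhances it) at large R, and both effects scale like 1/p_t.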
Comparison to ALICE data

As a first application of small-R resummation in comparisons to data, we look at the inclusive jet cross section in proton-proton collisions at √s = 2.76 TeV reported by the ALICE collaboration [13]. The measurements are in the |y| < 0.5 rapidity range, with jets obtained using the anti-k_t algorithm with a boost-invariant p_t recombination scheme, for radii R = 0.2 and 0.4.

The non-perturbative corrections for hadronisation and underlying event are shown in Fig. 13. For R = 0.2, non-perturbative corrections are largely dominated by hadronisation, with the underlying event being a small effect, as expected for sufficiently small R. The net non-perturbative correction is about −50% at the lowest p_t of 20 GeV, while it decreases to about −10% at 100 GeV. For R = 0.4 there is a partial cancellation between hadronisation and UE, with a net impact of about −10% at low p_t and a 5−10% uncertainty.

The comparison of our full results to the ALICE data is given in Fig. 14, as a ratio to the NNLO_R+LL_R theory prediction (including non-perturbative corrections). The top row shows the jet spectrum for R = 0.2, while the lower row corresponds to R = 0.4. The left-hand plots show NLO-based theory results. They all appear to be consistent with the data within their large uncertainties. The right-hand plots show NNLO_R-based theory (with plain NLO retained to facilitate cross-comparisons).

In general the NNLO_R+LL_R results … NNLO result. Accordingly, we will drop the subscript R label in these cases, i.e. writing R_{NNLO+LL_R} in Eq. (6.2) rather than R_{NNLO_R+LL_R}.

To estimate the perturbative theoretical uncertainties on the ratio, we take the envelope of the ratios as determined for our seven renormalisation and factorisation scale choices. In the case of the (N)NLO-mult. and (N)NLO+LL_R results, since the normalisation factor cancels, we only consider the component of the perturbative uncertainties associated with the fragmentation factor. We have verified that the effect of R_0 variation is contained within the scale-variation envelope. For the non-perturbative uncertainties, we take the envelope of the ratios of the correction factors from different Monte Carlo tunes. The perturbative and non-perturbative uncertainties on the ratio are added in quadrature.

The comparison of the theory predictions with the measurements of the ALICE collaboration is presented in Fig. 15, at NLO accuracy on the left and at NNLO_(R)-based accuracy on the right. At first sight, it appears that the data have a considerably flatter p_t dependence than any of the theory predictions. The latter all grow noticeably with increasing p_t, a consequence mainly of the p_t dependence of the non-perturbative correction factor, cf. Fig. 13. Nevertheless, on closer inspection one sees that if one ignores the left-most data point then the remaining data points are compatible with the predicted p_t dependence. The overall agreement is then best with the NNLO+LL_R-based prediction. However, the sizes of the experimental uncertainties are such that it is difficult to draw firm conclusions.

We have also examined the impact of using Eq. (6.1) instead of (6.2) and find that the difference is small, no more than 5%.
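The uncertainty treatment described above can be sketched as follows; all numbers are invented placeholders, with the first entry of the scale array playing the role of the central scale choice:

```python
import numpy as np

# Toy uncertainty estimate for the cross-section ratio sigma(R=0.2)/sigma(R=0.4).
ratio_scales = np.array([0.62, 0.60, 0.64, 0.61, 0.63, 0.59, 0.65])
central = ratio_scales[0]
pert_up = ratio_scales.max() - central     # envelope of the seven scale choices
pert_dn = central - ratio_scales.min()

# Ratios of non-perturbative correction factors from six tunes (placeholders).
ratio_np = np.array([0.97, 1.02, 0.99, 1.01, 0.98, 1.03])
np_up = central * (ratio_np.max() - 1.0)   # envelope of tune variations
np_dn = central * (1.0 - ratio_np.min())

tot_up = np.hypot(pert_up, np_up)          # added in quadrature, as in the text
tot_dn = np.hypot(pert_dn, np_dn)
print(f"ratio = {central:.3f} +{tot_up:.3f} / -{tot_dn:.3f}")
```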
We have also examined the pure NNLO expansion of the ratio of cross sections, as used in Ref. [17], and find that this too is quite similar to Eq. (6.2), much more so than the direct ratio of NNLO results, σ_NNLO_R(R_1)/σ_NNLO_R(R_2). Thus our finding that we obtain reasonable agreement between Eq. (6.2) and the data is consistent with the observations of Ref. [17], which were based on expanded NNLO ratios.

Figure 16. Non-perturbative corrections to the inclusive jet spectrum for the p_t range, rapidity and centre-of-mass energy corresponding to the ATLAS data [26], for R = 0.4 (left) and R = 0.6 (right).

Comparison to ATLAS data

Let us now turn to a comparison with the inclusive jet cross sections reported by the ATLAS collaboration [26], obtained from 4.5 fb⁻¹ of proton-proton collisions at √s = 7 TeV. Jets are identified with the anti-k_t algorithm, this time with the usual E-scheme, taking radii R = 0.4 and 0.6. The measurements are doubly differential, given as a function of jet p_t and rapidity, and performed for p_t > 100 GeV and |y| < 3. Note that, given the difference in centre-of-mass energy, the lower p_t for the ATLAS data, 100 GeV, involves the same partonic x range as p_t = 40 GeV for the ALICE data.

The hadronisation and underlying-event corrections applied are shown in Fig. 16. As in the case of the ALICE data, for R = 0.4 these two classes of correction mostly cancel. When increasing the jet radius to R = 0.6, the hadronisation corrections shrink, while the UE corrections increase and now dominate, leaving a net effect of up to 6−7% at the lowest p_t's.

Figs. 17 and 18 show comparisons between data and theory for two rapidity bins, |y| < 0.5 and 2.0 < |y| < 2.5. At central rapidities the situation here contrasts somewhat with that for the ALICE data, and in particular the inclusion of NNLO_R corrections worsens the agreement with data: over most of the p_t range, the data points are about 15−20% higher than either NNLO_R or NNLO_R+LL_R (which are close to each other, as expected for R ≳ 0.4). Nevertheless, one encouraging feature of the NNLO_R-based predictions is that there is now a consistent picture when comparing R = 0.4 and R = 0.6, insofar as the ratio of data to NNLO_R-based theory is essentially independent of R. This is not the case when comparing data and NLO predictions (cf. Fig. 5, which shows the steeper R dependence of NNLO_R-based results as compared to NLO). We return to the question of R dependence in more detail below.

In the forward rapidity bin, over most of the p_t range, the data instead favour the NNLO_R-based predictions over NLO, while at high p_t the data fall below all of the predictions. However, the systematic uncertainties on the data are slightly larger than the difference with any of the theory predictions, making it difficult to draw any solid conclusions.
A significant positive 2-loop correction (cf. the discussion in sections 4.4, 4.5 and 6.3) would bring overall better agreement at central rapidities, but would worsen the agreement at forward rapidities. However, the finite 2-loop effects can be p_t and rapidity dependent, making it difficult to draw any conclusions at this stage. Furthermore, one should keep in mind that adjustments in PDFs could affect different kinematic regions differently.

We close this section with an explicit comparison of the ratio of the jet spectra for the two different R values. For the theoretical prediction, we proceed as discussed in the previous subsection, where we made a comparison with the ALICE data for such a ratio. We will not include EW effects, since in the ratio they appear to be at a level well below 1%. Concerning the experimental results, the central value of the ratio can be obtained directly from the ATLAS data at the two R values. However, the ATLAS collaboration has not provided information on the uncertainties in the ratio. It has provided information [73] to facilitate the determination of correlations between p_t and rapidity bins, specifically 10000 Monte Carlo replicas of their data to aid in estimating statistical correlations, as …

This is to be contrasted with the situation in Fig. 14. The difference is due to the fact that the K factor acts additively on the NNLO_{R,K} result, but multiplicatively on the NNLO_{R,K}+LL_R result, as discussed already in section 4.4. Note that for the ATLAS comparison, while a K-factor of K = 1.10 improves agreement with the data at central rapidities, it appears to worsen it somewhat at high rapidities, as can be seen in Fig. 22. One should, however, keep in mind that the true K-factor will depend both on rapidity and p_t, and also that modifications associated with changes in PDFs can affect forward and central rapidities differently.

Conclusion

In this paper we have used the limit of small-radius jets to explore a variety of features of the most basic of jet observables, the inclusive jet spectrum.

A first observation, in section 2, was that the small-R approximation starts to reproduce the fixed-order R dependence quite well already for R just below 1, giving us confidence in the usefulness of that approximation for phenomenologically relevant R values.

In seeking to combine small-R resummation with NLO predictions, in section 3, it was natural to write the cross section as a product of two terms: an overall normalisation for elementary partonic scattering, together with a factor accounting for the fragmentation of those partons into small-R jets. Such a separation can be performed also at fixed order. There appear to be spurious cancellations between higher-order contributions for the two factors, and this led us to propose that one should estimate their scale uncertainties independently and then add them in quadrature. This procedure has similarities with methods used for jet vetoes in Higgs physics [40,41]. We also saw that there are large R-dependent terms at NNLO that are beyond the control of our LL_R resummation (sections 2.2 and 4). To account for them in the absence of the full NNLO calculation, we introduced a stand-in for NNLO that we called NNLO_R. This is defined to be identical to NLO for R = 1 but includes the full NNLO R dependence, which can be obtained from a NLO 3-jet calculation. Once complete NNLO predictions become available, it will be trivial to replace the NNLO_R terms with NNLO ones.
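One way to realise that definition is sketched below, with invented toy shapes standing in for the NLO cross section and the R-dependent NNLO piece; this is an illustration of the construction, not the paper's actual implementation:

```python
import math

# Toy construction of the NNLO_R stand-in: by definition it coincides with NLO
# at R = 1, while its R-dependence is that of the NNLO correction (in practice
# obtainable from an NLO 3-jet calculation).  Both input shapes are invented.
def sigma_nlo(R):
    return 100.0 * (1.0 + 0.30 * math.log(R))   # toy NLO cross section (pb)

def delta_nnlo(R):
    return 40.0 * math.log(R)                    # toy R-dependent NNLO piece (pb)

def sigma_nnlo_R(R):
    # The unknown R-independent 2-loop constant is set to zero here; a non-zero
    # value corresponds to the K-factor explored elsewhere in the text.
    return sigma_nlo(R) + (delta_nnlo(R) - delta_nnlo(1.0))

for R in (0.2, 0.4, 1.0):
    print(f"R = {R:3.1f}:  NLO = {sigma_nlo(R):6.1f} pb   "
          f"NNLO_R = {sigma_nnlo_R(R):6.1f} pb")
```

At R = 1 the bracketed difference vanishes, so NNLO_R reproduces NLO exactly there, while away from R = 1 it carries the full R dependence of the toy NNLO term.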
For an accurate description of the inclusive jet spectrum one must also account for non-perturbative effects. In section 5 we revisited the analytical hadronisation predictions of Ref. [19]. We found that the predicted scaling with R and with the parton flavour was consistent with what is observed in Monte Carlo simulations. However, such simulations additionally show a non-trivial p_t dependence that is absent from simple analytical estimates. Accordingly, we decided to rely just on Monte Carlo simulations to evaluate non-perturbative corrections.

We compared our results to data from the ALICE and ATLAS collaborations in section 6. For the smallest available R value of 0.2, both the NNLO_R corrections and the LL_R corrections beyond NNLO_R play important roles: at the lower end of ALICE's p_t range, the effect of NNLO_R corrections was almost 50%, while further LL_R corrections mattered at the 20% level. For R = 0.4, NNLO_R corrections still mattered, typically at the 10−30% level, depending on the p_t. However, LL_R resummation then brought little additional change. Overall, for the ALICE data and the forward ATLAS data, NNLO_R+LL_R brought somewhat better agreement than NLO, while for central rapidities the ATLAS data were substantially above the NNLO_R+LL_R predictions. It will be important to revisit the pattern of agreement once the full NNLO corrections are known, taking into account also aspects such as correlated experimental systematic uncertainties and PDF uncertainties.

Where the NNLO_R and NNLO_R+LL_R predictions clearly make the most difference is in reproducing the R dependence of the cross sections. For the inclusive spectrum plots, once one goes to NNLO_R+LL_R, the picture that emerges is consistent across different values of R. That was not the case at NLO. This is visible also in the ratios of cross sections at different R values. In particular, for the reasonably precise ATLAS data, NNLO_R and NNLO+LL_R are in much better agreement with the data than the NLO-based predictions. For the ALICE data, the uncertainties are such that it is harder to make a definitive statement. Nevertheless, NNLO+LL_R performs well and notably better than plain NNLO_R.

Overall, the substantial size of subleading R-enhanced terms in the NNLO corrections also motivates studies of small-R resummation beyond LL_R accuracy and of small-R higher-order effects in other jet observables.
A final comment concerns long-term prospects. We have seen here that the availability of data at multiple R values provides a powerful handle to cross-check theoretical predictions. As the field moves towards ever higher precision, with improved theoretical predictions and reduced experimental systematic uncertainties, cross-checks at multiple R values will, we believe, become increasingly important. In this respect, we strongly encourage measurements at three different radii. Small radii, R ≃ 0.2−0.3, are particularly sensitive to hadronisation effects; large radii, R ≃ 0.6−0.8, to underlying-event effects; the use of an intermediate radius R ≃ 0.4 minimises both and provides a good central choice. Only with the use of three radii do we have a realistic chance of disentangling the three main sources of theoretical uncertainty, namely perturbative effects, hadronisation and the underlying event.

…summation, one should simply be aware that the convergence properties of the t and α_s expansions will sometimes be noticeably different. Note also that the above discussion holds specifically for the expansion of the LL_R result. As we have seen in section 2.2, NLL_R effects are large and at NNLO are of opposite sign to the LL_R contribution. This further complicates the discussion of the convergence properties of the inclusive jet spectrum.

B Scale choice beyond leading order

When making fixed-order predictions for the inclusive jet cross section, there are two widely used prescriptions for the choice of a central renormalisation and factorisation scale. One prescription is to use a single scale for the whole event, set by the p_t of the hardest jet in the event, µ_0 = p_t,max. This was adopted, for example, in Ref. [26]. Another prescription is to take instead a different scale for each jet, specifically that jet's p_t, µ_0 = p_t,jet. This was adopted, for example, in Ref. [13]. At LO, the two prescriptions give identical results, since there are only two jets in the event and they have the same p_t. However, starting from NLO the prescriptions can differ substantially. Interestingly, a study of the small-radius limit can provide considerable insight into which choice is more appropriate.

Figure 24 (left) shows the ratio of the NLO result as obtained with µ_0 = p_t,jet to that with µ_0 = p_t,max, as a function of the jet p_t, for three different jet radii. The main observation is that the µ_0 = p_t,jet prescription increases the cross section, especially at small radii: it brings an increase of almost 20% for R = 0.1 at low p_t, versus 4% for R = 1.0 (in both cases for a central scale choice). As we saw in section 4, for reasonably small R, the NNLO corrections suppress the cross section. Therefore the choice µ_0 = p_t,jet takes us in the wrong direction.

In order to understand this better, it is useful to make a number of observations:

1. For the virtual part of the NLO calculation, the two scale prescriptions give identical results, so the deviation of the ratio from 1 in Fig. 24 (left) can come only from the real part.
2. The real part itself involves two different pieces: that from binning either of the two leading jets, and that from binning the 3rd jet. The right-hand plot of Fig. 24 shows that the leading-order 3rd-jet contribution is at the level of 1−2% of the leading-order dijet result, and so it is reasonable to neglect it in our discussion.

3. When a real emission is within an angle R of its nearest other parton, there are only two jets in the event and the two scale-choice prescriptions are identical.

4. Differences between the prescriptions arise when the softest parton falls outside one of the two leading jets. Then one of those jets has a reduced p_t, and the choice µ_0 = p_t,jet gives a smaller scale than µ_0 = p_t,max. This occurs with a probability that is enhanced by ln 1/R.

5. At p_t ∼ 100 GeV, where the effects are largest, renormalisation scale (µ_R) variations play a much larger role than factorisation scale (µ_F) variations. Therefore a smaller scale translates to a larger value of α_s and thus a larger cross section for the real contribution (which is always positive). Consequently, the prescription µ_0 = p_t,jet leads to a cross section that is larger than for the prescription µ_0 = p_t,max, and the difference is enhanced by a factor ln 1/R for small R.

This qualitatively explains the behaviour seen in Fig. 24 (left). The µ_0 = p_t,jet scale choice introduces a correction that goes in the wrong direction because it leads to a smaller scale (and larger α_s) for the real part, without a corresponding modification of the virtual part. Thus it breaks the symmetry between real and virtual corrections.

The above reasoning leads us to prefer the µ_0 = p_t,max^(R=1) prescription. To make it a unique event-wide choice, independent of R, we always define µ_0 = p_t,max using jets with a radius equal to one, regardless of the R value used in the measurement.

We note that µ_0 = p_t,max has a potential linear sensitivity to initial-state radiation, i.e. initial-state radiation of transverse momentum p_t,i shifts µ_0 by an amount p_t,i. A yet more stable choice might be µ_0 = ½(p_t,1 + p_t,2), the average transverse momentum of the two hardest jets (again defined with a radius of one). For this choice, the shift of µ_0 would be limited to O(p_t,i²/(p_t,1 + p_t,2)). We leave its study to future work. Yet another option is the use of MINLO-type procedures [75]. For dijet systems, this should be rather similar to µ_0 = ½(p_t,1 + p_t,2).
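To make the mechanism in point 5 concrete, the following sketch evaluates a one-loop running coupling at the two candidate scales for a toy configuration in which the softer leading jet has radiated a parton outside its (small) radius; the jet momenta, n_f and Λ values are all illustrative assumptions:

```python
import math

def alpha_s(mu, n_f=5, lam=0.2):
    """One-loop running coupling; n_f and Lambda are illustrative choices."""
    b0 = (33.0 - 2.0 * n_f) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(mu * mu / (lam * lam)))

# Toy real-emission configuration: the softer leading jet has lost a parton
# outside its small radius, so its p_t is reduced relative to the R = 1 jet.
jet_pts_small_R = [100.0, 85.0]   # two leading jets, clustered with small R (GeV)
jet_pt_max_R1 = 100.0             # hardest jet p_t when clustering with R = 1

mu_event = jet_pt_max_R1          # mu_0 = p_t,max (R = 1): one scale per event
for pt in jet_pts_small_R:
    mu_jet = pt                   # mu_0 = p_t,jet: a lower scale for softer jets
    print(f"jet p_t = {pt:5.1f} GeV:  alpha_s(p_t,jet) = {alpha_s(mu_jet):.4f}  "
          f"alpha_s(p_t,max) = {alpha_s(mu_event):.4f}")
```

The per-jet prescription assigns the softer jet a lower scale and hence a larger α_s, enhancing the (positive) real contribution while leaving the virtual part untouched, which is exactly the asymmetry described above.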
Figure 1. Impact of R-dependent terms in the inclusive-jet spectrum, illustrated using the small-R resummation factor obtained from the ratio of σ_LL_R in Eq. (2.1) to the leading-order inclusive jet spectrum σ_LO. It is shown as a function of the jet p_t for different jet radius values. For each R value, the plot illustrates the impact of two choices of R_0: R_0 = 1 (our default) as solid lines and R_0 = 1.5 as dashed lines.

Figure 2. Left: comparison of the R dependence in the exact and small-R approximated NLO expansion, using Eq. (2.3), shown as a function of jet transverse momentum p_t, for √s = 7 TeV in the rapidity region |y| < 0.5. Right: comparison of Δ_1+2(p_t, R, R_ref) and Δ^LL_R_1+2(p_t, R, R_ref) (cf. Eq. (2.5)). In both plots CT10 NLO PDFs [38] are used, while the renormalisation and factorisation scales are set equal to the p_t of the highest-p_t R = 1 jet in the event (this same scale is used for all R choices in the final jet finding).

Figure 3. Left: size of the matching normalisation factor (left-hand factor of Eq. (3.4), normalised to LO), shown vs. p_t for various R values and two R_0 choices. Right: size of the matched small-R fragmentation factor (right-hand factor of Eq. (3.4); similar results are observed for the right-hand factor of Eq. (3.5)). The results are shown for the scale choice µ_R = µ_F = p_t,max, where p_t,max is the transverse momentum of the hardest jet in the event.

Figure 4. Inclusive jet cross section for p_t > 100 GeV, as a function of R, normalised to the (R-independent) leading-order result. Left: the standard NLO result, compared to the "NLO-mult." result of Eq. (3.5) and the NLO+LL_R matched result of Eq. (3.4). The scale uncertainty here has been obtained within a prescription in which the scale is varied simultaneously in the left- and right-hand factors of Eqs. (3.4) and (3.5) ("correlated scale choice"). Right: the same plot, but with the scale uncertainties determined separately for the left- and right-hand factors of Eqs. (3.4) and (3.5), and then added in quadrature ("uncorrelated scale choice"). The plot also shows the NLO+LL_R result for R_0 = 1.5 at our central scale choice.

Figure 5. Left: comparison of the NLO, NNLO_R and NNLO_R-mult. results for the inclusive jet cross section for p_t > 100 GeV, as a function of R, normalised to the LO result. Right: corresponding comparison of NLO, NNLO_R and NNLO_R+LL_R, together with the central curve for NNLO_R+LL_R when R_0 is increased to 1.5. In both plots, for the NNLO_R-mult. and NNLO_R+LL_R results the scale dependence has been evaluated separately in the normalisation and fragmentation contributions and added in quadrature to obtain the final uncertainty band.

Figure 6. Same as Figure 4, but focusing only on the high-p_t bin. Both plots use an uncorrelated scale variation in the normalisation and fragmentation factors.

Figure 7.

Figure 10. Top: comparison between the NNLO_R-based results and POWHEG+Herwig 6 (left) and POWHEG+Pythia 8 (right), shown as a function of R, integrated over p_t for p_t > 100 GeV. Bottom right: comparison of POWHEG+Pythia 8 with NNLO_R-based results, where the latter have an additional NNLO K-factor of 1.15.

Figure 11. The average shift in jet p_t induced by hadronisation in a range of Monte Carlo tunes, for R = 0.4 and R = 0.2 jets, both quark- and gluon-induced. The shift is shown as a function of jet p_t and is rescaled by a factor R C_F/C (C = C_F or C_A) in order to test the scaling expected from Eq. (5.1). The left-hand plot shows results from the AUET2 [46] tune of Herwig 6.521 [22,23] and the Monash 13 tune [53] of Pythia 8.186 [21], while the right-hand plot shows results from the Z2 [54] and Perugia 2011 [45,55] tunes of Pythia 6.428 [20]. The shifts have been obtained by clustering each Monte Carlo event at both parton and hadron level, matching the two hardest jets in the two levels and determining the difference in their p_t's. The simple analytical estimate of 0.5 GeV ± 20% is shown as a yellow band.

Figure 12. Hadronisation (left) and underlying event (right) multiplicative corrections to the jet spectrum, as a function of R for pp collisions at 7 TeV. The top row shows results for p_t > 100 GeV and |y| < 0.5, while the bottom row is for p_t > 1 TeV. Six combinations of generator and tune are shown, and the yellow band corresponds to the envelope of the tunes.
Figure 14. Comparison between a range of theoretical predictions for the inclusive jet spectrum and data from ALICE at √s = 2.76 TeV [13]. The upper row is for R = 0.2 and the lower one for R = 0.4. The left-hand column shows NLO-based comparisons, while the right-hand one shows NNLO_R-based comparisons. Rectangular boxes indicate the size of the systematic uncertainties on the data points, while the error bars correspond to the statistical uncertainties. Results are normalised to the central NNLO_R+LL_R prediction (including non-perturbative corrections).

Figure 15. Comparison between a range of theoretical predictions for the inclusive jet cross-section ratio and data from ALICE at √s = 2.76 TeV [13]. The left-hand column shows NLO-based comparisons, while the right-hand one shows NNLO_R-based comparisons. Rectangular boxes indicate the size of the systematic uncertainties on the data points, while the error bars correspond to the statistical uncertainties.

Figure 17. Comparison between a range of theoretical predictions for the inclusive jet spectrum and data from ATLAS at √s = 7 TeV [26] in the rapidity bin |y| < 0.5. The upper row is for R = 0.4 and the lower one for R = 0.6. The left-hand column shows NLO-based comparisons, while the right-hand one shows NNLO_R-based comparisons. Rectangular boxes indicate the size of the systematic uncertainties on the data points, while the error bars correspond to the statistical uncertainties. Results are normalised to the central NNLO_R+LL_R prediction (including non-perturbative corrections).

Figure 20. Comparison between theoretical predictions with an NNLO_R m = 1 correction factor K = 1.10 and data from ALICE at √s = 2.76 TeV [13] at R = 0.2 and R = 0.4. Rectangular boxes indicate the size of the systematic uncertainties on the data points, while the error bars correspond to the statistical uncertainties. Results are normalised to the central NNLO_{R,K}+LL_R prediction (including non-perturbative corrections).

Figure 21. Comparison between theoretical predictions with an NNLO_R m = 1 correction factor K = 1.10 and data from ATLAS at √s = 7 TeV [26] in the rapidity bin |y| < 0.5, for R = 0.4 and R = 0.6. Rectangular boxes indicate the size of the systematic uncertainties on the data points, while the error bars correspond to the statistical uncertainties. Results are normalised to the central NNLO_{R,K}+LL_R prediction (including non-perturbative corrections).

Figure 22.

Figure 23. Comparison of the convergence of the α_s expansion of the small-R resummation (left) relative to that of the t expansion (right).

Figure 24. Left: ratio of NLO predictions for the inclusive spectrum when using the per-jet scale choice µ_0 = p_t,jet versus the per-event choice µ_0 = p_t,max. The results are shown as a function of jet p_t for three jet radius choices, R = 0.1, 0.4 and 1.0, and have been obtained with NLOJet++. The bands correspond to the effect of scale variation, where the scales are varied upwards and downwards by a factor of two simultaneously for the numerator and denominator. Right: fraction of the inclusive jet spectrum (for |y| < 0.5) that comes from jets beyond the two hardest. The 3-jet rate and the overall normalisation are both evaluated at LO.
The social regulation of inter-SME relations: Norms shaping SMEs relationships in Nigeria

A field study involving 35 institutionally constrained small and medium enterprises (SMEs) was conducted to investigate how entrepreneurs operating in developing economies draw on norms in the absence of formal institutional support. Employing a qualitative approach, our findings revealed that the institutional logics perspective, which presupposes an understanding of entrepreneurial behaviour, provided insights into many of the decisions observed within the SMEs. Our interview data revealed how a variety of culturally specific norms, including those influenced by kinship, religion and trade associations, played a pivotal role in structuring market-oriented economic activities. Central to our contribution is the concept that norms play a crucial role in enforcing trade agreements. This suggests that in situations where actors cannot rely on formal institutional arrangements, norms not only limit opportunistic behaviour but also foster trust within networks. Our paper makes a significant contribution to the field of entrepreneurship by addressing issues related to norms and SMEs within economic-institutional contexts that have been largely overlooked.

Introduction

Key insights into the structure of African entrepreneurship emphasise the significant role of social networks within the context of small and medium enterprises (SMEs) (Aluko et al., 2019; Udry and Conley, 2004; Wellalage and Locke, 2016). This includes how these enterprises rely on informal network relationships such as family and kinship (Amoako and Matlay, 2015; Drakopoulou-Dodd, 2011; Granovetter, 1985), religion (Dana, 2010; Omeihe, 2019), and credit associations (Amoako, 2018; Lyon and Porter, 2007) to leverage trade opportunities. Within the West African context, Nigeria's market system offers an intriguing case study. It predominantly operates within environments characterised by imperfect credit markets (Nyarku and Oduro, 2017; Smith and Luttrell, 1994) and is influenced by fragile state-backed institutional frameworks (Abor and Quartey, 2010; Amoako, 2018; Lyon and Porter, 2007; Omeihe et al., 2021). Nevertheless, it becomes increasingly apparent that beneath this surface lies a diverse array of indigenous economic and social institutions supported by various norms, which significantly shape entrepreneurial behaviour.

There is no consensus in the literature on how norms are generated. While some scholars strongly emphasise the relevance of norms (e.g. Faulkner, 2010; Sardan, 2013; Omeihe, 2019), others point out that norms function to hold people accountable to each other (Brennan et al., 2013; Bicchieri et al., 2018), particularly in terms of adherence to certain principles (Coleman, 1990). Understanding why and how norms shape social action provides explanations for how they enhance economic performance and facilitate transactions between economic actors (Amoako and Matlay, 2015; Lyon, 2000; Meek et al., 2010).
Norms, broadly conceived, are embedded within social institutions operating at different levels of jurisdiction. This is because institutional arrangements vary across societies, thereby providing varying degrees of cooperation (Lane, 1998). In most African economies, norms exert a more significant influence on entrepreneurial behaviour than scholars have readily recognised. In this context, norms play a decisive role in shaping entrepreneurial behaviour by defining the boundaries of acceptable actions and the institutional framework within which entrepreneurs operate (Nee, 1998). This is particularly evident in many parts of West Africa, where entrepreneurs rely on norms as alternatives to weak formal institutional arrangements (Amoako, 2018; Omeihe et al., 2021).

In fact, this further emphasises the importance of understanding entrepreneurial behaviour within its institutional and social contexts. This encompasses the political, economic and socio-cultural settings in which entrepreneurs operate (Refai et al., 2018; Smallbone and Welter, 2015; Welter, 2011). Such recognition holds the promise of providing insights into the influence of context on African entrepreneurship.

In this article, we draw attention to a variety of indigenous norms and their role in governing entrepreneurial actions. By using the term 'indigenous norms', we aim to emphasise that these norms, in addition to providing instructions on how to act, have contextual interpretations that go beyond the current understanding in academic discourse. The assertion put forth here is that indigenous norms lead to the development of predictable responses that are common among specific groups of people embedded within cultural contexts (Heide and John, 1992; Hodgson, 2006; Omeihe, 2022). This includes norms related to religion, trade associations and family, with a focus on how they shape entrepreneurial relationships (Amoako and Matlay, 2015; Bicchieri et al., 2018; Lyon and Porter, 2007). Rarely are these indigenous norms described in parallel. We demonstrate how these norms are associated with reputation and a range of enforcement mechanisms that influence entrepreneurial activities. We argue that the relative importance of indigenous norms lies in their ability to mitigate opportunism in one-off transactions, especially when prior interactions are non-existent and there are no expectations of future transactions. This raises the question of how different SMEs hold varying perceptions of how norms serve to organise economic behaviour. Recognising this significance, we aim to establish an understanding of how indigenous norms shape the activities of Nigerian SMEs.

From a theoretical perspective, we theorise by drawing on the institutional logics perspective to emphasise how actors, actions and contexts converge within institutional settings (Thornton and Ocasio, 1999). Guided by this theoretical framework, we aim to illustrate how various practices, beliefs and values influence different aspects of entrepreneurial actions. In summary, our objective is to understand how culturally specific norms situated within social and institutional contexts shape the behaviour of SMEs.
Building on this perspective, this paper takes an exploratory approach to facilitate an understanding of the processes through which entrepreneurs draw on norms in contexts characterised by inadequate formal institutions. The study focuses on the entrepreneur, who in this research is the SME owner-manager, as the unit of analysis. The original study relies on in-depth fieldwork conducted over a one-year period, with data collected through interviews and observations of entrepreneurs operating across Nigerian markets. To examine how entrepreneurial relationships are shaped by norms, our data collection is guided by one crucial sub-question: what types of norms influence SME relationships?

Here, our empirical evidence reveals a significant finding. In compensating for the weaknesses of inadequate formal institutions, such as weak courts and legal structures, entrepreneurs were compelled to rely on culturally specific norms, such as religion, family, kinship and trade associations, to establish the rules governing their business activities. These norms, in the form of institutional structures, demonstrate the unique context in which they emerge and how they, in turn, provide alternatives to formal institutional arrangements. We present evidence that the role of these norms varies in terms of their nature and scope of functions, with inclusion often being determined by reputation and moral judgements. The qualitative importance of these norms is evaluated in the findings section of this article as part of a broader interpretation concerning sanctions and the enforcement of SME agreements within markets.

To systematically address the research question mentioned above, this article is divided into six sections. Following the introduction, we selectively review the evidence showing how institutions offer incentives for human exchange, whether social or economic. Second, we proceed to conduct a review of studies on norms before justifying our chosen methodology. The results yield a series of novel findings that highlight the significance of norms in SME relationships in Nigeria. Overall, we identify several areas where interventions would be highly beneficial.

Theory: institutional logics

What are institutions precisely? North (1990: 3) provides the following definition: 'Institutions are the rules of the game in a society or, more formally, are the humanly devised constraints that shape human interaction'. This initial statement emphasises that institutions structure incentives in human exchange, whether social or economic. Institutions play a significant role in fostering economic growth because they influence the behaviour of key economic actors in society (Acemoglu et al., 2005; Omeihe, 2023). Therefore, actors rely on them as they provide legitimate courses of action. However, there is no certainty that actors will resort to the same set of institutions, as contextual factors such as culture also impact economic performance.
Certainly, one of the most enduring divides in the social sciences exists among scholars associated with institutions. For these scholars, institutional theory has proven to be a distinctive theoretical framework for exploring a wide range of issues. Since its emergence decades ago, its application has continued to generate significant interest across different domains. Seminal studies conducted by scholars like Meyer and Rowan (1977), Zucker (1977), and DiMaggio and Powell (1983) place culture within the framework of institutional analysis. For Meyer and Rowan, the role of the actor is seen as isolated from the institutional and societal levels, thus forming the basis for neglecting phenomenological insights, a stance commonly adopted by neo-institutionalists. A major advancement in new institutionalism by DiMaggio and Powell (1983) drew attention to the coercive, mimetic and normative components. Their contributions focused on the thoughtless nature of behaviour in response to cultural rationalisation but overlooked the role of institutions and their various forms.

The studies by Friedland and Alford (1991), Scott et al. (2000), and Thornton and Ocasio (1999) have provided greater precision by positioning institutional logics as a means of defining institutions. For these scholars, the focus has been on the role of culture as a critical determinant of institutional analysis. Building on this, Thornton and Ocasio (1999: 804) define institutional logics as 'the socially constructed, historical patterns of material practices, assumptions, values, beliefs and rules by which individuals produce and reproduce their material subsistence, organise their time and space, and give meaning to their social reality'. The conceptualisation of institutional logics, in principle, offers a flexible and adaptable explanation for how actors, actions and contexts converge within institutional settings. This manifestation of institutional theory recognises that the identities, practices, beliefs and values of actors are embedded within prevailing institutional logics. In short, institutional logics presuppose an understanding of entrepreneurial behaviour situated within social and institutional contexts that regulate social behaviour (Thornton and Ocasio, 1999; Thornton et al., 2012).

Clearly, explanations that reference institutional logics are incomplete without considering the interplay between formal and informal constraints. As is evident from Nee's (1998) analysis, informal rules, which are rooted in norms and operate in the shadow of formal institutions, can either constrain or limit economic actions. This explanation stems from the uncertainty surrounding whether informal rules within specific institutional settings will align or conflict with formal rules.

When it comes to Africa, the economic analysis of its institutions has often followed blueprints shaped by Westernised assumptions that bear little reflection of its current context. Consequently, its economic actors have been observed to rely on norms to facilitate economic action (Amoako and Matlay, 2015; Omeihe et al., 2020). It is worth emphasising that African SMEs are influenced by the logics of various social norms. These norms define what social behaviour is considered right or wrong, and their creation is influenced by the cultural background of relationships, especially within a socio-economic context. This suggests that cultural norms establish the boundaries of legitimate action that shape entrepreneurial interactions.
In sum, the idea that norms operate in opposition to formal institutional arrangements when the latter contradict social interests is appropriate. This is because norms encourage actors to avoid resorting to inadequate formal institutional arrangements and, instead, to rely on culturally specific arrangements to pursue their objectives. When applied to SMEs, the logic of norms largely reduces the costs associated with enforcement and monitoring. Such circumstances result in lower transaction costs, enabling entrepreneurs to realise their economic potential.

Norms

The most common yet crucial question is: what are norms? According to Faulkner (2010), norms are the explicit or implicit rules of behaviour that embody the preferences and interests of close-knit groups. They lie at the heart of a set of behavioural expectations shared by a group of people, providing instructions that guide their actions. The attitudinal hallmark of norms suggests feelings of guilt, embarrassment and shame that accompany violations of these norms (Elster, 1989; Faulkner, 2010). A notable feature common to various definitions of norms is the recognition that they play a vital role in the development of specific types of concrete relationships (Amoako and Matlay, 2015; Bruton et al., 2010; Porter et al., 2007). In simple terms, the concept of norms invokes an image of actions that are considered either desirable or undesirable, as they form the foundation for establishing and maintaining personalised trust (Lyon, 2000; Omeihe, 2019).

Indeed, the primary advantage of relying on norms is that they mitigate opportunism in one-off transactions, especially in cases where previous interactions are non-existent, with no expectations of possible future transactions. Varied discussions on the nature of norms have led to a significant number of debates regarding their formation. For example, while norms have been conceptualised to include aspects of social habits that shape cognitive actions (Hodgson, 2007; Levi, 1996), there is evidence suggesting that norms are formed through socialisation within schools, families and religious institutions (Lyon, 2000; Lyon and Porter, 2007). Many of these interpretations reveal that norms emerge from networks and are reinforced through ongoing relationships. This implies that norms lead to the development of predictable responses common among specific groups of people (Hodgson, 2007; Omeihe, 2019).

The fundamental assumption that norms embody a range of interests and preferences achievable through collective action provides a straightforward rationale for why people adhere to norms (Nee, 1998). This basic explanation is grounded in the idea that norms are followed because they prescribe what should be done. In doing so, norms are internalised by individuals who possess the moral sensibility to avoid the shame and guilt that come with sanctions (Elster, 1989; Hodgson, 2007). This cost gives rise to threats to one's reputation, shaming and exclusion from communal and economic activities (Amoako, 2019; Brennan and Pettit, 2004; Porter and Lyon, 2006).

For this reason, and within the context of SMEs, scholars such as Blomqvist et al.
(2008) indicate that norms reduce the risk of transaction costs related to monitoring and negotiations. Other scholars support this view by stating that norm-based relationships provide market information and protection against the liability of newness for SMEs (Amoako, 2019; Amoako and Matlay, 2015; Brunetto and Farr-Wharton, 2007; Rodriguez and Sanchez, 2012). It is no wonder that norms enable SMEs to economise on decisions regarding market opportunities.

In a related study, Amoako and Lyon (2014) show how norms of kinship and ethnicity in Ghana were useful for SME exporting. Combining this with the contributions by Omeihe et al. (2020) further amplifies the idea that family norms provide the framework within which transnational relationships operate. From our perspective, the complexities through which culturally specific norms shape exporting SMEs, particularly in a developing market economy, remain strikingly underexplored and under-theorised in entrepreneurship research (Amoako and Matlay, 2015; Amoako et al., 2018; Lyon and Porter, 2007; Omeihe, 2022). Our focus in this article is to explore new avenues by uncovering the types of norms that affect entrepreneurial behaviour.

Methodology

To explore the research question about the types of norms influencing SME relationships, this study primarily relies on qualitative research conducted in Bokkos market in 2017. It also builds upon additional research conducted by the authors in this region and elsewhere (see Omeihe, 2019; Omeihe et al., 2020). Bokkos market in North Central Nigeria received the most attention due to its significance in the marketing systems of manufacturing and agricultural produce, particularly tomatoes, vegetables, potatoes and fertiliser.

We opted against adopting a quantitative analysis of norms and their role in governing entrepreneurial action. Such an analysis is challenging to capture through surveys and statistical methods (Jenssen and Kristiansen, 2004; Omeihe et al., 2020). In response to calls for qualitative approaches that embrace a process perspective to provide a rich understanding of actual entrepreneurial experiences and the embeddedness of the relationships under investigation (Crotty, 1998; Colman and Rouzies, 2019; Hack-Polay et al., 2020; Möllering, 2006), we placed emphasis on qualitative methodological analysis. This approach allowed for the emergence of new knowledge within socio-economic relations that had not been previously identified in the literature. However, it is important to note that explanations based on the above methodology are challenging without acknowledging the scarcity of studies that focus on the diversity of norms within African entrepreneurship.

We employed a purposive sampling technique (e.g. Omeihe, 2019) to select 35 entrepreneurs operating within the manufacturing and agricultural sectors. The choice of both sectors facilitated comparisons across markets. We gained access to participants through interactions with key informants, involving visits to open marketplaces, trade meetings, export forums and trade fairs. This approach provided the advantage of accessing a wide range of entrepreneurs operating within the market. We conducted approximately five visits to various market sites and recorded field notes after each visit. Additionally, we conducted 35 semi-structured interviews with the entrepreneurs, which were audio-recorded and transcribed.
In our investigation of the role of norms, we maintained an open-minded approach to uncovering the information we needed. Each interview lasted approximately 35 minutes, and the questions were designed to probe into what norms meant to the entrepreneurs. During our analysis, we initiated the process of generating initial codes from the data and then proceeded to identify patterns within our dataset (Boyatzis, 1998; Braun and Clarke, 2008). With the research question in mind (Braun and Clarke, 2008; Miles and Huberman, 1994; Saunders et al., 2019), we systematically examined the entire dataset, identifying interesting aspects of the data that gave rise to recurring patterns (themes). Following the initial coding, we were presented with an extensive list of codes, which required us to categorise these codes into various themes (refer to Figure 1).

We began the analysis of our data by considering how different codes could be combined to develop overarching themes. This process led to the identification of a collection of candidate themes and sub-themes, providing us with a preliminary understanding of these individual themes. Subsequently, we initiated the refinement of these themes to ensure that the data within them conveyed meaningful insights, revealing the richness of the phenomenon under study. This refinement involved two levels of scrutiny. Firstly, all the compiled themes were reviewed to unveil coherent patterns, while the second level assessed the validity of each theme in relation to the dataset. This step also ensured that any additional themes that may have been missed during the initial coding process were re-examined and incorporated as necessary (Table 1). Through this process, we revealed the themes that had emerged within the dataset. The final stage entailed naming and defining these themes. Thus, each theme was accompanied by a sub-theme, a typical description of the main theme, and illustrative quotes (refer to Table 2). Our objective was to provide readers with a clear understanding of the essence of each theme (Braun and Clarke, 2008).

Throughout the iterative process of scrutinising our data, themes such as family and kinship norms, personal trust, trade associations and religion gradually emerged. Some of these findings had been presented in a prior presentation of our studies (Omeihe et al., 2021). A more systematic qualitative analysis was conducted after collecting all the data. We observed that norms were contingent upon several factors: (1) the reputation of the entrepreneurs, (2) their affiliations with existing trade associations and religious groups, (3) family and kinship connections and (4) the potential consequences of sanctions and trade ostracism. Through this analysis, we were able to highlight that norms were perceived as integral to the social structure, guiding instinctive actions and enabling entrepreneurs to mitigate risks (refer to Figure 2).

In presenting our findings, we have used a combination of quotes and illustrative examples to convey the insights derived from our data. When introducing sample excerpts, we have maintained respondent anonymity by assigning them alphabetical indices. The findings regarding how respondents relied on norms to influence SME relationships are comprehensively presented and discussed in the following section.
Context: Bokkos market

Situated in Plateau State in North Central Nigeria, Bokkos market is renowned for its substantial community of entrepreneurs specialising in manufacturing and agricultural production. The market's robustness has attracted

Findings

The primary objective of this paper is to identify a range of culturally specific norms that underlie SME relationships. The substantive findings have unveiled the novelty and diversity of narratives reflected in the nature of SMEs. Regardless of personal characteristics, industry or markets, entrepreneurs followed a similar set of actions as the foundation for relying on norms. Due to the entrepreneurs' reliance on various norms, the robustness of our sampling allowed us to access multiple entrepreneurial networks, expanding the scope of our inquiry. Our engagement process facilitated the exchange of knowledge, which was central to understanding how SMEs operated. Overall, the emerging findings from our interactions revealed that SME relationships were shaped by norms related to family/kinship, personal trust, trade associations and religion.

Norms of family and kinship

Considering the central premise that family and kinship constitute rich and valuable resources for Nigerian entrepreneurs, we examined the role of family and kinship norms in SME relationships. Evidence from our findings highlights that within Nigeria, and by extension Africa, a family often consists of a husband, his wife (or wives), children and numerous relatives with kinship or blood ties. Consequently, the concept of family implies a form of social security where members can support each other. Responses from our participants indicated that many entrepreneurs tended to rely on trust established through family and kinship connections in their business activities. These ties were often used to address grievances and problems. For example, participants were questioned about the benefits of working closely with family members. Out of the 35 entrepreneurs, 26 stated that they trusted their family members and chose to work with them. This trust was attributed to the understanding that family and kinship bonds, along with emotional ties, fostered trust-based relationships. Excerpts from their narratives are highlighted as follows:

This business is a family business. I have my sisters, nieces, husband, kids, and in-laws involved in this business with me. So, I would say that trust contributes to our business's success. (Case B)

When it comes to international trade, our partners are mostly from within the family. They belong to my mother's family, and they are all involved in trading. We do business with them because we know them and trust them. (Case L)

You know, in African society, regardless of how we describe it, it's a family-oriented society. Most of the retailers, including the transporters and exporters, are familiar with you. They know your warehouses, they know your family, and they are acquainted with the nature of your business. So, in that sense, there are no problems because if you happen to miss a payment, we know how to reach you. (Case N)

The evidence presented above underscores the significance of family ties in the level of support extended to family members' businesses. Traditionally, it is customary for family members to support the businesses of their kin, as a family member's wealth is seen as the collective property of all. This support is not only expected but also considered obligatory. Given their specific support requirements, kinship connections played a particularly vital role in bridging the 'psychic distance'.
Many participants revealed how they relied on the strength of family ties to overcome geographical, economic and socio-cultural differences during their international expansion efforts. They mentioned that they received market information from family and kinship members, and these connections heavily influenced their choices regarding entrepreneurial opportunities. Conversations revealed that many of their relatives, who also acted as intermediaries, played a central role in bridging the gap between domestic and foreign markets.

Our findings highlight a distinctive feature of African SME activities, namely the extensive family networks cultivated over generations. Comparing cases, it becomes evident that most entrepreneurs relied on connections with their emigrant family networks across the West African border markets. An illustration of such connections is described below:

I have brothers who have been living in Ghana for a long time. They initially introduce me and my product samples, and later they trade on my behalf. They ask me to send products that customers demand, and I forward them. That's how I conduct my business. (Case C)

We have connections outside Nigeria, and they are our family members there. I trust them, so I send my business to them. In return, they assist me in selling to customers and then return the money to me. (Case H)

Overall, it is evident that the entrepreneurs widely recognised the positive impact of family and kinship ties. They frequently relied on these networks to overcome trade barriers.

Norms of personal trust

Considering West Africa's challenging institutional context, characterised by inefficient legal systems and minimal banking infrastructure, the activities of SMEs stand as a remarkable display of ingenuity. Specifically, the existing dysfunctional formal institutional structures indicate a limited use of legally binding commercial trade contracts for enforcing commercial trade agreements. The empirical findings revealed that all the entrepreneurs relied on personal trust-based relationships to facilitate both their domestic and international trade dealings. It was evident that written contracts, as commonly employed in well-established market economies, were sparingly used by the selected participants.

[Table rows: Religious ties, the religious influences evident in entrepreneurial relationships (High); Family ties, the family influences evident in entrepreneurial relationships (High); Trade association, the ability of trade associations to enhance entrepreneurial action and sanction malfeasance (High).]

Due to the shortcomings of the existing formal institutional arrangements, norms of personal trust emerged as a critical resource for entrepreneurs. In further elaboration on these issues, the entrepreneurs expressed consistently negative views regarding the court system's ability to resolve disputes between them and their partners in West African markets (see Table 2). It is unequivocal that the entrepreneurs had built relationships over a series of past interactions. This explains why all the respondents were found to rely on oral agreements rather than written legal contracts.
When the topic of court systems and contract enforcement arose, all participants acknowledged the impracticality of depending on commercial contracts, as pursuing legal remedies for commercial disputes was perceived as a futile endeavour. Out of the 35 responses, 30 participants cited concerns regarding corrupt and expensive court systems, while 5 participants admitted to viewing the courts as accessible primarily to the affluent. These apprehensions led them to place their reliance on oral and written agreements.

To be specific, 19 of the respondents stated that they relied on oral contracts established through face-to-face interactions and telephone communications, while only 16 indicated their reliance on written agreements. It is worth emphasising that these written agreements, typically in the form of memoranda of understanding, were not crafted with legal assistance and should not be mistaken for legally binding contracts. Instead, they serve as adaptable arrangements between parties.

Beyond the weak courts, each entrepreneur conducted trade based on the personal trust they had developed with their partners. The relational and trusting nature of these transactions appealed to the entrepreneurs because it enhanced the prospects of honoured obligations. Oral and written agreements revealed that the entrepreneurs relied on personalised trust relationships to address the limitations of weak legal arrangements. Case evidence of the use of written agreements is clearly reflected in the excerpt below:

We transact our business based on trust. Without trust, I should have been out of business. I have been in this business for 10 years; no one has ever cheated me; it is all about trust. No contracts, we just write agreements on paper. (Case A)

Interestingly, two women entrepreneurs indicated how they related to their business partners in West African markets without relying on legally backed contracts. In the respective cases, we sought to allow the actors to recount their unique narratives:

We don't write on paper or sign any contracts. I don't even understand how to read contracts. Really, it is just phone calls most times, and I see my partners twice a year. During those times when I get to meet with them, we make our discussions and agreements for the year. (Case L)

There is an understanding that we agree verbally. There is nothing to write. There is no formal agreement; it is just through word of mouth. Yes, we agree, we sit down, we discuss it, and we move on. (Case M)

The picture that emerges from the above responses reveals that, on the one hand, personalised relational agreements with partners were built on norms of personal trust. Conversely, even though these agreements, both oral and written, lacked formal legal backing, they held substantial respect among the parties involved. It is noteworthy that all entrepreneurs emphasised that trust played a pivotal role in fostering these agreements, allowing them to share complementary interests and thereby strengthening their trade relationships.

Overall, the compelling evidence underscores the significance of norms of personal trust in the success of entrepreneurs. In all the cases examined, the presence of personalised trust relationships played a crucial role in resolving trade disputes and upholding agreements. This, in turn, contributed to the support and enhancement of their trade activities.
Norms of trade associations

Arguably, trade associations are a particularly prevalent feature of the SMEs analysed in this study. The evidence presented in the cases enabled an understanding of the functioning of trade associations. In fact, the uniqueness of trade associations in Nigeria and West Africa relates to their specific function as alternatives to the dysfunctional features of the formal environment constraining the entrepreneurs. Viewed from this perspective, trade associations enabled the exchange of trade information between members and ensured that certain market rules were followed. They also provided dispute mediation between members and their partners.

Most of the entrepreneurs (33 out of 35) belonged to trade associations. All the responses uncovered the associations' unique capacity to bestow trust on their members. Specifically, their regulatory role is explicable when placed within the broader context of SMEs. For instance, trade associations were found to exercise regulatory capacity by sanctioning members who act contrary to the existing norms governing trade activities. The extract below from one of the entrepreneurs describes the associations' regulatory influence:

Our associations are very important as they provide business referrals within and across different markets. However, once you abuse the trust that exists within the association, the chances are high that you will be blacklisted, and your reputation will be tainted. Members are warned not to trade with you. This signals an end to business for you. No one wants this! (Case P)

Similarly, the point that trust is sustained when members are particularly wary of sanctions such as reputation damage and ostracisation is further illustrated by Case O, who stated:

The application of trade sanctions is dependent on the gravity of the offense. Defaulters can be suspended or expelled from the association, and other members will be mandated to desist from transacting with you across the markets. (Case O)

Furthermore, the interviewees reiterated that associations created arenas for price setting by ensuring that agreed market prices were followed. The reflection of one of the entrepreneurs during the interviews expands on this point:

There is no disadvantage in belonging to our association. Our association helps in fixing the prices of the goods which we sell. It also provides information about which markets to invest in and details about our members in those markets. The association decides the pricing, and we all stick to the price as it benefits all of us. (Case L)

In view of the entrepreneurs' perception of the norms of trade associations, the general case evidence enables an understanding of the culturally specific norms used in enhancing trade. This demonstrates that norms are quite effective when working within specific contexts. Yet the danger in adopting a glamorised stance on the norms of trade associations lies in the constraints they present for non-members. While it is accepted that norms of associations generally have positive benefits, in looking at our data, we found that norms were also used to foster trade cartels. Although this was recognised to be beneficial to members and those with whom they cooperated, it was found to restrict trade opportunities and market access for non-members.
Norms of religion

Despite its multi-ethnic composition, one of the key features of the Nigerian context is the significant influence of religion. Beyond family and kinship structures, a significant part of Nigeria's indigenous structure is founded on religious beliefs. In fact, religion is accorded greater importance than in more advanced societies. Many aspects of Nigeria's social structure are reinforced by religious norms, since economic exchange is often related to religion. Given the context of SMEs, the reasons given above support the view that religious solidarity is demonstrated in the building of economic relations with people of similar faith. From our interpretations, we found that commitment decisions were underpinned by religious norms. The extent of this influence is reflected in the number of entrepreneurs (35/35) who relied on religious norms in fostering trade relationships. Owing to the historically religious origins of Nigerian trade, we found that religious norms enjoy a high degree of acceptance and authority, and this knowledge is widespread among entrepreneurs.

Given that a sizeable number of Nigerians hold traditional beliefs, it was no surprise to find that 5 of the 35 respondents expressed their belief in ancestral deities. One of the entrepreneurs explained the role of traditional religious norms:

In making our decisions, we are more disposed to trading with people who are of the same traditional beliefs. Differences in belief imply that it will be possible for one to be cheated. This belief in our traditional religion enhances our business exchange. (Case F)

The relevance of the above excerpt is profound, as religious ties were judged to be advantageous for the development of trade relationships. It is evident that the respondent was more obliged to deal with members of similar religious beliefs. This overt expression of such economic choices, whether the entrepreneurs are fully aware of their inclination or not, appears to indicate the entrepreneurs' trust in religious norms. Furthermore, evidence from similar responses indicates that being bound by shared religious norms reduces the chances of malfeasance. Admittedly, the use of oaths as a sanctioning mechanism portends great dangers in the event of default. For example, in certain cases, partners are warned about incurring the wrath of the traditional deities, as curses may be invoked as punishment. This finding emerged from the statement below:

In keeping with our custom, we are traditionalists, and we believe in ancestral deities. If you deceive us in business, there will be divine implications that are not usually favourable. This instills trust in our trade interactions. (Case L)

Interestingly, one of the cases described how strong Islamic beliefs enforced trust with partners in the neighbouring markets of Chad and the Niger Republic:

To be competitive, we ensure that we work well together and do not cheat each other. So, we would cite specific passages in the Holy book and take an oath that is binding on our agreement. Through this, we have trust that the other partner would not cheat us, and this would reduce the fear of cheating. (Case A)

Similarly, the excerpt below illustrates the influence of religious norms on entrepreneurs. One of the respondents provided evidence of how religious norms were useful in strengthening exporting relationships with his partners in Mali and Niger:
Doing business with people of similar religions can be easier. We don't cheat people in our business. This act is simple and reciprocal. So, it makes life simple and easy. In most cases, we are more confident doing transactions with brothers with similar beliefs. (Case Q)

The views expressed above cogently show how religious norms shape the process of trust building (Table 3). It is conceivable that religious norms play an important role in establishing transnational relationships with partners whom the respondents may not know very well. Through shared beliefs, it is recognised that parties would not act opportunistically. Our findings leave scope for further reflections on the role of norms (see Table 4). We further established that despite the importance of religious norms, women entrepreneurs were constrained by varied circumstances simply by reason of gender.

Discussion

To sum up, the point of departure for this article is the unresolved issue of how norms form the basis for entrepreneurial relations (Amoako, 2018). Perspectives investigating norms tend to focus primarily on the emergence of norms while giving scant explicit attention to the functions they perform. This article has specifically analysed how entrepreneurs rely on a range of norms to enhance entrepreneurial exchange in contexts with deficient institutional arrangements. It draws attention to an understanding of a range of indigenous norms, such as religion, trade associations and family/kinship, which define the rules of SME activities. We have also attempted to analyse the conditions under which norms come into being by exploring the instances in which one is required to draw on or maintain normative bases of entrepreneurial exchange. We find that while prior studies have underscored the relevance of context for entrepreneurship (Kalantaridis and Fletcher, 2012; Vershinina and Rodgers, 2019; Welter, 2011; Welter and Baker, 2021), the findings of this study extend such work to show that there is much more to the role of norms in shaping how entrepreneurs operate. Furthermore, we discover that the contracts taken for granted in more mature economies are not widely regarded in Nigeria. No evidence demonstrating recourse to legal contracts or arrangements was found. In the absence of formal structures, we find that entrepreneurs preferred to rely on norms of personalised trust relationships to address the limitations of weak legal arrangements. This is consistent with Rotter's (1971) definition of trust as the generalised expectation of the reliability of one's promise. The broader knowledge of relying on norms of personal trust was generally acknowledged as a powerful incentive for business continuity. This subsequently enables entrepreneurs to secure a pattern of reciprocity that mitigates the likelihood of opportunism in entrepreneurial exchange (Alexander, 2007; Bicchieri et al., 2018).

With reference to the findings, we must understand that entrepreneurial relationships within the Nigerian context depend on the extent of information about a member's reputation. This perspective, unwittingly shared, implies that the nature of information on one's reputation is likely to determine the chance of trade interactions. In this way, we would do well to be reminded that such information can not only guide the development of potential trade exchange but also move us far beyond knowing what to expect. This affirms the notion that norms are based on the embedded structures of network relations (Anderson and Jack, 2002; Granovetter, 1985; Nee and Ingram, 1998).
While we highlight how religious solidarity (Christianity, Islam and tradition) is demonstrated in the building of economic relations with people of similar faith (Amoako, 2019; Lyon and Porter, 2007), our findings, which are consistent with Omeihe's (2019, 2023) observation of three main Nigerian cultural blocs, reveal that women entrepreneurs were subjugated by the sheer power of certain Islamic norms. In particular, we find that such norms discourage female entrepreneurs from engaging in venture creation. We outline how certain religious norms serve as constraints on female entrepreneurship. Women who defied these norms were bound to be ostracised from trade relations. Yet such enforcement mechanisms have evolved to become accepted religious norms limiting the development of women-led SMEs. As such, norms can be viewed as constraining as well as enabling for Nigerian entrepreneurs.

In this article, we have uncovered how norms, in a range of forms, play an important role in enabling entrepreneurial behaviour. Rather than making a broad assumption that norms can be transplanted across contexts, we demonstrate how norms underpin the activities of entrepreneurs by assuming regulatory roles that allow confidence in entrepreneurial relationships. These include norms of trade associations, family/kinship, religion and personal trust, which allow the enforcement of sanctions when agreements are not honoured. Thus, we proffer a more nuanced understanding of how norms become alternative institutions to weak state institutions perceived to be dysfunctional and inefficient (Amoako and Matlay, 2015; Lyon and Porter, 2007; Omeihe and Omeihe, 2020).

Conclusion

The empirical findings from this study demonstrate that indigenous norms are particularly important for African SMEs. The paper has presented an analysis of norms to establish that written contracts are not relied upon by African SMEs; rather, they rely considerably on a range of norms to enhance their entrepreneurial activities. In this context, the prevalence of dysfunctional institutions, together with weak courts and legal structures, forced entrepreneurs to rely on culturally specific norms that defined the rules for entrepreneurship. By specifying the mechanism that links norms to the building of entrepreneurial relationships, our findings provide the missing link that furthers the understanding of African entrepreneurship. Four types of indigenous norms were identified, each distinguished by the way in which it enhances entrepreneurial activity. The production and maintenance of (1) norms of family/kinship, (2) personal trust, (3) trade associations and (4) religion represent reliable resources for the selected SMEs. All 35 respondents were found to subscribe to these norms in their entrepreneurial activities. This view establishes that norms play an essential role in defining acceptable and non-acceptable practices, thus serving as a basis for trust development. A major finding of the research can be interpreted in terms of the norms working as constraints on women entrepreneurs, which contributes to a rich store of knowledge. Such particularistic standards were enforced with ostracism and damage to reputation.
Conversely, a salient finding clearly demonstrates the entrepreneurs' propensity to rely on traditional beliefs in building trust with business partners. Perceived ties to similar religious beliefs were vital in selecting one's partners. We note a certain profoundness in this finding, suggesting that norms governing religious beliefs are advantageous for the development of business relationships. In other words, this preference was found to constrain opportunism in economic exchange.

In summary, we have advanced the institutional logics perspective (Thornton and Ocasio, 1999) by providing a foundation for understanding how the socio-cultural dimensions of African institutions enable and constrain social behaviour. In advancing theory, we situated the entrepreneurs in a network of relationships supported by culturally specific norms, through which they are rewarded or sanctioned. We drew attention to the role of context as a critical determinant of institutional analysis and sought to describe how socially constructed norms provide meaning to the participants. Our approach here is guided by the belief that the advancement of institutional analysis requires an analytic, not a descriptive, approach that best explains the observed relationship between entrepreneurs and their context. For future research, it will be interesting to examine, through a requisite multi-dimensional framework, the means by which one can better understand the institutional logics construct.

Table 4. Summary of norms shaping entrepreneurial relationships.

Norms of family and kinship:
• Attributed to the knowledge that family/kinship bonds, as well as emotional bonds, foster trust relationships.
• Family and kinship links are particularly important in overcoming geographical, economic and socio-cultural differences in international expansion.
• The impact of family and kinship ties was generally acknowledged to overcome trade barriers.
• Trust is formed based on norms of benevolence and developed through shared cultural ties and solidarity.

Norms of personal trust:
• The propensity to rely on written contracts found in well-established market-based economies was limited. Agreements (oral and written) were used to address the limitations of ineffective legal arrangements.
• Trust-based relationships were developed through a series of past interactions and reciprocity. Elements of personalised trust relations ensured that trade disputes were resolved and agreements upheld.

Norms of trade associations:
• Provide support to the entrepreneurs by enabling the exchange of market information between markets and ensuring that certain market rules are followed. Trade associations play crucial roles in knowledge creation and exchange. They also possess the ability to enforce trade agreements.
• They play a vital role in reducing the transaction costs associated with long-distance trade through the sharing of market information and advisory services.

Norms of religion:
• Religious norms reduce the chances of malfeasance or trust violation. The use of oaths as a sanctioning mechanism portends great dangers in the event of default. Female entrepreneurs were found to be subjugated by Muslim norms, which discourage them from entrepreneurship.
A fruitful research agenda would involve conducting a cross-cultural comparison of SMEs across West African contexts. Such an attempt should aim to promote the development of favourable institutions (cultural, economic, religious and political) that enhance African entrepreneurship. This will be expedient in complementing the activities of trade associations and religious, cultural and family networks as they aid the entrepreneurial strategy of African SMEs. As a result of our empirical study, some relevant implications were identified and underlined. Our findings reveal that within Nigeria and, by extension, Africa, indigenous norms play an important role in SME relationships, but they have been largely ignored. An extension of this finding can lead to interventionist strategies where policy approaches could seek to boost concrete regional relationships.

We are also of the view that interventions may provide the necessary development for indigenous trade associations, as their welfare implications hold valid promise for the development of SMEs. Such interventions would stimulate the development of existing social arrangements and contribute to the development of a broader African society. It therefore becomes pertinent to consider such policy initiatives when implementing future trade initiatives.

Notwithstanding, it is obvious that this study has some limitations in its empirical data. It is our opinion that while the multiple case studies yielded more results than would have been obtained from a single case, they lack the basis for generalisation (Yin, 2014). Nonetheless, our findings are more suitable for particularisation and the advancement of theory rather than generalisation. Lastly, our results contribute to the growing discourse on the role of norms in African entrepreneurship. Empirically, we demonstrate the types of culturally specific norms and the means through which these norms shape entrepreneurship. It is our core belief that our study will enhance the understanding of the indigenous norms vital for African SMEs.

Table 2. Selected themes and sub-themes.

Table 3. Summary of responses.
Monitoring healthy ageing for the next decade: South Korea's perspective

Abstract

South Korea is the fastest ageing country among OECD countries. Unlike the older generation growing up in the aftermath of the Korean war, the first and second baby boomer generations have heightened expectations regarding public services. In addition to the demand for higher quality social and health services from these newer older populations, there is a concomitant increase in quantitative demand. It is imperative that Korea reimagines its health, social welfare and economic policies to reflect the rapidly changing needs of such generations. One way to do this is to mainstream and continually monitor healthy ageing in all aspects of future policies. In 2021, the Korean Longitudinal Healthy Aging Study was launched in this context, to better understand the needs of the new-older age generation and to produce evidence to support the formulation of better tailored policies that could promote healthy ageing. However, Korea is only in the early stage of developing a monitoring system that looks into the performance level of policies that support healthy ageing. As a country that is preparing for such a rapid demographic transition and has already commenced developing its healthy ageing indicators, it will be important to assess and monitor the level of healthy ageing uniformly from the framework perspective of WHO. Korea welcomes WHO's development of an internationally applicable M&E framework for healthy ageing. We hope that WHO's M&E framework on healthy ageing will help Korea align with international standards in its journey through the UN Decade of Healthy Ageing 2021-2030 and beyond.

• WHO's Monitoring & Evaluation framework on healthy ageing will allow more concrete application of the healthy ageing concept in countries.
• WHO's M&E Framework will help South Korea align its healthy ageing indicators to be more internationally comparable.
• South Korea hopes to learn from the Framework and share its experience on the development of national healthy ageing indicators.

Korea's demographic trends and the need for a monitoring framework for healthy ageing

Korea is the fastest ageing country among OECD countries. Rates of ageing have accelerated even after 2018, when South Korea became an 'aged society' with its older population reaching over 14%. In Korea, older people are defined as those aged 65 and over, and this is the standard for qualification for government policies including pension and long-term care services. In 2025, with its first baby boomer generation (people born from 1955 to 1963) having begun turning 65 in 2020, Korea is expected to become a 'super-aged society' with more than 20% of its population aged 65 and over [1]. In addition to this demographic change, in 2033 the second baby boomer generation (people born from 1968 to 1974) will begin to reach 65, and with the addition of these approximately 17 million people, the added societal and economic impact of ageing on all aspects of Korean society will be enormous.

Unlike the older generation growing up in the aftermath of the Korean war, the first and second baby boomer generations have a high level of education and have experienced much greater economic wellbeing. They have heightened expectations regarding public services and also have a deeper interest in societal, political and cultural issues that affect them.
Furthermore, in addition to the demand for higher quality social and health services from this newer older population, there is a concomitant increase in quantitative demand. As of 2020, 54.9% of persons aged 65 and over in Korea experienced multimorbidity, and the proportion increased with age, from 47.2% for persons aged 65-74 years to 63.3% for those aged 75-84 and 73% for those aged 85 and over [2]. Multimorbidity is highly associated with functional decline and health-related disability [3,4,5,6,7], and thus, as the proportion of the older population grows, the number of people who require both healthcare and LTC services will increase significantly.

Therefore, it is imperative that Korea reimagines its health, social welfare and economic policies to reflect the rapidly changing needs of such generations. Also, Korea must accelerate its transformative approach of viewing older people not just as a vulnerable minority, but as active and valuable members of Korean society. For this approach to be realised, it is crucial that Korea mainstreams and continually monitors healthy ageing in all aspects of future policies.

Korea's efforts to realise effective monitoring of healthy ageing

Korea is only in the early stage of developing a monitoring system that uses indicators to look into the performance level of policies that support healthy ageing. Presently, public health policy in Korea is disease focused, and the concept of functioning is not accepted as a main indicator for policy evaluation. Furthermore, there is still no definite consensus among experts on which tool to use to measure the various aspects of healthy ageing. Therefore, it is important to develop a performance indicator and evaluation system that can assess and monitor the level of healthy ageing uniformly from the framework perspective of WHO. Korea already has the Korean Longitudinal Study of Aging (first conducted in 2006) and the Korean Frailty and Aging Cohort Study (launched in 2016), which survey and track the ageing process. However, these surveys have a few limitations, including under-coverage of the first and second baby boomer generations, insufficient measurements related to healthy ageing, and differences in the purpose of cohort construction.

In 2021, the Korean Longitudinal Healthy Aging Study (KLHAS) was launched in this context, to better understand the needs of the new-older age generation and to produce evidence and base data to support the formulation of better tailored policies that could promote healthy ageing [8]. The KLHAS is a large-scale random sample cohort representing people aged 45 years and over in Korea. It is the only nationally representative cohort established based on the concept of healthy ageing in Korea. The KLHAS is made up of the Korean Longitudinal Healthy Ageing Cohort (KLHAC) and the Korean Longitudinal Long-Term Care Cohort (KLTC). The KLHAC identifies the causes of intrinsic capacity and functional decline by tracing the trajectories of the ageing process. The KLTC elucidates factors that lead to eligibility for the long-term care system and traces the main factors leading to institutionalisation. The baseline study for the KLHAC was conducted from 2021 to 2022, enrolling 10,416 individuals. The KLTC baseline study was conducted from 2022 to 2023, focusing on 5,000 older individuals living in the community who had qualified for LTC insurance. The KLTC also included information on family caregivers and their role in ageing in place (AIP).
The conceptual framework of the KLHAC includes (i) a successful ageing concept model; (ii) a concept to be used for the characterisation of frailty syndrome; (iii) the conceptual framework of the ICF; (iv) WHO's Healthy Ageing concept and the UN's action plan to realise healthy ageing over the next decade; and (v) the key performance indicators of the 5th National Health Plan 2030 (HP 2030). The questionnaire of the KLHAC survey was developed and designed with survey tools capable of measuring the research details in each survey area based on this conceptual framework. The characteristics of this cohort allow the evaluation and surveying of various healthy ageing-related areas, including intrinsic capacity, health behaviours, social support and the environment. Such data can also be linked to the various large-scale databases retained by the National Health Insurance Service (NHIS), which include medical and long-term care data for the whole Korean population. Such linkages of cohort data and the NHIS database can be used not only to further assess and predict the level of contributing factors to healthy ageing but also to analyse the effective use of medical and long-term care resources for older persons. The Health Insurance Research Institute is going to launch a study that will conduct the first tracing work to support the development of healthy ageing indicators. Through this study, we expect to provide a policy base to assess and monitor healthy ageing in Korea.

Timeliness and importance of WHO's M&E framework

Although the UN Decade of Healthy Ageing outlines important action areas, there is scarce uniform guidance on ways to monitor or assess healthy ageing nationally or globally. There is also no global consensus on measurement methods for intrinsic capacity or functional ability, making application difficult in countries. This is why WHO's development of an internationally applicable monitoring and evaluation framework is both timely and significant for Korea as well as all countries globally. With WHO's M&E framework (Framework) and evidence-based recommendations on measures of healthy ageing informed by the systematic review to be published in the special issue, Korea will be able to align its healthy ageing indicators and data collection efforts to be more internationally comparable. Korea welcomes the efforts of WHO and hopes for a two-way synergy where we learn from, but also share with, the international community our experience and process in developing our cohort and set of indicators aligned to the Framework. We are certain that with WHO's M&E framework on healthy ageing, all countries, including Korea, will be able to more concretely apply the concept of healthy ageing to health and LTC policies during the UN Decade of Healthy Ageing 2021-2030 and beyond.

Declaration of Conflicts of Interest: None.

Disclaimer: The authors alone are responsible for the views expressed in this article and they do not necessarily represent the views, decisions or policies of the institutions with which they are affiliated.

Declaration of Sources of Funding: This special supplement is funded by the European Commission through the AAL Programme Budget.
Computational elucidation of regulatory network responding to acid stress in Lactococcus lactis MG1363

predict and validate regulons related to acid stress response in Lactococcus lactis MG1363. A total of 51 regulons were identified, and 14 of them have computationally verified significance. Among these 14 regulons, five were computationally predicted to be connected with the acid stress response, with (i) known transcription factors in the MEME suite database successfully mapped in Lactococcus lactis MG1363; and (ii) differentially expressed genes between pH values of 6.5 (control) and 5.1 (treatment). Validated against 36 literature-confirmed acid stress response related proteins and genes, 33 genes in Lactococcus lactis MG1363 were found to have orthologous genes using BLAST, associated with six regulons. An acid response related regulatory network was constructed, involving two trans-membrane proteins, eight regulons (llrA, llrC, hllA, ccpA, NHP6A, rcfB, regulons #8 and #39), nine functional modules, and 33 genes with orthologous genes known to be associated with acid stress. Our RECTA pipeline provides an effective way to construct a reliable gene regulatory network based on regulon elucidation. The predicted resistance pathways could serve as promising candidates for better acid tolerance engineering in Lactococcus lactis. The pipeline has strong application power and can be effectively applied to other bacterial genomes where the elucidation of the transcriptional regulation network is needed.

INTRODUCTION

Lactococcus lactis (L. lactis) is one of the mesophilic Gram-positive lactic acid-producing bacteria. It has been widely applied in dairy fermentations such as cheese and milk products (Wegmann et al. 2007). Several studies have provided evidence of its essential roles in secreting and delivering proteins or vaccines for immune treatment of diseases such as diabetes (Ma, Liu, et al. 2014), malaria (Ramasamy et al. 2006), tumors (Bermudez-Humaran et al. 2005, Zhang et al. 2016) and infections (Hanniffy et al. 2007). Holding the advantage of higher acid tolerance, which protects vectors from dissolving during delivery inside the animal body, L. lactis has great potential and safety in oral drug development (Hols et al. 1999). Moreover, it has been found that L. lactis, along with some Lactobacillus, Bifidobacterium and other gut microbiota, is associated with obesity (Million et al. 2012). Such studies point to the possibility and utility of L. lactis in metagenome studies investigating the effect of microbial interactions between L. lactis and other species in the human body. The optimal pH range for its growth is between 6.3 and 6.9 (Hutkins and Nannen 1993). The production of lactic acid, a weak organic acid with a pKa of 3.86, leads to a pH drop in the extracellular media, reduced metabolic capability, decreased growth rate and ultimately reduced cell viability (van de Guchte et al. 2002). It is now well established that lactococci have evolved stress-sensing systems, which enable them to tolerate harsh environmental conditions (Carvalho et al. 2013, Hutkins and Nannen 1993, van de Guchte, Serror, Chervaux, Smokvina, Ehrlich and Maguin 2002).
The reason that bacteria maintain protection mechanisms against acid stress is to withstand the deleterious effects caused by harmful, high levels of protons in the exposed environment. Acid stress is known to change the level of the alarmones (guanosine tetraphosphate and guanosine pentaphosphate, collectively referred to as (p)ppGpp) (Hauryliuk et al. 2015) and leads to the stringent response (Rallu et al. 2000). Many mechanisms or genes related to the acid stress response (ASR) have been identified. Proton-pumping activity, as the direct regulator of the acid stress response, controls the intracellular pH level by pumping extra protons out of the cell (Koebmann et al. 2000, Lund et al. 2014), and an increased level of alkaline compounds also counters acidification, as found in streptococci (Shabayek and Spellerberg 2017). Cell repair of acid damage by chaperones or proteases, such as GroES, GroEL, GrpE, HrcA, DnaK, DnaJ and Clp (Frees et al. 2003, Jayaraman et al. 1997), and hdeA/B and Hsp31 in Escherichia coli (E. coli) (Kern et al. 2007, Mujacic and Baneyx 2007), as well as the arginine deiminase (ADI) system (Budin-Verneuil et al. 2006, Ryan et al. 2009, Sun et al. 2012, Zuniga et al. 2002) and glutamate decarboxylase (GAD) pathways (Hoskins et al. 2001, Nomura et al. 1999, Sanders et al. 1998), have been proven to be associated with the acid response. Additionally, transcriptional regulators, sigma factors, and two-component signal transduction systems (TCSs) have also been demonstrated to be responsible for ASR by modifying gene expression (Cotter and Hill 2003). These genes and pathways suggest that low pH has widespread, or even global, adverse effects on cell functions and elicits responses at the genomic, metabolic and macromolecular levels. To better understand the mechanisms that control acid tolerance and respond to acid stress in L. lactis, we considered MG1363, a strain extensively studied for acid resistance, to carry out computational analyses (Carvalho, Turner, Fonseca, Solopova, Catarino, Kuipers, Voit, Neves and Santos 2013, Hartke et al. 1996, Linares et al. 2010, Sanders et al. 1999).

To adequately describe the transcriptional state and gene regulation responsible for ASR in L. lactis, a gene regulatory network (GRN) integrating all individual pathways is needed. Genomic and transcriptomic analyses have been widely used for elucidating GRN hierarchies and offering insight into the coordination of response capabilities (Arnoldini et al. 2012, Carvalho, Turner, Fonseca, Solopova, Catarino, Kuipers, Voit, Neves and Santos 2013, Levine et al. 2013, Locke et al. 2011). However, an obstacle to this kind of genome-scale network is the lack of well-documented transcription factors (TFs) and regulatory mechanisms at the transcription unit level in MG1363. One way to study the mechanism of transcriptional regulation in microbial genomics is regulon prediction. A regulon is a group of co-regulated operons, each of which contains one or multiple consecutive genes along the genome (Cao, Ma, et al. 2017, Mao et al. 2015, Zhou et al. 2014). Genes in the same operon are controlled by the same promoter and are co-regulated by one or a set of TFs (Jacob et al. 1960). The elucidation of regulons can improve the identification of transcriptional genes and, thus, reliably predict gene transcription regulation networks (Liu, Zhou, et al. 2016).

There are three ways to predict regulons: (i) predicting new operons for a known regulon (Kumka and Bauer 2015, Tan et al. 2001).
This method combines motif profiling with a comparative genomic strategy to search for related regulon members and carries out systematic gene regulation studies. (ii) Integrating cis-regulatory motif (motif for short) comparison and clustering to find significantly enriched motif candidates (Gupta et al. 2007, Ma et al. 2013). The candidate motifs are then assembled into regulons. (iii) Performing ab initio novel regulon inference using a de novo motif finding strategy (Novichkov et al. 2010). This approach uses the phylogenetic footprinting technique, which mostly relies on reference verification (Blanchette et al. 2002, Katara et al. 2012, Liu, Zhang, et al. 2016), and can perform a horizontal sequential comparison to predict regulons in target organisms by searching for known functionally related regulons or TFs from other relevant species. One algorithm for phylogenetic footprinting analysis, called Motif Prediction by Phylogenetic Footprinting (MP3), was developed by Liu et al. and used for regulon prediction in E. coli (Liu, Zhang, Zhou, Li, Fennell, Wang, Kang, Liu and Ma 2016). MP3 was then integrated into the DMINDA web server along with other algorithms, such as the Database of Prokaryotic Operons 2.0 (DOOR2) (Cao, Ma, Chen and Xu 2017, Mao et al. 2014), BOttleneck BROken (BoBro) (Li, Ma, et al. 2011) and BoBro-based motif Comparison (BBC) (Ma, Liu, Zhou, Yin, Li and Xu 2013), to construct a complete pipeline for regulon prediction. In the latest research, a newly developed pipeline called Single-cell Regulatory Network Inference and Clustering (SCENIC) combines motif finding from co-expression gene modules (CEMs) with regulon prediction for single-cell clustering and analysis (Aibar et al. 2017). This approach opens a way to apply regulons in single-cell and metagenomic research. Nevertheless, without a suitable regulon database, researchers need to build up the library first through operon identification, CEM analysis, and motif prediction and comparison (Jensen et al. 2005).
Here, we designed a computational framework of regulon identification based on comparative genomics and transcriptomics analysis (RECTA) to elucidate the GRN responding to acid stress in MG1363. The general framework is showcased in Figure 1 and has six steps: (i) MG1363 co-expression gene modules (CEMs) and differentially expressed genes (DEGs) were generated from microarray data by the hcluster package and the Wilcoxon test in R, respectively. MG1363 operons were predicted from the genome sequence using the DOOR2 web server and assigned to each CEM; (ii) for each CEM, the 300 bp upstream of the promoter were extracted, and the sequences were used to find motifs using DMINDA 2.0; (iii) the top five significant motifs in each CEM were reassembled by similarity comparison and clustering to predict regulons; (iv) the motifs were compared to known transcription factor binding sites (TFBSs) in the MEME suite, and the TFs corresponding to these TFBSs were mapped to MG1363 using BLAST. Only regulons with DEGs and a mapped TF were kept as ASR-related regulons; (v) experimentally identified ASR-related genes in other organisms were mapped to MG1363 using BLAST and allocated to the corresponding regulons for further verification; and (vi) the relationships between regulons and functional gene modules were established to elucidate the overall ASR mechanism in MG1363. As a result, 14 regulons were identified, literature verified or putative, to be connected to ASR. Eight regulons, related to nine functional modules and 33 associated genes, are considered the essential elements of acid resistance in MG1363. This proposed computational pipeline and the above results significantly expand the current understanding of the ASR system, providing a new method to predict systematic regulatory networks based on regulon clustering.

Data Acquisition

The L. lactis MG1363 genome sequence was downloaded from NCBI (GenBank accession number: AM406671). The microarray dataset, containing eight samples under different acid stress conditions for MG1363, was downloaded from the Gene Expression Omnibus (GEO) database (series number: GSE47012). The data had been treated with LOWESS normalization by the provider. The details of cell culture preparation and data processing can be found in the previous study (Carvalho, Turner, Fonseca, Solopova, Catarino, Kuipers, Voit, Neves and Santos 2013). In this dataset, all the bacteria were grown under the same basic conditions: a 2-liter fermenter with chemically defined medium containing 1% (w/v) glucose at 30°C. The control and treatment samples were grown at a pH of 6.5 and 5.1, respectively.

Operon identification

The genome-scale operons of MG1363 were identified by DOOR2, a one-stop operon-centered resource including operons, alternative transcriptional units, motifs, terminators, and conserved operon information across multiple species (Mao, Ma, Zhou, Chen, Zhang, Yang, Mao, Lai and Xu 2014). Operons were predicted by the back-end prediction algorithm, with a prediction accuracy of 90-95%, based on the features of intergenic distance, neighborhood conservation, short DNA motifs, length ratio between gene pairs, and newly developed transcriptomic features trained on strand-specific RNA-seq datasets (Chou et al. 2015, Li, Liu, et al. 2011).

Gene differential expression analysis and co-expression analysis

DEGs were identified based on the Wilcoxon signed-rank test (Bauer 1972) between the control and treatment, which was performed in R.
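To make this step concrete, the sketch below illustrates a per-gene Wilcoxon signed-rank test in R. The toy matrix, the six-versus-six sample layout (chosen so that the smallest attainable signed-rank p-value falls below 0.05; the actual GSE47012 dataset has eight arrays), the spiked-in signal and the 0.05 cutoff are all illustrative assumptions, not details taken from the original analysis.

# Minimal sketch of per-gene DEG calling, assuming `expr` is a
# LOWESS-normalised expression matrix (rows = genes) with control
# (pH 6.5) columns first and treatment (pH 5.1) columns last.
set.seed(1)
n_genes <- 200
expr <- matrix(rnorm(n_genes * 12), nrow = n_genes,
               dimnames = list(paste0("gene", seq_len(n_genes)),
                               c(paste0("ctrl", 1:6), paste0("trt", 1:6))))
expr[1:20, 7:12] <- expr[1:20, 7:12] + 2   # spike in 20 responsive genes

# Wilcoxon signed-rank test per gene between paired control/treatment arrays
pvals <- apply(expr, 1, function(x)
  wilcox.test(x[1:6], x[7:12], paired = TRUE)$p.value)

deg  <- rownames(expr)[pvals < 0.05]       # candidate DEGs
up   <- deg[rowMeans(expr[deg, 7:12, drop = FALSE]) >
            rowMeans(expr[deg, 1:6,  drop = FALSE])]
down <- setdiff(deg, up)
length(up); length(down)                   # counts of up-/down-regulated genes

In the study itself, this test applied to the real arrays yielded the 86 down-regulated and 55 up-regulated genes reported in the Results.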
The gene co-expression analysis was performed using a hierarchical clustering method (the 'hcluster' package in R (Antoine Lucas 2006)) to detect the CEMs under acid stress in MG1363.

Motif finding and regulon prediction

Genes from each CEM were first mapped to the identified operons to retrieve the basic transcription units. Next, the 300 bps upstream of the translation start site of each operon were extracted, and motif finding was carried out on them using the DMINDA web server (Ma, Zhang, et al. 2014, Yang et al. 2017). DMINDA is a dominant motif prediction tool embracing five analytical algorithms to find, scan, and compare motifs (Li, Liu, Ma and Xu 2011, Liu et al. 2017, Ma, Liu, Zhou, Yin, Li and Xu 2013), including a phylogenetic footprint framework to elucidate the mechanism of transcriptional regulation at a system level in prokaryotic genomes (Li, Ma, Mao, Yin, Zhu and Xu 2011, Liu, Zhang, Zhou, Li, Fennell, Wang, Kang, Liu and Ma 2016, Liu, Zhou, Li, Zhang, Zeng, Liu and Ma 2016). All sequences were uploaded to the server, and default parameters were used to find the top five significant motifs (p-value < 0.05) in each cluster. The identified motifs were subjected to motif comparison and grouped into regulons based on a sequence similarity cutoff using the BBC program in DMINDA (Ma, Liu, Zhou, Yin, Li and Xu 2013).

Regulon validation based on TF BLAST and DEG filtering

Each highly conserved motif was considered to contain the same TFBS across species. Therefore, a comparison study was performed using TOMTOM in the MEME Suite (Bailey et al. 2009) between the identified motifs and public-domain TFBS databases, including DPINTERACT, JASPAR, RegTransBase, Prodoric Release and Yeastract, to find TFBSs and corresponding TFs with significant p-values in other prokaryotic species. Those TFs were then mapped to MG1363 using BLAST with default parameters to predict the connections between regulons and TFs in MG1363. On the other hand, since genes without differential expression were assumed not to react to pH changes, and thus to be irrelevant to ASR, regulons without DEGs were assumed not to be involved in the GRN and were excluded from the following steps.

Regulon validation based on known ASR proteins from the literature

To validate the performance of the above computational pipeline for regulon prediction, a literature-based validation was performed. Thirty-six ASR-related proteins and genes in other organisms, including L. lactis, E. coli, streptococci, etc., were first manually collected from the literature, and their sequences were retrieved from the NCBI and UniProt databases. They were used to examine the existing known mechanisms in response to pH changes in MG1363 using the BLAST program with default parameters on NCBI. Such literature-based validation can either confirm the putative regulons, when known ASR-related genes are found in the significant regulons, or expand our results to some insufficiently significant regulons; together, these outcomes indicate the false positive and true negative rates used to evaluate the computational pipeline.
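As an illustration of the regulon assembly step, the sketch below groups motifs into putative regulons from a pairwise similarity matrix in R. The random similarity values and the use of single-linkage hierarchical clustering as a stand-in for the BBC comparison are assumptions made for illustration; only the 0.8 similarity cutoff is taken from this study.

# Illustrative regulon assembly from pairwise motif similarities.
# `sim` stands in for BBC similarity scores (values in [0, 1]); motifs are
# joined whenever similarity >= 0.8, i.e. at single-linkage distance <= 0.2.
set.seed(2)
n_motifs <- 15
sim <- matrix(runif(n_motifs^2, 0.3, 1), n_motifs, n_motifs)
sim <- (sim + t(sim)) / 2                   # symmetrise the toy matrix
diag(sim) <- 1
rownames(sim) <- colnames(sim) <- paste0("motif", seq_len(n_motifs))

tree     <- hclust(as.dist(1 - sim), method = "single")
regulons <- cutree(tree, h = 1 - 0.8)       # cut at similarity 0.8
split(rownames(sim), regulons)              # motif clusters = putative regulons

Applied to the 610 motifs of the real dataset, this kind of similarity-based grouping produced the 51 motif clusters, and hence 51 regulons, described in the Results.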
Predicted operons and CEM generation

A total of 1,565 operons with 2,439 coding genes of MG1363 (Dataset S1) were retrieved from the DOOR2 database. By co-expression analysis, the 1,565 operons were grouped into 124 co-expressed clusters. Among these clusters, two large ones contained more than 200 operons. Each of them was removed from the subsequent analyses, as larger clusters may have a higher chance of inducing false positive operons that are connected with true operons only by co-expression. Of the remaining 122 clusters, covering 2,122 genes, 26 (21%) contained no more than ten operons; the smallest cluster had two operons, and most of the clusters (90%) contained between 10 and 50 operons (Dataset S2 and Figure S1).

Predicted regulons based on motif finding and clustering

Using BoBro in the DMINDA web server, multiple motif sequences were identified from the 300 bps upstream of the translation start sites of each operon. Only the top five significant motifs (p-value < 0.05) were selected in each cluster, giving rise to a total of 610 (122 × 5) identified motifs. The motif comparison-and-clustering analysis was then performed on the 610 motifs, and 51 motif clusters were identified with a motif similarity of 0.8 as the cutoff. Intuitively, the operons sharing highly similar motifs in each motif cluster are supposed to be regulated by the same TF and tend to be in the same regulon. Hence, these 51 motif clusters correspond to 51 regulons (Dataset S3).

Table 1 should be placed here.

Additionally, 86 down-regulated genes and 55 up-regulated genes (Dataset S5), resulting from the DEG analysis, were integrated into the regulons. Regulons #10, #37, #44 and #47 were found to lack DEGs. Thus, the gene llmg_0271, related to regulon #10, was not likely to respond to acid stress in MG1363 even though it was successfully mapped to MG1363, and it was therefore grouped with the potential candidates. On the contrary, ccpA and llrA were retained due to their involvement in regulons #15 and #12, respectively, which contain DEGs.

Verified regulons based on literature verification

Altogether, 36 literature-supported ASR-related transporters were mapped to MG1363 using BLAST with an E-value cutoff of 1e-10, resulting in a total of 33 mapped genes. All 36 transporters were categorized into nine modules based on their biological functions or regulated pathways, including L-lactate dehydrogenase (LDH), GAD, ADI, urea degradation, F1/F0-ATPase, acid stress, protein repair and protease, envelope alterations, and DNA repair. The 33 mapped genes comprise 22 operons and six regulons, llrA, llrC, hllA, NHP6A, regulon #8 and regulon #39, one or more of which were assigned to each functional module (Table 2).

Regulons llrA, llrC and hllA had already been computationally identified in Table 1 and were supported again by the literature verification results. NHP6A, which interestingly has a homologous TF in humans and fungi but not in L. lactis (Kolodrubetz and Burgum 1990, Stillman 2010), failed to map to MG1363.
NHP6A, which interestingly has a homologous TF in humans and fungi but not in L. lactis (Kolodrubetz and Burgum 1990; Stillman 2010), failed to map to MG1363. Here, we use NHP6A to represent regulon #20, as their relationship was predicted computationally in Table 1. Regulon #39 was identified as being regulated by llrD, one of the six two-component regulatory systems in MG1363 (O'Connell-Motherway, van Sinderen, Morel-Deville, Fitzgerald, Ehrlich and Morel 2000). Regulons #8 (llmg_1803) and #39 (llrD) were not among the 14 significant regulons in Table 1. Unlike NHP6A, regulons #8 and #39 represent enrichment from the literature validation, which expanded the regulon results of the RECTA pipeline. Among the nine functional modules, llrA was found to be connected to five of them, and NHP6A was related to three. On the other hand, the GAD and urea degradation functional modules failed to connect to any of the predicted regulons.

Table 2 should be placed here.

Compared with the regulon verification based on TF BLAST and DEG filtering, the literature validation identified two additional regulons (#8 and #39) that lay in the insignificant group, but showed no sign of the ccpA regulon. This result indicates a possible false-positive rate of 1/5 and a true-negative rate of 2/37 for our computational pipeline, supporting the reliability and feasibility of using RECTA to predict ASR-related regulons. Figure 2 shows the processes and results of both the literature validation and the computational pipeline in detail. The final eight regulons predicted from the two approaches were then compared to construct a GRN responsive to acid stress, integrating other information found in the literature.

A model of the regulatory network in response to pH change

According to the results outlined above, we present a working model of the transcriptional regulatory network underlying the acid stress response in MG1363 (Figure 3). The network consists of two transmembrane proteins (Dataset S6), eight regulons, nine functional modules, and 33 orthologous genes known to contribute to the ASR in other bacteria and also present in MG1363.

Figure 3 should be placed here.

The network responds to changes in the intracellular proton level. The signal is captured by an H+ sensor, which initiates regulation of the regulons. Although rcfB did not reach significance in our computational results, it has been reported to recognize and regulate the promoters P170 (Madsen et al. 1999), P1, and P3 (Hindre et al. 2004; Rince et al. 1994), which are activated by the ACiD box and essential to the acid response (Madsen et al. 2005). Among operons carrying the ACiD box, groESL, lacticin 481, and lacZ have been shown to be regulated by rcfB, while als, aldB, and others have not (Madsen, Hindre, Le Pennec, Israelsen and Dufour 2005). A homology-based comparative study also predicted the existence of the ACiD box in llrA (Akyol et al. 2008; O'Connell-Motherway, van Sinderen, Morel-Deville, Fitzgerald, Ehrlich and Morel 2000).
With this evidence, we separated rcfB from regulon #39 and predicted that rcfB is triggered first by the H+ sensor and acts as the global initiator that controls the other seven regulons. It is plausible that the rcfB-related regulon #39 failed to show a significant TF match after CEM treatment in the operon clustering step: if rcfB acts as a global factor, its differential expression should be less pronounced than that of regulons responding directly to acid stress, which would explain why it was not recovered by the RECTA pipeline. In addition, the small number of microarray data sets (eight) limited the power to resolve the ASR. How the H+ sensor activates and regulates the GRN and rcfB remains unclear. Of the seven downstream regulons, three (llrA, llrC, and hllA) were verified by the literature as ASR-related; regulons #8 and #39 were less significant in the regulon prediction; NHP6A was considered a putative regulon because it failed to map to MG1363; and ccpA was another putative regulon without literature support.

The six downstream regulons other than ccpA (llrA, llrC, hllA, NHP6A, regulon #39, and regulon #8) interact with each other to regulate six ASR-related functional modules: the ADI system, DNA repair, LDH, protein repair, envelope alterations, and F0/F1-ATPase. The ADI pathway, which generates ATP and protects cells from acid stress (Zuniga, Perez and Gonzalez-Candelas 2002), is under the regulation of NHP6A, llrC, llrA, and hllA. Another important pathway is LDH (EC 1.1.1.27), under the regulation of NHP6A and llrA, which converts pyruvate and H+ to lactate that is exported from the cell (Dennis and Kaplan 1960). Chaperones, which take part in macromolecule protection and repair, are subject to regulon llrA. The functions of chaperones include protecting against environmental stress, assisting protein folding, and repairing damaged proteins, and they have shown a clear linkage with acid stress in numerous Gram-positive bacteria (Frees, Vogensen and Ingmer 2003; Jayaraman, Penders and Burne 1997; Kern, Malki, Abdallah, Tagourti and Richarme 2007; Mujacic and Baneyx 2007). F0/F1-ATPase, controlled by llrA and regulon #8, also plays an important role in maintaining normal cellular pH by pumping H+ out of the cell at the expense of ATP (Amachi et al. 1998; Koebmann, Nilsson, Kuipers and Jensen 2000; Lund, Tramonti and De Biase 2014; O'Sullivan and Condon 1999). The GAD (Nomura, Nakajima, Fujita, Kobayashi, Kimoto, Suzuki and Aso 1999; Sanders, Leenhouts, Burghoorn, Brands, Venema and Kok 1998) and urea degradation (Cotter and Hill 2003) functional modules lack reliable associations with the regulons in MG1363, although they contribute to the ASR mechanism in other species.
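To make the structure of this working model concrete, the following minimal R sketch assembles a toy version of the regulon-to-module network with the igraph package. The edge list encodes only the regulations named above (plus rcfB as the assumed global initiator); it is illustrative, not the complete published network.

```r
library(igraph)

# Partial edge list taken from the text: ADI is regulated by NHP6A, llrC,
# llrA, and hllA; LDH by NHP6A and llrA; protein repair (chaperones) by
# llrA; F0/F1-ATPase by llrA and regulon #8. rcfB -> regulon edges are
# shown for two of the seven downstream regulons only.
edges <- data.frame(
  from = c("rcfB", "rcfB", "NHP6A", "llrC", "llrA", "hllA",
           "NHP6A", "llrA", "llrA", "llrA", "regulon_8"),
  to   = c("llrA", "llrC", "ADI", "ADI", "ADI", "ADI",
           "LDH", "LDH", "protein_repair", "F0F1_ATPase", "F0F1_ATPase")
)

g <- graph_from_data_frame(edges, directed = TRUE)
plot(g, vertex.size = 25, edge.arrow.size = 0.4)
```

A full reconstruction would add the remaining regulons, the two transmembrane proteins, and dashed edges for the uncertain control processes shown in Figure 3.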
DISCUSSION

In this study, we designed a computational framework, RECTA, to construct an ASR regulatory network comprising eight regulons. The framework provides a useful tool and is a starting point toward a more systems-level understanding of the question (Cao, Wei, et al. 2017). The identified motifs and regulons suggest that acid resistance is a coordinated response across regulons, although most of these regulons have not previously been identified or experimentally verified. Judging from the three well-identified regulons (llrA, llrC, and hllA), the underlying gene regulation is also complex, as these regulons interact with other proteins and TFs. F0/F1-ATPase is directly involved in regulating the intracellular proton concentration. Other pathways are responsible for repairing the damage caused by acid stress, such as DNA repair, protein repair, and cell envelope alterations. However, several reported ASR-related genes and transporters failed to map to MG1363, such as htrA in Clostridium spp. (Alsaker et al. 2010), the CovS/CovR acid response regulator in Streptococcus (Cumley et al. 2012), cyclopropane fatty acid synthase (cfa) for cell-membrane modification (Budin-Verneuil et al. 2005), and oxidative-damage protectant genes such as sodA, nox-1, and nox-2 (Santi et al. 2009). Using more gene expression data sets for the CEM and DEG analyses could strengthen the results of our computational pipeline, potentially recovering more significant regulons and yielding a more solid and complete regulatory network.

Homology mapping at the genomic level showed a very long evolutionary distance between MG1363 and the currently well-annotated model species. Hence, functional analysis for MG1363 is limited, and it is difficult to apply gene functional enrichment to verify our prediction results. With more expression data sets and experiments on protein-protein interactions, understanding of the ASR mechanism in L. lactis MG1363 can be greatly improved.

In summary, we found that the ASR at the transcriptome level in MG1363 is an orchestrated, complex network. Functional annotation shows that these regulons are involved in many levels of biological processes, including but not limited to gene expression, transcription, and metabolism. Our method builds a TF-regulon-GRN relationship so that new ASR-related genes can be predicted. Moreover, the low false-positive and true-negative rates indicate that the RECTA pipeline is sensitive and reasonable. In light of this accuracy, we regarded ccpA as a putative regulon, even though it was not connected to any related functional module; more robust proof is still required. These results expand the current pathways to include those associated with cell structures (cell wall and cell membranes) and related functions. Our findings suggest that acid has profound adverse effects and inflicts a systems-level response, and the predicted response pathways can inform better resistance design.
Looking forward, the acid tolerance advantage of L. lactis, which underlies its prospective applications in drug and vaccine delivery, anti-obesity effects, and metagenomic studies, gives the ASR-related GRN in L. lactis excellent research value. A full understanding of its workings may contribute to the development of Lactococcus-based therapy and can even be extended to other close species by genetic modification. Furthermore, our computational pipeline provides an effective way to construct reliable GRNs based on regulon prediction, integrating CEMs, DEG analysis, motif finding, and comparative genomics. It has durable applicability and can be effectively applied to other bacterial genomes where elucidation of the transcriptional regulation network is needed.

FIGURE LEGEND

Figure 1. Overview of the RECTA pipeline. Step 1: microarray data were used to generate co-expressed gene clusters and DEGs, and the MG1363 genome sequence was used to find operons. Step 2: motif finding was carried out to identify all statistically significant motifs in each of the CEMs. Step 3: a regulon-finding procedure was designed to identify all possible regulon candidates encoded in the genome, based on motif comparison and clustering. Step 4: the motifs of each of these regulons were compared to known TFBSs, and DEG analysis between the low-pH and normal conditions was used to identify the ASR-related regulons. Step 5: regulon validation based on literature information verified the significant putative regulons and expanded the results to some insufficiently significant regulons. Step 6: the ASR-related GRN in MG1363 was predicted and described with eight regulons, nine functional modules, and 33 genes. The combination of the above information forms a genome-scale regulatory network for the ASR.

Figure 2. Regulon prediction using the RECTA pipeline (red) and validation and enrichment using literature information and gene BLAST (blue). All processes are shown in rectangles, and results are highlighted with corresponding background colors. In the computational pipeline, the 51 regulons with assigned motifs and operons were analyzed sequentially through significant TFBS pairing, DEG confirmation, and TF BLAST. Only regulons that contained DEGs (ten) and had a related mapped TF (eight) were taken as the final predicted ASR-related regulons (five). These five regulons were then merged into four and named by their corresponding TFs. In the literature-validation process, known ASR-related transporters were first mapped to the MG1363 genome, resulting in 33 genes. Those genes were then searched within the 51 regulons, identifying six related regulons. All regulons resulting from the computational pipeline and the literature validation were combined, along with the information on functional modules, to determine the GRN.

Figure 3 (legend, in part). RcfB is assumed to be the overall activator of the remaining seven regulons and solely controls the acid-stress functional module. Three literature-verified, significant ASR-related regulons (llrA, llrC, and hllA); two insufficiently significant regulons, llrD (regulon #39) and regulon #8 (llmg_1803), which were predicted by our workflow but fell below the 0.8 motif-similarity cutoff or found no hit; and one putative significant regulon (NHP6A) control the seven functional modules that are experimentally verified in species close to MG1363. The other significant regulon, ccpA, was not confirmed by any literature-supported genes or transporters. Two additional functional modules, GAD and urea degradation, show no direct connection to any of the seven regulons. One or more homologous genes were found in MG1363 for all nine modules using BLAST. The solid arrows indicate regulation between regulons/TFs and functional modules/genes, and the dashed arrows indicate uncertain control processes.
In addition, the two ovals indicate two transmembrane proteins: one is confirmed as F0/F1-ATPase, while for the other (drawn with a dashed line) we have not yet found related information in the public-domain literature.

TABLE LEGEND

Table 1. The 14 significant regulons verified and mapped to known TFs. Based on the analyses, the operon number, DEG status (yes or no), matched template TF, and mapped TF are given for each significant regulon, aligned by regulon ID number. The five regulons that both contain DEGs and have a corresponding TF are bolded; these are the computationally verified regulons responsible for acid stress in MG1363.

Figure 1 should be placed here.

Figure 2 should be placed here.

Figure 3. A working model of the transcriptional gene regulatory network in response to pH change in L. lactis. The mechanism is activated by the change in the proton signal.

Table 2. Known ASR-related gene mapping from the literature in response to pH change. Literature-supported ASR-related genes were found in close species or other L. lactis strains. The template transporters and genes were first identified in published studies from the NCBI and UniProt databases. L. lactis Il1403, an organism very close to MG1363, was used when a template gene existed. Only the 36 templates that successfully mapped to the MG1363 genome are listed, resulting in 33 genes. All mapped genes and their corresponding templates are organized by their regulated pathways, which were further used as functional modules. Mapped genes were searched within the 51 regulons to build the connections between functional modules and regulons.
Mycoplasmosis in Ferrets

A newly recognized respiratory disease of domestic ferrets is associated with a novel Mycoplasma species.

The number of pet ferrets in the United States has grown rapidly, from an estimated 800,000 in 1996 (1) to an estimated 7-10 million in 2007 (2). Also in the United States, ferrets have become the third most common household pet; their popularity as a pet in Europe is similar (3). The common respiratory diseases in pet ferrets are caused by viruses; canine distemper is probably the most virulent (4). Ferrets are also highly susceptible to human influenza virus, but disease is rarely severe (5,6). Bacteria rarely cause disease outbreaks in ferret populations, but they do cause disease in individual ferrets (7-9). In 2007, in the state of Washington, USA, an outbreak of respiratory disease characterized by a dry, nonproductive cough was observed in 6- to 8-week-old ferrets at a US distribution center of a commercial pet vendor (video of a coughing ferret available at wwwnc.cdc.gov/EID/article/18/11/12-0072-V1.htm). Over a 4-year period, ≈8,000 ferrets, in equal numbers of both sexes, were affected. Every 2-3 weeks, kits had been shipped in groups of 150-200 from a commercial breeding facility in Canada to the distribution center. At 5 weeks of age, before shipment to the distribution center, each kit received a single vaccination for distemper (DISTEM R-TC; Schering Plough, Kenilworth, NJ, USA). Some ferrets exhibited hemoptysis, labored breathing, sneezing, and conjunctivitis. Almost 95% of the ferrets were affected, but almost none died. Symptomatic ferrets were selected from each shipment for testing; results of heartworm screening, PCR and serologic testing for distemper, and serologic testing for influenza virus were negative. Cytologic examination of bronchoalveolar lavage (BAL) samples yielded few inflammatory cells. Thoracic ultrasonography found no abnormalities. Thoracic radiographs showed a mild bronchointerstitial pattern with peribronchial cuffing (Figure 1). Complete blood counts and chemistry results were within reference ranges (10,11). Affected ferrets received broad-spectrum antimicrobial drugs, bronchodilators, expectorants, nonsteroidal anti-inflammatory drugs, and nebulization; all clinical signs except the dry cough temporarily decreased. Numerous ferrets from the distribution center were later surrendered to a ferret rescue and shelter operation, where their cough continued for as long as 4 years.

Affected Ferrets

In April 2009, a 2-year-old, spayed female ferret at the ferret rescue and shelter, which had originated from the breeding facility in Canada and passed through the US distribution center, became acutely dyspneic and died within 15 minutes. The ferret had shown signs of respiratory disease since arrival at the shelter. Previously at the shelter, 2 ferrets, 4.5 years of age, had shown chronic cough; 1 died of dyspnea in October 2010, and the other was euthanized for humane purposes in November 2010. Both had originated from the breeding facility and passed through the distribution center. Postmortem examinations were performed on all 3 ferrets. After the 2-year-old ferret died in April 2009, BAL samples and ocular swabs were obtained in July 2009 from 3 other ferrets with a history of respiratory disease since their arrival at the ferret shelter.
For further diagnostic investigation, BAL samples and ocular swabs were collected from 9 additional affected ferrets in January 2010; one of these was the ferret that died in October 2010.

Survey of Healthy Ferrets

At a large commercial breeding facility in which signs of respiratory disease had not been observed, BAL samples were obtained from 10 euthanized healthy male ferrets, 5 weeks to 5 years of age. Before postmortem examination, samples were collected from the euthanized ferrets by BAL through an incision in the caudal trachea. Nonbacteriostatic saline (10 mL/kg) was flushed into the caudal trachea and lungs and then recovered by aspiration into the syringe. That process was repeated twice, and the final flush fluid was submitted for bacterial culture. Complete postmortem examinations were performed, and sections of lung were collected for bacterial and mycoplasma culture. Additional tissue samples were collected from the lungs, trachea, nasal turbinates, brain, liver, kidneys, spleen, stomach, small and large intestine, thoracic and mesenteric lymph nodes, pancreas, and adrenal glands for routine histopathologic examination.

Histologic and Immunohistochemical Analyses and Confocal Microscopy

From the 3 ferrets that died April 2009-November 2010, postmortem tissue samples (lungs, trachea, nasal turbinates, brain, liver, kidneys, spleen, stomach, small and large intestine, thoracic and mesenteric lymph nodes, pancreas, and adrenal glands) were collected. They were fixed in neutral-buffered, 10% formalin solution and processed by standard methods for histopathologic examination. For immunohistochemical examination, paraffin-embedded samples of lung from the 3 ferrets that died were cut into 5-μm sections. An Enhanced Alkaline Phosphatase Red Detection Kit (Ventana Medical Systems, Inc., Tucson, AZ, USA) and bulk buffers specifically designed for use on the BenchMark Automated Staining System (Ventana Medical Systems, Inc.) were used for immunolabeling. Slides were baked in a drying oven at 60°C for 20 min, barcode labeled, and placed in the BenchMark for deparaffinization and heat-induced epitope retrieval. Slides were then incubated with a mouse monoclonal antibody against mycoplasma (primary antibody) (Chemicon, Billerica, MA, USA) at a concentration of 1:100 for 30 min. The monoclonal antibody was raised against M. bovis strain M23, but it is known to cross-react with numerous other mycoplasma species. The slides were counterstained by using hematoxylin (Ventana Medical Systems, Inc.), then dehydrated, cleared, and mounted. For a positive control, we used formalin-fixed, paraffin-embedded sections of lung from an M. bovis-positive cow (tested by bacterial culture). For negative controls, we replaced the primary antibody with homologous nonimmune serum. A Zeiss 510 microscope (Jena, Germany) was used for confocal imaging to acquire fluorescent images, and the Zeiss LSM image analysis software was used for characterizations. The images represented a differential interference contrast/Nomarski image with green (488 nm, argon laser excitation, fluorescein isothiocyanate [FITC]) and red (543 nm, rhodamine, helium-neon excitation, tetramethylrhodamine-5-[and 6-] isothiocyanate [TRITC]) labeled overlay to demonstrate localization of labels, as described, with slight modifications, according to Ubels et al. (12).
Transmission and Scanning Electron Microscopy

For transmission electron microscopy, lung tissue samples that had been fixed in neutral-buffered, 10% formalin solution were trimmed into 2-mm pieces and postfixed in 1% osmium tetroxide in 0.1 M sodium phosphate buffer for 2 h. Tissues were serially dehydrated in acetone and embedded in Poly/Bed 812 resin (Polysciences Inc., Warrington, PA, USA) in flat molds. Sections were obtained with a Power Tome XL ultramicrotome (Boeckeler Instruments, Tucson, AZ, USA). To identify areas of interest, we stained semithin (0.5-μm) sections with epoxy tissue stain and examined them under a light microscope. Then we cut ultrathin (70-nm) sections, mounted them onto 200-mesh copper grids, stained them with uranyl acetate and lead citrate, and examined them under a 100 CXII transmission electron microscope (JEOL, Peabody, MA, USA). For scanning electron microscopy, formalin-fixed lung tissues were trimmed into 2-4-mm pieces, postfixed for 1 h in 1% osmium tetroxide, and rinsed for 30 min in 0.1 M sodium phosphate buffer. Tissues were serially dehydrated in ethanol and dried in a critical point dryer (Model 010; Balzers, Witten, Germany) with liquid carbon dioxide as the transitional fluid. Samples were mounted on aluminum studs by using carbon suspension cement (SPI Supplies, West Chester, PA, USA). Samples were then coated with an osmium coater (NEOC-AT; Meiwa Shoji Co., Tokyo, Japan) and examined in a JSM-7500F (cold field emission electron emitter) scanning electron microscope (JEOL).

Bacterial Cultures

We submitted 12 BAL samples and 12 ocular swab samples for bacterial and mycoplasma culture by standard microbiologic techniques. The samples were from live ferrets that originated from the distribution center and showed clinical signs of respiratory disease, including coughing. We also submitted 10 BAL samples from 10 healthy ferrets from a different commercial breeding facility not affected by respiratory disease.

PCR and Sequence Analysis

Only mycoplasmas obtained from BAL samples were analyzed by PCR and nucleic acid sequencing. A plug of agar containing Mycoplasma spp. colonies was gouged from the surface of a mycoplasma agar plate by using a 10-μL disposable inoculation loop and transferred to a microcentrifuge tube. The agar plug was digested by addition of 200 μL of Buffer ATL (QIAGEN, Valencia, CA, USA) and 20 μL of proteinase K solution (QIAGEN), followed by overnight incubation at 56°C. DNA was extracted from the digest by using a DNeasy Blood and Tissue Kit (QIAGEN) according to the manufacturer's instructions. For PCR, we used 2 sets of primers selective for the bacterial 16S rDNA gene or the mycoplasma RNA polymerase B (rpoB) gene. The nucleic acid sequences for the 16S rDNA gene primers were 5′-AGAGTTTGATCMTGGCTCAG-3′ for the forward primer and 5′-GGGTTGCGCTCGTTR-3′ for the reverse primer; this primer set produced an amplicon of ≈1,058 bp. The nucleic acid sequences for the mycoplasma rpoB gene primers were 5′-GGAAGAATTTGTCCWATTGAAAC-3′ for the forward primer and 5′-GAATAAGGMCCAACACTACG-3′ for the reverse primer; this primer set produced an amplicon of ≈1,613 bp.
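As an illustration of how these degenerate primers behave in silico, the following minimal R sketch uses the Bioconductor Biostrings package to locate the primer sites in a toy template and compute the expected amplicon length. The template sequence is entirely hypothetical; only the primer sequences come from the text above.

```r
library(Biostrings)

# Toy 16S template: a concrete forward priming site (M resolved to A),
# a filler body, and a concrete reverse priming site (Y resolved to T).
template <- DNAString(paste0(
  "AGAGTTTGATCATGGCTCAG",
  paste(rep("ACGT", 250), collapse = ""),
  "TAACGAGCGCAACCC"
))

fwd <- DNAString("AGAGTTTGATCMTGGCTCAG")                # forward primer
rev <- reverseComplement(DNAString("GGGTTGCGCTCGTTR"))   # reverse priming site

# fixed = FALSE lets IUPAC ambiguity codes in the primers (M = A/C,
# R = A/G, so Y in the reverse complement) act as wildcards.
f_hit <- matchPattern(fwd, template, fixed = FALSE)
r_hit <- matchPattern(rev, template, fixed = FALSE)

# Expected amplicon length: start of the forward hit to end of the reverse
# hit (about 1,058 bp for the real 16S amplicon; the toy value differs).
end(r_hit)[1] - start(f_hit)[1] + 1
```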
The PCRs were performed by using Platinum Taq DNA Polymerase High Fidelity (Invitrogen Corp., Carlsbad, CA, USA). The reaction mixture consisted of 3 μL DNA; 1 unit of Platinum Taq DNA Polymerase High Fidelity; 60 mmol/L Tris-SO4 (pH 8.9); 18 mmol/L ammonium sulfate; 2 mmol/L magnesium sulfate; 0.2 mmol/L each of dATP, dCTP, dGTP, and dTTP; 16.9 μL molecular biology grade water; and 0.5 μmol/L of each PCR primer. The reaction conditions for the 16S rDNA gene were 1 cycle at 94°C for 4 min; 35 cycles at 94°C for 30 s, 58°C for 45 s, and 68°C for 75 s; followed by a final extension step at 68°C for 5 min. The reaction conditions for the rpoB gene were 1 cycle at 94°C for 4 min; 40 cycles at 94°C for 45 s, 55°C for 45 s, and 68°C for 90 s; followed by a final extension step at 68°C for 5 min. The PCR products were stained with ethidium bromide and examined after electrophoresis through a 1.5% agarose gel. The PCR amplicons were excised from gels, purified by using the QIAquick Gel Extraction Kit (QIAGEN), and submitted to the Research Technology Support Facility at Michigan State University for nucleic acid sequencing. Several internal primers were designed to derive the complete sequences of the PCR amplicons. The derived sequences were edited by using Sequencher software (Gene Codes Corporation, Ann Arbor, MI, USA) and analyzed by using BLAST (www.ncbi.nlm.nih.gov/blast/Blast.cgi). The nucleic acid sequences of the mycoplasma isolates, together with sequences from other Mycoplasma spp. obtained from GenBank, were imported into the MEGA4 program (www.megasoftware.net), aligned by using ClustalW in the MEGA4 program, and subjected to phylogenetic analyses. For each isolate analyzed, 933 bp of the 16S rDNA gene sequence and 733 bp of the rpoB gene sequence were available. Phylogenetic trees were constructed by using the neighbor-joining method; data were resampled 1,000× to generate bootstrap percentage values (a comparable workflow in R is sketched below).
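The study built its trees in MEGA4; as an illustrative stand-in, the following minimal R sketch performs the same neighbor-joining and 1,000-replicate bootstrap steps with the ape package. The alignment file name and the distance model (K80) are assumptions.

```r
library(ape)

# Hypothetical FASTA file of aligned partial 16S rDNA sequences (933 bp).
aln <- read.dna("16S_alignment.fasta", format = "fasta")

d    <- dist.dna(aln, model = "K80")  # pairwise evolutionary distances
tree <- nj(d)                         # neighbor-joining tree

# Bootstrap support from 1,000 resamplings of alignment columns.
bs <- boot.phylo(tree, aln, function(x) nj(dist.dna(x, model = "K80")),
                 B = 1000)

plot(tree)
nodelabels(round(bs / 10))  # counts out of 1,000 shown as percentages
```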
Gross and Histologic Lesions

Gross and histologic lesions from the 3 ferrets that died or were euthanized because of respiratory disease were similar and restricted to the lungs. The lungs were characterized by multifocal, tan to gray, somewhat firm nodules centered on airways and randomly distributed throughout the pulmonary parenchyma (Figure 2). Hematoxylin and eosin-stained lung sections revealed a moderate bronchointerstitial pneumonia with severe bronchiole-associated lymphoid tissue (BALT) hyperplasia (Figure 3, panel A). BALT hyperplasia was commonly associated with marked narrowing of airway lumina. Additional findings included moderate perivascular lymphoid cuffing and diffuse pulmonary congestion. The lumina of some bronchi contained large amounts of mucus admixed with a few sloughed epithelial cells and lymphocytes (catarrhal bronchitis). Immunohistochemical examination (with antibodies against mycoplasmas) of affected lung tissue from all 3 ferrets that died exhibited strong labeling along the brush border of terminal respiratory epithelial cells (Figure 3, panel B). There was no penetration of organisms into the adjacent pulmonary parenchyma. With the same antibodies against mycoplasmas labeled with a fluorescent chromogen, confocal laser microscopy showed positive labeling along the apical border of the lining epithelium of terminal airways (Figure 3, panel C). Additional immunohistochemical examination and reverse transcription PCR for canine distemper and influenza A viruses, performed on samples of lung from all 3 ferrets that died, detected no virus.

Transmission electron microscopy showed bronchial epithelial cells with loss of cilia and cellular degeneration characterized by swelling of the endoplasmic reticulum, vacuolization of mitochondria with loss of cristae, and intranuclear chromatin dispersement. Attached to the apical surface of a ciliated cell were pleomorphic, round to ovoid, ≈0.8-μm mycoplasma-like organisms (Figure 4). Electron microscopy showed severe denudation of bronchial epithelial cells. Cilia were commonly lost or had undergone degenerative changes characterized by bulbous swelling (Figure 5, panel A). Many necrotic bronchial epithelial cells were adhered to the luminal surface, and many pleomorphic mycoplasma-like organisms were diffusely attached to the mucosal surface of bronchi and bronchioles (Figure 5, panel B). In some areas, focal loss of cilia and cell membrane damage were observed, with mycoplasma-like organisms along the periphery of such lesions (Figure 5, panel C). In other areas, the mucosal surface was covered by many mycoplasma-like organisms that completely obscured the cilia (Figure 5, panel D). Among the 10 healthy ferrets, no gross or histologic lesions suggestive of mycoplasma infection were identified.

Bacteria

The 12 BAL samples from affected ferrets were all positive for fast-growing, glucose-fermenting mycoplasmas but negative for other bacteria. Ocular swabs from these ferrets were negative for bacteria. No bacteria or mycoplasmas were isolated from the 10 healthy ferrets.

PCR and Sequences

Analyses of nucleic acid sequences from the 16S rDNA gene (GenBank accession nos. JQ910955-JQ910966) for each of the 12 mycoplasma isolates showed that the isolates were 99% similar to each other and segregated the isolates into 2 groups defined by nucleotide differences at 3 positions. Phylogenetic analysis with partial 16S rDNA gene sequences showed that the isolates were 96%-97% similar to M. molare (isolated from a canid). Other closely related Mycoplasma spp. included M. lagogenitalium (isolated from the Afghan pika), M. neurolyticum (isolated from mice and rats), M. sp. LR5794 (isolated from raccoons), M. collis (isolated from mice and rats), M. cricetuli (isolated from Chinese hamsters), and M. sp. EDS (isolated from house musk shrews) (Figure 6, panel A). On the basis of the 16S rDNA gene sequences, these mycoplasmas isolated from ferrets, along with the aforementioned closely related Mycoplasma spp., are in the hominis group of mycoplasmas. Analyses of nucleic acid sequences from the rpoB gene (GenBank accession nos. JQ910967-JQ910978) for each of the mycoplasma isolates from ferrets segregated the isolates into 2 groups of genetic variants (groups 1 and 2), which were 90%-91% similar to each other. Within a group, the isolates were 99%-100% or 98%-100% similar to each other. Although nucleotide differences were identified at as many as 12 positions within a group and 65 positions between groups, the corresponding amino acid sequences were 100% similar within a group and differed at only 2 aa positions between groups. Phylogenetic analysis showed that the partial rpoB gene sequences of the isolates were only 85%-86% similar to M. molare and 84%-86% similar to M. lagogenitalium, the most closely related Mycoplasma species (Figure 6, panel B). Grouping of the isolates according to the sequences of the 16S rDNA and rpoB genes was in agreement for all but 1 isolate.
Phylogenetic relatedness of these newly identified mycoplasmas to other Mycoplasma spp. was similar for the 16S rDNA and the rpoB genes.

Discussion

Mycoplasma spp. are the smallest free-living prokaryotic microorganisms of the class Mollicutes (16). They lack a cell wall and are thought to have developed through genome reduction from gram-positive bacteria (17). Most species are host-specific facultative anaerobes and do not usually replicate in the environment (18). Their complex growth requirements include cholesterol, fatty acids, and amino acids (19). In the respiratory tract, mycoplasmas attach to ciliated epithelial cells by surface-exposed adhesins (20). Although the pathogenesis of host cell injury remains largely unknown, proposed virulence mechanisms include induction of proinflammatory cytokines by phagocytes (21), oxidative damage to host cells through the production of toxic by-products (22), and cleavage of host DNA by nucleases (23). Many mycoplasmas cause B lymphocytes and/or T lymphocytes to commence dividing in a nonspecific manner (24). This mitogenic effect probably explains the characteristic BALT hyperplasia observed in infected host tissues. A commonly described strategy for immune evasion is phenotype plasticity, whereby reversible switching or modification of membrane protein antigens results in altered surface antigens (25). This mechanism might underlie the persistent, chronic nature of mycoplasmosis often observed. The precise role of mycoplasmas in various host species is often difficult to interpret because certain mycoplasmas can be isolated from apparently healthy animals. The data presented here describe a recently emerging respiratory disease of ferrets, characterized especially by high morbidity rates and a dry, nonproductive cough, associated with infection by a novel Mycoplasma species. To our knowledge, no Mycoplasma species have previously been associated with clinical disease in ferrets or other mustelids. On the basis of the limited sequence data, the isolated mycoplasmas most likely represent a novel Mycoplasma species, or several such species. In 1982, a study from Japan reported isolation of a glucose-fermenting mycoplasma from the oral cavities of 81% of clinically healthy ferrets kept in a laboratory setting (26). This mycoplasma isolate was not antigenically related to any reference strains from dogs, cats, sheep, cattle, mice, raccoon dogs, or a Japanese badger. In 1983, similarly fast-growing, glucose-fermenting mycoplasmas were isolated from the lungs of healthy mink kits (1-2 months of age) in Denmark (27). This species was named M. mustelae. Because the Mycoplasma spp. isolates from healthy ferrets or mink were not genetically characterized, comparison with the isolates from ferrets with respiratory disease in this study was not possible. The Mycoplasma species isolated from affected ferrets showed the highest sequence similarity to M. molare and M. lagogenitalium. M. molare was first isolated in 1974 from the pharynx of dogs with mild respiratory disease (28). However, the pathogenicity of M. molare in dogs or other species remains speculative. M. lagogenitalium was first isolated in 1997 from preputial samples from apparently healthy Afghan pikas (29). The 3 mycoplasma isolates obtained from BAL samples from 3 ferrets in July 2009 were highly homogeneous according to the limited sequence data. All 3 isolates belonged to mycoplasma isolate group 2.
Of note, only 3 of the 9 isolates obtained from BAL samples from 9 ferrets in January 2010 had the same partial rpoB amino acid sequence as the previous isolates and were likewise included in mycoplasma isolate group 2. In contrast, the partial rpoB sequences of 5 of the more recent isolates differed by 9%-10% from those of the previous isolates, and these isolates were identified as belonging to mycoplasma isolate group 1. Whether these differences represent multiple Mycoplasma species circulating through the ferret population or a genetic change of the original mycoplasma over time is uncertain, as is the virulence of each of the potential strains. Only 1 isolate was identified in the bronchoalveolar lavage sample from a ferret for which postmortem examination confirmed lesions consistent with a mycoplasma infection. Experimental reproduction with the different isolates is required to further elucidate the virulence of each putative novel mycoplasma. Respiratory disease attributed to mycoplasma infections in cattle (30), pigs (31), poultry (32), and mice and rats (33) has been well described. The clinical signs and microscopic lesions in ferrets with the emerging respiratory disease described here closely resembled the signs and lesions described for pigs infected with M. hyopneumoniae (34), rats infected with M. pulmonis (35), and cattle infected with M. bovis (36). For all of these species, chronic pulmonary mycoplasmosis is characterized by lymphoplasmacytic perivascular cuffing and extensive BALT hyperplasia, as was observed in the ferrets in this study. Furthermore, M. cynos (37) and an untyped Mycoplasma species (38) reportedly cause similar pulmonary lesions in dogs and cats, respectively.

Figure 6 (legend, in part) (13): The bootstrap consensus phylogenetic trees were constructed by using the neighbor-joining method (14). The bootstrap values shown above the branches were inferred from 1,000 replicates of data resampling to represent the evolutionary distances of the species analyzed (15). The tree is drawn to scale; branch lengths are in the same units as those of the evolutionary distances used to infer the phylogenetic tree (i.e., the number of base substitutions per site).

The similarity between the pathologic changes in the ferrets and those in other species with mycoplasmal pneumonia strongly supports a causal relationship between the pulmonary disease and the novel mycoplasma identified in these ferrets. In addition, mycoplasmas were the only bacterial pathogens recovered from the respiratory tract of diseased ferrets, there was no microscopic evidence of a viral disease, and immunohistochemical and reverse transcription PCR results for canine distemper and influenza A were negative. Furthermore, mycoplasmas were not detected in the sampled population of healthy domestic ferrets 5 weeks to 5 years of age. Because mycoplasmas have been recovered from the respiratory tract of apparently healthy mustelids (26,27), other unknown factors might have predisposed the lungs of these ferrets to colonization. The severity of the clinical signs might have been exacerbated by infections with secondary bacteria, as commonly occurs in other species (30,31,33), and antimicrobial drug therapy might have prevented isolation of such bacteria. A concurrent viral disease seems unlikely because characteristic microscopic lesions were absent and common respiratory viral pathogens of ferrets were not identified.
We speculate that the stress of shipment from the breeding facility to the distribution center might have resulted in the disease manifestation. To more fully elucidate pathogenicity and disease dynamics in this species, experimental reproduction of the respiratory disease in ferrets is necessary.
Plant functional traits differ in adaptability and are predicted to be differentially affected by climate change

Abstract

Climate change is testing the resilience of forests worldwide, pushing physiological tolerance to climatic extremes. Plant functional traits have been shown to be adapted to climate and have evolved patterns of trait correlations (similar patterns of distribution) and coordinations (mechanistic trade-offs). We predicted that traits would differentiate between populations associated with climatic gradients, suggestive of adaptive variation, and that correlated traits would adapt to future climate scenarios in similar ways. We measured genetically determined trait variation and described patterns of correlation for seven traits: photochemical reflectance index (PRI), normalized difference vegetation index (NDVI), leaf size (LS), specific leaf area (SLA), δ13C (integrated water-use efficiency, WUE), nitrogen concentration (N_CONC), and wood density (WD). All measures were conducted in an experimental plantation on 960 trees sourced from 12 populations of a key forest canopy species in southwestern Australia. Significant differences were found between populations for all traits. Narrow-sense heritability was significant for five traits (0.15-0.21), indicating that natural selection can drive differentiation; however, SLA (0.08) and PRI (0.11) were not significantly heritable. Generalized additive models predicted trait values across the landscape for current and future climatic conditions (>90% variance explained). The percent change differed markedly among traits between current and future predictions (by as little as 1.5% for δ13C or as much as 30% for PRI). Some trait correlations were predicted to break down in the future (SLA:N_CONC, δ13C:PRI, and N_CONC:WD). Synthesis: Our results suggest that traits have contrasting genotypic patterns and will be subjected to different climate selection pressures, which may lower the working optimum for functional traits. Further, traits are independently associated with different climate factors, indicating that some trait correlations may be disrupted in the future. Genetic constraints and trait correlations may limit the ability of functional traits to adapt to climate change.

Keywords: climate adaptation, Corymbia calophylla, general additive models, heritability, intraspecific variation, trait coordination

INTRODUCTION

Forests are under pressure from climate change (Bonan, 2008; Canadell & Raupach, 2008; Chazdon, 2008; Riitters et al., 2002), and impacts are expected to reduce long-term resilience and function (Chazdon, 2008). Many organisms have evolved traits suited to their environment through the long-term process of natural selection, resulting in local adaptation (Kawecki & Ebert, 2004). However, changes to climate might negatively impact those long-established patterns of local adaptation (Aitken & Whitlock, 2013; Hoffmann & Sgrò, 2011). While it may be many years until the full effects of global climate change are realized, effects on local forest populations have already been observed (Harris et al., 2018; Hoffmann et al., 2019). For example, climate change impacts include changes in forest distribution (Kelly & Goulden, 2008; Lenoir et al., 2010) and widespread tree mortality due to increased severity of drought and heat waves (Allen et al., 2010; Matusick et al., 2018; Williams & Dumroese, 2013). The range and complexity of impacts of climate change on forests are likely to disrupt current patterns of adaptation, making it critical to understand the adaptive capacity of natural forests and the associated ecological and evolutionary constraints.
The adaptive capacity of trees may facilitate the long-term persistence of natural forests. It is predicted that, to track climate change, many temperate tree species will have to migrate toward the poles or higher elevations (Aitken, Yeaman, Holliday, Wang, & Curtis-McLane, 2008), although this pattern is not ubiquitous (see Crimmins, Dobrowski, Greenberg, Abatzoglou, and Mynsberge (2011) for an example of a downward shift in species' optimum elevation due to changes in water balance). Evidence suggests that species migration is not occurring or has been limited by hard boundaries such as oceans or human-made barriers (Corlett & Westcott, 2013; Parmesan & Yohe, 2003; Zhu, Woodall, & Clark, 2012). However, many tree species show high levels of genetic variation and may have enough standing genetic variation for positive selection to occur to better match phenotype and future climate (Barrett & Schluter, 2008). Functional traits of tree species are indicative of patterns of adaptation to their environment (Reich, 2014), and the relationship between climate and some traits is well established (Wright et al., 2005). Functional traits can reflect plant performance, stress, and allocation and are therefore shaped by selective pressures, as demonstrated by trait variation along climatic gradients indicative of genetic adaptation (Reich et al., 2003). To date, there has been a focus on species mean trait values; however, for species to be able to adapt to climate change, they require heritable, intraspecific trait variation. Yet few studies focus on intraspecific trait variation and whether these traits are genetically determined (e.g., Aranda et al., 2014; Schreiber, Hacke, & Hamann, 2015; Hajek, Kurjak, von Wühlisch, Delzon, & Schuldt, 2016; Madani et al., 2018). There is growing appreciation of the importance of intraspecific variation in functional and complex traits in providing the capacity to adapt to climate change. Functional traits, by definition, are indicative of their relationship to the environment (Shipley et al., 2016), and population differentiation along climate gradients can be used to quantify the relative contribution of climate variables to patterns of trait differentiation (Madani et al., 2018). But these outputs alone cannot estimate the relative responsiveness of traits to selection pressures. Estimating narrow-sense heritability is one way to calculate how much trait variation is due to genetics and to estimate the relative contribution of natural selection to trait differentiation (Geber & Griffen, 2003), allowing us to predict how traits may respond to new climate pressures through natural selection and genetic constraints. Together, trait heritability and climate gradients can help us predict how traits might individually and collectively evolve (or not) in the future. The worldwide leaf economic spectrum (LES) consistently explains the complex relationship between environment and leaf traits, as well as the coordination between leaf structure and function (Donovan, Maherali, Caruso, Huber, & de Kroon, 2011; Reich, Walters, & Ellsworth, 1997; Wright et al., 2004). Specifically, it is expected that thinner leaves (high specific leaf area values) are adapted to cooler, wetter conditions, and that this pattern is coordinated with high nitrogen concentrations. In addition, traits can be correlated across a species' distribution even if no mechanistic coordination is present (e.g., Wright et al., 2007).
Relationships among ecologically important plant traits may be an adaptive signature of natural selection (Westoby, Falster, Moles, Vesk, & Wright, 2002). Yet the genetic basis for functional trait variation is largely unknown, including the linkage among correlated or coordinated traits and their adaptive capacity to respond to climate change. Genetic variation in plant traits within species is essential for adaptation to novel climate conditions through its influence on establishment, survival, and fitness (Violle et al., 2007). Importantly, standing genetic variation will be required for traits to keep pace with selection imposed through processes of natural selection, enabling a species to occupy a broad range of climates and adapt to novel climate conditions (Alberto et al., 2013). Therefore, measuring variation in functional traits, estimating their heritability, and identifying possible agents of selection in a foundation tree can provide critical information on how a species might continue to persist in a changing climate. Here, we use an experimental multipopulation plantation to assess trait variation in the foundational forest canopy species Corymbia calophylla R. Br. K.D. Hill and L.A.S. Johnson (Eucalyptus sensu lato; family Myrtaceae), located in southwestern Australia. Measuring traits in an experimental plantation provides a common environment, allowing the isolation of genetic effects associated with phenotypic differences among populations. We expect that trait variation among populations will show unique patterns of adaptation associated with their climate-of-origin. While some traits will be heritable and show strong shifts along climate gradients (e.g., water-use efficiency), other traits will have greater variation within and among populations, resulting in lower levels of heritability (e.g., SLA). We test the hypothesis that traits will be unequally affected by climate change, such that traits with higher levels of heritability will need to adapt to their new climates or migrate with their optimum climates. Finally, we ask whether trait correlations and coordinations, such as those within the LES paradigm, will maintain similar relationships in the future. We discuss the implications for the capacity to adapt to climate change and the ability to predict the coevolutionary trajectories of functional traits.

Study species

Corymbia calophylla is a foundation forest canopy species located in Western Australia (WA). It is considered a foundation species because its characteristics are critical for forest structure and ecological processes (Ellison et al., 2005). This species is an ideal candidate in which to study adaptation of functional traits because its distribution traverses strong environmental gradients over short distances, it has recently experienced mortality events attributed to climate change (Matusick, Ruthrof, Brouwers, Dell, & Hardy, 2013; Ruthrof, Matusick, & Hardy, 2015), and evidence of adaptation to climate has been identified in physiological experiments and genome-environment investigations (Ahrens, Byrne, & Rymer, 2019; Ahrens, Mazanec, et al., 2019; Aspinwall et al., 2017; Blackman, Aspinwall, Tissue, & Rymer, 2017).
Experimental site and population sampling

This research was conducted in a plantation near Margaret River, WA (Figure 1), located in the cool-wet region of the distribution of C. calophylla. Seed collection and trial design are described in detail in Ahrens, Mazanec, et al. (2019). Briefly, 18 populations represented by 165 families were established at the experimental site, for a total of 3,960 individuals in six replicated blocks. Families are defined here as individuals that have a known, common mother but an unknown father (i.e., half-sibs). We developed two separate data sets: the first was used to estimate trait heritability, while the second was used to explore correlation and coordination among traits and their association with climate-of-origin. For the first data set, we focused on four populations representing four contrasting climate combinations covering the full geographic distribution of C. calophylla (BOO, cool-wet climate; CRI, cool-dry climate; HRI, hot-dry climate; SER, hot-wet climate; Figure 1). A total of 40 families (when available) from the four populations were sampled, with 12 replicate trees from each family, to provide estimates of phenotypic variance within and among families (480 trees). For the second data set, we collected data from 12 populations, with four replicates from each of 10 families (when available), to estimate variance within and among populations (480 trees; Table 1; Figure 1).

Trait measurements

Traits were measured in March 2017 on C. calophylla trees that were 29 months old and 2-3 m tall. For each individual tree, we removed a north-facing, mid-canopy side branch at its intersection with the main stem. The side branch was removed in the morning (between 8 a.m. and 12 noon), stored in a cool box, and measured in the afternoon (between 12 noon and 6 p.m.). For each side branch, we collected data for seven traits: leaf-level spectrometer readings used to calculate two indices (photochemical reflectance index (PRI) and normalized difference vegetation index (NDVI)), leaf size (LS), specific leaf area (SLA), integrated water-use efficiency (δ13C), nitrogen concentration (N_CONC), and wood density (WD). All seven traits have shown close associations with climate in past studies. The photochemical reflectance index (PRI) is a spectral physiological index that indicates vegetation stress, based on its sensitivity to radiation-use efficiency (RUE) and the xanthophyll cycle (Gamon, Serrano, & Surfus, 1997; Garbulsky, Peñuelas, Gamon, Inoue, & Filella, 2011). The leaf-level normalized difference vegetation index (NDVI) is generally used to measure chlorophyll content by quantifying leaf greenness and is closely related to the fraction of absorbed photosynthetically active radiation (FPAR) (Myneni et al., 2002; Peng & Gitelson, 2012). While PRI and NDVI are not technically functional traits, indices based on the spectral properties of leaves can be indicative of photosynthetic activity and plant stress; hereafter, we include these complex traits as functional traits for ease of discussion. High water-use efficiency (WUE) reflects the link between photosynthesis and evaporation (Yang et al., 2016) and translates into climatic tolerance under water limitation. Leaf size may be beneficial in certain circumstances because it is a major determinant of boundary-layer thickness, particularly in low-wind conditions.
Larger leaves generally experience higher temperatures, increasing carboxylation and other catabolic processes such as dark respiration (Jordan & Smith, 1995; Jones, 2013). Specific leaf area (SLA) varies across global climate gradients, and high SLA values increase tree susceptibility to drought-induced mortality (Greenwood et al., 2017). Water-use efficiency is correlated with δ13C (an isotopic signature measuring the ratio of 13C to 12C (Farquhar & Richards, 1984)) and relates to leaf gas-exchange properties (Cernusak et al., 2013; Diefendorf, Mueller, Wing, Koch, & Freeman, 2010). Nitrogen concentration (N_CONC) is indicative of growth. Leaf nitrogen plays an important role in leaf physiological processes such as photosynthesis, respiration, and transpiration (Wang et al., 2016) and is an indicator of productivity (Ramoelo, Skidmore, Schlerf, Mathieu, & Heitkönig, 2011). The association between wood density (WD) and drought is more complicated and context dependent, because WD is associated with many ecological signals (Brodersen, 2016; Gleason et al., 2016). Generally, high WD is associated with lower susceptibility to drought (Greenwood et al., 2017; Hacke, Sperry, et al.).

Figure 1. Distribution of Corymbia calophylla in southwestern Australia, and location of 12 populations overlaid on maps of (a) precipitation of the driest month (P_DM) (mm) and (b) average maximum temperature of the warmest month (T_MAX) (°C). The experimental planting site is denoted by the white point and labeled MR. The populations used for data set 1 are denoted by four colors (BOO = Boorara, cool-wet climate; CRI = Cape Riche, cool-dry climate; HRI = Hill River, hot-dry climate; SER = Serpentine, hot-wet climate), and all 12 populations (colored and black points) are used for data set 2.

Table 1 note: The four-population data set (data set 1) was used to estimate heritability, and the 12-population data set (data set 2) was used to test trait correlations and model trait distributions. Abbreviations: 1/AI, inverse aridity index; P_DM, precipitation of the driest month; P_MA, mean annual precipitation; T_MAX, maximum temperature of the warmest month.

A field spectroradiometer (ASD standard-resolution FieldSpec4, Malvern Panalytical) was used to measure leaf reflectance in the visible and reflected-infrared spectral regions with 2,151 narrow bands (10 nm full width at half maximum) and 1 nm spacing between band centers. Measurements were made for three leaves using a leaf-clip attachment with its own light source and calibrated to % reflectance using data collected from a Spectralon white reference panel. Means across all bands of the three leaves were calculated for each individual tree. Specific wavelengths were used to estimate the PRI and the modified red-edge NDVI. The PRI was calculated with the following equation, where R_xxx is the % reflectance at xxx nm (Gamon, Penuelas, & Field, 1992; Gamon et al., 1997):

PRI = (R_531 - R_570) / (R_531 + R_570)

The modified red-edge NDVI was calculated using the following equation (Sims & Gamon, 2002):

NDVI = (R_750 - R_705) / (R_750 + R_705 - 2R_445)

This index was developed as an improvement on the standard NDVI (Tucker, 1979) to provide a more robust estimate of chlorophyll content across a wide range of species and leaf structures (Sims & Gamon, 2002). Henceforth, this index will be referred to as "NDVI" in the text.
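A minimal R sketch of computing these two indices from a single leaf spectrum follows; the reflectance vector is simulated and stands in for the calibrated spectroradiometer output, and the mNDVI form (with the R_445 correction term) is the reconstruction assumed above.

```r
# Hypothetical % reflectance spectrum, one value per 1-nm band, named by
# wavelength as a field-spectroradiometer export might be.
wavelengths <- 350:2500
refl <- setNames(runif(length(wavelengths), 0, 100), wavelengths)

# Photochemical reflectance index (Gamon et al., 1992, 1997).
pri <- (refl["531"] - refl["570"]) / (refl["531"] + refl["570"])

# Modified red-edge NDVI (Sims & Gamon, 2002).
mndvi <- (refl["750"] - refl["705"]) /
         (refl["750"] + refl["705"] - 2 * refl["445"])

c(PRI = unname(pri), NDVI = unname(mndvi))
```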
Specific leaf area (SLA) was measured on three fully matured leaves that were representative of the branch. After removing half of the petiole with a razor, the leaves were scanned into a computer using a Canon flatbed scanner (model LiDE220) at 50 dpi. The leaves were kept in an airtight box with silica gel until they could be dried in an oven at 70°C for 48 hr. δ13C and nitrogen concentration (N_CONC) were measured from leaves dried using a benchtop freeze dryer (Alpha 1-4 LDplus Laboratory Freeze Dryer, Martin Christ). The leaves were ground into a fine powder using a Cyclotec mill (Foss Analytics) and sent for isotope analysis (ANU Isotope Laboratory) using a coupled EA-MS system (EA 1110 Carlo Erba; Micromass Isochrom). For wood density, a 3-4 cm piece of the thickest part of the branch was removed, excluding areas that included knots; the bark was removed, and the volume was measured using the water-displacement method. The piece of wood was then dried in a 70°C oven for 7 days before being weighed. Final wood density was calculated by dividing the dry weight by the volume (g/cm³).

Statistical analyses

Using data set 1 with four populations, mixed-effects linear models were applied to estimate differences between populations using the lme function in the nlme package in R (R Core Team, 2018). For all linear models, family was considered a random effect and population a fixed effect. Post hoc Tukey tests were performed on the mixed-effects model results using the glht function in the multcomp package in R to confirm differences among populations. We estimated the family-level narrow-sense heritability and the relative effect of selection on each of the seven traits. Narrow-sense heritability (ĥ²) captures the proportion of phenotypic variation attributed to additive genetic variance (Lynch & Walsh, 1998). Here, we used data set 1 with four populations within ASReml version 4.1 (Gilmour & Dutkowski, 2004). Initial assessment of model fit was conducted using the following univariate random model:

y_ijk = μ + b_i + p_j + f_j.k + (b × f)_i.jk + e_ijk

where y_ijk is the observed trait value, μ is the overall mean, b_i is the random effect of the ith block, p_j is the random effect of the jth population, f_j.k is the random effect of the kth family within the jth population, (b × f)_i.jk is the block × family interaction effect, and e_ijk is the random error. Narrow-sense heritability was estimated using the following equation:

ĥ² = (σ²_fam / ρ) / (σ²_fam + σ²_fam×block + σ²_error)

where ĥ² is the narrow-sense heritability, σ²_fam is the family-within-population variance component, σ²_fam×block is the family × block interaction variance, σ²_error is the error variance component, and ρ is the coefficient of relationship. Eucalypts are known to have a mixed mating system; therefore, a coefficient of relationship of ρ = 1/2.5 was assumed to correct for selfing effects of about 30%, which, if not corrected for, could result in inflated estimates of heritability for growth traits (Bush, Kain, Matheson, & Kanowski, 2011; Costa e Silva, Hardner, & Potts, 2010; Griffin & Cotterill, 1988; Hodge, Volker, Potts, & Owen, 1996). Significance of family variance components was determined using a log-likelihood ratio test, as described in the ASReml manual, by dropping the family component from the model and comparing the log-likelihood results to those of the full model.
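Because ASReml is proprietary, the following minimal R sketch reproduces the variance-partitioning logic with lme4 instead; the data frame is simulated, and the model is a simplified stand-in for the published analysis (with pure-noise data, a singular-fit warning is likely).

```r
library(lme4)

# Hypothetical tree-level data: 4 populations x 10 families x 12 replicates
# across 6 blocks, with families coded uniquely across populations.
set.seed(1)
d <- data.frame(
  trait      = rnorm(480),
  block      = factor(sample(1:6, 480, replace = TRUE)),
  population = factor(rep(c("BOO", "CRI", "HRI", "SER"), each = 120)),
  family     = factor(rep(1:40, each = 12))
)

m <- lmer(trait ~ (1 | block) + (1 | population) + (1 | family) +
            (1 | block:family), data = d)

# Pull the variance components and apply the heritability formula above.
vc   <- as.data.frame(VarCorr(m))
vfam <- vc$vcov[vc$grp == "family"]
vfb  <- vc$vcov[vc$grp == "block:family"]
verr <- vc$vcov[vc$grp == "Residual"]

rho <- 1 / 2.5  # coefficient of relationship correcting for ~30% selfing
h2  <- (vfam / rho) / (vfam + vfb + verr)
h2
```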
Using data set 1 with four populations, we developed two principal components analyses (PCA) based on family means to understand the relationships between populations, traits, and climate; these analyses were used to inform the modeling step described below. The first PCA was performed with the seven measured traits (LS, WD, δ13C, N CONC, SLA, PRI, and NDVI) to view the relative explanatory power of population differentiation using traits alone and to identify possible trait-trait relationships. A second PCA explored the relationship between climate and traits. First, climate data were downloaded from WorldClim (Fick & Hijmans, 2017) (precipitation of the driest month (P DM), mean annual precipitation (P MA), precipitation variation (P RANGE), mean annual temperature (T MA), maximum temperature of the warmest month (T MAX), and temperature variation (T RANGE)) and from the Consortium for Spatial Information (CGIAR-CSI) (aridity index (AI), transformed to 1/AI), and the data were extracted for the location of each source population. For this study, AI was defined as mean annual precipitation divided by mean annual evapotranspiration, and we used the inverse of this ratio because it is more intuitive to interpret (i.e., the higher the number, the more arid the climate). Four environmental variables (P DM, P MA, T MAX, 1/AI) were included in the second PCA, which was performed to characterize underlying patterns of correlation among independent and dependent variables and to understand the correlation between climate and traits. We used the prcomp function in R and plotted the PCA using the ggbiplot function.

Using data set 2 with 12 populations, we explored four sets of correlations among the seven traits. We deliberately use the term "correlation" rather than "coordination" for three of the trait pairs, because coordination implies an established mechanistic relationship, like those in the LES, and such relationships have not been explicitly established for three of the correlations explored in detail here. The first correlation was between δ13C and PRI. PRI has been shown to be a measure of water stress and, indirectly, of photosynthetic radiation-use efficiency, while δ13C is a well-known measure of water-use efficiency. Radiation-use efficiency and water-use efficiency have been shown to be tightly correlated in wheat and soybean (Caviglia, Sadras, & Andrade, 2004). The second was between LS and NDVI, which have similar relationships with the fast-growth syndrome. Plants that have a fast-growth strategy also grow larger leaves (Cornelissen, 1999; Wright et al., 2017), as light is by far the most limiting resource for tree growth (Pacala et al., 1996), and light capture depends on the size of the leaves (Falster & Westoby, 2003; Pearcy, Muraoka, & Valladares, 2005; Pearcy, Valladares, Wright, & De Paulis, 2005). NDVI, in turn, has been shown to be a good predictor of aboveground biomass in trees (Goetz & Prince, 1996; Malstrom et al., 1997; Wang, Rich, Price, & Kettle, 2004); therefore, we would expect NDVI and LS to be highly correlated. The third was between WD and N CONC, where we expect that plants with lower WD would also have higher N CONC (e.g., Beets, Gilchrist, & Jeffreys, 2001; Lindstrom, 1996), as faster wood growth can be attributed to higher N CONC. The fourth association tested was between SLA and N CONC, which has been well described within the LES paradigm (Reich, 2014; Reich et al., 1997). Trait correlations were tested using linear models (function lm) on family means between the traits in R. The outputs were plotted using base R commands.
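A minimal R sketch of these two descriptive steps is given below; `fam_means` (one row per family, holding trait means and climate-of-origin variables) and its column names are assumptions for illustration.

traits <- c("LS", "WD", "d13C", "N_CONC", "SLA", "PRI", "NDVI")
pca1 <- prcomp(fam_means[, traits], scale. = TRUE)                  # traits-only PCA
pca2 <- prcomp(fam_means[, c(traits, "P_DM", "P_MA", "T_MAX", "inv_AI")],
               scale. = TRUE)                                       # traits + climate PCA
# plotted with ggbiplot(pca1, groups = fam_means$population) in the described workflow

fit <- lm(PRI ~ d13C, data = fam_means)   # one of the four tested trait pairs
summary(fit)$r.squared                    # variance explained, as reported in Figure 4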
To estimate and predict trait change through geographic space and time, generalized additive models (GAMs) were used to detect nonlinear relationships between traits and climate. A GAM uses a robust and efficient smoothing parameter (Wood, Pya, & Säfken, 2016), and we were able to fit nonlinear smoothing terms using regression splines without any a priori assumptions. This property makes GAMs very useful for detecting nonlinear responses across a species distribution, and trait variation is generally nonlinearly associated with environment (Moran, Hartig, & Bell, 2016). The GAMs were performed using data set 2 with 12 populations and current bioclim variables downloaded from WorldClim (P DM, P MA, P RANGE, T MA, T MAX, T RANGE) and CGIAR-CSI (1/AI) at the sampled population sources. Separate GAMs were developed for each trait individually, in order to evaluate how each trait might respond to climate change. We used the gam function in the mgcv v1.8-24 package in R (Wood, 2011) to perform the analyses. To minimize overfitting of the data, we only explored different combinations of up to three environmental variables and bounded the degree of smoothness to three for each variable (Araujo, Pearson, Thuiller, & Erhard, 2005), using cubic regression splines to control the degree of smoothness (Wood, 2006). The model that gave the highest R2 and deviance explained is reported and discussed. Extrapolation of the GAM model outputs was performed using the predict function in R on the climate rasters of the distribution of C. calophylla, providing a predicted trait value for each pixel under current conditions. These outputs were visualized using ggplot2 (Wickham, 2016). To predict the effects of climate change on trait distributions, raster files for 2070 under emissions scenario RCP 8.5 were downloaded from WorldClim under the CCSM4 simulation. A future aridity index raster file was created using the ENVIREM package in R. The outputs of the GAM models were used to predict future trait distributions by substituting the current rasters with the future climate rasters from the RCP 8.5 CCSM4 simulation. In order to visualize where trait changes are predicted to occur, the difference between the current and future predicted trait values was mapped across the species distribution.
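The following R sketch illustrates the GAM workflow under the constraints described above; the objects `pop_means`, `clim_now`, and `clim_2070` and their column names are illustrative assumptions.

library(mgcv)

# One trait, up to three climate smooths, smoothness bounded at k = 3,
# cubic regression splines ("cr") as described in the text
g <- gam(SLA ~ s(P_MA, k = 3, bs = "cr") + s(T_MAX, k = 3, bs = "cr"),
         data = pop_means)
summary(g)   # reports R-squared and deviance explained

# Extrapolate to the range: one row per pixel, same climate column names
trait_now  <- predict(g, newdata = clim_now)
trait_2070 <- predict(g, newdata = clim_2070)   # RCP 8.5 CCSM4 climate
pct_change <- 100 * (trait_2070 - trait_now) / trait_now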
| Trait differentiation and heritability

All seven traits showed patterns of population differentiation (Figure 2). The two most extreme populations in relation to precipitation of the driest month (P DM) (populations from HRI and BOO) were significantly different for all traits except NDVI. Three traits (δ13C, PRI, and N CONC) showed a significant linear response with at least one climate variable (Table S1).

| Trait correlation

The principal components analysis shows clustering of families within populations and associations between traits.

FIGURE 2 Trait means for 12 populations distributed along a gradient of P DM from the population's climate-of-origin. Data set 1 has colored symbols for the four populations (BOO = cool-wet; CRI = cool-dry; HRI = warm-dry; SER = warm-wet) with SE of the mean based on 10 families with 12 replicates. Data set 2 is shown with gray dots for the means of 12 populations. Letters indicate significant differences between populations from a post hoc Tukey's test (a = 0.05) on a mixed-effects linear model with family as the random variable. N CONC, concentration of nitrogen (%); NDVI, normalized difference vegetation index; P DM, precipitation of the driest month (mm); PRI, photochemical reflectance index; SLA, specific leaf area; WD, wood density; δ13C, ratio of 13C versus 12C.

| Adaptation to climate

When the PCA included trait and climate variables, the separation of populations became more apparent (Figure 3b). The warm, dry climate (HRI) population was the most divergent from the other three populations. PC1 best differentiates the populations, with 32.4% of the variance explained by a combination of trait and climate variables. We find that SLA is correlated with T MAX, δ13C is correlated with P DM, PRI is correlated with P MA, and no trait is correlated with 1/AI. P DM had the greatest loading for PC1, closely followed by T MAX (Figure S1).

At least one temperature and one precipitation variable were represented in the final models for all traits except δ13C (only precipitation variables) and PRI (only temperature variables). The most common climate variable incorporated into the final GAM for each trait was P MA, and the least common was 1/AI (Table 2). The deviance explained by the GAM analysis was high (R2 > 0.6) for all traits. The GAM analysis explained greater than ca. 85% of the deviance for SLA, δ13C, N CONC, NDVI, and LS.

All traits showed altered distributions between current and future climates (Figure 5). The traits PRI, NDVI, and N CONC mostly had a reduction in their trait values throughout the distribution, while SLA was the only trait that consistently increased in the future. The δ13C, LS, and WD traits had reduced trait values in some geographic regions but increased values in other regions. The magnitude of trait change was greatest for PRI, LS, and N CONC, which are predicted to change by 25%-30%. In contrast, δ13C and NDVI had the lowest magnitudes of predicted trait change, with 1.5% and 10% change, respectively. There are portions of the range that are predicted to be outside of current climate conditions (shown with a gray line in Figure 5); therefore, our predictions in these regions should be treated with caution.

All four of the tested trait correlations were highly significant when evaluating the spatial relationships (both current and future) between predicted trait distributions (Figure 6; p < 2.2e−16). However, the SLA/N CONC slopes explained only 6%-7% of the variation, indicating high levels of variability among the data. In contrast, the δ13C/PRI slopes explained 45%-60% of the variation for future and current predictions, respectively (Figure 6). The spatially predicted slopes of the trait correlations compared to the sampled populations only differed for the δ13C/PRI correlation (Figure 6a). Trait correlations are predicted to differ between current and future conditions for all trait pairs except LS/NDVI (Figure 6b). These differences are not uniform across correlations: the WD/N CONC slope changes from negative to positive, and the δ13C/PRI slope appears to shift along the PRI axis. The traits associated with the LES also change from a negative to a positive slope, but this might be attributable to the large amount of variation in SLA.

| DISCUSSION

We found evidence to support adaptation of functional traits in C. calophylla across populations in southwest Western Australia. These patterns of adaptation are consistent with previous studies of the genetics, growth, and physiology of C. calophylla (Ahrens, Byrne, et al., 2019; Ahrens, Mazanec, et al., 2019; Blackman et al., 2017). This study builds upon previous research by focusing on the genetic determination of functional traits, elucidating the relationships between functional traits, determining the relationships between functional traits and climate, and predicting genetically determined trait distributions under current and future climates.
By evaluating trait variation at a common experimental site, we were able to evaluate trait heritability and genotypic differences among populations with minimal interference from environmental factors. Therefore, the trait values measured are accurate for the climate-of-plantation and are representative of genotypic signatures of local adaptation. There were significant differences among populations for all traits, and all traits exhibited associations with climate-of-origin, but some traits were related to climate-of-origin in unexpected ways. We found that five of the seven traits had significant heritability, indicating an ability to respond to selection and adapt to climate. However, traits differed in their heritability and trajectory along climate space. We predict trait coordination and trait correlation decoupling in the future, where traits may change at different rates to adjust to future climates.

| Trait differentiation and heritability

Three of the traits measured (NDVI, PRI, and LS) exhibit patterns consistent with previous studies of functional trait adaptation to hot, dry environments. For example, the lower NDVI in a population from a hot, dry climate compared to one from a hot, wet climate suggests an adaptive strategy among populations. This finding agrees with Ahrens, Mazanec, et al. (2019), which shows a strong pattern of slow-growth strategies among northern populations, characteristic of hot and arid environments (King, 1990; Moles, 2018; Reich, 2014). The population differences found in PRI are best explained by T MAX (see the GAM analysis) and clearly show a lower value in the population from a hot, dry climate (HRI), indicating a lower photosynthetic rate due to reduced radiation-use efficiency (Garbulsky et al., 2011). This is in agreement with Aspinwall et al. (2017), who found higher photosynthetic rates in the cool, wet population compared to the hot, dry population. Lastly, LS has received significant attention recently, as worldwide LS patterns show significant relationships with temperature and latitude (Wright et al., 2017).

FIGURE 4 Correlation between traits among 114 families from 12 populations. Each point represents a family mean value from four trees. The line of best fit, R2, and p-value are calculated from a linear model.

FIGURE 5 Current, future, and predicted % change in 2070 trait values for the seven traits across Corymbia calophylla's distribution using GAM, and the relative change of those traits predicted from current to future climate. Gray lines denote climate space that exceeds current species distribution limits. Population and trait abbreviations are defined in Table 1.

In C. calophylla, LS is correlated with temperature of origin and is similar to Eucalyptus loxophleba, where LS decreased with temperature (Steane et al., 2017). Greater leaf size may confer advantages through greater gas exchange (Zwieniecki, 2004) and protection against herbivory (Moles & Westoby, 2000), while smaller leaf size could help avoid overheating (Okajima, Taneda, Noguchi, & Terashima, 2012), and leaf size is known to be coordinated with essential plant architecture such as canopy size and plant hydraulics (Jensen & Zwieniecki, 2013; Sack et al., 2012). However, the contrast with the broad findings in Wright et al. (2017) was not surprising, because broader, across-species patterns for complex, variable traits may not be present within species (Moles, 2018).
In fact, many studies exploring how traits change with environment have been performed on unrelated species (e.g., Atkin et al., 2015; Díaz et al., 2016; Wright et al., 2017), while intraspecific trait variation is "largely ignored" in trait-based plant ecology (Shipley et al., 2016), providing limited information on evolutionary responses to climate (Moran et al., 2016).

Some functional traits showed a different pattern in relationship to climate-of-origin than expected. Four traits (δ13C, SLA, N CONC, WD) showed values for the hot, dry climate (HRI) indicative of higher rainfall areas. However, this is likely due to populations of C. calophylla from the hot, dry climate (HRI) having different adaptive mechanisms than anticipated. Carbon isotope composition (δ13C) is tightly correlated with WUE, particularly in field studies (Rundel & Sharifi, 1993), and our findings suggest that HRI has the lowest WUE, when we expected it to have the highest WUE.

TABLE 2 Variable significance and model performance for each generalized additive model (GAM) among seven traits. Note: F-values are provided for each climate variable used in the model with their significance level (*<0.05; **<0.01). The climate factors are as follows: T MA, mean annual temperature; T MAX, maximum temperature of the warmest month; T RANGE, temperature variation; P MA, mean annual precipitation; P DM, precipitation of the driest month; P RANGE, precipitation variation.

FIGURE 6 Predicted trait values across the spatial distribution of Corymbia calophylla using current climate data (blue) and future climate data from 2070 (red) estimated by GAM analysis. Each blue and red dot represents a single pixel from the current and future maps, respectively, in Figure 5. Black dots are population-level trait values. Only significant (p < .05) lines of best fit are shown for each of the three data sets.

Likewise, the LES traits (SLA and N CONC) within the HRI population were different than expected; the high values for SLA and N CONC are indicative of a population that is more susceptible to drought conditions (Greenwood et al., 2017). While we do find differentiation between populations for SLA, this differentiation does not seem to be heritable, confirming findings that SLA is highly plastic (Shipley, 2000). In addition, WD exhibited the opposite pattern than was expected, as the wood was less dense in the population from the hotter, drier climate. We expected northern C. calophylla populations that occur in hotter, drier conditions to exhibit denser wood to increase cavitation resistance (Hacke et al., 2001) and increase survival in harsh climates (Cornwell & Ackerly, 2009; Hacke et al., 2001). We did, however, find differences between populations for the WD trait, with some evidence of heritability.

All but two traits (SLA and PRI) had a narrow-sense heritability greater than zero, indicating that evolutionary change can occur through processes of natural selection. In particular, the heritability of WUE, as indicated by δ13C, is an important finding that has ramifications for the species as the climate changes, particularly in a Mediterranean-type climate, where rainfall is seasonal and climate shifts are predicted (Klausmeyer & Shaw, 2009). However, heritability was not consistent among traits, supporting our hypothesis of different levels of variability among traits.
In general, these heritabilities are similar to the heritabilities of height (0.14 ± 0.03 SE), diameter (0.12 ± 0.03), and blight resistance (0.08 ± 0.03) based on a greater sampling effort (24 seedlings per family; 3,960 trees) at the same experimental plantation (Ahrens, Mazanec, et al., 2019). Overall, the heritability continuum measured here suggests that selection pressure due to climate will affect each trait differently, leading to novel patterns of local adaptation and trait combinations. The presence of variation in trait means and heritability describes a system in which the species is able to adapt to a future climate. However, no two traits shared the same explanatory climate variables or modeled associations, resulting in different predicted distributions. We found that some traits would need to evolve at a greater rate than others in order to maintain current trait-climate associations under future conditions (see the change factor in Figure 5).

| Trait correlation

Our estimates of trait correlations are consistent with other studies and in line with our expectations. However, the traits measured respond differently to climate, and our prediction that climate change may affect each trait independently was supported. Our data suggest that traits will need to adapt to new climates at different rates and in different patterns. This is particularly concerning for traits that are mechanistically dependent on one another, such as those in the LES, because coordinated traits will either need to decouple or be limited by one another; the other correlations, by contrast, may be able to adapt independently to new conditions.

We were able to establish the presence of a relationship between PRI (RUE) and δ13C (WUE). This correlation is indicative of populations having different photosynthetic inhibition under different light conditions (Grace et al., 2007). In the future, we anticipate that this correlation between PRI and δ13C will decouple, as a greater shift in PRI is predicted compared to δ13C; alternatively, this correlation may not change under new climate conditions because of the effective non-heritability of PRI and the very small predicted change for δ13C.

Current patterns of coordination between SLA and N CONC are consistent with the overall patterns within the LES paradigm (Reich, 2014; Reich et al., 1997), in that high N is associated with high SLA; however, the pattern's association with climate is the opposite of what we expected. The combination of SLA and N mass has been shown to be a good predictor of net photosynthesis (A max) on a per-mass basis (Reich & Walters, 1994; Reich et al., 1997), and lower SLA is due to many anatomical features (e.g., larger cell sizes, greater major vein allocation, greater numbers of mesophyll cell layers, and higher cell mass densities (John et al., 2017)), although we found that SLA is not heritable and is highly variable. Our findings suggest that the adaptive potential of these two LES traits could be limited by one another, or SLA (not heritable) could dynamically match N (heritable) concentrations through plasticity.

We also expected to find a significant correlation between LS and NDVI. Indeed, the BOO population (cool-wet climate-of-origin and the fastest growing population; Ahrens, Mazanec, et al., 2019) had larger leaves and higher NDVI, which is indicative of higher biomass. This is the only trait correlation that we predict to remain intact in the future.
The pattern between WD and N CONC was also as expected, in that wood density decreased with increasing N CONC. However, the predicted correlation between WD and N CONC shows a nearly perpendicular change, indicating that the two traits would be required to evolve in opposite directions. This confirms that WD is a difficult trait to predict due to its association with many ecological signals (Brodersen, 2016; Gleason et al., 2016) and that other mechanisms, aside from climate, are likely important for selection on the WD trait. All of the current trait correlations were highly significant but with low explanatory power, indicating that the variation between correlated traits is high and that the correlations have some leeway for traits to change without affecting patterns of adaptation. Overall, predicted trait correlations exhibit contrasting prediction scenarios, which could force some traits to change disproportionately compared to their counterparts. These findings suggest that if these trait correlations are dependent on one another, they might be a hindrance to adapting to novel climate conditions. On the other hand, if the correlated traits can evolve independently, the different trait heritability levels suggest that some traits are more genetically determined than others, resulting in different trait combinations within populations than what has been measured.

| CONCLUSION

Understanding mechanistic patterns of plant traits that undergo processes of natural selection can broadly enhance our understanding of species distributional predictions, informing the maintenance of forest ecosystem function under future climate scenarios. Our results suggest that functional traits have contrasting genotypic patterns and will be subjected to different climate selection pressures, which may negatively affect current forest structure and function through functional traits operating below their optima. Even though we were able to identify significant adaptive variation and differential trait responses correlated with patterns of precipitation and temperature, demonstrating adaptive capacity to climate change, we show that traits are independently associated with different climate factors. Therefore, some trait correlations and their idiosyncratic relationships may be disrupted under future climate scenarios, suggesting that genetic constraints, selection pressure, and trait correlation limitations will affect trait evolution and patterns of adaptation in the future.

CONFLICT OF INTEREST

None declared.

AUTHOR CONTRIBUTIONS

CA and PR developed the idea. All authors collected the data. CA, RM, and MA analyzed the data. CA wrote the first draft, and all authors edited and intellectually contributed to the final manuscript.

DATA AVAILABILITY STATEMENT

Data for this study are available at https://doi.org/10.5061/dryad.
v3-fos-license
2018-12-11T06:14:46.375Z
2015-02-20T00:00:00.000
55047577
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://academicjournals.org/journal/AJMR/article-full-text-pdf/988227150759.pdf", "pdf_hash": "f52162d39068ac226ba56a7a6eeb583b03dae8f5", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42766", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "f52162d39068ac226ba56a7a6eeb583b03dae8f5", "year": 2015 }
pes2o/s2orc
Frequent carriage of invasive Salmonellae amongst patients infected with schistosomiasis in Sudan

Bacteria-parasite association has been documented as a factor responsible for continued and prolonged bacterial infection, such as typhoid and paratyphoid fever in schistosomiasis patients. This work aimed to determine the presence of typhoid and paratyphoid Salmonella among schistosomiasis patients and to evaluate the efficacy of the Widal test in such a population. A cross-sectional descriptive study was conducted between November 2005 and May 2006 in the Managil region, Gezira State, Sudan. A total of 203 males participated in the study. Urine, stool and blood samples were collected and processed for the investigation of schistosomiasis and Salmonella infection based on standard methods. The Widal test was performed to estimate a diagnostic cut-off value for enteric fever. Of the 203 studied subjects, 42 (20.7%) were diagnosed with Schistosoma haematobium infection, whereas eight (3.9%) had Schistosoma mansoni infection. Of these, Salmonella species were detected in 30 (60%) cases, of which Salmonella typhi represented 63.3%, followed by Salmonella paratyphi A and B (16.7% each) and Salmonella paratyphi C (3.3%). Based on the culture results (n=30) as the diagnostic method for enteric fever, the Widal test was positive in 12 cases, with a sensitivity of 40% and a specificity of 75%. Of the Widal-positive cases, titers of 1:160, 1:320 and 1:640 were detected in 58.3, 33.3 and 8.3% of samples, respectively. In schistosomiasis-endemic regions, enteric fever was associated with schistosomiasis, which requires investigation of both infections concomitantly. Regardless of the low sensitivity of the Widal test, a titer of ≥1:160 is a diagnostic value for enteric fever in this study group.

INTRODUCTION

Typhoid and paratyphoid fever (enteric fever) is an acute systemic infection caused mainly by the bacterium Salmonella enterica serotype typhi and other serotypes, Salmonella paratyphi A, B, and C (Chart et al., 2007; Buckle et al., 2012). It continues to be a global health problem, especially in tropical and subtropical countries; over 27 million persons suffer from this disease annually (Buckle et al., 2012). Schistosomiasis is a tropical parasitic disease caused by blood fluke worms of the genus Schistosoma such as S. haematobium and S. mansoni (Dabo et al., 2011). Schistosoma infection is endemic in many sub-Saharan African countries, where the introduction of river regulation and irrigated agriculture commonly results in increasing distribution and prevalence of schistosomiasis (King et al., 2005). The bacteria-parasite association has been observed as a factor that results in prolonged bacterial infection, such as typhoid and paratyphoid fever in schistosomiasis patients (Lambertucci et al., 1998; Bouree et al., 2002). Concurrent Schistosoma-Salmonella infections appear when Salmonella species enter the systemic circulation and adhere to the tegument of adult Schistosoma through their fimbriae. This interaction can lead to a massive release of occult Salmonella (Barnhill et al., 2011). Examinations with the scanning electron microscope showed that pili function by joining Salmonella to the surface tegument of S. mansoni and S.
haematobium (LoVerde et al., 1980). Typhoid and paratyphoid Salmonella are easily recovered from the blood, feces or urine samples of schistosomiasis patients (Lambertucci et al., 1998). Moreover, enteric fever can be diagnosed by different laboratory methods, including serological tests such as Widal agglutination or ELISA, and culture of clinical specimens of stool, blood and urine (Chart et al., 2007). The Widal test, which detects agglutinating antibodies to somatic lipopolysaccharide O antigens and flagellar H antigens, was introduced over a century ago and remains a widely used tool for the serological diagnosis of enteric fever (el-Shafie, 1991; House et al., 2001). In Sudan, despite the high endemicity of both schistosomiasis and enteric fever (el-Shafie, 1991; Ahmed et al., 2012; Ibrahim and Ibrahim, 2014), there is little available data on Schistosoma-Salmonella infections (Salih et al., 1977). Therefore, the present study aimed to determine the presence of typhoid and paratyphoid Salmonella among schistosomiasis patients in the Managil region, Central Sudan, and to detect the most common Salmonella serotypes that cause enteric fever. In addition, it aimed to evaluate the efficacy of the Widal agglutination test used for the diagnosis of enteric fever compared to culture methods.

Study area and population

This is a descriptive cross-sectional study conducted between November 2005 and May 2006 in the Managil region (156 km south of the capital, Khartoum), Gezira State, Central Sudan. The state is an endemic area for schistosomiasis due to the agricultural activities of the population in the Gezira-Managil irrigation schemes (Hilali et al., 1995). A total of 203 males between 10 and 55 years old participated in the study. The studied subjects were students of the Quran school (n = 148), employees (n = 28) and farmers (n = 27). Those who were previously infected or under treatment were excluded from the study. Each participant accepted and agreed to participate in the study after his parents were informed about the importance of the study. The study was approved by the Committee of the Research Council of the Faculty of Medical Laboratory Sciences, University of Khartoum.

Samples processing

Clinical samples of urine, stool and blood were collected from each individual and processed for the investigation of schistosomiasis and typhoid and paratyphoid Salmonella infection. About 20 ml of urine was collected in a sterile plastic container from each subject suspected to have urinary schistosomiasis. To obtain the stool samples, each individual was given a dry and clean container to provide at least 10 g of sample. Stool and urine samples were obtained from each individual between 10 am and 2 pm, when the highest egg excretion occurs (Cheesbrough, 2000b). The diagnosis of Schistosoma infection was carried out in the study field by direct microscopic examination of the samples. Two smears were prepared from each stool sample and examined for the presence of S. mansoni eggs using the standard Kato-Katz method (Katz et al., 1972). The urine centrifugation technique was used to detect the presence of S. haematobium eggs as previously described (Cheesbrough, 2000b). Then, about 5 ml of venous blood was collected from each subject having schistosomiasis in a clean, dry, sterile plain tube and allowed to clot at room temperature. The sera were separated by centrifugation at 13,000 rpm for 5 min, transferred into clean, sterile plain tubes, and stored at −20°C for the subsequent Widal agglutination test.
Each urine or stool sample that yielded a positive result for schistosomiasis was cultured immediately in 5 ml of sterile selenite F broth (SFB) (Oxoid, Basingstoke, England) for further isolation and identification of possible typhoid and paratyphoid Salmonella pathogens at the Research Laboratory of the Faculty of Medical Laboratory Sciences, University of Khartoum.

Isolation and identification of Salmonella species

Isolation of Salmonella species from urine and stool samples was done following standard laboratory methods (Cheesbrough, 2000a). All the samples containing SFB were sub-cultured on xylose lysine deoxycholate (XLD) agar (Oxoid, Basingstoke, England) and deoxycholate citrate agar (DCA) (Oxoid, Basingstoke, England). They were incubated overnight at 37°C. The plates were then examined for the presence of non-lactose-fermenting colonies. Suspected colonies of Salmonella isolates were identified on the basis of colonial morphology, Gram staining and biochemical tests, and they were confirmed serologically using monovalent and polyvalent antisera (Cheesbrough, 2000a).

Widal test for investigating enteric fever

The Widal agglutination test was performed to examine Salmonella serotypes using O and H antigens of Salmonella typhi and Salmonella paratyphi A, B and C antigens as described by House et al. (2001). Before carrying out the test, the serum samples (n=50) were divided into two categories: group A, collected from culture-proven cases, and group B, from culture-negative cases. The Widal agglutination test (reagent kits from Plasmetec, UK) was performed in both groups according to the manufacturer's instructions. Briefly, each serum sample was diluted serially from 1:80 to 1:1280 with 0.85% NaCl in two rows of test tubes for the detection of O and H agglutination. Single drops of O and H antigens were added to the corresponding tubes, which were incubated at 37°C in a water bath for 18-24 h. The tubes were examined macroscopically and microscopically for the presence of agglutination. Partial or complete agglutination, with variable degrees of clearing of the supernatant fluid, was recorded as a positive result.

Statistical analysis

Data were analyzed using SPSS for Windows version 10.0 (SPSS Inc., Chicago, IL, USA). Prevalence and descriptive statistics were calculated. Considering the culture results as the standard method, the sensitivity and specificity of the Widal test were calculated using the following formulas: sensitivity = a/(a+c) and specificity = d/(d+b), where a is test positive and true culture positive, b is test positive and true culture negative, c is test negative and true culture positive, and d is test negative and true culture negative.
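As a quick check of the reported accuracy, the 2x2 counts implied by the text (12 Widal-positive of 30 culture-positive; 15 Widal-negative of 20 culture-negative) reproduce the stated values; the short R sketch below is illustrative only.

tp <- 12; fp <- 5; fn <- 18; tn <- 15   # from the study's culture/Widal cross-tabulation
sensitivity <- tp / (tp + fn)           # a/(a+c) = 12/30 = 0.40
specificity <- tn / (tn + fp)           # d/(d+b) = 15/20 = 0.75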
RESULTS

Of the 203 subjects whose urine and stool samples were screened for the presence of Schistosoma eggs, 50 (24.6%) were found to be infected with schistosomiasis. The majority of the positive cases were students (n = 46), followed by farmers (n = 3) and an employee (n = 1). Of the 203 screened subjects, 42 (20.7%) cases were caused by S. haematobium, and 8 (3.9%) cases were due to S. mansoni infection.

Distribution of Salmonella serotypes among schistosomiasis patients

A total of 50 urine and stool samples were cultured for the presence of Salmonella organisms. Of these, 30 (60%) samples yielded positive results for different serotypes of Salmonella and were considered true positives for the presence of enteric fever. The most common Salmonella serotype isolated from schistosomiasis patients was S. typhi (63.3%; 19/30), followed by S. paratyphi A and B (16.7%; 5/30, each) and S. paratyphi C (3.3%; 1/30) (Table 1).

Evaluation of the Widal agglutination test

Table 2 summarizes the culture and serological results obtained from the schistosomiasis patients. Based on the culture results (n = 30) as the diagnostic method for detecting the presence of enteric fever, the Widal test was found to be positive in 12 cases (group 1), with a sensitivity of 40% (12/30) and a specificity of 75% (15/20). Of the 12 Widal-positive cases, a titer of 1:160 was detected in seven (58.3%) samples, a titer of 1:320 in four (33.3%) samples, and a titer of 1:640 in one (8.3%) sample. Among the 20 culture-negative cases (group 2), four (20%) samples gave an anti-Salmonella antibody titer of 1:80, whereas a titer of 1:160 was detected in one (5%) sample (Figure 1). These findings indicate that a titer of 1:160 or higher for both O and H agglutinins is a diagnostic titer for detecting the presence of enteric fever.

DISCUSSION

In our setting, we found that 60% of schistosomiasis patients carried typhoid and paratyphoid Salmonella. The presence of Salmonella organisms in schistosomiasis patients has been reported in other studies (Tuazon et al., 1985; Barnhill et al., 2011). Furthermore, Schistosoma-Salmonella interactions are seen in all species of Schistosoma, notably S. haematobium, S. mansoni, S. intercalatum and S. japonicum (Gendrel, 1993). This association may play an important role in persistent or delayed Salmonella infections (Bouree et al., 2002). In an earlier study, Gendrel et al. (1986) reported that Salmonella infection was clinically prolonged by bilharziasis in 1 out of 3 patients. This could be explained by a decreased host immune response following schistosomiasis (Bouree et al., 2002). Therefore, the bacterium-host-parasite interaction may in part explain why Salmonella infection and schistosomiasis frequently occur together clinically and present a difficult therapeutic problem (Young et al., 1973). Such infections need to be treated concomitantly (Gendrel et al., 1986). Culture of clinical specimens remains the most accurate diagnostic procedure for isolating the causative organisms of suspected enteric fever (Chart et al., 2007; Wain and Hosoglu, 2008). In our setting, among the 50 schistosomiasis patients, positive culture results for different types of typhoid and paratyphoid Salmonella were recorded in 60% (30/50) of cases (Table 1). We found that S. typhi was the most frequent isolate, representing 63.3% of the isolates. An equal isolation rate was recorded for S. paratyphi A and B (16.7% each), and one (3.3%) isolate was found to be S. paratyphi C. These findings indicate that typhoid fever is more frequent in schistosomiasis patients than paratyphoid fever. Similar findings have been reported earlier among Sudanese patients (Salih et al., 1977). Other studies have reported different serotypes of Salmonella among the general population rather than among schistosomiasis patients. Shetty et al. (2012) reported that out of 103 Salmonella isolates, 85 (82.52%) were S. typhi, 16 (15.53%) were Salmonella paratyphi A and two (1.94%) were Salmonella paratyphi B. On the contrary, an isolation rate of S. paratyphi A 1.5 times higher than that of S. typhi has been reported by others (Palit et al., 2006).
In the present study, among the 30 culture-proven cases, 40% yielded significant Widal agglutination reactions. This level is similar to that recorded in Turkey (Hosoglu et al., 2008), but lower than that reported in Pakistan, where the Widal test was positive in 73.68% of culture-positive cases of enteric fever (Khoharo, 2011). Nevertheless, while the Widal agglutination test has been widely used in many developing countries for diagnosing enteric fever, it has low sensitivity and specificity, which vary between geographical areas (House et al., 2001; Omuse et al., 2010). Considering culture methods as the gold standard for the diagnosis of enteric fever, we determined the reliability of the Widal test. We found that its sensitivity was 40%, with a specificity of 75%. This is in line with the results obtained in Bangladesh, where the Widal agglutination test yielded a sensitivity of 42.85% and a specificity of 85.0% (Begum et al., 2009). Likewise, many studies have evaluated the efficacy of the Widal agglutination test (Wain et al., 2008; Ley et al., 2010). The sharing of O and H antigens by other Salmonella serotypes and members of the Enterobacteriaceae makes the role of the Widal test even more controversial in diagnosing typhoid fever (Hosoglu et al., 2008). In this study, our findings indicated that the Widal test has low sensitivity and specificity; hence the need for alternative methods to improve the laboratory diagnosis of enteric fever.

The interpretation of the Widal agglutination test is problematic, with a great number of articles reporting different diagnostic cut-off values (Wain and Hosoglu, 2008). Since there are no current data available regarding baseline titers of the Widal test among schistosomiasis patients in Sudan, this study was undertaken to compile the baseline titers for this specific population. A Widal agglutination titer of 1:160 or more was represented among the culture-proven cases. These findings confirm that a titer of 1:160 or more is a diagnostic titer for enteric fever among schistosomiasis patients. In a previous study among the healthy population in Sudan, el-Shafie et al. (1991) reported that a titer above 1:320 suggests the diagnosis of S. typhi, and 1:160 for both S. paratyphi B and S. paratyphi A. Regardless of schistosomal infections, different cut-off values of the Widal test have been recorded as diagnostic titers for typhoid and paratyphoid fever in other studies (Ley et al., 2010; Omuse et al., 2010). Therefore, in order to use the Widal test effectively, each endemic area should determine the appropriate titer for the diagnosis of typhoid and paratyphoid Salmonella (Willke et al., 2002).

Conclusion

The study concludes that in schistosomiasis-endemic areas there is a direct relationship between Schistosoma and Salmonella infection, which calls for routine screening for typhoid and paratyphoid fever among schistosomiasis patients. In our setting, S. typhi was found to be the most common Salmonella organism causing this syndrome (63.3%). Bacteriological techniques are more sensitive and accurate than the serological test in diagnosing Salmonella infection in schistosomiasis patients. Regardless of the low sensitivity of the Widal test, a titer of 1:160 or more is a diagnostic cut-off value for enteric fever in this study group.

Figure 1. Widal agglutination titers determined from culture-positive and culture-negative results of enteric fever among schistosomiasis patients (n=50) in Sudan.
Table 1. Distribution of Salmonella serotypes among schistosomiasis patients in an endemic area in Sudan.

Table 2. Comparison between the culture method and the Widal test in the diagnosis of typhoid fever.
v3-fos-license
2014-10-01T00:00:00.000Z
2011-01-12T00:00:00.000
3176937
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ij-healthgeographics.biomedcentral.com/track/pdf/10.1186/1476-072X-10-5", "pdf_hash": "64848926bf9c225fe4a2a72928b3434625921485", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42770", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "sha1": "64848926bf9c225fe4a2a72928b3434625921485", "year": 2011 }
pes2o/s2orc
Cardiovascular disease occurrence in two close but different social environments

Background
Cardiovascular diseases are estimated to be the leading cause of death and loss of disability-adjusted life years globally. Conventional risk factors for cardiovascular diseases only partly account for the social gradient. The purpose of this study was to compare the occurrence of the most frequent cardiovascular diseases and cardiovascular mortality in two close cities, the Twin cities.

Methods
We focused on the total population in two neighbouring, equally sized cities with around 135 000 inhabitants each. These twin cities represent two different social environments in the same Swedish county. According to their social history, they could be labelled a "blue-collar" and a "white-collar" city. Morbidity data for the two cities were derived from an administrative health care register based on medical records, with diagnoses assigned by the physicians at both hospitals and in primary care. The morbidity data presented are cumulative incidence rates, and the data on mortality for ischemic heart diseases are based on official Swedish statistics.

Results
The cumulative incidence of different cardiovascular diagnoses for younger and elderly men and women revealed significant differences for the studied cardiovascular diagnoses. The occurrence rates were in all respects highest in the population of the "blue-collar" twin city, for both sexes.

Conclusions
This study revealed significant differences in the risk of cardiovascular morbidity and mortality between the populations of the studied social environments. These differences seem to be profound and stable over time, and thereby give reason for public health policy to initiate a community intervention program in the "blue-collar" twin city.

Background
In Sweden, mortality from cardiovascular diseases (CVD) gradually increased from the beginning of the 20th century until the 1960s, reached a plateau during the 1970s, and from the beginning of the 1980s decreased by 50% over the following two decades for both men and women, in Sweden as in other industrialised countries [1,2]. This decline in cardiovascular mortality can be explained by two factors: the risk of developing myocardial infarction has decreased due to better lifestyle, and the chance of surviving a heart attack has increased [3-5]. Both the incidence of and mortality due to CHD are today significantly higher in subjects with low socio-economic status [6]. In many industrial countries there is a clear trend of widening social class differences. Increased socio-economic differences in the risk of myocardial infarction have been reported over the last decade in Sweden [7]. Conventional risk factors for cardiovascular diseases (CVD) such as high blood pressure and smoking only partly account for the social gradient in CVD that has been widely demonstrated [6,8]. Studies in developed countries show that low income is associated with a higher incidence of coronary heart disease and higher mortality after a heart attack [9]. The socio-economic differences in cardiovascular disease in Sweden are still large, but compared to fifteen years ago the social differences in terms of educational level and CVD risks are slightly smaller [10]. Like most western societies, Sweden is becoming a more multicultural society with a mix of natives and first- or second-generation immigrants, which also has implications for the occurrence of cardiovascular diseases in the population.
A Swedish study has found that immigrants have an increased incidence of acute myocardial infarction compared to natives. Immigrants also have a higher incidence of myocardial infarction, including non-fatal as well as fatal out-of-hospital cases, than Swedish-born people after adjustment for age and socioeconomic group [11].

To better understand disease occurrence, treatment, health outcomes, and prevention, it has for centuries been evident in medicine to also include different social determinants [12]. The role and importance of the social and physical environment for health, the effect of place, and the question of whether we should focus on places or people are still relevant matters for public health science [13-16]. A general conclusion on this issue is that who you are and how you live your life, as well as the place where you live your life, are of importance for health [17].

By tradition there are two perspectives for studying determinants of health in public health and social medicine: one focuses on "upstream" factors and the other on "downstream" factors. The emphasis in the upstream approach is on social, ethnic, cultural and economic factors in the community. According to this perspective, individual differences in lifestyles and living conditions are not the only important factors in explaining health differences between individuals; social, ethnic, cultural and economic factors in the community will also affect public health and generate health differences between individuals [18,19]. One phenomenon that has been noticed is that a defined population in one geographical area can display a particular distribution of risk factors that differs markedly from that found in a similar population in another geographical area [20,21]. The downstream perspective, which is predominant in health care research and clinical practice, focuses on the individual and on individual living conditions and lifestyles. According to this downstream perspective, disease occurrence in an individual must be either related to environmental exposure or genetically inherited. However, in public health research, it is not unusual to combine these two perspectives [22,23].

Neighbourhood of residence has been suggested to affect cardiovascular risk above and beyond personal socio-economic status [24,25]. In a multilevel analysis it was found that neighbourhood income predicted individual systolic blood pressure, but it was concluded that both individual and neighbourhood socio-economic status and race were linked to cardiovascular risk disparities as early as adolescence [26,27]. A Swedish study has reported that structural economic conditions in neighbourhoods do matter with regard to adult social class inequalities [28]. The effect is not restricted to lower social strata, but as individuals in lower social strata more often live in disadvantaged contexts, and also seem to be more vulnerable to the effects of these contexts, it was concluded that economic segregation creates neighbourhood contexts that contribute to social class inequality in the incidence of myocardial infarction [29]. In order to address an upstream perspective on cardiovascular disease occurrence in the population, we conducted a study focusing on the populations of two comparable and equally sized Swedish cities, the Twin cities.

Study aims
The aim of this study was to compare the occurrence of the most frequent cardiovascular diseases and cardiovascular mortality in two close but different social environments, the Twin cities.
Study population
A description of some indicators underpinning the differences in the social environments of the two twin cities is given in Table 1. The two cities, a "blue-collar" city (Norrköping) and a "white-collar" city (Linköping), are located only 25 miles apart in the same county in the southeast of Sweden. Both cities are served by the same health care organization; the same County Council is responsible for all publicly funded health care in the two cities. The private health care sector is only a marginal phenomenon in the region. At the time of the study, each of the cities had around 135 000 inhabitants and almost the same age distribution. Today the cities are quite similar, but their social history differs. We therefore generalize the urban identity of the cities based on their history. The term "white-collar" is generally used to refer to work that does not require strenuous physical labor. The term "blue-collar", conversely, refers to workers whose work requires manual labor. We use the terms to describe the two different social environments and geographical areas: a "blue-collar" community is mostly inhabited by manual laborers, while a "white-collar" community contains mostly administrators.

During the 17th century, the "blue-collar" twin city received its rights to foreign trade and an international port. This was the beginning of a development that made the "blue-collar" twin city a centre for the textile industry, "the Manchester of Sweden". Over time, the textile industry became more mechanised and the number of unskilled workers increased in the "blue-collar" city. Only in recent decades have non-manual occupations become dominant in the "blue-collar" city, and it has also become a university town in the last decade. Meanwhile, the "white-collar" twin city was a regional and agricultural centre dominated by the quiet life of pupils from the cathedral school. While the industrial revolution went on in the "blue-collar" city, the "white-collar" city remained a rural market town, ruled by both the church and the state. For a long time there has also been a significant military presence, with several regiments. After World War II the "white-collar" city grew in terms of population and became industrialized, mainly due to the expansion of the aviation industry. Today, high-technology companies and a university dominate the "white-collar" city. Although these twin cities today have an extrinsic resemblance in terms of culture, environment and climate, there are tendencies toward differences in public health between them, which are today manifested in public health indicators such as life expectancy, prevalence of cardiovascular diseases, sick leave, perceived health and lifestyle factors [30].

There is a long tradition, especially in the Scandinavian countries, of documenting diseases in registers. In Sweden each inhabitant is assigned a unique personal code based on birth date and gender [31]. By law, the county councils in Sweden are required to report inpatient data to the Swedish Hospital Discharge Register on an annual basis, but national registration of outpatient data has not been implemented [31]. These health care databases offer administrative data, but they may also be suitable for population-based epidemiological studies. In one Swedish region (Östergötland County Council), patient data from Primary Health Care (PHC), outpatient hospital care, and hospital care has for some years been recorded in a shared computerized population-based administrative Health Care Register (HCR).
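As an illustration of how first-ever cases can be identified in such a register, a small R sketch follows; the data frame `hcr` and its columns (id, icd10, date) are assumptions for illustration, not the actual HCR layout.

mi <- hcr[hcr$icd10 == "I21", ]        # all recorded myocardial infarction contacts
mi <- mi[order(mi$id, mi$date), ]      # sort by personal code and date
first_mi <- mi[!duplicated(mi$id), ]   # keep each person's first recorded case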
We conducted a retrospective register study and used this regional health care database to identify the most common cardiovascular diseases diagnosed and documented in medical records from both hospital care and primary care.

Data
The administrative Health Care Register (HCR) used in this study is based on computerised data files linked by a unique personal code to the birth date and gender of all inhabitants in the region. The same personal code is used for all visits and diagnoses in the HCR. The International Classification of Diseases, 10th revision (ICD-10), codes for the cardiovascular disorders presented in this study were: E78 (high cholesterol), I10 (hypertension), I20.9 (angina pectoris), I21 (myocardial infarction), I25 (ischemic heart disease), I50 (cardiac insufficiency), I61.9 (cerebral haemorrhage), and I63 (stroke). Physicians in hospitals as well as in primary care assigned these diagnostic codes. The morbidity data presented are cumulative incidence rates. The numerator for the cumulative incidence rates was the number of first diagnosed cases identified during the six-year study period (2002-2007). The denominator was the number of inhabitants, calculated as the mean number in each age group of the population during the six-year study period. The cumulative incidence rates are presented as numbers per 1000 inhabitants. Swedish national data on mortality for ischemic heart diseases (mainly myocardial infarction) in the largest Swedish cities (all cities with over or around 100 000 inhabitants) were also included. These data represent mortality in the population aged 15 and above in each city between 2002 and 2006 and are age-standardised against the national Swedish population [32]. The historical data on mortality from cardiovascular diseases in the twin cities are based on national mortality statistics from Statistics Sweden. Data were stored in a shared database and statistically analysed using SPSS 17.0 (SPSS Inc., Chicago, IL, USA). Risk Ratios (RR) and 95% confidence intervals (CI) were calculated based on crude data with the "white-collar" city as the reference group. A p-value of p < 0.05 was considered statistically significant.
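The rate and risk-ratio calculations described above can be sketched as follows in R (the study used SPSS); the counts are illustrative placeholders, not the study's data.

cases_b <- 300; pop_b <- 15000   # first-ever cases / mean population, "blue-collar" city
cases_w <- 200; pop_w <- 15000   # same stratum, "white-collar" city (reference)

ci_b <- 1000 * cases_b / pop_b   # cumulative incidence per 1000 inhabitants
ci_w <- 1000 * cases_w / pop_w

rr   <- (cases_b / pop_b) / (cases_w / pop_w)            # crude risk ratio
se   <- sqrt(1/cases_b - 1/pop_b + 1/cases_w - 1/pop_w)  # SE of log(RR)
ci95 <- exp(log(rr) + c(-1.96, 1.96) * se)               # 95% confidence interval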
Cardiovascular mortality
Recent Swedish national data (2002-2006) on mortality from ischemic heart diseases in males and females living in the largest Swedish cities are presented in Figure 1. Data from all Swedish cities with around 100 000 inhabitants or more were chosen for this comparison. In general, mortality rates were higher in males than in females. The "blue-collar" twin city had the highest mortality rates from ischemic heart diseases in both males and females compared to all other Swedish cities of comparable size. The "white-collar" twin city, in contrast, had lower mortality rates than the national mean in this respect, in both females and males. Historical data on cardiovascular mortality for the twin cities are presented in Figure 2. A higher rate of coronary heart mortality is evident for the population in the "blue-collar" twin city from the 1920s until recent decades compared with the "white-collar" twin city.

Figure 2. Historical data of mortality rates from coronary heart diseases in the population of the "white-collar" twin city and the "blue-collar" twin city (Swedish national mean index = 100).

Table 1. Indicators of the social environment in the "white-collar" and "blue-collar" twin city.

Cardiovascular morbidity
The cumulative incidence of different cardiovascular diagnoses in younger men and women (aged 45-64 years) is presented in Tables 2 and 3. The comparison between the twin cities revealed significant differences in all studied diagnoses, except for high cholesterol levels. The relative rates were higher for both sexes in the "blue-collar" twin city population. The cumulative incidence of different cardiovascular diagnoses in elderly men and women (aged 65-79 years) is presented in Tables 4 and 5. In addition, significantly higher relative rates were revealed among the elderly with regard to the studied cardiovascular diagnoses in the "blue-collar" twin city compared with the "white-collar" twin city. One exception was the diagnosis of high cholesterol levels, which was significantly more common among both males and females in the "white-collar" twin city. No difference in occurrence among elderly women was seen between the cities regarding myocardial infarction and stroke. Significance levels in the tables: *p = 0.05, **p = 0.001, ***p < 0.0001.

Discussion
This study revealed that the risk of cardiovascular morbidity and mortality differs substantially between the populations of a "blue-collar" and a "white-collar" city. These differences seem to be profound and stable over time. The two studied cities are quite comparable and are referred to as twin cities, although historically they represent different social environments, a "blue-collar" and a "white-collar" city. With this study design we wanted to uncover the relative importance of the social environment for cardiovascular risks in a defined population.

The strength of this study is that it is based on computerised data files (HCR) of all inhabitants, linked by birth date and gender. The same personal code is used for all visits and diagnoses in the HCR. An individual can thus be followed retrospectively or prospectively through the health care system. The health care institution where the patient was diagnosed represents all health care levels: primary care, outpatient hospital care, and/or inpatient hospital care. A general limitation of register data is that misclassifications do occur, including cases that are not recorded because they are overlooked or given incorrect clinical codes. There could also potentially be some referral bias. However, according to The National Board of Health and Welfare, analysis of the quality and content of the Swedish Hospital Discharge Register indicates up to 98% coverage of main diagnostic codes in inpatient care in this region [31]. Validation of the HCR and other comparable administrative health care data registers has shown high specificity in registers covering all types of health care [33-35]. Therefore it can be assumed that HCR data will show high specificity. When comparing geographically close cities, one could always assume that there is some migration between the cities. However, it is unlikely that this population migration is health-related, so it should not introduce a health selection bias into the study. The diagnosis comparisons between the two cities were not age-adjusted but stratified by age. The age strata might appear broad (45-64; 65-79 years), and age is a strong predictor of cardiovascular diseases. However, the age distribution in these two cities is quite similar, leaving only a minimal risk of confounding by age.
A close connection between socio-economic factors and cardiovascular morbidity has long been known [36]. Low socio-economic status is associated with an increased risk of cardiovascular disease (CVD) in both men and women [6,8,19]. This was confirmed in a study in northern Sweden, where both higher incidence and higher mortality from myocardial infarction and stroke were found amongst blue-collar workers compared with subjects in non-manual occupations [37]. Major differences in cardiovascular risk profiles have also been found between skilled and unskilled blue-collar workers [38].

Individual behaviour and lifestyle, and thereby socio-economic factors, are of importance when analysing cardiovascular risk determinants [26,39,40]. The highest attained educational level is a good proxy for socio-economic status, and educational level is the strongest component of the socio-economic concept [30]. Educational level reflects not only living conditions, but also attitudes and health behaviour in general. Differences in the educational levels of the populations in the studied twin cities might contribute to the difference in risk of cardiovascular disease. In a city with a larger share of highly educated people (the "white-collar" twin city) we will probably find different health behaviour, lifestyles, consumption patterns and dietary habits than in a city with a less educated population (the "blue-collar" twin city), thus leading to differences in the risk of cardiovascular disease between these cities.

Associations between neighbourhood socio-economic characteristics and hypertension risk have in other studies been found to be mediated by differences in body mass index [25]. Similarly, research suggests that people who have obese individuals among their closest friends also tend to have an increased risk of becoming obese over time [41]. Health behaviour tends to form in the micro-systems and close social relationships within a society. Societies with more educated people, even in the social micro-systems, thereby tend to exhibit different lifestyles and cardiovascular risks than communities with less educated people. Different social norms are developed in cultural groups and subgroups in a society. These norms include explicit as well as implicit rules for what attitudes and behaviours are appropriate in the specific group, and individuals who do not follow the rules might be excluded from the group. These mechanisms could explain why a community-based programme might be more successful than individually offered programmes [42]. In the long run, a community-based programme provides the possibility to influence the social norm defining the appropriate way to live in the specific group.

In general, the health situation with regard to morbidity, mortality and self-reported health in the "blue-collar" twin city is notably worse than in the "white-collar" twin city. These differences have not diminished over time from the late 1970s onwards. There are noticeable socio-economic and socio-cultural differences between the two studied cities that might contribute to the explanation of the cardiovascular differences found in this study. Although the traditional industrial worker hardly exists today, not even in the "blue-collar" city, the older generation in particular still identifies with that role.
In many respects, the "blue-collar" twin city has gradually lost its leading role in the region during recent decades in terms of population, political weight, strategic investments etc. Widespread factory close-downs in the 1960s and 1970s resulted in high unemployment in the "blue-collar" twin city. In addition, relatively poor working conditions, as well as the powerlessness of people in such a situation, are detrimental to health. Until the last 10-15 years the "blue-collar" twin city was "a town of sorrow", with an old structure and a history of closed factories. All these factors might have an impact on the risk of cardiovascular disease in the population. However, trends that could have positive effects on health have also emerged in the last decade, since a university campus has been established in the "blue-collar" twin city. Today, many social structures in the "blue-collar" twin city are approaching those of the neighbouring "white-collar" twin city, but the public health differences seem to prevail.

The "white-collar" twin city has experienced strong economic and population growth throughout the 20th century, especially in the post-war period. This is largely due to the establishment of an aerospace industry with guaranteed governmental orders for airplane production, which created a demand for engineers and thereby established the "white-collar" city as a city of high technology. This city managed to turn around a precarious situation in the 1960s, mainly through the establishment of the Technical University College and eventually a fully-fledged university. A successful collaboration between the university and the county council also led to the establishment of a medical faculty in the 1980s. The economic and political success of a city is probably a health-promoting factor in itself. A differentiated industry leading to a diverse labour market in the "white-collar" municipality is certainly even more important. The transformation from an industrial society to a technological information society was faster than in the neighbouring "blue-collar" twin city. However, the other side of the coin reveals a rising income gap and increasing ethnic segregation in the "white-collar" twin city in the last decade.

It is possible to change the risk of cardiovascular disease not only at the individual level but also in a population and in a whole community. There are good international examples of how community-based intervention programmes can actually reduce the risk of cardiovascular disease in a community; since the early 1970s, such programmes have been shown to be effective in the field of cardiovascular disease prevention. A major intervention programme, the North Karelia project, was launched in eastern Finland, where morbidity and mortality were among the highest in Europe [43,44]. The idea was to change the general risk-related ways of living of the whole population through community-based actions that included health education and medical check-ups, as well as work with a range of organisations, such as local non-governmental organisations, political decision-makers and the private industrial sector. The results of the project are quite positive but must be evaluated against the background of a general decline in cardiovascular diseases in society. However, these projects did not take into account the voices of ordinary people living in the areas covered by the prevention programmes.
The ways in which people who suffer from heart disease understand and make sense of their illness have been reported in medical anthropological research [45]. Another good preventive example is the "Västerbotten Intervention Program", aiming to reduce cardiovascular risks in northern Sweden [46-48]. Using an upstream perspective, through collaboration between politicians, healthcare providers, primary care and the public, a stable community-based intervention programme was launched in the mid-1980s. The results are very encouraging, as a steady reduction of cardiovascular morbidity and mortality in this population has been documented [49,50].

Conclusions
There are profound differences in cardiovascular morbidity and mortality between the studied twin cities, which represent two geographically close but socially different environments. These differences seem not to diminish but to remain stable over time, although both cities have undergone social and economic changes from industrial to post-industrial cities. Health behaviours and lifestyles seem to persist in both of these social environments, casting a shadow of higher cardiovascular risk over the population of the "blue-collar" city. These data have strong public health implications, supporting an upstream approach to initiating a long-term community intervention programme in the "blue-collar" twin city.
ACUTE EFFECT OF DIFFERENT ORDERS OF CONCURRENT TRAINING ON GLYCEMIA

The present study verified the effect of a concurrent training (CT) session in different orders, Strength + Endurance (SE) and Endurance + Strength (ES), on glycemic control. The crossover study included 20 young men (21.80 ± 2.90 years, BMI ≥ 23 kg/m², 24.83 ± 3.68% body fat) who performed both CT sessions separated by 72 h. Capillary glycemia was measured at pre-exercise, immediately after the end of each exercise session, and during the recovery period at 30, 60, and 90 minutes. Comparisons were performed using two-way ANOVA (order and time) and a paired t-test for the area under the curve, as well as Cohen's d effect size. There was an effect of exercise order (F = 5.973; p = 0.03), an effect of time (F = 18.345; p = 0.001) and an interaction between order and time (F = 2.835; p = 0.03). The area under the curve was significantly reduced in SE (p = 0.03; effect size = 0.51, moderate). The area under the curve was smaller in SE, as were glucose concentrations at the end of exercise and 30 min post-exercise, suggesting better efficiency in glycemic control compared to ES.

Introduction
Non-communicable chronic diseases (NCDs), such as obesity, diabetes mellitus (DM), systemic arterial hypertension, dyslipidemia, and cardiovascular diseases, represent a growing health problem and are directly associated with insulin resistance (Carvalheira and Saad 2006). Data from the literature point to a specific link between a body mass index (BMI) ≥ 23 kg/m² and insulin resistance as a risk factor for the development of diabetes mellitus (Chung et al. 2012; Okura et al. 2018). On the other hand, physical exercise has a prophylactic action against these alterations in glycemic metabolism (Sheri et al. 2016). However, the long-term results of training depend on the sequence of acute sessions, and thus, to promote the desired results, the effects of training must be well known (Pescatello et al. 2014). Regarding the type of exercise, it is well established in the literature that both aerobic and resistance exercises (Rodrigues et al. 2016) are considered a non-drug strategy to combat insulin resistance, aiding in glycemic control. The term concurrent training (CT) is used to refer to programs that systematically integrate strength (ST) and endurance training (ET) in a single session (Fyfe et al. 2014). In general, when comparing the effects of CT with one type of training in isolation, ET or ST, a phenomenon known as the interference effect is observed (Bell et al. 2000). Although studies point to interference in aerobic adaptations and performance-related muscular hypertrophy, there are still gaps in the literature on the influence of the order of stimuli on the acute response of glycemia, since altering exercise order can provoke different muscular, neural, and metabolic responses. Thus, the objective of the present study was to investigate the effects of a CT session performed in different orders on glycemic control in young adults with BMI ≥ 23 kg/m². It was hypothesized that glycemic responses would be influenced by the order of the exercises.

Design
The crossover study design made it possible to verify capillary glycemia levels after two different orders of CT sessions. Collections were performed at pre-exercise (rest), immediately after exercise (60 min) and at 30, 60 and 90 minutes of recovery.
Two session orders were adopted: session 1 - Strength + Endurance (SE) and session 2 - Endurance + Strength (ES), with a 72 h interval between sessions (Figure 1). The strength training consisted of 4 sets of repetitions to eccentric failure with 90-second intervals, at an intensity of 70% of 1RM, and the endurance training consisted of 30 minutes of continuous running on a treadmill at 70% of peak VO2. The data were collected between 4 and 6 p.m. To minimize the effect of diet, under the guidance of a nutritionist, all participants received 300 mL of whole milk blended with a banana and 50 grams of oat flakes, without added sugar, one hour before the beginning of the exercise session.

Participants
Twenty male subjects between the ages of 20 and 25 were recruited through written invitation and, after acceptance and clarification of any questions, all participants signed an informed consent form. Inclusion criteria were: no health restrictions or limitations identified by the Physical Activity Readiness Questionnaire (PAR-Q), having exercised for at least six months prior to the study, and a BMI ≥ 23 kg/m². The exclusion criterion was the use of hypoglycemic drugs. The study complied with the ethical rules for research in humans, with approval by the Ethics and Research Committee of the Catholic University Center Unisalesiano Lins - Plataforma Brasil (n° 2.753.688/2018).

Body fat
Skinfold measurements were taken at the following sites: triceps, abdominal and suprailiac. A scientific adipometer (Cescorf®) was used, and the determination of body density and its conversion into fat percentage were calculated following the guidelines and equations of Guedes (1994).

Capillary glycemia
Capillary glycemia was measured at the end of the 10-minute baseline period pre-exercise, immediately after the end of each 60-min session, and during the recovery period at 30, 60, and 90 minutes. An Accu-Chek Advantage® glucometer (Roche) was used, with accuracy proven against the International Organization for Standardization (ISO) 15197 standards of 2003 and 2013 (King et al. 2018).

Aerobic power
Participants wore a mask connected to a portable gas analyzer (METALYSER 3B - CORTEX) and performed a progressive motorized treadmill test (Imbramed Millenium Super ATL) until voluntary exhaustion. The incremental test began at a speed of 8 km/h with increments of 1 km/h every three minutes. Heart rate during each test was recorded every five seconds by a heart rate monitor (Polar S810i, Polar Electro OY, Finland). The highest VO2 obtained during the 30 seconds of each stage was considered the peak VO2, and the intensity of peak VO2 was the lowest intensity at which peak VO2 was obtained. If the intensity of peak VO2 was not sustained for at least 1 minute, the intensity of the previous stage was considered as the intensity of peak VO2 (Weltman et al. 1990).

Determination of the one-repetition maximum (1RM)
The maximum strength test (1RM) was performed for the following exercises, considering a classic split routine involving 4 consecutive workouts with 1 day of rest (Lin et al. 2012): bench press, incline bench press, dumbbell bench press, cable chest fly, lying triceps extension, close-grip bench press and parallel-bar triceps dips. Up to six attempts were allowed to identify the maximum weight the volunteer could lift in one repetition for each exercise, with a rest interval of two to five minutes.
The maximum load was considered as the final load at which the subject performed the movement with the appropriate pattern of execution. When the maximum load was not found within six attempts, a new test was performed 48 hours after the previous one (Brown and Weir 2001). First, the subjects performed a familiarization session to learn the procedures and to determine an "ideal" starting load for the maximal repetition test for each exercise. After 72 h, in a second session, the subjects performed the maximal repetition test for the determination of 1RM. This procedure makes it possible to minimize the error in the determination of 1RM (Nascimento et al. 2013).

Statistical analysis
The data were tested for normality using the Shapiro-Wilk test and for sphericity by Mauchly's test. Two-way repeated-measures analysis of variance (ANOVA), followed by Tukey's HSD post-hoc test, was used to analyze the measurements of capillary glycemia. A paired Student's t-test was performed to compare the number of repetitions between the different orders of strength training. For interpretation of the final data, the area under the curve (AUC) was calculated by the trapezoidal method and compared using the paired Student's t-test. The effect size (ES) was calculated as the mean standardized difference, considering the effect trivial (< 0.20), small (0.20-0.49), moderate (0.50-0.79) or large (> 0.80) (Cohen 1998). Data are expressed as mean and standard deviation. A value of p ≤ 0.05 was adopted as statistically significant. The data were analyzed with the Statistical Package for Social Sciences (SPSS), version 20.0 (IBM Corp, NY, USA).

Results
The general characteristics, anthropometric data, maximal aerobic capacity and maximum strength of the study participants are shown in Table 1. When comparing the ES and SE orders, the number of repetitions in each exercise presented no significant difference, which suggests that the muscular fatigue induced by the strength training was similar between the different orders (bench press p = 0.49, incline bench press p = 0.78, dumbbell bench press p = 0.56, cable chest fly p = 0.19, lying triceps extension p = 0.33, close-grip bench press p = 0.72 and parallel-bar triceps dips p = 0.46) (Figure 2). Two-way ANOVA (order vs time) showed an effect of order (F = 5.973; p = 0.03), an effect of time (F = 18.345; p = 0.001) and an interaction between order and time (F = 2.835; p = 0.03). The kinetics of capillary glycemia for the two orders are presented in Figure 3. At rest, the groups presented no differences (p = 0.25); however, a reduction in blood glucose concentrations was found at all collection times in the SE session, while in the ES session the hypoglycemic effect became significant only from 30 minutes after the session. A significant difference was observed between the groups immediately after exercise (F = 5.901; p = 0.03) and at 30 minutes of recovery (F = 8.992; p = 0.01), evidencing that the SE order caused a greater reduction in blood glucose concentrations immediately after exercise, which remained for a further 30 minutes of the recovery period, compared with the ES order. After 60 minutes of recovery there was no difference in glucose concentrations between the studied groups. Figure 4 presents the data regarding the calculation of the area under the glucose concentration curve for the two different orders of CT.
The SE exercise session presented a smaller area under the glucose curve (p = 0.03; effect size = 0.51, moderate).

Discussion
The results of the present study demonstrate that both orders of execution of the CT sessions had a significant effect on the reduction of glycemia concentrations over time. However, glucose concentrations were lower in the SE order, suggesting higher glucose uptake compared with the ES order. Maintenance of adequate concentrations of glucose in the blood is fundamental for the homeostasis and survival of the organism. At rest, changes in glycemic flux during fasting and postprandial periods are controlled within an appropriate range by physiological and hormonal factors (James and Mcfadden 2004). For glucose uptake at rest, after insulin binds to its receptor an intracellular cascade occurs, culminating in the translocation of type 4 glucose transporters (GLUT-4) (Pauli et al. 2010). During exercise, a glycemia-regulating effect may occur as a result of the stimulation caused by muscle contraction, which can increase glucose uptake by the muscle by up to 50-fold (Sylow et al. 2017). Glucose uptake during exercise occurs regardless of insulin level; it is driven by adenosine triphosphate (ATP) depletion, with an increased adenosine monophosphate (AMP)/ATP ratio and consequent AMP-activated protein kinase (AMPK) activation, calcium influx, increased nitric oxide, body temperature and blood flow to the muscle (Pereira et al. 2017). The main variables of exercise prescription, namely intensity, type, duration and progression, influence different body responses and adaptations (Pescatello et al. 2014). Intensity, duration, and type of exercise have been identified as important factors determining glucose uptake during and after an acute exercise session (Rohling et al. 2016; Pereira et al. 2017). In this way, research on the effects of the type of exercise plays a guiding role for prescription, since the effects can be antagonistic. Endurance (aerobic) exercise is characterized by a high frequency of movements and a small resistance exerted during each muscular contraction, while strength (resistance) exercise involves few repetitions and greater resistance against the movement; in spite of their different adaptations in the organism, strength exercise, endurance exercise, or both combined (concurrent) have been identified as efficient in improving glucose uptake and the response to insulin (Rohling et al. 2016). Data from the literature indicate that both endurance and strength exercises can be considered an effective non-drug strategy in the fight against insulin resistance, and an aid to glycemic control in individuals with type 2 diabetes (Van Dijk et al. 2011). In the present study, CT in two different orders promoted a reduction in glycemia at the end of the sessions. Our data corroborate another interesting study, which investigated the effect of 36 concurrent training sessions (strength + endurance) on glycemic behavior in people with type 2 diabetes. Although the authors observed a significant hypoglycemic effect in 27 of the 36 sessions performed, the effect was not sustained for 48 hours, and no significant cumulative effect was observed (Bacchi et al. 2012). In the present study, the order of the exercises influenced glucose uptake, with the SE sequence being more efficient than ES.
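As an aside, to make the AUC and effect-size computations described in the Statistical analysis section concrete, here is a minimal sketch. The glucose values and time points below are hypothetical placeholders that only mirror the measurement schedule (pre, end of the 60-min session, and 30/60/90 min of recovery); they are not the study's data.

import numpy as np

def auc_trapezoid(times_min, glucose_mg_dl):
    # Area under the glucose curve by the trapezoidal method (mg/dL x min).
    return float(np.trapz(glucose_mg_dl, times_min))

def cohens_d(x, y):
    # Mean standardized difference with a pooled standard deviation, interpreted
    # as trivial (<0.20), small (0.20-0.49), moderate (0.50-0.79) or large (>0.80).
    x, y = np.asarray(x, float), np.asarray(y, float)
    pooled = np.sqrt(((len(x) - 1) * x.var(ddof=1) + (len(y) - 1) * y.var(ddof=1))
                     / (len(x) + len(y) - 2))
    return float((x.mean() - y.mean()) / pooled)

t = [0, 60, 90, 120, 150]          # pre, end of 60-min session, +30, +60, +90 min
se_order = [92, 78, 75, 84, 88]    # hypothetical capillary glycemia, mg/dL
es_order = [93, 90, 82, 85, 89]
print(auc_trapezoid(t, se_order), auc_trapezoid(t, es_order))  # smaller AUC for SE
print(round(cohens_d(se_order, es_order), 2))                  # illustrative effect size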
Regarding molecular aspects, resistance exercise increases the activity of the mTORC1 (mammalian target of rapamycin complex 1) signaling pathway, causing an increase in protein synthesis, while moderate high-volume endurance exercise acts via AMPK (AMP-activated protein kinase), which serves as a key sensor of cellular energy (Ogasawara et al. 2014; Methenitis 2018). Based on these adaptations, studies in animal models have shown that the order of concurrent exercise can exert an influence, with greater phosphorylation of p70S6K (a marker of mTORC1 activity) when resistance exercise is performed after endurance exercise, and activation of AMPK when endurance exercise is performed after resistance exercise, with a concomitant reduction in the activity of the protein kinase B - mammalian target of rapamycin (AKT-mTOR) pathway (Ogasawara et al. 2014; Methenitis 2018). Thus, considering the results found in the present study, it is important to emphasize that the original hypothesis was verified. Few studies have previously addressed the behavior of glycemia under different orders of exercise; the majority of studies consider chronic rather than acute effects. The exercises most commonly used in research on acute glycemic control have an aerobic characteristic, such as running or swimming. Considering these reports, it is clear that both aerobic and resistance training contribute to the improvement in insulin sensitivity by restoring the action of important intracellular molecules in glucose uptake, since aerobic training performed after resistance exercise can cause a more pronounced drop in capillary blood glucose than aerobic exercise alone. However, there are few studies reporting the molecular actions and energetic pathways involved in the different orders of execution in CT. Therefore, it is difficult to speculate on the specific reason why glucose presents a more pronounced drop when resistance training is followed by aerobic exercise. Further studies are needed in this area to better understand glycemic behavior in relation to the order of execution in different populations. The present study had some limitations, such as sampling by a convenience criterion and using BMI as the only anthropometric inclusion measure. Thus, the selection of volunteers was restricted to a gym that received overweight and obese people. Blood glucose monitoring was not performed by the continuous monitoring method; instead, a conventional device was used for glucose monitoring through the electrochemical method of capillary blood glucose.

Conclusions
In conclusion, both orders of execution of the concurrent training sessions had a significant effect on the reduction of glycemia concentrations. However, the SE order was more efficient in glucose uptake when compared with the ES order of execution, showing greater effectiveness in glycemic control and confirming the initial hypothesis. Further studies are needed to better understand the relationship between CT and glycemia, and to examine different variables such as age and the molecular aspects involved.
Puffy: A Mobile Inflatable Interactive Companion for Children with Neurodevelopmental Disorder

Puffy is a robotic companion that has been designed in cooperation with a team of therapists and special educators as a learning & play companion for children with Neurodevelopmental Disorder (NDD). Puffy has a combination of features that support multisensory stimuli and multimodal interaction and make this robot unique with respect to existing robotic devices used for children with NDD. The egg-shaped body of Puffy is inflatable, soft, and mobile. Puffy can interpret the child's gestures and movements, facial expressions and emotions; it communicates with the child using voice, lights and projections embedded in its body, as well as movements in space. The paper discusses the principles and requirements underlying the design of Puffy. They take into account the characteristics of NDD and the special needs of children with disorders in the NDD spectrum, and provide guidelines for designers and developers who work in socially assistive robotics for this target group. We also compare Puffy against 21 existing commercial or research robots that have been used with NDD children, and briefly report a preliminary evaluation of our robot.

INTRODUCTION
Neurodevelopmental Disorder (NDD) is an umbrella term for a group of disabilities with onset in the developmental period that vary from specific limitations of learning and control of executive functions to deficits in social skills and intelligence, affecting personal, social, academic or occupational functioning [1]. Several researchers highlight the importance of early interventions to mitigate the effects of NDD on the person's life and pinpoint how interactive technology can help in this respect. In particular, the use of interactive robots has proven promising in this arena, and a number of studies have investigated the potential of these technologies (particularly for subjects with autism) to help children with NDD develop cognitive, motor, and social skills. Still, given the wide range of NDD conditions and the specific characteristics of each single subject, there is space for new exploratory experiences involving robotic interaction to identify new forms of therapeutic and educational interventions for this target group. The paper presents an innovative robotic companion called Puffy that has been designed as a learning & play tool for children with different forms of NDD. Puffy is the latest of a set of robotic companions that we have developed for this target group in cooperation with a team of NDD specialists and evaluated in different therapeutic and educational contexts [9][10][19]. Puffy engages users in free play and in game-based goal-oriented tasks, e.g., to promote imagination, communication, memory, or space and body awareness. Puffy has a combination of features that makes it unique with respect to existing robotic devices for children with NDD. It is mobile and has an egg-shaped inflatable soft body. It supports multimodal interaction, interpreting manipulations of its body as well as user gestures, movements, facial expressions and emotions. It provides multisensory stimuli through voice, lights and projections embedded in its body, and movements in space.
After a review of the relevant literature (section 2), we describe some general requirements for the design of robots devoted to children with NDD (section 3) that offer guidelines for designers and developers who work in socially assistive robotics for this target group. The requirements take into account the characteristics of NDD and are grounded in the state of the art, the lessons learned from our previous projects, and the feedback from the NDD specialists who have collaborated in the development of Puffy. We then discuss the design of our robot (section 4) and how it can be used in educational and therapeutic contexts (section 5). After presenting the enabling technology (section 6), we evaluate Puffy from two perspectives. In section 7 we compare Puffy against 21 existing robots used for subjects with NDD and highlight the originality and completeness of our robot. This analysis exploits an evaluation schema that is based on the requirements discussed in section 3 and can be used as a benchmarking framework for comparing future robots in this field. In section 8 we report a preliminary evaluation of Puffy involving NDD specialists and two children with NDD. Section 9 draws the conclusions and depicts the future directions of our research.

RELATED WORK
The review in this section focuses on Socially Assistive Robotics (SAR) and inflatable robots in relationship to NDD. While there is a wide number of studies that explore the benefits of SAR in general, and of some specific SAR features in particular, for subjects with NDD, the application of inflatable robots to this target group is unexplored.

2.1 SAR and NDD
Socially assistive robots are characterized by the capability of communicating and interacting with users in a social and engaging manner [25][68]. In recent years, many researchers have investigated their application for NDD subjects, mainly considering children with ASD - Autism Spectrum Disorder (e.g., [15][16][21][22][23][39][47][54][60][64][68]). In contrast to other devices such as computers, tablets, and smartphones, socially assistive robots can play, learn and engage with children in the real world physically, socially and emotionally, and offer unique opportunities for guided, personalized, and controlled social interaction for specific learning purposes [67]. Many of the socially assistive robots used in NDD therapy are remotely controlled by caregivers [39][40][64]. Autonomous behavior is implemented only in a few cases, to support a single type of task and to achieve a specific therapeutic goal [59], such as imitation, communication or question answering. Autonomous socially assistive robots have been used successfully to attract attention, stimulate imitation, and improve the communication, socialization, and behavioral skills [9][38] needed for independent living.

2.2 Socially Assistive Robots: Body Shape and Mobility
Several studies explore the shape and movement capability of robots in relationship to NDD. Different shapes have been explored, from abstract ones to cartoon-like figures, simplified humanoids, or realistic human-like faces [15]. For example, the shape of Teo [9], a robot designed specifically for children with autism, resembles the popular cartoon characters of the Minions or Barbapapà. Results from several studies indicate that individuals with NDD show a preference for agents that are clearly "artificial" over agents that have human-inspired characteristics [20][58].
The research reported in [6] and [51] shows that subjects with NDD may respond faster when cued by robotic movement than by human movement, and some socially assistive robots used in NDD therapy can move body parts [39][60][65][66]. Mobility in the physical environment offers opportunities to engage children in space-related tasks. IROMEC [59] and Labo-1 [21], for example, are mobile differential-drive robots, but their movements are slow and clumsy, so children may lose attention during the interaction. QUEBall [63] has a simple spherical morphology of relatively small dimensions and rolls while moving. It provides multiple visual and sound effects to encourage the child to play but, to enable rolling, it offers limited tactile affordances and stimuli. Teo [10] includes a holonomic (omnidirectional) base that enables it to move at a speed of up to 1.2 m/sec in any direction, which resembles the mobility capability of human beings. Teo supports space-related game tasks involving "joint" (robot + child) movements in the space, which exploit the infrared and sonar distance sensors embedded in the robot body and an external depth sensor (Kinect).

2.3 Socially Assistive Robots: Emotional Features
Several SAR systems exploit emotional features that seem to benefit children with NDD. Keepon [39][40] is a creature-like robot that is only capable of expressing its attention (directing its gaze) and emotions (pleasure and excitement). Empirical studies with autistic children showed that the robot triggered a sense of curiosity and security; the subjects spontaneously approached Keepon and engaged in dyadic interaction with it, which then extended to triadic interactions where they exchanged with adult caregivers the pleasure and surprise they found in Keepon. KISMET [12] is an emotional robot which possesses eyebrows, ears, and a mouth and expresses emotions depending on the way a human interacts with the robot. The robot's emotional behavior is designed to generate, for the robot-human dyad, a social interaction analogous to that of an infant-caretaker dyad. Teo [10] supports users' emotional manifestation through the personalization of the robot. It is equipped with a set of detachable pieces like eyes, eyelids, or mouths that can be attached to its body and enable children to create facial expressions. In addition, as Teo's sensorized body can distinguish among caresses, hugs, and two levels of violent punches or slaps, the robot can react emotionally to different manipulations, using light, sound, vibrations, and movement to express different emotional states - happiness, anger, or fear.

2.4 Inflatable Robots
Inflatable robots [43][52][69] have recently attracted the interest of the research community as they have several advantages over rigid robots. A soft, lightweight, inflatable body contributes to increased safety and robustness. It is less likely to cause harm to humans during interaction and works as a shock absorber in case of an accidental collision or an unexpected fall [4], protecting embedded sensors and devices from potential damage. The use of inflatable robots has been investigated in some critical environments such as disaster relief and field exploration [43]. To our knowledge, their application to children's learning and play is unexplored.
Still, inflatable robots have potential for children with NDD, as they meet the required robustness and safety constraints and their deformable structures offer manipulation experiences that can be particularly engaging.

DESIGN REQUIREMENTS
The requirements (Rs) that informed the design of Puffy are grounded in several design guidelines available in the literature about robots for autistic children [15], [31], [47]. Starting from the analysis of the state of the art, existing design principles have been filtered, revised, and enriched to address the broader target of children with NDD. We also capitalize on our own prior experience in assistive technology for children with NDD [9][10][19][71][72][73] and on the collaboration with therapists and special educators specialized in NDD who have collaborated in our past research and in the design of Puffy.

A general consideration that pervades many of the requirements described below concerns the characteristics of the sensory and perceptual system of most children with NDD, and how these affect functioning and behavior. Subjects with NDD often have an abnormal capability of sensory discrimination (the ability to focus on, and discriminate between, certain stimuli) and sensory integration (the ability to process and properly interpret multiple sensory signals at the same time). These deficits are thought to be one of the main reasons for the irritability, difficulty in selective and sustained attention, or hyperactivity that characterize many subjects with NDD. They generate discomfort, frustration, and disengagement, and make it enormously difficult to accept, express and interpret emotions, and to sustain social relationships. These deficits are also thought to originate functional deficits related to own-body awareness and space awareness, elementary mechanisms of abstraction and generalization, problem solving, planning, and language [18][62].

3.1 Visual Appearance (R1)
As most children with NDD are visual learners and have frequent losses of attention, the visual appearance of the robot body is fundamental to the experience with the robot. The robot body is a means to attract the child's attention and engage her in an experience, as well as to communicate and convey meaning. Considering the sensory problems described above and the consequent risk of distress, the robot should avoid visual overstimulation, such as that created by different brightly colored body parts and aggregations of (moving) components of different shapes [31][61]. Few neutral colors should be used for the body, harmonized to promote relaxation and trust [35]. Their visual attributes should be functional to their affordance and to the goal they are meant to support during interaction. The use and amount of different colors should be carefully calibrated considering the sensorial characteristics of each child, and different visual configurations should be available for the same task. The shape of the robot should evoke a familiar element, possibly something that the subject likes, such as cartoon characters. Considering Mori's conjecture about the uncanny valley [48], and the difficulty of NDD subjects in interpreting the multiple signals expressed by the complexity of human faces and bodies, abstract minimalistic "harmonic" shapes are thought to be preferable to realistic (e.g., human-like) representations [58][61].
Still, it is important that some facial components are included, particularly eyes, so that the child can easily understand where the robot is facing and "looking at", and establish eye contact with the robot. Children with NDD (especially those with autism) are uncomfortable making eye contact with humans, and simulating eye contact with the robot might help them generalize this concept to human-human relationships [57]. To facilitate eye contact with the robot, the most appropriate size for the robot should correspond to the average size of the target group [10][57][60].

3.2 Multimodality (R2)
Multimodality, i.e., the provision of different interaction modes that involve different (sets of) skills and sensory stimuli, has a number of advantages for children with NDD. Supporting multiple modes of interacting with the robot opens up opportunities to engage children in different ways, each one focused on specific and evolving learning needs, and to promote cause-effect understanding skills, which derive from experiencing the action-feedback loop in different situations. Still, multimodality involves some potential risks for this target group. The perceptual experience of subjects with NDD is often abnormal, and these persons may have impairments in filtering or processing simultaneous channels of visual, auditory and tactile input. They may perform poorly across different interaction modes and in conditions that require processing different requests and stimuli. The robot should offer a gamut of "single mode" interactions that can be used one by one, and also enable progressive combinations or concatenations of different interaction modes. These must be carefully calibrated so as to enable children to practice tasks at the proper level of sensory complexity and to master the difficulty of perceiving inputs across multiple modes. According to the current literature, the interaction modes that have proved effective for children with NDD are spatial, tangible, vocal, and emotional.

3.3 Spatial Interaction (R3)
Spatial interaction exploits how humans use the space to regulate the reactive behavior of an interactive technological artifact, and has recently been emphasized as one of the important design aspects of a socially assistive robot [45], [49], [55]. Spatial interaction requires that the robot is able to move as naturally as possible in the space, to sense and interpret the user's position, orientation, movement, and direction, and to react consistently. Spatial interaction helps children with NDD learn spatial awareness, i.e., the ability to be aware of oneself in space, which is often weak in these subjects. Spatial awareness requires the creation of a contextualized body schema representation - a sense of where one's body is in the space - and involves understanding the relationship between physical objects and oneself in the space, both when objects are static and when they change position, learning concepts like distance, speed, "near", "behind", and similar.

3.4 Tangible Interaction (R4)
Manipulation and tactile experiences play a fundamental role in the development of sensory-motor capabilities as well as cognitive functions [70]. Several therapeutic tasks for children with NDD involve touch and manipulation of physical materials as a means to improve the capability to interpret stimuli processed by the tactile system and to improve own-body awareness.
Similar skills can be promoted in a playful, safe, and controlled way through tangible interaction with the robot, enabling touch and physical manipulation of its body associated with consistent feedback. Physical touch is also one of the most basic, but at the same time most important, forms of human communication. Through physical interaction with the robot, children with NDD can learn to express and interpret this form of communication. In particular, while an inappropriate tactile interaction with a human (e.g., pushing) would lead to a negative reaction, which can be enormously frustrating for subjects with NDD, a similar action on a robot can trigger stimuli that help them reflect on their behavior and build a sense of stability and confidence [56]. Tactile interaction involving the feature of being "huggable" has recently been emphasized as one of the important aspects in the design of Robot Assisted Therapy [20][68][9][10].

3.5 Vocal Interaction (R5)
Children with NDD may have deficits in speech production and understanding - a complicated process involving the coordination of motor, auditory, and somatosensory systems and several cognitive functions. Many therapy programs for these subjects include activities to help them communicate verbally in a useful and practical way. A robot equipped with vocal interaction can support vocalization capability and, in principle, be integrated into existing speech-therapy treatments. The controlled study reported in [24] shows more positive effects on vocalization and speech capability when the considered robot interacted through remotely controlled arm, ear, mouth, and eye movements and speech (remotely generated and controlled by the investigator), compared with the effects achieved when voice features were missing. Qualitative observations emerging from empirical studies on Teo [9] highlight that vocal interaction through reward phrases and songs (generated after task completion) promotes engagement and fun.

3.6 Emotional Interaction (R6)
Emotional information exchange plays an important role in human-human interaction. Current SAR research considers emotional interaction one of the principal ways to achieve trust and believability [8], making the user feel that the robot "understands" and "cares" about what happens in the world. In addition, a robot that can both manifest its emotional states and interpret users' emotions helps subjects with NDD develop the capability of emotional information exchange, i.e., learn how to interpret and manifest emotions, which is fundamental for human interpersonal communication. Emotions can be expressed through different signals depending on the actuation characteristics of the robot: different facial expressions, music, voice tones, movements in space, or body movement rhythms convey different emotional signals.

3.7 Multisensory Feedback (R7)
The action-feedback mechanism that is intrinsic in interaction promotes two basic and fundamental skills: cause-effect understanding and sense of agency, i.e., the feeling of being able to exercise control and obtain a coherent response. Once these skills are established, a person learns that (s)he can also affect different situations and people, and is motivated, for example, to use communication in its many forms (e.g., requesting, questioning, or refusing) to manipulate situations. Reward stimuli, acknowledging the correct completion of a multi-action task, should also be explicit, so that the child feels gratified for the achievement [61].
For example, lighting up parts of the robot body, playing music or songs, or showing animations is thought to be particularly engaging and encouraging [27][47]. Still, for the reasons discussed at the beginning of this section, all stimuli must be carefully designed and calibrated to avoid overstimulation and discomfort.

3.8 Safety (R8)
The robot should "be harmless to patients physically, psychologically, and ethically" [53]. To this extent, sharp edges should be avoided, favoring soft textures [47]. In addition, the robot body should be robust enough to withstand voluntary or involuntary disruptive actions, considering that children with NDD can be uncontrollably exuberant or impulsive at times.

3.9 Configurability (R9)
Because children with NDD have unique and evolving needs, the value of technology for this target group is directly related to its ability to adapt to the specific needs and learning requirements of each person or group [13][50]. The robot should offer therapists the possibility to configure it (e.g., increasing/decreasing/removing sensory stimuli, or changing visual and sound rewards) to adapt the interactive experience to the specific profile of the current user(s).

3.10 Dyadic Execution Mode (R10)
Involving children in robotic interaction requires a combination of remotely controlled and autonomous modes. Remote control gives complete control of the robot's behavior to the caregiver. This is important to manage situations created by children's unexpected actions, to unlock stereotyped behaviors (typical of children with autism), to create new stimuli in response to children's interactions with the robot or movements in the space, or to adapt the stimuli to the specific characteristics of each child. Still, remote control is demanding for the caregiver: she must pay constant attention to the child and at the same time operate the robot, controlling it in a believable and timely way and giving the impression that the robot is behaving autonomously and consistently with the current context. This burden can be alleviated by including some autonomous behaviors in the robot, i.e., programmed <stimuli - user action - stimuli> loops, or flows of loops, that enable the robot to act in the environment and interact consistently with some pre-defined logic.

3.11 Multiple Roles (R11)
The robot plays different functional roles in the interaction with the children [15]:
a) Feedback: it reacts to an action performed by the child, promoting cause-effect understanding, and acts as a rewarding agent, offering positive reinforcement and promoting self-esteem.
b) Facilitator: it suggests what to do and when to do it, facilitating task execution.
c) Prompt: it acts as a behavior-eliciting agent, enhancing attention and engagement.
d) Emulator: it plays as an emulator (acting as the child) or as something to be imitated, to trigger the child's imitative reactions and skills.
e) Restrictor: it limits the child's spatial movement possibilities.
f) Social Mediator: it mediates social interaction between the child and others (therapist, peers); it acts as a communication channel through which the child expresses her communication intents, as well as a tangible material for shared activities.
g) Affective and emotional agent: it facilitates the creation of an affective bond between the robot and the child/children, helping subjects unlock their emotional rigidity and feel emotions; it stimulates the children's capability of manifesting their own emotions and interpreting others' emotional expressions [70].

THE DESIGN OF PUFFY
Puffy has been co-designed in partnership with a team of 2 designers, 4 engineers and 15 therapists (psychologists, neuro-psychiatrists, special educators) from two different rehabilitation centers. In what follows we describe the physical and behavioral characteristics of the robot, matching them to the requirements discussed in the previous section.

4.1 Physical Characteristics
General shape: The visual appearance of Puffy (Figure 1) meets all requirements stated in R1. Its white shape, externally made of a thin, white, opaque plastic textile, is inspired by Baymax, the inflatable healthcare robot of the popular Disney animated movie Big Hero 6, and evokes a familiar character that children like and have positively experienced in everyday life. Puffy is approximately 130 cm high (the average height of our target group) and its only facial elements are two black eyes, to help the user understand what the robot is gazing at. Its wide round belly and curved silhouette confer a fluffy, comfortable, warm appearance that helps relax children, promotes affection and trust (R11.g), and gives the impression of a gentle "big brother".

Dynamic inflatable structure: Puffy is characterized by an inflatable structure. An embedded fan (which also serves a cooling purpose) is used to blow up Puffy's light plastic "skin" at the beginning of a session and to dynamically transform the shape and size of its inflatable structure in order to convey emotional body signals. Puffy can simulate relaxed breathing (through rhythmic inflating-deflating), manifest satisfaction and confidence (expanding the body through inflation), or express discomfort and sadness (compressing the body by reducing the air inside), reinforcing the robot's role as Affective and Emotional Agent (R11.g). The resulting body - big, soft, flexible in shape, and humorously rounded - makes Puffy robust, safe and harmless (R8), and pleasurable to touch and hug (R4). These features facilitate the role of Puffy as Social Mediator (R11.f). For example, Puffy can be manipulated by many children together at the same time (Figures 3-5).

Lights: Inside Puffy there is a commercial smart lamp (a Philips Hue Go) that generates a gamut of luminous feedback (R7) that, perceived through the white opaque fabric of the robot body, gives a sense of magic (Figure 2). Light stimuli are used not only as an aesthetic medium to attract attention and create engagement (R11.c).
Rhythmic dynamic light can also complement the rhythmic expansion/compression of the robot's body and increase the emotional effect (R11.g). As colors and intensity evoke and convey different emotions, luminous stimuli can be used to enhance emotional interaction (R6) and the affective bond (R11.g).

Music and voice: Music not only offers stimuli for the auditory sensory system (R7) but is also known to convey emotions and affect wellbeing and emotional states (R6 and R11.g). In some phases of a therapeutic session, such as during the introduction and the breaks, Puffy plays soft songs at the frequency of 432 Hz (a frequency that is acknowledged to influence heart rate and improve relaxation), to establish a pleasurable, calming atmosphere. While children are engaged in a task, cheerful music offers rewards or is used to recapture user attention (R11.c). Puffy's cheerful voice (R5) offers vocal instructions and feedback during interaction and task execution, such as back-channeling expressions ("mm mm", "uhmmm", "mm mmmm"...) and reinforcement or reward phrases. Vocal interaction capability can range from the production of voice instructions and vocal feedback (back-channeling expressions, reinforcement or reward phrases, songs, or rhymes) to the capability of interpreting the user's speech and reacting consistently.

Multimedia on-body projections: A compact projector is embedded inside Puffy's body and displays visual stimuli (images, videos, or animations) on its belly (R7). These contents - which in rigid socially assistive robots are displayed using a tablet, a PC screen, or a smartphone placed on the body [38] - provide instructions, suggestions, or feedback (R11.a-b) for the tasks in which the children are involved, or are associated with the story Puffy is telling (as discussed later in the paper). Thanks to the white, opaque, lightweight plastic material and the shape of the inflatable body, projections appear curved and rounded with soft borders, creating a pleasant visual effect that enhances the sense of magic of the story being told and contributes to increasing the affective bond with the robot (R11.g).

Mobility: Puffy does not walk like the Baymax character. Nevertheless, thanks to its holonomic base, our robot has fluid mobility and is free to move on the floor in any direction at a speed similar to that of humans in indoor environments (R3), wandering around, chasing the child, or getting closer or farther away. Puffy's mobility features, described more precisely in the following section, support the robot's roles as Emulator (R11.d) and Restrictor (R11.e). Movements can also be used as prompts, complementing the prompting capability of music, voice, and projected visual contents (R11.c).

4.2 Puffy: Interaction and Behavior
Puffy-children interaction is multimodal (R2). Puffy supports tangible and vocal interaction (R4 and R5) - enabled by its capability of sensing and interpreting touch, sound, and speech - spatial interaction (R3) - enabled by the robot's capability of moving and of sensing users' movements, distance and position - and emotional interaction (R6) - enabled by its capability of recognizing the user's facial expressions and voice signals and of manifesting emotional signals. Both tangible and spatial interaction can be combined with emotional interaction, as described below.
Coordinated Spatial + Emotional Interaction: Puffy senses and interprets the physical spatial relationship between itself and the children (e.g., relative position, movement, orientation), which enables various forms of spatial interaction. For example, if an educational session has just started and Puffy locates the child far away, the robot attempts to attract the child's attention: it emits some cheerful sounds and turns left and right (as if it were looking around) while its body is illuminated with soft blue light rhythmically changing intensity to increase the sense of movement. Puffy can also combine spatial interaction with emotional interaction, taking into account the psychological and emotional aspects of the spatial relationship between itself and a child. To this end, Puffy exploits an Interactional Spatial Relationship Model (Figure 6) which considers the following elements:
(a) Interpersonal distance, measured by an embedded depth sensor and classified according to Hall's zone system of proxemic interaction (public, social, personal, intimate [34]); the parameters for the user's preferred interpersonal distance can be customized according to various conditions, including the child's age and the current task, e.g., a competitive or cooperative situation [37].
(b) Relative (child-robot) bodily orientations (e.g., face-to-face, side-by-side). Bodily orientation and interpersonal distance are a form of non-verbal communication that implicitly conveys how a child wants to manage his/her relationship with Puffy [46].
(c) Child's and robot's movements in space, defined by physical parameters such as direction and speed. Depending on the context, the kinematic behavior is interpreted either as "functional" to the current task or as a psychological and emotional cue (e.g., "escaping from the robot" may be interpreted as an action required by the game or as a signal of discomfort).
(d) Child's emotional state, detected from the analysis of the child's voice tone [42] and facial expressions.
(e) Child-robot eye contact, again based on the analysis of the child's image; as already mentioned (section 3.1), eye contact is an important nonverbal communication behavior to express interest, attention, and trust towards a conversation partner, and is often missing in children with NDD (especially those with autism).
(f) Robot's emotional state: the robot's current emotional state resulting from physical manipulation by the children (see "Tangible Interaction" below) and from movement values.

The current status of the robot-child relationship, defined by the set of the above variables, is used to determine the robot's spatial and emotional behavior, as in the following example. Let us assume that there is only one child interacting with Puffy and that the model variables have the following values:
"face-to-face"} (c) {child's spatial movement= "standing-still", i.e., no significant movement detected}, {robot's spatial movement= "approaching to a child at velocity 0.20 M/sec"}, (d) {child's emotional state= "mild", i.e., child's voice detected at medium level of loudness + smile recognized in the face} (e) {child-robot eye contact= "eye-contact by a child detected", i.e., sufficient level of child's interest and attention towards the robot}, (f) {robot's emotional state= "quiet", i.e., no previous hit; no movement, vibration, or blink}. Figure 6. Model of Interactional Spatial Relationships Based on the above analysis, Puffy decides that it is the appropriate timing to make a greeting, and sets its voice at medium-high level of loudness (as appropriate at the edge of social zone); then it continues moving towards a child only after it detects the absence of negative emotional response from the child. Approaching the "personal" distance zone around the child, Puffy re-evaluates the current status of the robot-child relationship and adapts its behavior by taking into account the potential risk of its current and next reactions. For example, Puffy may estimate that it should slow down its speed. The "personal" distance zone around the child is typically reserved to friends and family members who know and trust each other [34], and to avoid the child's feeling of an undesirable intrusion of his personal space of the child), Puffy must be ready to stop. After the robot detects that there is no negative response from the child, it considers that it is the right time to start some form of explicit communication, and for example engages the subject in a conversation adjusting its voice level at medium-loudness (as appropriate at personal distance). Tangible Interaction: Puffy employs two modes of tangible interaction: Physical Manipulation and Functional Touching. Physical Manipulation involves the tactile contact with Puffy ( Figure 4) and involves elements of emotional interaction. The robot's sensorized body can distinguish among caresses, hugs, and two levels of violent punches or slaps, measuring the intensity and dynamics of the body deformation induced by the physical contact. Depending on the evaluation of the manipulation, Puffy produces a specific emotion-based behavior by employing a map of multimodal emotional expressions [41]. Puffy becomes Happy when its body is softly caressed or touched. It responds to this pleasurable manipulation by emitting sound expressions of pleasure, vibrating, and rotating itself cheerfully, while internal lights becomes green and blink slowly. Puffy becomes Angry when its body is hit with moderate force. It responds to this rude manipulation by moving sharply towards a child, turning light to red, inflating its body and growling (grrr). Puffy becomes Scared when its body is brutally hit. It reacts by slowly retreating itself saying "what a fear!" while lights become yellowish and pulse slowly and the body shape shrinks. Functional Touching allows a child to make a choice (Figure5), express simple instructions to Puffy, or answering a question, by pressing Puffy's body in a specific area. Using the embedded projector, the active areas for functional touching appear dynamically on Puffy's belly, recognizing the position of the child's touch. 
Digital contents can be personalized by therapists by inserting the tags that are most meaningful for the activity, either realistic images or PCSs (Picture Communication Symbols), commonly used in Augmentative and Alternative Communication interventions [28].
Voice Interaction: Puffy can react vocally to situations that change its emotional state, as discussed above. Knowing the current state of a task (e.g., "complete"), the robot can use its voice to reward the children. In addition, Puffy can interpret the children's speech to some degree, namely, the main sentiment of the children's vocalizations and the main concepts expressed, and is programmed to react consistently.
Execution Mode: Puffy's behaviors are executed in two modalities - remotely controlled and autonomous agent (R10). In remotely controlled mode, the caregiver triggers the desired stimuli on Puffy's body and drives the robot's movements using a remote controller (a joystick). As an autonomous agent, Puffy is preprogrammed to act autonomously according to the interaction rules defined in section 4.2 and the logic of the activities defined in section 5.2.
4.3 Configurability
Puffy's behaviors, interaction modalities, and the sequence or intensity of the stimuli offered are customizable by therapists to address the evolving needs and preferences of each specific child (R9). A simple control panel enables therapists to (1) assign each activity to each child; (2) set up a child's curriculum with a progressive set of levels; (3) add, remove, and edit projected elements; and (4) choose feedback and rewards such as onboard light colors, voice, and multimedia stimuli.
USING PUFFY IN EDUCATIONAL OR THERAPEUTIC CONTEXTS
Puffy can be presented as a new play companion and exploits the potential of game-based learning, engaging children through two forms of play: free play and structured play. Both free and structured play can be performed by a single child or (preferably) in a group, to enhance the social dimension of the experience (R11.f).
Free play: Free play consists of spontaneous, intrinsically motivated, unstructured tasks and has a fundamental role in the child's physical, cognitive, and social development [14]. Free play with Puffy involves all interaction modes but functional touching. The children spontaneously manipulate the robot, try its physical affordances, and explore the physical space together with the robot, and the flow of stimuli is fully under the children's control (with the possible interplay of stimuli remotely triggered by the caregivers when needed). Free play can be used to facilitate the progressive mental and emotional states of relaxation [32], affection [2], and engagement [36], which are fundamental in any learning process of children with NDD and are a precondition for the execution of structured, goal-oriented activities. Initially, the presence of Puffy in the playground could be potentially worrying, as children with NDD are often afraid of the unknown. They should learn that this "object" is predictable and safe, become confident that it is good, harmless, and inoffensive, and progressively achieve a state of relaxation, both in the relationship with the robot and towards the other children in the group. As familiarization with the robot proceeds, children also develop a strong affective bond towards Puffy, i.e., affection, which facilitates a more persistent positive attitude towards the robot.
Affection in turn is known to promote engagement, the state of active, voluntary involvement in an activity and the willingness to act upon the associated objects, maintained for a relatively prolonged time.
Structured play: Structured play focuses on the development of specific skills by executing activities that follow a predefined flow of tasks and stimuli programmed in Puffy. The types of activities developed so far are "Storytelling", "Choice", and "Tag". Exploiting the general learning potential of storytelling for all young children [29], stories are widely used in educational and therapeutic practices for children with NDD to stimulate imagination, curiosity, and emotional development [33] [42]. Like other SARs for NDD [23] [42], Puffy is able to narrate stories (Figure 7). Prior to starting a storytelling session, the robot attempts to establish an appropriate spatial relationship with the child by adjusting to a suitable position that promotes a shared experience, while it continues monitoring the child's emotional response. Puffy narrates stories by talking and by projecting interactive multimedia contents on the curved, rounded screen of its belly. It can also perform movements and activate light and vibration effects to enhance the emotional effect and underline certain moments or situations of the narration. In addition, Puffy can engage children during the story, e.g., by asking questions (about events, characters, or places in the plot) or by prompting interactions or movements, and it considers the children's actions and interpersonal spatial relationships to determine its own next behavior. Choice games aim at developing the willingness and capability of making choices, as well as memory skills. A game of this type starts with a projection on the robot's body where multiple choices are available; then Puffy asks a question and invites the children to respond by touching a projected button (Figure 7). Tag games aim at fostering children's movement. For example, Puffy invites the children to play together and to move in the room, and then starts chasing them (Figure 8).
Caregivers' role
As in any intervention for subjects with NDD, during both free and structured play with Puffy, caregivers - educators or therapists - are present to assist and monitor the children and to interact with them, striking a balance between giving them autonomy and mediating the experience with the robot. The caregivers act as "facilitator", to suggest what to do and when to do it; as "prompt", to promote attention and concentration; and as "play companion", to share emotions, surprise, and fun. For example, they frequently talk with the children and ask (or explain) what happens to Puffy, and sometimes also help them with gestures, e.g., moving a child's head or body in a specific direction when it is evident that (s)he has lost attention. The technological experience becomes an opportunity for social connection between children and caregivers, exposing subjects with NDD to verbal communication and stimulating them to "think aloud" and practice language, which are all important elements for developing appropriate social competences. Depending on the children's behaviors and emotional reactions during play, caregivers may switch from autonomous to controlled mode, to trigger specific stimuli, suspend or replay a task, or activate a different one.
TECHNOLOGY
An embedded board (Arduino Uno) operates as the general control and communication device.
The board communicates with a wheeled robotic base (iRobot iCreate) placed at the bottom of the body, six Force Sensing Resistors (FSRs), and the fan. The embedded equipment includes a mini PC, which manages most of the computational aspects of the robot's behavior and is connected both to the Arduino Uno and to the Microsoft Kinect sensor placed inside the robot's head. A skeleton sustains a projector, a Philips Hue Go light, and the audio speaker (Figure 10). Using the values coming from these sensors and the time elapsed between one value and the next (i.e., the duration of a manipulation with the same intensity), we define a set of value ranges associated with different types of Physical Manipulation, e.g., caress (low force, low-to-medium duration), hug (high force, long duration), punch or slap (medium force, short duration), violent punch or slap (high force, short duration). A caress is recognized as a "soft" deformation of the FSR (force intensity between 400 and 600) that lasts for about 1000 ms. A hug occurs when the deformation is approximately double and lasts approximately 1500 ms. A slap or a punch is detected when a large deformation of the FSR (force intensity > 900) occurs in a very short time (< 1000 ms). Examples of manipulation data over time are shown in Figure 11. Differently from other robotic companions employed for the care of children with NDD (e.g., Teo [10]), it is Puffy in "first person" that evaluates the parameters needed to execute spatial interaction and determines its own behavior in response to the children's movement, position, and orientation in space. The Microsoft Kinect placed in the robot's head embeds a depth camera (which looks like eyes in Puffy's face), a color camera, and an array of microphones. The depth and color cameras generate body skeleton data that are used to evaluate the following variables over time, calculated for the child who is closest to the robot: relative Puffy-user position; Puffy's and the user's body orientation; the user's movement and direction. These variables are computed using standard Kinect technology (SDKs) as well as data coming from the robot base ([71] [72] [73]). Examples of these variables are depicted in Figure 6 (section 4). In this figure, Theta_1 defines the breadth of the angle between the child's direction and the segment connecting the robot's and the child's positions. Theta_2 is the breadth of the angle defined by the robot's direction and the same segment. As discussed in section 4.2, the values of these variables, integrated with the results of speech and image processing (see below), are used by a rule-based Behavioral Engine to decide the next behavior for the robot to perform. To recognize facial expressions, we use the stream of data generated by the Kinect color and depth cameras for the child who is closest to the robot. The image processing and facial analysis algorithms use the Kinect SDK and the EmguCV library, a cross-platform .NET wrapper for the OpenCV image processing library. OpenCV (Open Source Computer Vision) was selected for its computational efficiency and its strong focus on real-time applications. The Kinect microphone array captures the user's speech. The user's speech is interpreted by a Speech Analyzer module, which considers the vocal input of the child who is closest to the robot and exploits IBM Watson technology and openSMILE. The IBM Speech-to-Text API is used to convert speech into written text.
The IBM Natural Language Understanding API is used to analyze this text and extract relevant metadata (concepts, sentiment, and emotions). openSMILE is an open-source, real-time sound feature extraction tool, which we use to automatically extract the tone and pitch of the child's speech. Combining this information with the metadata, a rule-based component of the Behavioral Engine generates the textual answer, which is converted into Puffy's voice using the IBM Text-to-Speech API. The above process presents a number of challenges, mainly related to group interaction. In particular, the issue is the identification of the "closest child" and the need to filter out his/her data, permitting the separation of this child's vocalizations and images from all other speech and sounds produced in the immediate environment, as well as from the images of other persons nearby. We are currently working to improve the level of accuracy reached so far.
COMPARISON
Puffy provides a unique combination of features that are important in robot-enhanced interventions for children with NDD and are seldom supported together in existing robots used in this domain. We have compared Puffy against the 21 most relevant robots that, according to the existing literature, have been used in interventions for subjects with NDD: Bobus [47], CHARLIE [7], CPAC [47], DISKCAT [47], FACE [44], Infanoid [15], IROMEC [15], Jumbo [47], KASPAR [57], Keepon [22], Kismet [11], Maestro [47], Nao [30], Paro [66], Roball [47], Robota [5], SAM [19], Tega [38], Teo [10], TREVOR [15], Troy [15]. The sources for our analysis have been research publications as well as additional public materials, e.g., web sites and online videos, which describe the design features of these robots and on-the-field case studies with children with NDD. The comparison considers the main requirements discussed in section 3 concerning the roles played by the robot in the interaction with children (Table 1) and the design of the body, the stimuli, and the interaction (Table 2). The robots marked with a "*" have been designed from the very beginning for children with NDD (CHARLIE, DISKCAT, IROMEC, SAM, Teo, Puffy) or at least for subjects with neurological impairments (e.g., Paro). For brevity, these robots are hereinafter referred to as "specialized", while the others are called "generic". Besides offering a comparison that highlights the uniqueness of Puffy, our analysis also looks at the current state of the art from a novel perspective, shedding light on the main design features of existing robots used in interventions for children with NDD. Not surprisingly, Puffy - the design of which has been informed by the requirements stated in section 3 - achieves the highest scores in both tables and is the only robot that matches all parameters. Concerning Table 1, all robots play the Feedback role; the importance not simply of supporting interaction, i.e., providing stimuli in response to the user's actions, but also of offering positive reinforcement and rewarding the children seems to be widely acknowledged. Table 1 also shows an expected dependency between the physical capabilities of the robot and its roles. For example, two robots (Nao and SAM) meet all but one requirement: Space Restrictor for Nao (which is static) and Emotional Agent for SAM (which is not equipped with emotion detection functionality).
Table 1 also highlights the potential of robots to promote affection: 73% of the robots play as Affective Agent; some robots in this subset (e.g., Paro, Teo, SAM) have been explicitly designed for this role, while for others (e.g., [42]) the affective power emerged only during experimentation with the children. The second most popular role is Social Mediator (68% of the robots). This role is played by all specialized robots and also by 6 generic robots. Specialized robots are typically meant to be used by children under the supervision of a caregiver, and all acknowledge that the interaction with the robot also promotes human-human interaction; still, the potential of robotic interaction to develop social skills in children with NDD seems to be achieved also by some generic robots, showing that it might be intrinsic to the robotic interaction paradigm per se. Concerning Table 2, again the highest total scores (>7) are assigned to all specialized robots, but this top-ranked set also includes some generic robots (Nao, Roball, Maestro). The most common physical feature is the abstract non-humanoid shape (64%) and the least common one is the integration of videos and animations (18%). All specialized robots but IROMEC, plus 3 generic robots, are soft, accounting for 50% of the total. The most popular paradigm of interaction is Functional Touch (73%), followed by Physical Manipulation and Vocal Interaction (59%). The latter is supported by 4 out of 7 specialized robots. There is some interest in the conversational paradigm in the NDD arena for improving language and communication skills; still, we should consider that some children with NDD might be nonverbal and could not exploit the benefits of voice-based interaction. A limited number of robots (32%) exploit spatial interaction, an interaction paradigm that is relatively new and deserves further exploration. The scores of the various robots resulting from our analysis do not reflect their therapeutic or educational effectiveness for children with NDD and are therefore not to be interpreted as a "quality" ranking. Certainly, the design and functionalities of the robot have a significant influence on the educational and therapeutic process; still, on the basis of our comparison, we cannot in general state that some features are more relevant than others. In addition, from our analysis we cannot draw any conclusion on how each specific design feature may influence the benefits (or limitations) of each child's experience with the robot. This is because of the profound diversification of the profiles of children with NDD. Due to the nature of these disorders, not all children with NDD will react in exactly the same way. Some may show more receptiveness towards some design features but discomfort with others, and with few exceptions we cannot state that some design aspects are more relevant than others. An exception is safety and robustness, which is truly a precondition for this special target group and, according to our analysis, is satisfied by only 59% of the robots (7 specialized robots and 6 generic ones). A rigorous comparative analysis of these aspects can hardly be derived from public material and would require in-depth, systematic research through controlled empirical studies.
8.1 Session with NDD specialists
After the development of the first prototype, we organized a half-day workshop with 10 specialists in NDD (therapists, special educators, and psychologists) to collect feedback on our design solutions.
Besides the technical team, participants included 5 caregivers who have been cooperating with our research over the last 3 years and helped us identify the requirements stated in section 3, and 5 specialists with no or limited experience in robotic interaction. The workshop was organized in 3 main phases. We first presented the set of requirements that informed our design; in the focus-group-like session that followed, we explored participants' agreement or disagreement with each requirement. We collected a general consensus on the requirements and many insights on the user needs underlying them, as well as examples of how current practices address such needs, their limitations, and how a robot meeting such requirements would help. In the second phase, we demonstrated Puffy, and participants played freely with it and performed all the activities reported in section 4, alternating remotely controlled (first operated by the technical team, and then by themselves) and autonomous modes. In the final phase, we asked participants to fill in an online questionnaire and evaluate Puffy as a whole and on each single feature, on a 4-point Likert scale, and to include comments and justifications for their scores. The results were immediately available and were discussed with the participants. Figure 12 reports the most salient findings.
8.2 Session with children
One of the workshop participants invited us to bring Puffy to her primary school and to try it with 2 children with ASD (medium-functioning, aged 7 and 9). At the school, a big thing like Puffy couldn't go unnoticed, and several students enthusiastically asked to play with it. So the robot was used with the entire class of the two children with ASD (19 children, average age 6). They played with the robot for approximately one hour under the supervision of the technical team, the special educator, and the regular teacher. They were initially involved in free play, and then performed one activity of each type described in the previous sections, interacting with Puffy in turn. The observation of this very informal experimental session indicates that Puffy was perceived as pleasant and fun by the 2 children with NDD. During the experience they showed behaviors that, according to the educator, they had very seldom manifested in the past. For example, they were not afraid of "the new thing" (which would be a typical reaction of subjects with autism) and immediately perceived it as safe, good, and harmless: they were invited to be the first ones to play with the robot ("Puffy is here for you!") and immediately accepted and started touching it. When it was not their turn to interact, they did not leave the group, as normally happens, but paid attention to the other peers' actions and Puffy's reactions. For all children, Puffy acted as an extraordinary catalyzer of attention and a source of fun. Many children, leaving the session room and saying "Hello" to the robot, also asked it to come back again soon.
8.3 Discussion
Concerning the session with specialists, the survey results show a general agreement on the potential of Puffy for children with NDD (as witnessed by the "General" bar in Figure 12). All participants declared that they would include Puffy in their therapeutic treatment. Figure 12 shows that the light stimuli are a design feature that specialists consider particularly promising; many of them declared that Puffy's soft lights have the power to attract the children as well as to relax them.
The most appreciated interaction mode is tangible interaction, particularly hugging. The unique hugging characteristic of Puffy originates from its big inflatable body, and the above result suggests that inflatable shapes deserve further investigation in the NDD arena. Voice features had the lowest score; according to the specialists, Puffy lacks the capability of automatically creating conversations that are appropriate for children with NDD, who often have a weak vocabulary and make many errors. This comment was somewhat expected, as our speech engine uses standard speech interpretation mechanisms that are not trained on the characteristics of this target group. Among the qualitative comments reported in the survey, some offered interesting insights on risks. Some specialists observed that, in order to become a reliable element in a therapeutic or educational program, it is important that Puffy not only has attractive features but also provides mechanisms for avoiding or mitigating the emergence of undesirable situations, e.g., a child's "negative" reactions such as aggressiveness or stereotypies, thus supporting not only physical safety (of the robot and the children) but also psychological and social safety. This problem deserves further multidisciplinary research. Addressing it requires an accurate case analysis, grounded in current practice and in extensive experimentation with Puffy, and would lead to the design of adaptation mechanisms in the robot's behavior that are beyond the current state of the art in adaptive robotics. In spite of its brevity and informal nature, the session at the school also supports the hypothesis that Puffy has good potential for children with NDD. The experience also suggests that this robot might be appropriate not only in the contexts for which it was originally conceived (therapeutic centers) but also in regular schools, and for children with and without disability. This new scenario would require further analysis to extend the set of activities currently designed for Puffy, in order to address the characteristics of broader targets and contexts. To explore the potential of Puffy also from this perspective, we are currently conducting an exploratory study at a local pre-school involving 79 children (aged 3-5) and their 15 teachers. A final observation concerns the "fun" nature of Puffy, which at school was enhanced by some unpredictable events during remotely controlled interactions. Caregivers triggered by mistake a number of "crazy" behaviors, which were totally inconsistent with the current interaction context but particularly funny for all children, including those with ASD. Enforcing behavioral consistency, predictability, and repeatability is a fundamental requirement in the design of socially assistive robots for NDD; still, serendipity, unpredictability, and surprise could be design concepts worth exploring in relation to this target group.
CONCLUSIONS
Our work offers an original contribution to the design of social robots for children with severe and multiple disabilities in the cognitive and social sphere. Puffy supports learning through play, taking into account the specific characteristics and needs of this target group, and is the first example of an assistive robot that uses inflatable materials.
Puffy exemplifies how different interaction paradigms (spatial, emotional, and tangible interaction), physical features (mobility, inflatable body, from-inside multimedia projections), and multisensory stimuli can be combined and orchestrated to create a novel robotic companion for children with NDD. The design of Puffy was informed by a set of requirements that distill the main results of the current state of the art in this field, as well as the knowledge of a team of NDD specialists collaborating in our research and the experience gained over the last 10 years in the development of assistive technology for children with NDD. These requirements can be regarded as design guidelines that can benefit researchers and developers in socially assistive robotics in this domain. On the basis of these guidelines, we have provided a systematic comparison between Puffy and 21 existing robots that have been used with children in the NDD spectrum. This analysis offers a novel perspective on the current state of the art in assistive robotics for this target group. We performed two preliminary exploratory sessions with Puffy - a focus group with specialists and an activity at a primary school - which offered encouraging results. Still, Puffy has not yet been evaluated in a systematic field study, which is one of the next steps in our research agenda. Additional future actions include improving the design and implementation of Puffy according to the suggestions that emerged during the workshop with the specialists, particularly those related to the need for adaptation mechanisms to face critical situations of children's distress or misbehavior, and improving the vocal interaction capability.
Feasibility of biodiesel production and CO2 emission reduction by Monoraphidium dybowskii LB50 under semi-continuous culture with open raceway ponds in the desert area
Background
Compared with other general energy crops, microalgae are more compatible with desert conditions. In addition, microalgae cultivated in desert regions can be used to develop biodiesel. Therefore, screening oil-rich microalgae and researching algal growth, CO2 fixation, and oil yield in desert areas not only effectively utilize idle desertified land and other resources, but also reduce CO2 emissions.
Results
Monoraphidium dybowskii LB50 can be efficiently cultured in the desert area using light resources, and the lipid yield can be effectively improved using two-stage induction and semi-continuous culture modes in open raceway ponds (ORPs). Lipid content (LC) and lipid productivity (LP) increased by 20% under two-stage industrial salt induction, whereas biomass productivity (BP) increased by 80%, enhancing LP, under semi-continuous mode in 5 m2 ORPs. After 3 years of operation, M. dybowskii LB50 was successfully and stably cultivated under semi-continuous mode for a month during five cycles of repeated culture in a 200 m2 ORP in the desert area. This culture mode reduced the need for a fresh supply of the original seed culture. The BP and CO2 fixation rate were maintained at 18 and 33 g m−2 day−1, respectively. Moreover, LC decreased only during the fifth cycle of repeated culture. Evaporation occurred at 0.9–1.8 L m−2 day−1, corresponding to an evaporation loss rate of 6.5–13%. As shown by a life-cycle energy consumption analysis, the semi-continuous and two-stage salt induction culture modes can reduce energy consumption and increase the energy balance.
Conclusion
This study demonstrates the feasibility of combining biodiesel production and CO2 fixation using microalgae grown as feedstock under culture modes with ORPs by using the resources of the desert area. The understanding of evaporation loss and of the sustainability of semi-continuous culture renders this approach practically viable. The novel strategy may be a promising alternative to existing technology for CO2 emission reduction and biofuel production.
Electronic supplementary material
The online version of this article (10.1186/s13068-018-1068-1) contains supplementary material, which is available to authorized users.
Background
Renewable and environmentally friendly alternative fuels are urgently needed for future industrial development because of the diminishing world oil reserves and the environmental deterioration associated with fossil fuel consumption [1,2]. Microalgae are increasingly considered as a feedstock for next-generation biofuel production because of their many excellent characteristics, such as broad environmental adaptability, short growth period, high photosynthetic efficiency, and high-quality lipids [3,4]. However, the commercial feasibility of microalgal biodiesel is limited because only a few microalgal strains can be grown reliably with high lipid content (LC) outdoors. Lipid productivity (LP) under outdoor conditions is significantly lower than that in the laboratory, owing to contamination by other microorganisms and fluctuations in environmental parameters [5][6][7]. Large-scale outdoor cultivation using sunlight is the only solution for the sustainable industrial production of microalgal biofuel [8].
Therefore, an essential prerequisite to achieving the industrial-scale application of microalgal biofuel is the selection of robust and highly productive microalgal strains with relatively high LC outdoors. Two cultivation systems are commonly used for large-scale outdoor microalgal cultivation: the open system (e.g., open raceway ponds, ORPs) and the closed photobioreactor system (e.g., tubular, flat-plate, or column photobioreactors) [1,[9][10][11]. Compared with closed photobioreactors, ORPs consume less energy and require lower investment and production costs for microalgal cultivation [12]. Although microalgal cultivation in ORPs offers many advantages, the high cost of cultivation systems impedes commercialization for lipid production. Thus, developing an economically feasible culture mode to increase lipid production and thereby reduce cultivation costs is necessary [7,13]. Increasing LP via suitable culture modes can reduce costs and enhance the economic feasibility of microalgal biodiesel production. The photoautotrophic two-stage cultivation mode is a highly promising approach to increase lipid production in photobioreactors by improving LC [14][15][16]. However, only a few studies have employed this mode in ORPs [17]. The semi-continuous mode is a simple and efficient strategy to increase lipid production by continuously increasing biomass [7,18]. This mode can avoid a low cell division rate at the early exponential stage and light limitation at the late stationary stage. Furthermore, it maintains the microalgal culture under exponential growth conditions, resulting in enhanced biodiesel production [7]. However, the number of cycles for semi-continuous culture is limited because of the different nutrient consumption rates of algal cells. Thus, exploring the adequate cultivation time of microalgal cells under the whole semi-continuous cultivation mode is important to evaluate their survivability. The large-scale cultivation of microalgae requires large areas of land and water resources. Arid and semi-arid regions account for 41% of the global land area [19]. Thus, cultivating microalgae in desertified areas avoids competition with food crops for arable land and water. In addition, the unique climatic conditions in deserts (strong solar radiation, long sunshine duration, and large day-night temperature differences) are beneficial to the accumulation of dry weight (DW) in the cells. Compared with other crops, microalgae are more compatible with desert conditions. Furthermore, cyanobacteria and green microalgae can be stably and efficiently cultivated in desert areas [13,20,21]. Therefore, microalgal cultivation is an effective means to utilize desert land and sunshine. In the current work, three microalgae with demonstrated high lipid production indoors were evaluated in a 5 m2 (1000 L) ORP to select for high environmental adaptability and lipid accumulation capability in the desert area. The influences of the two-stage cultivation mode and the semi-continuous mode on cell growth, CO2 fixation rate, and evaporation rate were first investigated in a 5 m2 (1000 L) ORP. Cultivation of the algal strain was then scaled up to a 200 m2 (40,000 L) ORP in semi-continuous mode to determine cycle times. Outdoor cultivation tests at different times were conducted to assess the stability of the algal strain in long-term semi-continuous operation. Finally, a life-cycle energy consumption analysis was performed to assess the feasibility of biodiesel production and CO2 mitigation in the desert area.
Organism
Monoraphidium dybowskii LB50 and Micractinium sp. XJ-2 were provided by Prof. Xudong Xu of the Institute of Hydrobiology, the Chinese Academy of Sciences. Podohedriella falcata XJ-176 was isolated from Xinjiang Taxi River Reservoir (Additional file 1: Figure S1). The stock cultures were maintained indoors in a sterilized BG11 medium containing 1.5 g L−1 NaNO3 (Additional file 1: Figure S1).
Two scales of ORPs, at 5 and 200 m2, were utilized. The length, width, and maximum depth were 4.80, 1.05, and 0.60 m and 34.50, 5.80, and 0.60 m for the 5 and 200 m2 illuminated areas of the ORPs, respectively (Additional file 1: Figure S1). The culture depth in the raceway ponds was set to 20 cm, corresponding to 1000 and 40,000 L culture volumes. A stainless steel paddlewheel, 0.80 m in diameter, was used to circulate the cultures in the 5 and 200 m2 ORPs at 0.35 and 0.25 m s−1, respectively. Microalgae were cultivated using a modified BG11 medium containing 0.25 g L−1 urea; in addition, 0.1 M NaHCO3 was added to the medium used for M. dybowskii LB50. The medium was thoroughly compounded with groundwater. A series of scale-up pre-cultivations was employed (Additional file 1: Figure S1). Water in the system was replenished every day to prevent serious evaporative losses in the open raceway system. Cells at a concentration corresponding to an OD680 of 0.1 were inoculated into the cultures in the 5 and 200 m2 ORPs. After pre-cultivation, batch culture of the three microalgae was conducted in the 5 m2 ORP (1000 L) to select the optimal strain for lipid production. For the two-stage salt induction culture, M. dybowskii LB50 was cultivated in 5 m2 ORPs outdoors. On the 10th day, at the late-exponential growth phase, NaCl and industrial salts (Hubei Guangyan Lantioan salt chemical co., Ltd, China; Additional file 2: Table S1) were added at final concentrations of 0 and 20 g L−1. Industrial salts, which in China often refer to NaCl, NaOH (caustic soda), and Na2CO3, are widely used in industry. In the current study, the main component of the industrial salt was NaCl. Industrial salt is inexpensive and easily produced because of its low purity. Day 0 was taken as the time of salt addition. For semi-continuous cultivation, further experiments were conducted in semi-continuous mode at the two ORP scales. Two-thirds of the culture was harvested, and the remaining culture was used as the seed for subsequent batches and topped up with the same volume of nutrient-rich growth medium containing half of the urea concentration. The algal culture was harvested every 3 or 4 days. The semi-continuous experiment was carried out in a 200 m2 ORP for a month. The water used for algal cultivation was pumped from the ground and contained 89.39 ppm Na+ and 62.92 ppm SO4 2−.
Biomass measurement
Biomass productivity (BP, mg L−1 day−1) was calculated according to Eq. (1):
BP = (B2 − B1)/t, (1)
where B2 and B1 represent the DW biomass density at time t (days) and at the start of the experiment, respectively. Algal density was determined by measuring the OD680, the optical density of the algae at 680 nm. The relationships between the DW (g L−1) and the OD680 values of the algae were described using Eqs. (2)-(4). The cells were harvested by centrifugation and baked in an oven.
Lipid analysis
Total lipid was extracted from approximately 80-100 mg of the dried algae (w1) using a Soxhlet apparatus, with chloroform-methanol (1:2, v/v) as the solvent. The total lipid was transferred into a pre-weighed beaker (w2) and blow-dried in a fume cupboard.
The lipid was dried to a constant weight in an oven at 10 °C and weighed (w3). LC (%) and LP (mg L−1 day−1) were determined according to Eqs. (5) and (6):
LC = 100 × (w3 − w2)/w1, (5)
LP = BP × LC/100. (6)
Determination of urea concentration
Urea concentration was determined following the protocol outlined by Beale and Croft [22]. The liquid sample collected from the raceway pond was filtered using a 0.22 μm-pore filter and then diluted 60-fold with deionized water. Each sample was mixed with 1 volume of diacetylmonoxime-phenylanthranilic acid reagent (1 volume of 1% w/v diacetylmonoxime in 0.02% acetic acid and 1 volume of phenylanthranilic acid in 20% v/v ethanol with 120 mM Na2CO3). Exactly 1 mL of activated acid phosphate (1.3 M NaH2PO4, 10 mM MnCl2, 0.4 mM NaNO3, 0.2 M HCl in 31% v/v H2SO4) was added before incubation in boiling water for 15 min. The tubes were left to cool, and their OD520 was determined using a UV/Vis spectrophotometer.
Determination of pH, irradiance, conductivity, and evaporation
The temperature, conductivity, and pH of the culture medium were determined daily using the respective sampling probes (YSI Instruments, Yellow Springs, Ohio, USA). Irradiance was measured with a luxmeter (Hansatech Instruments, Norfolk, UK). The depth at four fixed positions in the raceway ponds was determined every day, and evaporation (L m−2 day−1) was calculated according to Eq. (7):
Evaporation = (h1 − h2)/t, (7)
where h2 and h1 represent the average depth at time t (days) and at the start of the experiment, respectively, and S represents the area of the raceway ponds.
Determination of CO2 fixation rate
According to the mass balance of microalgae, the fixation rate of CO2 (mg L−1 day−1 or g m−2 day−1) was calculated from the relationship between the carbon content and the volumetric growth rate of the microalgal cells, as indicated in Eq. (8):
R(CO2) = BP × C(carbon) × (M(CO2)/M(C)), (8)
where BP is in mg L−1 day−1 or g m−2 day−1; C(carbon) is the carbon content of the biomass (g g−1), as determined by an elemental analyzer (Elementar Vario EL cube); M(CO2) is the molar mass of CO2; and M(C) is the molar mass of carbon (Additional file 3: Table S2).
Net energy ratio (NER) and energy balances
NER is defined as the ratio of the energy produced to the primary energy input, as represented in Eq. (9):
NER = Energy produced (lipid or biomass)/Energy requirements. (9)
On the basis of the data obtained in the 200 m2 ORP for cultivating M. dybowskii LB50 for 1 year, NER was estimated using the method discussed by Jorquera et al. [23]. Energy balance is defined as the difference between the energy produced and the primary energy input, as represented in Eq. (10):
Energy balance = Energy produced (lipid or biomass) − Energy requirements. (10)
Statistical analysis
The values are expressed as mean ± standard deviation. The data were analyzed by one-way ANOVA using SPSS (version 19.0). A statistically significant difference was considered at p < 0.05.
Results and discussion
Growth, lipid accumulation, and CO2 fixation rate of the three microalgae in 5 m2 ORPs outdoors
Three strains of potential microalgae (Additional file 4: Table S3) were grown in 5 m2 ORPs to evaluate their lipid accumulation and CO2 fixation potential; their growth and CO2 fixation are shown in Fig. 1. During the time course of the culture, the CO2 fixation rate was low at the beginning and at the stationary stage and highest at the exponential growth stage, reaching 163 mg L−1 day−1. At the late growth stage, the CO2 fixation rate was negative, indicating that the microalgal cells did not grow or died, releasing large amounts of CO2, possibly through respiratory metabolism.
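As a worked illustration of the quantities defined in the Methods (Eqs. 1 and 5-8) and of the evaporation-loss arithmetic used below, the following Python sketch computes BP, LC, LP, and the CO2 fixation rate. Only the formulas and molar masses come from the text; all input values, including the assumed carbon content of 0.5 g g−1, are invented for illustration.

```python
M_CO2, M_C = 44.01, 12.01  # molar masses of CO2 and carbon (g mol^-1)

def biomass_productivity(b1, b2, t):
    """Eq. (1): BP (mg L^-1 day^-1) from start/end dry-weight densities (mg L^-1)."""
    return (b2 - b1) / t

def lipid_content(w1, w2, w3):
    """Eq. (5): LC (%) from dried algae mass w1, beaker tare w2, beaker + lipid w3 (mg)."""
    return 100.0 * (w3 - w2) / w1

def lipid_productivity(bp, lc):
    """Eq. (6): LP (mg L^-1 day^-1), with LC given in percent."""
    return bp * lc / 100.0

def co2_fixation_rate(bp, c_carbon):
    """Eq. (8): CO2 fixation rate, in the same units as BP."""
    return bp * c_carbon * (M_CO2 / M_C)

bp = biomass_productivity(b1=100.0, b2=1200.0, t=15)  # ~73.3 mg L^-1 day^-1
lc = lipid_content(w1=90.0, w2=10000.0, w3=10027.0)   # 30.0 %
print(lipid_productivity(bp, lc))                     # ~22.0 mg L^-1 day^-1
print(co2_fixation_rate(bp, c_carbon=0.5))            # ~134 mg L^-1 day^-1

# Evaporation arithmetic: a 20 cm culture depth holds 200 L m^-2, so a loss of
# 0.9 L m^-2 day^-1 is 0.45 % day^-1, i.e. ~11.7 % over a 26-day run,
# consistent in magnitude with the loss rates reported for the 200 m2 ORP.
print(0.9 / 200 * 100, 0.9 / 200 * 26 * 100)
```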
Microbial contamination during large-scale algal cultivation can significantly and consistently reduce biomass production. In this context, eukaryotic contaminants such as amoebae, ciliates, and rotifers, as well as clusters of cells observed by microscopy, were found to cause biomass deterioration in the cultivation of P. falcata XJ-176. In the current study, this phenomenon was rarely observed during the cultivation of M. dybowskii LB50 and Micractinium sp. XJ-2. These results show that the two species possess high environmental tolerance, especially to the high light intensity in the desert (Additional file 6: Figure S2), and could inhibit the excessive growth of bacteria [16,24]. Consequently, M. dybowskii LB50 exhibited improved lipid accumulation potential outdoors, particularly during cultivation in the desert.
Two-stage induction culture of microalgae
In addition to selecting a fast-growing strain with high LC, improving the LC or biomass to increase the lipid yield is also necessary to enhance the economic feasibility of microalgae-based CO2 removal and biodiesel production [13,16,25]. LC can be improved in many ways [7,26], among which two-stage salt induction is very effective [27]. In our previous study, the LC of M. dybowskii LB50 was increased by 10% through NaCl induction in 140 L photobioreactors outdoors [16]. However, few studies on NaCl induction in ORPs have been conducted [17]. Figure 2 shows that the biomass was not significantly decreased on the first day of NaCl and industrial salt induction (p > 0.05), but was significantly reduced by the third day (p < 0.05). The effect of industrial salt induction on LC was similar to that of NaCl. LC increased by 7% on day 1 of induction and by 10% on day 2. Thus, LP was 3.3 g m−2 day−1, with no significant difference between 1 and 2 days of induction. Only 1 day of induction was therefore required, shortening the culture period. Meanwhile, the CO2 fixation rate was 78 mg L−1 day−1 during the induction period (Table 1). The pH of the culture liquid did not change significantly after adding NaCl or industrial salt, but the conductivity increased fivefold after adding the salt ions (Additional file 2: Table S1). Consequently, the two-stage industrial salt induction culture mode in ORPs favorably increased the LC and reduced the costs. Two-stage cultivation has previously been performed in closed photobioreactors outdoors. Tetraselmis sp. and Chlorella sp. were cultured in 120 L closed photobioreactors, and their lipid productivities were increased by a suitable CO2 concentration [11,28]. Moreover, NaCl induction in column photobioreactors was favorable [16]. However, these reports had not been verified in ORPs. Kelley [29] reported that LC can be increased by using a two-step method involving N deficiency and light conversion in 3 m2 ORPs. LP can also be increased by NaCl induction during dual-mode cultivation of a mixotrophic microalga in culture tubes [17]. In this study, we confirmed that LC can be significantly increased by salt induction not only in an open raceway pond (1000 L) but also using industrial salt.
Semi-continuous culture in 5 m2 ORPs
Given its convenient operation and cost-effectiveness, semi-continuous cultivation is also a good choice [30]. Semi-continuous cultivation has attracted considerable attention for energy microalgae [7,18,31]. Unfortunately, the culture medium used in semi-continuous cultivation cannot be reused an unlimited number of times because of differences in the nutrient consumption rates of the cells, as the following sketch illustrates.
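A toy calculation makes the limitation tangible: with the 2/3 replacement ratio and refill medium at half the start-up urea concentration used in this work, any mismatch between nutrient supply and cellular uptake leaves a residue that persists from cycle to cycle. The per-cycle uptake value below is an assumed illustration, not a measured rate.

```python
def simulate_semicontinuous(cycles=5, feed_urea=0.125, uptake=0.10, replace=2/3):
    """Toy model of repeated 2/3-volume replacement.

    feed_urea: urea in the refill medium (g L^-1; half of the 0.25 g L^-1
    start-up concentration, as in the Methods). uptake: urea consumed per
    cycle (g L^-1; an assumed value). Returns residual urea after each cycle."""
    urea = 0.25  # start-up concentration (g L^-1)
    residuals = []
    for _ in range(cycles):
        urea = max(urea - uptake, 0.0)                     # consumption during growth
        urea = (1 - replace) * urea + replace * feed_urea  # harvest + refill
        residuals.append(round(urea, 3))
    return residuals

print(simulate_semicontinuous())
# -> [0.133, 0.094, 0.083, 0.083, 0.083]: the residue settles at a nonzero
# level set by the feed; if uptake lags the feed, it persists and can build
# up cycle after cycle, gradually distorting the medium composition.
```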
Indeed, portions of certain nutrients build up with culture time and eventually inhibit cell growth. In the 1000 L ORP, the BP of M. dybowskii LB50 increased from 44.86 to 74.16 mg L−1 day−1 after repeated culture, and the LC remained stable at 30% (Fig. 3). Finally, the areal LP (ALP) increased from 2.73 to 4.58 g m−2 day−1 (Table 2), and the CO2 fixation rate increased from 16.1 to 26.7 g m−2 day−1 after repeated culture. During the whole semi-continuous culture, the CO2 fixation rate reached 23 g m−2 day−1 (114 mg L−1 day−1). The pH of the culture medium did not change significantly (9.14-9.52, Fig. 3c), indicating that growth proceeded consistently throughout the semi-continuous culture. However, the fluctuations in light intensity and temperature were large. Increased illumination and prolonged periods of light exposure were favorable factors for microalgal culture in desert areas, but the high evaporation due to increased illumination was unfavorable. Evaporation occurred at 1.62 L m−2 day−1 (Fig. 3). The minimum evaporation was 0.68 L m−2 day−1 at low temperature and light intensity (day 6, a rainy day), whereas the highest evaporation rate was 2.26 L m−2 day−1 at high temperature and light intensity in the 5 m2 ORP. The two-stage induction culture exhibited slightly higher LP than the semi-continuous culture over the same culture time in a 5 m2 ORP. However, the semi-continuous culture was more favorable for CO2 emission reduction than the two-stage induction culture. The semi-continuous culture also prolonged the culture period, reducing the need for fresh inoculum of the original strain.
Scaled-up semi-continuous cultivation in a 200 m2 ORP
Figure 4 shows the semi-continuous culture of M. dybowskii LB50 in a 200 m2 ORP (40,000 L) for a month. BP was 15.2 g m−2 day−1 during the initial growth (0-7 days). The highest BP was 26.8 g m−2 day−1 during the first cycle of semi-continuous culture, but it decreased in the second cycle because of rainy days (days 11-12, Fig. 4c). The average biomass productivity is reported in Table 3 for the month of semi-continuous culture over five cycles of replacement. The LC did not change significantly during the first four cycles but decreased significantly at the fifth passage; therefore, the LP also decreased during the fifth passage. The change in the CO2 fixation rate mirrored that of biomass production. The average CO2 fixation rate was 30.8 or 33.9 g m−2 day−1 over days 0-26 or 0-20, respectively (Table 3). Evaporation occurred at 0.88 ± 0.31 L m−2 day−1 in the 200 m2 ORP, and the maximal evaporation rate was 1.44 L m−2 day−1 under high light intensity (128-1568 μmol m−2 s−1). Even during a rainy day, a minimal evaporation loss of 0.39 L m−2 day−1, which included the leakages and washout of the ORP, was found. Therefore, the average daily evaporation loss rate was 0.44%, and the evaporation loss rate was 8.8-11.44% during the whole semi-continuous culture. Figure 4d shows that a small amount of urea can accumulate after each cycle of replacement. The accumulation of urea in the medium reached 0.05 g L−1 by the fourth cycle of semi-continuous culture. These results suggest that cell growth and lipid accumulation were affected by the build-up of certain nutrients and by residual dead cells in the medium as the number of cycles increased. Therefore, five cycles of repeated culture were conducted in this study. However, further scalable work can be continued for long-term cultivation with additional repeated cycles (Fig. 5). M.
dybowskii LB50 could exhibit stable growth for a month under semi-continuous culture. The biomass productivity and LC were maintained at 18-20 g m−2 day−1 and 30%, respectively. The CO2 fixation rate remained at 33 g m−2 day−1, but evaporation differed markedly among months. The evaporation rates were 0.39-1.44 L m−2 day−1 (mean = 0.9 L m−2 day−1), 0.56-3.29 L m−2 day−1 (mean = 1.6 L m−2 day−1), and 0.74-3.72 L m−2 day−1 (mean = 1.8 L m−2 day−1) in September 2014, July 2015, and August 2016, respectively. The evaporation loss rate of the semi-continuous culture was 6.5-13%. Water resources are a potential limitation for microalgal culture, and evaporation affects its scale and sustainability [32]. Furthermore, regions with high BP receive high solar irradiance and thus exhibit high evaporation rates [33]. Evaporation from ponds has been assumed to occur at a rate of 0.4 cm day−1 (i.e., 4 L m−2 day−1) [34]. Given the increased evaporation observed here, further work on cyclic water utilization and evaporation reduction should be conducted for sustainable cultivation. The replacement ratio, or dilution ratio - the volume ratio of new medium to total culture - is an important parameter in semi-continuous culture because it influences microalgal growth and the cells' biochemical composition. Ho et al. [35] reported that BP increases with the replacement ratio, whereas the lipid content shows the opposite trend. The 90% replacement group exhibited the highest overall LP among five replacement ratios (10, 30, 50, 70, and 90%). Some studies have reported that a semi-batch process with a 50% medium replacement ratio is suitable for microalgal biomass production and CO2 fixation [13,36]. In the current study, the LC was unaffected by the 2/3 replacement ratio, mainly because of the high intensity and long duration of light in the desert. Although the microalgal concentration in the reactor was not high, the cells could grow rapidly. Cycle time is another parameter affecting the continuity of semi-continuous culture. Previously, five to six cycles of repeated semi-continuous culture resulted in inhibited growth or decreased LC [35,37]. The LC of Desmodesmus sp. F2 significantly decreased at the sixth repeated cycle when five replacement ratios were adopted for semi-continuous cultivation over six repeated cycles [35]. In our 2/3 replacement test, the LC remained high throughout the five-cycle repeated course in the 200 m2 ORP. Table 4 shows the LP of microalgae in large-scale outdoor cultures. The largest scale was implemented in the cultivation of N. salina in the USA, with an LP of 10.7 m3 ha−1 year−1 [38], followed by the cultivation of M. dybowskii LB50, Graesiella sp. WBG-1, and M. dybowskii Y2 in 200 m2 ORPs (40,000 L). The LPs (5.3 g m−2 day−1) of M. dybowskii LB50 and M. dybowskii Y2 were higher than those of Graesiella sp. WBG-1 (2.9 g m−2 day−1) and the others in ORPs and tubular photobioreactors. An increased CO2 fixation ability (CO2 fixation rate of 34 g m−2 day−1) was obtained under semi-continuous mode with ORPs in the desert area (Table 4). These results indicate that high biomass production can be obtained and CO2 mitigation is feasible with microalgal culture in the desert. The volumetric LP (VLP) in ORPs was lower than that in photobioreactors (Table 4). Ultimately, all types of bioreactors must focus on the ALP for microalgae industry applications. In brief, the semi-continuous mode in ORPs is more practical for long-term cultivation than other operation modes in other bioreactors.
Thus, it is suitable for oleaginous microalgae industry applications because it is economical, convenient, and achieves a high ALP.
Energy consumption evaluation of outdoor cultivation in different culture modes
Biodiesel production from microalgae involves cultivation, centrifugation, drying, and extraction via a conventional method. We assumed that 100,000 kg dry weight of biomass was produced within the year (270 days). Other parameters were included in our assessment according to the actual operation. Table 5 shows that the NER for oil production (1.34-2.72) and biomass production (1.41-2.52) in the two-stage salt induction and semi-continuous culture modes was higher than that in the batch mode in 5 m2 ORPs. Moreover, in the 200 m2 ORP, the NER of oil production in the semi-continuous and batch modes was 1.52-2.69, indicating that the semi-continuous culture increased the biomass yield without additional energy consumption. The NER of oil and biomass production increased with the scale-up of the culture system. In addition, the energy demand for producing 1 kg of biodiesel was 14.2-23.3 MJ under semi-continuous mode in the 200 m2 ORP. Figure 6 shows that the energy consumption of cultivation accounted for the highest proportion (55-72%) under any culture mode. The energy balance in the two-stage salt induction culture mode was higher than that in the other modes, mainly because the increase of LC by industrial salt induction increased the energy produced as oil. The energy produced as oil was 1.27 times larger than that under the other modes for the same biomass production (100,000 kg), but the energy balance was only about 10% higher than that under semi-continuous mode. These results demonstrate that cultivation dominated the energy consumption and that this consumption was reduced by scaling up; the energy balance thus increased after scaling up. Moreover, the energy balance under semi-continuous mode was five times higher than that under batch mode in 5 m2 ORPs and 1.15 times higher in the 200 m2 ORP. Therefore, reducing energy consumption by intermittent agitation or by optimizing the mixing, mixing velocity, and paddlewheel design must be prioritized to reduce the energy consumption of the entire industrial chain [39].
Table 5. Comparative energy analyses for biomass or bio-oil production based on 1 year of cultivating M. dybowskii LB50 via different culture modes in ORPs. The assumed annual biomass production is 100,000 kg.
a Data based on this study.
b Determined by dividing the actual illuminated area by the production volume of each unit.
c 3.72 W m−3 from Jorquera et al. [23]; 12.5 W m−3 from the actual data for the 200 m2 raceway pond.
d Includes 8 h of daily pumping.
e Stepan et al. [52]: 539 kWh ton−1 biomass.
f Stephenson et al. [53]; Gao et al. [54]: 345.34 kWh ton−1 biomass.
g Energy content of net oil yield (assumed value of 39.04 MJ kg−1); Jorquera et al. [23].
h Energy content of net biomass yield (assumed value of 31.55 MJ kg−1); Jorquera et al. [23].
i NER would be above 1 if coproduct allocation were included [55].
NER is associated with the type of culture system: the NER of oil is generally less than 1 in tubular photobioreactors and greater than 1 in ORPs [23]. Ponnusamy et al. [40] reported that the energy demand for producing 1 kg of biodiesel is 28.23 MJ. Only 14-23 MJ was required per kg of biodiesel in this study, which significantly decreases the energy consumption. He et al.
[13] reported that the semi-continuous mode reduces the total costs (14.18 and 13.31 $ gal −1 ) by 14.27 and 36.62% compared with the costs of the batch mode for M. dybowskii Y2 and Chlorella sp. L1 in the desert area. Therefore, using the semi-continuous culture mode with ORPs in the desert area can achieve higher biomass, lower energy consumption, and lower costs than the other culture modes. Conclusion Three microalgae were investigated for their environmental tolerance and lipid production potential in outdoor ORPs, and M. dybowskii LB50 could be efficiently cultivated using the resources available in the desert. Lipid production can be improved by using the two-stage salt induction and semi-continuous culture modes in ORPs. After 3 years of operation, M. dybowskii LB50 was successfully and stably cultivated under semi-continuous mode for a month (five cycles of repeated culture) in 200 m 2 ORPs in the desert, reducing the need to resupply the original inoculum. The BP and CO 2 fixation rates were maintained at 18 and 33 g m −2 day −1 , respectively. The LC decreased only during the fifth cycle of repeated culture. Evaporation occurred at 0.9-1.8 L m −2 day −1 (an evaporation loss rate of 6.5-13%). Finally, life-cycle energy analysis showed that using the semi-continuous and two-stage salt induction modes for cultivating M. dybowskii LB50 can reduce energy consumption and increase the energy balance. Therefore, M. dybowskii LB50 is a promising candidate for the large-scale, outdoor production of biodiesel feedstock in desert areas. The outdoor ORP cultivation system together with the semi-continuous culture method in desert areas is a suitable strategy to further decrease the cultivation cost and increase the biomass/oil production and CO 2 mitigation potential of M. dybowskii LB50.
Search for pair production of gluinos decaying via stop and sbottom in events with $b$-jets and large missing transverse momentum in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector A search for Supersymmetry involving the pair production of gluinos decaying via third-generation squarks to the lightest neutralino is reported. It uses an LHC proton--proton dataset at a center-of-mass energy $\sqrt{s} = 13$ TeV with an integrated luminosity of 3.2 fb$^{-1}$ collected with the ATLAS detector in 2015. The signal is searched for in events containing several energetic jets, of which at least three must be identified as $b$-jets, large missing transverse momentum and, potentially, isolated electrons or muons. Large-radius jets with a high mass are also used to identify highly boosted top quarks. No excess is found above the predicted background. For neutralino masses below approximately 700 GeV, gluino masses of less than 1.78 TeV and 1.76 TeV are excluded at the 95% CL in simplified models of the pair production of gluinos decaying via sbottom and stop, respectively. These results significantly extend the exclusion limits obtained with the $\sqrt{s} = 8$ TeV dataset. Introduction Supersymmetry (SUSY) [1][2][3][4][5][6] is a generalization of space-time symmetries that predicts new bosonic partners to the fermions and new fermionic partners to the bosons of the Standard Model (SM).If Rparity is conserved [7], SUSY particles are produced in pairs and the lightest supersymmetric particle (LSP) is stable.The scalar partners of the left-and right-handed quarks, the squarks qL and qR , can mix to form two mass eigenstates q1 and q2 , ordered by increasing mass.SUSY can solve the hierarchy problem [8][9][10][11] by preventing "unnatural" fine-tuning in the Higgs sector provided that the superpartners of the top quark (stop, t1 and t2 ) have masses not too far above the weak scale.Because of the SM weak isospin symmetry, the mass of the left-handed bottom quark scalar partner (sbottom, bL ) is tied to the mass of the left-handed top quark scalar partner ( tL ), and as a consequence the mass of the lightest sbottom b1 is also expected to be close to the weak scale.The fermionic partners of the gluons, the gluinos (g), are also constrained by naturalness [12,13] to have a mass around the TeV scale in order to limit their contributions to the radiative corrections to the stop masses.For these reasons, and because the gluinos are expected to be pair-produced with a high cross-section at the Large Hadron Collider (LHC), the search for gluino production with decays via stop and sbottom quarks is highly motivated at the LHC.This paper presents the search for gluino pair production where both gluinos either decay to stops via g → t1 t, or to sbottoms via g → b1 b, using a dataset of 3.2 fb −1 of proton-proton data collected with the ATLAS detector [14] at a center-of-mass energy of √ s = 13 TeV.Each stop (sbottom) is then assumed to decay to a top (bottom) quark and the LSP: t1 → t χ0 1 ( b1 → b χ0 1 ).The LSP is assumed to be the lightest neutralino χ0 1 , the lightest linear superposition of the superpartners of the neutral electroweak and Higgs bosons.The χ0 1 interacts only weakly, resulting in final states with substantial missing transverse momentum of magnitude E miss T .Diagrams of the simplified models [15,16] considered, which are referred to as "Gbb" and "Gtt" in the following, are shown in Figures 1(a) and 1(b), respectively.The sbottom and stop are assumed to be produced off-shell 
such that the gluinos undergo the three-body decay g → b b χ0 1 or g → t t χ0 1 , and that the only parameters of the simplified models are the gluino and χ0 1 masses. 1 The Gbb experimental signature consists of four energetic b-jets (i.e.jets containing b-hadrons) and large E miss T .In order to maintain high signal efficiency, at least three of four required jets must be identified as b-jets (b-tagged).This requirement is very effective in rejecting t t events, which constitute the main background for both the Gbb and Gtt signatures, and which contain only two b-jets unless they are produced with additional heavy-flavor jets.The Gtt experimental signature also contains four b-jets and E miss T , but yields in addition four W bosons originating from the top quark decays t → Wb.Each W boson can either decay leptonically (W → ν) or hadronically (W → q q ).A Gtt event would therefore possess a high jet multiplicity, with as many as 12 jets originating from top quark decays and, potentially, isolated charged leptons.In this paper, pair-produced gluinos decaying via stop and sbottom quarks are searched for using events with high jet multiplicity, of which at least three must be identified as b-jets, large E miss T , and either zero leptons (referred to as Gtt 0-lepton channel) or at least one identified charged lepton 2 (referred to as Gtt 1-lepton channel).For both the Gbb and Gtt models, several signal regions are designed to cover different ranges of gluino and χ0 1 masses.For the Gtt models with a large mass difference (mass splitting) between the gluino and χ0 1 , the top quarks tend to be highly boosted and their decay products collimated.In the corresponding signal regions, at least one large-radius, trimmed [18] jet, which is re-clustered from small-radius jets [19], is required to have a high mass to identify hadronically decaying boosted top quarks. Pair production of gluinos, with subsequent decays via sbottom quarks, was searched for in ATLAS Run 1 with a similar analysis requiring at least three b-tagged jets [17].It excluded gluino masses below 1290 GeV for LSP masses below 400 GeV at 95% confidence level (CL).That analysis also searched for gluinos decaying via stop quarks in events with at least three b-tagged jets and either zero or at least one identified lepton and obtained the best ATLAS limits for the Gtt models with massless and moderately massive LSP [20].Gluino masses below 1400 GeV were excluded at 95% CL for LSP masses below 400 GeV.Pairproduced gluinos with stop-mediated decays have also been searched for by ATLAS in events with high jet multiplicity [21], events with at least one lepton, many jets, and E miss T [22], and events containing pairs 1 Models with on-shell sbottom and stop were studied in Run 1 [17] and the limits on the gluino and the χ0 1 masses were found to be mostly independent of the stop and sbottom masses, except when the stop is very light. 2 The term "lepton" refers exclusively to an electron or a muon in this paper. 
The dominant background in the signal regions is the production of t t pairs with additional high-p T jets.The sample for the estimation of this background is generated using the Powheg-Box [36,37] generator at next-to-leading order (NLO) with CT10 [38] PDFs and interfaced to Pythia v6.428 [39] for showering and hadronization.The decays of heavy-flavor hadrons are modeled using the EvtGen [40] package.The h damp parameter in Powheg, which controls the p T of the first additional emission beyond the Born level and thus regulates the p T of the recoil emission against the t t system, is set to the mass of the top quark (m top = 172.5 GeV).This setting was found to give the best description of the p T of the t t system at √ s = 7 TeV [41] and √ s = 8 TeV [42].All events with at least one semileptonically decaying top quark are included.Fully hadronic t t events do not contain sufficient E miss T to contribute significantly to the background. Smaller backgrounds in the signal region come from the production of t t pairs in association with W/Z/h and additional jets, single-top production, production of t tt t, W/Z+jets and WW/WZ/ZZ (diboson) events.The production of t t pairs in association with electroweak vector bosons and t tt t production are modeled by samples generated using MadGraph [43] interfaced to Pythia v8.186, while samples to model t th production are generated using MadGraph5_aMC@NLO [33] v2.2.1 and showered with Herwig++ [44] v2.7.1.Single-top production in the s-, t-and Wt-channel are generated by Powheg-Box interfaced to Pythia v6.428.W/Z+jets and diboson processes are simulated using the Sherpa v2.1.1 [45] generator with CT10 PDF sets.Matrix elements for these processes are calculated using the Comix [46] and OpenLoops [47] generators and merged with the Sherpa parton shower [48] using the ME+PS@NLO prescription [49]. All simulated event samples, with the exception of the Gbb signals, are passed through full ATLAS detector simulation using Geant4 [50,51].The Gbb signal samples are passed through a fast simulation that uses a parameterized description to simulate the response of the calorimeter systems [52].The simulated events are reconstructed with the same algorithm as that used for data.All Pythia v6.428 samples use the PERUGIA2012 [53] set of tuned parameters (tune) for the underlying event, while Pythia v8.186 and Herwig++ showering are run with the A14 [54] and UEEE5 [55] underlying-event tunes, respectively.In-time and out-of-time pileup interactions from the same or nearby bunch-crossings are simulated by overlaying additional pp collisions generated by Pythia v8.186 on the hard-scattering events.Details of the sample generation and normalization are summarized in Table 1.Additional samples with different generators and settings are used to estimate systematic uncertainties on the backgrounds, as described in Section 6. The signal samples are normalized using the best cross-sections calculated at NLO in the strong coupling constant, adding the resummation of soft gluon emission at next-to-leading-logarithmic (NLL) accuracy [56][57][58][59][60].The nominal cross-section and the uncertainty are taken from an envelope of cross-section predictions using different PDF sets and factorization and renormalization scales, as described in Ref. 
[61].The cross-section of gluino pair-production in these simplified models is approximately 325 fb for a gluino mass of 1 TeV, falling to 2.8 fb for 1.8 TeV mass gluinos.All background processes are normalized to the best available theoretical calculation for their respective cross-sections.The order of this calculation in perturbative QCD (pQCD) for each process is listed in Table 1. Object reconstruction Interaction vertices from the proton-proton collisions are reconstructed from at least two tracks with p T > 0.4 GeV, and are required to be consistent with the beamspot envelope.The primary vertex is identified as the one with the largest sum of squares of the transverse momenta from associated tracks ( |p T,track | 2 ) [69]. Basic selection criteria are applied to define candidates for electrons, muons and jets in the event.An overlap removal procedure is applied to these candidates to prevent double-counting.Further requirements are then made to select the final signal leptons and jets from the remaining objects.The details of the object selections and of the overlap removal procedure are given below. Candidate jets are reconstructed from three-dimensional topological energy clusters [70] in the calorimeter using the anti-k t jet algorithm [71] with a radius parameter of 0.4 (small-R jets).Each topological cluster is calibrated to the electromagnetic scale response prior to jet reconstruction.The reconstructed jets are then calibrated to the particle level by the application of a jet energy scale (JES) derived from simulation and corrections based on 8 TeV data [72,73].Quality criteria are imposed to reject events that contain at least one jet arising from non-collision sources or detector noise [74].Further selections are applied to reject jets that originate from pileup interactions [75].Candidate jets are required to have p T > 20 GeV and |η| < 2.8.Signal jets, selected after resolving overlaps with electrons and muons, are required to satisfy the stricter requirement of p T > 30 GeV. A multivariate algorithm using information about the impact parameters of inner detector tracks matched to the jet, the presence of displaced secondary vertices, and the reconstructed flight paths of b-and chadrons inside the jet [76][77][78] is used to tag b-jets.The b-tagging working point with an 85% efficiency, as determined from a simulated sample of t t events, was found to be optimal.The corresponding rejection factors against jets originating from c-quarks, from τ-leptons and from light quarks and gluons in the same sample at this working point are 2.6, 3.8 and 27, respectively. The candidate small-R jets are used as inputs for further jet re-clustering [19] using the anti-k t algorithm with a radius parameter of 1.0.These re-clustered jets are then trimmed [18,19] by removing subjets whose p T falls below f cut = 5% of the p T of the original re-clustered jet.The resulting large-R jets are used to tag high-p T boosted top quarks in the event.Selected large-R jets are required to have p T > 300 GeV and to have |η| < 2.0.A large-R jet is tagged as a top candidate if it has a mass above 100 GeV.When it is not explicitly stated otherwise, the term "jets" in this paper refers to small-R jets. 
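The jet trimming and top-tagging procedure just described is algorithmically simple; the sketch below illustrates it for subjets given as (px, py, pz, E) four-vectors. The function names and input format are illustrative assumptions, not the ATLAS reconstruction code:

```python
import math

# Minimal sketch of re-clustered jet trimming and top tagging: subjets
# carrying less than f_cut = 5% of the re-clustered jet pT are removed,
# and the trimmed jet is tagged as a top candidate if pT > 300 GeV,
# |eta| < 2.0 and mass > 100 GeV, as described above.

def vec_sum(jets):
    return tuple(sum(c) for c in zip(*jets))

def pt(v):
    return math.hypot(v[0], v[1])

def mass(v):
    px, py, pz, e = v
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def eta(v):
    p = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return 0.5 * math.log((p + v[2]) / (p - v[2]))

def trim(subjets, f_cut=0.05):
    """Drop subjets below f_cut of the untrimmed re-clustered jet pT."""
    jet_pt = pt(vec_sum(subjets))
    return [sj for sj in subjets if pt(sj) >= f_cut * jet_pt]

def is_top_tagged(subjets):
    trimmed = vec_sum(trim(subjets))
    return (pt(trimmed) > 300.0 and abs(eta(trimmed)) < 2.0
            and mass(trimmed) > 100.0)
```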
Electron candidates are reconstructed from energy clusters in the electromagnetic calorimeter and inner detector tracks and are required to satisfy a set of "loose" quality criteria [79][80][81].They are also required to have |η| < 2.47.Muon candidates are reconstructed from matching tracks in the inner detector and in the muon spectrometer.They are required to meet "medium" quality criteria, as described in Refs.[82,83] and to have |η| < 2.5.All electron and muon candidates must have p T > 20 GeV and survive the overlap removal procedure.Signal leptons are chosen from the candidates with the following isolation requirement -the scalar sum of p T of additional inner detector tracks in a cone around the lepton track is required to be <5% of the lepton p T .The angular separation between the lepton and the b-jet ensuing from a semileptonic top quark decay narrows as the p T of the top quark increases.This increased collimation is accounted for by varying the radius of the isolation cone as max(0.2,10/p lep T ), where p lep T is the lepton p T expressed in GeV.Signal electrons are further required to meet the "tight" quality criteria, while signal muons are required to satisfy the same "medium" quality criteria as the muon candidates.Electrons (muons) are matched to the primary vertex by requiring the transverse impact parameter d 0 to satisfy |d 0 |/σ(d 0 ) < 5 (3), where σ(d 0 ) is the measured uncertainty in d 0 , and the longitudinal impact parameter z 0 to satisfy |z 0 sin θ| < 0.5 mm.In addition, events containing one or more muon candidates with |d 0 | (|z 0 |) > 0.2 mm (1 mm) are rejected to suppress cosmic rays. The overlap removal procedure between muon and jet candidates is designed to remove those muons that are likely to have originated from the decay of hadrons and to retain the overlapping jet.Jets and muons may also appear in close proximity when the jet results from high-p T muon bremsstrahlung, and in such cases the jet should be removed and the muon retained.Such jets are characterized by having very few matching inner detector tracks.Therefore, if the angular distance ∆R between a muon and a jet is within min(0.4,0.04 + 10 GeV/p T ) of the axis of a jet, 4 the muon is removed only if the jet has ≥3 matching inner detector tracks.If the jet has fewer than three matching tracks, the jet is removed and the muon is kept [84].Overlap removal between electron and jet candidates aims to remove jets that are formed primarily from the showering of a prompt electron and to remove electrons that are produced in the decay chains of hadrons.Since electron showers within the cone of a jet contribute to the measured energy of the jet, any overlap between an electron and the jet must be fully resolved.A p T -dependent cone for the purpose of this overlap removal is thus impractical.Consequently, any non-b-tagged jet whose axis lies ∆R < 0.2 from an electron is discarded.If the electron is within ∆R = 0.4 of the axis of any jet remaining after this initial overlap removal procedure, the jet is retained and the electron is removed.Finally, electron candidates that lie ∆R < 0.01 from muon candidates are removed to suppress contributions from muon bremsstrahlung. 
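The muon-jet overlap removal rule above is a compact decision procedure. A minimal sketch of that rule, with objects as simple dicts (the representation is an assumption, not the ATLAS implementation):

```python
import math

# Decision rule from the text: within a pT-dependent cone, a muon near a
# jet with >= 3 matching tracks is dropped (likely heavy-flavor decay);
# otherwise the jet is dropped (likely muon bremsstrahlung).

def delta_r(a, b):
    dphi = math.remainder(a["phi"] - b["phi"], 2 * math.pi)
    deta = a["eta"] - b["eta"]
    return math.hypot(deta, dphi)

def resolve_muon_jet(muon, jet):
    """Return (keep_muon, keep_jet) for one muon-jet pair."""
    cone = min(0.4, 0.04 + 10.0 / muon["pt"])  # pt in GeV
    if delta_r(muon, jet) >= cone:
        return True, True            # no overlap: keep both
    if jet["n_tracks"] >= 3:
        return False, True           # likely hadron decay: drop the muon
    return True, False               # likely bremsstrahlung jet: drop the jet
```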
The missing transverse momentum (E miss T ) in the event is defined as the magnitude of the negative vector sum transverse momentum ( p T miss ) of all selected and calibrated objects in the event, with an extra term added to account for soft energy that is not associated to any of the selected objects. This soft term is calculated from inner detector tracks matched to the primary vertex to make it more resilient to contamination from pileup interactions [85,86]. Corrections derived from data control samples are applied to simulated events to account for differences between data and simulation in the reconstruction efficiencies, momentum scale and resolution of leptons [80][81][82]87] and in the efficiency and false positive rate for identifying b-jets [77,78]. Event selection The event selection criteria are defined based on kinematic requirements on the objects defined in Section 4 and on the following event variables. Two effective mass variables are used, which would typically have much higher values in pair-produced gluino events than in background events. The Gtt signal regions employ the inclusive effective mass m incl eff :
$$m^{\mathrm{incl}}_{\mathrm{eff}} = \sum_{\mathrm{jets}} p_{\mathrm T}^{\mathrm{jet}} + \sum_{\mathrm{leptons}} p_{\mathrm T}^{\ell} + E_{\mathrm T}^{\mathrm{miss}},$$
where the first and second sums are over the signal jets and leptons, respectively. The signal regions for the Gbb models, for which four high-p T b-jets are expected, are defined using m 4j eff :
$$m^{4j}_{\mathrm{eff}} = \sum_{i=1}^{4} p_{\mathrm T}^{\mathrm{jet},i} + E_{\mathrm T}^{\mathrm{miss}},$$
where the sum is over the four highest-p T (leading) signal jets in the event. In regions with at least one signal lepton, the transverse mass m T of the leading signal lepton (ℓ) and E miss T is used to discriminate between the signal and backgrounds from semileptonic t t and W+jets events:
$$m_{\mathrm T} = \sqrt{2\, p_{\mathrm T}^{\ell}\, E_{\mathrm T}^{\mathrm{miss}} \left[1 - \cos\Delta\phi\!\left(\ell, \vec p_{\mathrm T}^{\,\mathrm{miss}}\right)\right]}.$$
Neglecting resolution effects, m T is bounded from above by the W boson mass for these backgrounds and typically has higher values for Gtt events. Another useful transverse mass variable is m b−jets T,min , the minimum transverse mass formed by E miss T and any of the three leading b-tagged jets in the event:
$$m^{b\text{-jets}}_{\mathrm{T,min}} = \min_{i \le 3} \sqrt{2\, p_{\mathrm T}^{b\text{-jet}_i}\, E_{\mathrm T}^{\mathrm{miss}} \left[1 - \cos\Delta\phi\!\left(b\text{-jet}_i, \vec p_{\mathrm T}^{\,\mathrm{miss}}\right)\right]}.$$
It is bounded below the top quark mass for semileptonic t t events while peaking at higher values for Gbb and Gtt events. The signal regions require either zero or at least one lepton. The requirement of a signal lepton, with the additional requirements on jets, E miss T and event variables described in Section 5.1, render the multijet background negligible for the ≥ 1-lepton signal regions. For the 0-lepton signal regions, the minimum azimuthal angle between p T miss and the leading four small-R jets in the event, ∆φ 4j min , is required to be greater than 0.4:
$$\Delta\phi^{4j}_{\min} = \min_{i \le 4} \left|\phi_{\mathrm{jet},i} - \phi\!\left(\vec p_{\mathrm T}^{\,\mathrm{miss}}\right)\right| > 0.4.$$
This requirement ensures that the multijet background, which can produce large E miss T if containing poorly measured jets or neutrinos emitted close to the axis of a jet, is also negligible in the 0-lepton signal regions (along with the other requirements on jets, E miss T and event variables described in Section 5.1).
Figure 2: Distributions of (a) E miss T , (b) m incl eff , (c) m b−jets T,min , and (d) m T (the latter for preselected events with at least one signal lepton). The statistical and experimental systematic uncertainties are included in the uncertainty band, where the systematic uncertainties are defined in Section 6. The lower part of each figure shows the ratio of data to the background prediction. All backgrounds (including t t) are normalized using the best available theoretical calculation described in Section 3.
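A minimal sketch of how these event variables can be computed from reconstructed objects given as (pt, phi) pairs in GeV and radians. The function names and input format are illustrative assumptions, not the ATLAS analysis code:

```python
import math

def m_t(pt_obj, phi_obj, met, phi_met):
    """Transverse mass of an object (lepton or jet) and the missing momentum."""
    return math.sqrt(2.0 * pt_obj * met * (1.0 - math.cos(phi_obj - phi_met)))

def m_eff_incl(jets, leptons, met):
    """Inclusive effective mass: scalar sum of jet and lepton pT plus MET."""
    return sum(pt for pt, _ in jets) + sum(pt for pt, _ in leptons) + met

def m_t_min_bjets(bjets, met, phi_met):
    """Minimum transverse mass of MET and the three leading b-tagged jets."""
    leading3 = sorted(bjets, key=lambda j: j[0], reverse=True)[:3]
    return min(m_t(pt, phi, met, phi_met) for pt, phi in leading3)

def dphi_4j_min(jets, phi_met):
    """Minimum azimuthal distance between MET and the four leading jets."""
    leading4 = sorted(jets, key=lambda j: j[0], reverse=True)[:4]
    return min(abs(math.remainder(phi - phi_met, 2 * math.pi))
               for _, phi in leading4)
```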
Signal regions The signal regions are designed by optimizing the expected signal discovery reach for the 2015 dataset. They are defined in the leftmost column of Tables 2, 3 and 4 for the Gbb, Gtt 0-lepton and Gtt 1-lepton channels, respectively, and are discussed below. These tables also contain the definition of the control regions used to normalize the t t background, discussed in Section 5.2, and the validation regions used to cross-check the background estimate, which are discussed in Section 5.3. The following region nomenclature is used in the remainder of the paper. Signal, control and validation region names start with the prefix "SR", "CR" and "VR", respectively, with the type of validation region specified for the Gtt validation regions. The name of the region is completed by the type of model targeted and a letter corresponding to the level of mass splitting between the gluino and the LSP. For example, the validation region that cross-checks the extrapolation over m T for the Gtt 1-lepton region A is denoted by "VR-m T -Gtt-1L-A". The experimental signature for the Gbb model is characterized by four high-p T b-jets, large E miss T and no leptons (Figure 1(a)). The following requirements are applied to all Gbb signal regions. Events containing a candidate lepton are vetoed and at least four signal small-R jets are required, of which at least three must be b-tagged. The remaining multijet background is rejected by requiring ∆φ 4j min > 0.4. The Gbb signal regions are described in the leftmost column of Table 2. The three signal regions A, B and C are designed to cover Gbb models with large (≳ 1 TeV), moderate (between ≈ 200 GeV and ≈ 1 TeV) and small (≲ 200 GeV) mass splittings between the gluino and the LSP, respectively. All regions feature stringent cuts on E miss T , m 4j eff and the jet transverse momentum p T jet . The experimental signature for the Gtt model is characterized by several high-p T jets of which four are b-jets, large E miss T and potentially leptons (Figure 1(b)). The Gtt signal regions are classified into regions with a signal lepton veto (0-lepton channel) and regions with at least one signal lepton (1-lepton channel). The Gtt 0-lepton signal regions are defined in the leftmost column of Table 3. In all Gtt 0-lepton signal regions at least eight signal jets, ∆φ 4j min > 0.4 and m b−jets T,min > 80 GeV are required. Three Gtt 0-lepton signal regions are defined to cover Gtt models with decreasing mass splitting between the gluino and the sum of the mass of the two top quarks and the LSP: A (≳ 1 TeV), B (between ≈ 200 GeV and ≈ 1 TeV) and C (≲ 200 GeV). In the large and moderate mass splitting scenarios, the top quarks tend to have a large p T , and at least one top-tagged large-R jet is required (N top ≥ 1). The requirements on E miss T and m incl eff decrease with the mass splitting between the gluino and the LSP. However, the required number of b-tagged jets N b−jet is tightened to four for the lower mass splitting regions B and C in order to maintain a high background rejection despite the softer signal kinematics. The Gtt 1-lepton signal regions are defined in the leftmost column of Table 4. Two signal regions A and B are defined to cover Gtt models with decreasing mass difference between the gluino and the LSP. In all signal regions at least one signal lepton, at least six signal jets (p T jet > 30 GeV) and m T > 150 GeV are required. Region A has tighter requirements on m incl eff (m incl eff > 1100 GeV) and the number of top-tagged large-R jets (N top ≥ 1). Region B has a softer requirement on m incl eff than region A, but it features a tighter cut on E miss T to achieve a satisfactory background rejection without requiring a top-tagged large-R jet.
Background estimation and t t control regions The largest background in all signal regions is t t produced with additional high-p T jets. The other relevant backgrounds are t tW, t tZ, t tt t, t th, single-top, W+jets, Z+jets and diboson events. All of these smaller backgrounds are estimated with the simulated event samples normalized to the best available theory calculations described in Section 3. The multijet background is estimated to be negligible in all regions. For each signal region, the t t background is normalized in a dedicated control region. The t t normalization factor required for the total predicted yield to match the data in the control region is used to normalize the t t background in the signal region. The control regions are designed to be dominated by t t events and to have negligible signal contamination, while being kinematically as close as possible to the corresponding signal region. The latter requirement minimizes the systematic uncertainties associated with extrapolating the normalization factors from the control to the signal regions. The definitions of the control regions are shown next to the signal regions in Tables 2, 3 and 4 for the Gbb, Gtt 0-lepton and Gtt 1-lepton channels, respectively. In both the Gbb and Gtt 0-lepton channels, exactly one signal lepton is required. This is motivated by background composition studies using simulated events which show that semileptonic t t events, for which the lepton is outside the acceptance or is a hadronically decaying τ-lepton, dominate the t t yield in the signal regions. An upper cut on m T is then applied to ensure orthogonality with the Gtt 1-lepton signal regions and to suppress signal contamination. The jet multiplicity requirement is reduced to seven jets in the Gtt 0-lepton control regions (from eight jets in the signal regions), to accept more events and to obtain a number of jets from top quark decay and parton shower similar to that in the signal region.
Table 3: Definitions of the Gtt 0-lepton signal, control and validation regions (criteria common to all regions: p T jet > 30 GeV). The unit of all kinematic variables is GeV except ∆φ 4j min , which is in radians. The jet p T requirement is also applied to b-tagged jets.
Approximately 40-60% of the signal region events contain a hadronically decaying τ-lepton that is counted as a jet. Orthogonality between the Gtt 0-lepton and Gtt 1-lepton control regions is ensured by requiring exactly six jets in the Gtt 1-lepton control regions (as opposed to the requirement of at least six jets in the signal regions). For all Gbb and Gtt 0-lepton control regions, the number of b-tagged jets and top-tagged large-R jets is consistent with the signal region. The requirements on E miss T and m eff are, however, relaxed in the control regions to achieve a sufficiently large t t yield and small signal contamination (≲ 15%). The Gtt 1-lepton control regions are defined by inverting the m T cut and removing the m b−jets T,min requirement. All other requirements are exactly the same as for the signal regions.
Table 4: Definitions of the Gtt 1-lepton signal, control and validation regions (criteria common to all regions: ≥ 1 signal lepton, p T jet > 30 GeV). The unit of all kinematic variables is GeV. The jet p T requirement is also applied to b-tagged jets.
Validation regions Validation regions are defined to cross-check the background prediction in regions that are kinematically close to the signal regions but have small signal contamination. They are designed primarily to cross-check the assumption that the t t normalization extracted from the control regions can be accurately extrapolated to the signal regions. Their requirements are shown in the rightmost column(s) of Tables 2, 3 and 4 for the Gbb, Gtt 0-lepton and Gtt 1-lepton channels, respectively. Their signal contamination is less than approximately 30% for the majority of Gbb and Gtt model points not excluded in Run 1. One validation region per signal region is defined for the Gbb model. They feature the same requirements as their corresponding signal region except that upper cuts are applied on m b−jets T,min and m 4j eff to reduce signal contamination and ensure orthogonality with the signal regions. In addition, the requirement on E miss T is relaxed to obtain a sufficient t t yield. For the Gtt 0-lepton channel, two validation regions per signal region are defined, one requiring exactly one signal lepton (VR1L) and one with a signal lepton veto (VR0L). The VR1L regions have exactly the same criteria as their corresponding control regions except that they require m b−jets T,min > 80 GeV, similarly to the signal regions, in order to test the extrapolation over m b−jets T,min between the control and the signal regions. Simulation studies show that the heavy-flavor fraction of the additional jets in the t t+jets events (i.e. t t + b b and t t + cc), which suffers from large theoretical uncertainties, is similar in the signal, control and VR1L regions. This is achieved by requiring the same number of b-tagged jets for all three types of regions. While these theoretical uncertainties are large, they affect the signal, control and 1-lepton validation regions in a similar way, and thus largely cancel in the semi-data-driven t t normalization based on the observed control region yields.
The VR0L regions have requirements similar to their corresponding signal regions except that the requirements on E miss T , m incl eff and the number of b-tagged jets are loosened to achieve sufficient event yields. Furthermore, the criterion m b−jets T,min < 80 GeV is applied to all VR0L regions to ensure orthogonality with the signal regions. The VR0L regions test the extrapolation of the t t normalization from a 1-lepton to a 0-lepton region. Simulation studies show that the VR0L regions have a composition of semileptonic t t events (in particular of hadronically decaying τ-leptons) similar to that in the signal regions, while the control and VR1L regions are by construction dominated by semileptonic t t events with a muon or an electron. Two requirements differ between the Gtt 1-lepton control regions and their corresponding signal regions: the requirement on m b−jets T,min (absent in the control regions) and the requirement on m T (inverted in the control regions). Therefore, two validation regions per signal region are defined for the Gtt 1-lepton channel, VR-m T and VR-m b−jets T,min , which respectively test, one at a time, the extrapolations over m T and m b−jets T,min . Exactly three b-tagged jets are required for all 1-lepton validation regions to limit the signal contamination and to be close to the signal regions. For the VR-m T regions, the same requirement m T > 150 GeV as in the signal region is applied but the criterion on m b−jets T,min is inverted. Other requirements are relaxed to achieve sufficiently large background yields and small signal contamination. For the VR-m b−jets T,min regions, the signal region requirement on m b−jets T,min is applied (slightly loosened to 140 GeV instead of 160 GeV in region A) and the criterion on m T is inverted. Again, other requirements are generally relaxed. Simulation studies show that t t dilepton events dominate in the signal regions, in particular due to the requirement on m T , while semileptonic t t events dominate in the control regions. This extrapolation is cross-checked by the VR-m T regions, which have a t t dileptonic fraction similar to that in the signal regions. Systematic uncertainties The largest sources of detector-related systematic uncertainties in this analysis relate to the jet energy scale (JES), jet energy resolution (JER) and the b-tagging efficiencies and mistagging rates. The JES uncertainties are obtained by extrapolating the uncertainties derived from √ s = 8 TeV data and simulations to √ s = 13 TeV [72]. The uncertainties in the energy scale of the small-R jets are propagated to the re-clustered large-R jets, which use them as inputs. The JES uncertainties are especially important in the Gtt signal regions, since these regions require high jet multiplicities. The impact of these uncertainties on the expected background yields in these regions is between 10% and 25%. Uncertainties in the JER are similarly derived from dijet asymmetry measurements in Run 1 data and extrapolated to √ s = 13 TeV. The impact of the JER uncertainties on the background yields is in the range of 1-10%. Uncertainties in the measured b-tagging efficiencies and mistagging rates are the subleading sources of experimental uncertainties in the Gtt 1-lepton signal regions and the leading source in the Gtt 0-lepton and Gbb regions. Uncertainties measured in √ s = 8 TeV data are extrapolated to √ s = 13 TeV, with the addition of the new IBL system in Run 2 taken into account. Uncertainties for jet p T above 300 GeV are estimated using simulated events. The impact of the b-tagging uncertainties on the expected background yields in the Gbb and Gtt 0-lepton signal regions is around 22-30%, and around 15% in the Gtt 1-lepton signal regions.
The uncertainties associated with lepton reconstruction and energy measurements have very small impact on the final results.All lepton and jet measurement uncertainties are propagated to the calculation of E miss T , and additional uncertainties are included in the scale and resolution of the soft term.The overall impact of the E miss T soft term uncertainties is also small.Uncertainties in the modeling of the t t background are evaluated using additional samples varied by each systematic uncertainty.Hadronization and parton showering uncertainties are estimated using a sample generated with Powheg and showered by Herwig++ v2.7.1 [44] with the UEEE5 underlyingevent tune [55].Systematic uncertainties in the modeling of initial-and final-state radiation are explored with two alternative settings of Powheg, both of which are showered by Pythia v6.428 as for the nominal sample.The first of these uses the PERUGIA2012radHi tune and has the renormalization and factorization scales set to twice the nominal value, resulting in more radiation in the final state.It also has h damp set to 2m top .The second sample, using the PERUGIA2012radLo tune, has h damp = m top and the renormalization and factorization scales are set to half of their nominal values, resulting in less radiation in the event.In each case, the uncertainty is taken as the deviation in the expected yield of t t background with respect to the nominal sample.The uncertainty due to the choice of generator is estimated by comparing the expected yields obtained using a t t sample generated with MadGraph5_aMC@NLO , and one that is generated with Powheg.Both of these samples are showered with Herwig++ v2.7.1.Finally, a 30% uncertainty is assigned to the cross-section of t t events with additional heavy-flavor jets in the final state, in accordance with the results of the ATLAS measurement of this cross-section at √ s = 8 TeV [88].Uncertainties in single-top and W/Z+jets background processes are similarly estimated by comparisons between the nominal sample and samples with different generators, showering models and radiation tunes.An additional 5% uncertainty is included in the cross-section of single-top processes [89].A 50% constant uncertainty is assigned to each of the remaining small backgrounds.The variations in the expected background yields due to t t modeling uncertainties range between 10% and 30% for the Gbb signal regions, and between 47% and 57% in most Gtt signal regions.The impact of the modeling uncertainties for the smaller backgrounds on these yields is consistently below 10% in all signal regions.The uncertainties in the cross-sections of signal processes are determined from an envelope of different cross-section predictions, as described in Section 3. The cumulative impact of the systematic uncertainties listed above on the background yields ranges between 23% and 63%, depending on the signal region.The typical impact on the signal yields is in the range 10-30%. 
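The text quotes per-source impacts and a cumulative impact on the background yields. If the sources are treated as independent, they combine in quadrature, as in this illustrative sketch (the example fractions are mid-range values taken from the text, not the actual analysis inputs):

```python
import math

# Combine independent fractional systematic uncertainties in quadrature.
def combine_in_quadrature(fractional_uncertainties):
    return math.sqrt(sum(f * f for f in fractional_uncertainties))

systematics = {"JES": 0.20, "JER": 0.05, "b-tagging": 0.25,
               "ttbar modeling": 0.50}
total = combine_in_quadrature(systematics.values())
print(f"cumulative impact ~ {total:.0%}")  # ~60%, within the quoted 23-63%
```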
Results The SM background expectation is determined separately in each signal region with a profile likelihood fit [90], referred to as a background-only fit. The fit uses as a constraint the observed event yield in the associated control region to adjust the t t normalization, assuming that a signal does not contribute to this yield, and applies that normalization factor to the number of t t events predicted by simulation in the signal region. The numbers of observed and predicted events in each control region are described by Poisson probability density functions. The systematic uncertainties in the expected values are included in the fit as nuisance parameters. They are constrained by Gaussian distributions with widths corresponding to the sizes of the uncertainties and are treated as correlated, when appropriate, between the various regions. The product of the various probability density functions forms the likelihood, which the fit maximizes by adjusting the t t normalization and the nuisance parameters. The inputs to the fit for each signal region are the number of events observed in its associated control region and the number of events predicted by simulation in each region for all background processes. Figure 4 shows the results of the background-only fit to the control regions, extrapolated to the validation regions. The number of events predicted by the background-only fit is compared to the data in the upper panel. The pull, defined as the difference between the observed number of events (n obs ) and the predicted background yield (n pred ) divided by the total uncertainty (σ tot ), is shown for each region in the lower panel. No evidence of significant background mismodeling is observed in the validation regions. There is a certain tendency for the predicted background to lie above the data, in particular for the Gtt-0L validation regions, but the results in the validation regions of a given channel are not independent. The validation and control regions of different mass splittings can overlap, with the overlap fraction ranging from approximately 30% to 70% for Gtt-0L. Furthermore, the uncertainties in the predicted yield are dominated by the same (correlated) systematic uncertainties.
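In the limit of a single control region and fixed nuisance parameters, the fit machinery above reduces to a simple normalization factor; the pull is quoted directly from its definition in the text. A minimal sketch with hypothetical input numbers:

```python
# Both formulas follow the definitions given above; the numbers are
# hypothetical placeholders, not the published yields.

def ttbar_norm_factor(n_data_cr, n_other_bkg_cr, n_ttbar_mc_cr):
    """Normalization making the total CR prediction match the CR data."""
    return (n_data_cr - n_other_bkg_cr) / n_ttbar_mc_cr

def pull(n_obs, n_pred, sigma_tot):
    """Pull: (observed - predicted) / total uncertainty."""
    return (n_obs - n_pred) / sigma_tot

mu_tt = ttbar_norm_factor(n_data_cr=40, n_other_bkg_cr=6, n_ttbar_mc_cr=30)
print(round(mu_tt, 2))          # -> 1.13, applied to the simulated SR yield
print(round(pull(1, 2.1, 0.5), 1))  # -> -2.2 (illustrative)
```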
Figure 4: Results of the likelihood fit extrapolated to the validation regions. The t t normalization is obtained from the fit to the control regions. The upper panel shows the observed number of events and the predicted background yield. The background category "Others" includes t th, t tt t and diboson events. The lower panel shows the pulls in each validation region.
Tables 5, 6 and 7 show the observed number of events and the predicted number of background events from the background-only fit in the Gbb, Gtt 0-lepton and Gtt 1-lepton signal regions, respectively. In addition, the tables show the numbers of signal events expected for some example values of gluino and LSP masses in the Gtt and Gbb models. The event yields in the signal regions are also shown in Figure 5, where the pull is shown for each region in the lower panel. No excess is found above the predicted background. The background is dominated by t t events in all Gbb and Gtt signal regions. The subdominant contributions in the Gbb and Gtt 0-lepton signal regions are Z(→ νν)+jets and W(→ ℓν)+jets events, where for W+jets events the lepton is a nonidentified electron or muon or is a hadronically decaying τ-lepton. In the Gtt 1-lepton signal regions, the subdominant backgrounds are single-top, t tW and t tZ.
Table 5: Results of the likelihood fit extrapolated to the Gbb signal regions. The uncertainties shown include all systematic uncertainties. The data in the signal regions are not included in the fit. The row "MC-only background prediction" provides the total background prediction when the t t normalization is obtained from a theoretical calculation [62]. The t t normalization factor µ tt obtained from the corresponding t t control region is also provided. The background category "Others" includes t th, t tt t and diboson events. Expected yields for two example Gbb models are also shown.
Figure 6 shows the E miss T distributions in data and simulated samples for SR-Gbb-B, SR-Gtt-0L-C and SR-Gtt-1L-A, after relaxing the E miss T threshold to 200 GeV. Interpretation Since no significant excess over the expected background from SM processes is observed, the data are used to derive one-sided upper limits at 95% CL. Model-independent limits on the number of beyond-the-SM (BSM) events for each signal region are derived with pseudoexperiments using the CL s prescription [91]. They can be translated into upper limits on the visible BSM cross-section (σ vis ), where σ vis is defined as the product of acceptance, reconstruction efficiency and production cross-section. The results are given in Table 8, where the observed (S 95 obs ) and expected (S 95 exp ) 95% CL upper limits on the number of BSM events are also provided.
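With σ vis defined as the product of acceptance, efficiency and cross-section, an upper limit on the number of BSM events converts to a cross-section limit by dividing by the integrated luminosity. A minimal sketch (the S95 value is a hypothetical placeholder, not a result of this search):

```python
# sigma_vis = S95 / integrated luminosity.
lumi_fb = 3.2      # fb^-1, the 2015 dataset used in this search
s95_obs = 6.0      # hypothetical observed 95% CL limit on BSM events

sigma_vis_fb = s95_obs / lumi_fb
print(f"sigma_vis < {sigma_vis_fb:.2f} fb at 95% CL")  # -> 1.88 fb
```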
The measurement is used to place exclusion limits on gluino and LSP masses in the Gbb and Gtt simplified models. The results are obtained using the CL s prescription in the asymptotic approximation [92]. The signal contamination in the control regions and the experimental systematic uncertainties in the signal are taken into account for this calculation. For the Gbb models, the results are obtained from the Gbb signal region with the best expected sensitivity at each point of the parameter space of each model. For the Gtt models, the 0- and 1-lepton channels both contribute to the sensitivity, and they are combined in a simultaneous fit to enhance the sensitivity of the analysis. This is performed by considering all possible permutations between the three Gtt 0-lepton and the two Gtt 1-lepton signal regions for each point of the parameter space, and the best expected combination is used.
(From Table 6: the observed events in SR-Gtt-0L-A, SR-Gtt-0L-B and SR-Gtt-0L-C are 1, 1 and 1, with fitted background yields of 2.1 ± 0.5 and 2.9 ± 1.8 in regions A and B, respectively.)
Table 7: Results of the likelihood fit extrapolated to the Gtt 1-lepton signal regions. The uncertainties shown include all systematic uncertainties. The data in the signal regions are not included in the fit. The row "MC-only background prediction" provides the total background prediction when the t t normalization is obtained from a theoretical calculation [62]. The t t normalization factor µ tt obtained from the corresponding t t control region is also provided. The category "Others" includes t th, t tt t and diboson events. Expected yields for two example Gtt models are also shown.
Table 8: The 95% CL upper limits on the visible cross-section (σ vis ), defined as the product of acceptance, reconstruction efficiency and production cross-section, and the observed and expected 95% CL upper limits on the number of BSM events (S 95 obs and S 95 exp ).
The 95% CL observed and expected exclusion limits for the Gbb and Gtt models are shown in the LSP and gluino mass plane in Figures 7(a) and 7(b), respectively. The ±1σ SUSY theory lines around the observed limits are obtained by changing the SUSY cross-section by one standard deviation (±1σ), as described in Section 3. The yellow band around the expected limit shows the ±1σ uncertainty, including all statistical and systematic uncertainties except the theoretical uncertainties in the SUSY cross-section. It has been checked that the observed exclusion limits obtained from pseudoexperiments differ by less than 25 GeV from the asymptotic approximation in gluino or LSP mass in the combined limits in Figure 7, although the difference can be up to 50 GeV when using single analysis regions. The two methods of computation produce equivalent expected limits.
For the Gbb models, gluinos with masses below 1.78 TeV are excluded at 95% CL for LSP masses below 800 GeV. At high gluino masses, the exclusion limits are driven by the SR-Gbb-A and SR-Gbb-B signal regions. The best exclusion limit on the LSP mass is approximately 1.0 TeV, which is reached for a gluino mass of approximately 1.6 TeV. The exclusion limit is dominated by SR-Gbb-C for high LSP masses. For the Gtt models, gluino masses up to 1.8 TeV are excluded for a massless LSP. For LSP masses below 700 GeV, gluino masses below 1.76 TeV are excluded. For large gluino masses, the exclusion limits are driven by the combination of SR-Gtt-1L-B and SR-Gtt-0L-A. The LSP exclusion extends up to approximately 975 GeV, corresponding to a gluino mass of approximately 1.5-1.6 TeV. The best exclusion limits are obtained by the combination of SR-Gtt-1L-B and SR-Gtt-0L-C for high LSP masses. The ATLAS exclusion limits obtained with the full √ s = 8 TeV dataset are also shown in Figure 7. The current results largely improve on the √ s = 8 TeV limits despite the lower integrated luminosity. The exclusion limit on the gluino mass is extended by approximately 500 GeV and 400 GeV for the Gbb and Gtt models with a massless LSP, respectively. This improvement is primarily attributable to the increased center-of-mass energy of the LHC. The addition of the IBL pixel layer in Run 2, which improves the capability to tag b-jets [30], also particularly benefits this analysis, which employs a dataset requiring at least three b-tagged jets. The sensitivity of the analysis is also improved with respect to the √ s = 8 TeV analysis [17] by using top-tagged large-R jets, lepton isolation adapted to a busy environment, and the m b−jets T,min variable. Conclusion A search for pair-produced gluinos decaying via sbottom or stop is presented. LHC proton-proton collision data from the full 2015 data-taking period were analyzed, corresponding to an integrated luminosity of 3.2 fb −1 collected at √ s = 13 TeV by the ATLAS detector. Several signal regions are designed for different scenarios of gluino and LSP masses. They require several high-p T jets, of which at least three must be b-tagged, large E miss T and either zero or at least one charged lepton. For the gluino models with stop-mediated decays in which there is a large mass difference between the gluino and the LSP, large-R jets identified as originating from highly boosted top quarks are employed. The background is dominated by t t+jets, which is normalized in dedicated control regions. No excess is found above the predicted background of each signal region. Model-independent limits are set on the visible cross-section for new physics processes. Exclusion limits are set on gluino and LSP masses in the simplified gluino models with stop-mediated and sbottom-mediated decays. For LSP masses below approximately 700 GeV, gluino masses of less than 1.78 TeV and 1.76 TeV are excluded at the 95% CL for the gluino models with sbottom-mediated and stop-mediated decays, respectively. These results significantly extend the exclusion limits obtained with the √ s = 8 TeV dataset.
Figure 1: The decay topologies in the (a) Gbb and (b) Gtt simplified models.
Figure 3: Distributions of the number of (a) signal jets, (b) b-tagged jets, (c) top-tagged large-R jets, and (d) signal leptons in the preselection region described in the text. The statistical and experimental systematic uncertainties are included in the uncertainty band, where the systematic uncertainties are defined in Section 6. The lower part of each figure shows the ratio of data to the background prediction. All backgrounds (including t t) are normalized using the best available theoretical calculation described in Section 3. The background category "Others" includes t th, t tt t and diboson events. Example signal models with cross-sections enhanced by a factor of 100 are overlaid for comparison.
Figure 5: Results of the likelihood fit extrapolated to the signal regions. The data in the signal regions are not included in the fit. The upper panel shows the observed number of events and the predicted background yield. The background category "Others" includes t th, t tt t and diboson events. The lower panel shows the pulls in each signal region.
Figure 6: Distributions of E miss T for (a) SR-Gbb-B, (b) SR-Gtt-0L-C and (c) SR-Gtt-1L-A. The E miss T threshold is set to 200 GeV for these plots, with the red lines indicating the threshold values in the actual signal regions for SR-Gbb-B and SR-Gtt-0L-C (the E miss T threshold in SR-Gtt-1L-A is 200 GeV). The statistical and experimental systematic uncertainties are included in the uncertainty band. Two example signal models are overlaid.
Figure 7: Exclusion limits in the χ0 1 and g mass plane for the (a) Gbb and (b) Gtt models. The dashed and solid bold lines show the 95% CL expected and observed limits, respectively. The shaded bands around the expected limits show the impact of the experimental and background theoretical uncertainties. The dotted lines show the impact on the observed limit of the variation of the nominal signal cross-section by ±1σ of its theoretical uncertainty. The 95% CL observed limits from the √ s = 8 TeV ATLAS search requiring at least three b-tagged jets [17] are also shown.
Table 1: List of generators used for the different background processes. Information is given about the pQCD highest-order accuracy used for the normalization of the different samples, the underlying-event tunes and the PDF sets considered.
Table 2: Definitions of the Gbb signal, control and validation regions. The unit of all kinematic variables is GeV except ∆φ 4j min , which is in radians. The jet p T requirement is also applied to b-tagged jets.
Table 6: Results of the likelihood fit extrapolated to the Gtt 0-lepton signal regions. The uncertainties shown include all systematic uncertainties. The data in the signal regions are not included in the fit. The row "MC-only background prediction" provides the total background prediction when the t t normalization is obtained from a theoretical calculation [62]. The t t normalization factor µ tt obtained from the corresponding t t control region is also provided. The category "Others" includes t th, t tt t and diboson events. Expected yields for two example Gtt models are also shown.
High Variability of Fabry Disease Manifestations in an Extended Italian Family Fabry disease (FD) is an inherited metabolic disorder caused by partial or full inactivation of the lysosomal hydrolase α-galactosidase A (α-GAL). The impairment of α-GAL results in the accumulation of undegraded glycosphingolipids in lysosomes and subsequent cell and microvascular dysfunctions. This study reports the clinical, biochemical, and molecular characterization of 15 members of the same family. Eight members showed the exonic mutation M51I in the GLA gene, a disease-causing mutation associated with the atypical phenotype. The clinical history of this family highlights a wide phenotypic variability in terms of involved organs and severity. The phenotypic variability of two male patients is not related to differences in α-GAL enzymatic activity: though both have no enzymatic activity, the youngest shows severe symptoms, while the eldest is asymptomatic. It is noticeable that for two female patients with the M51I mutation the initial clinical diagnosis was different from FD: one of them was diagnosed with Familial Mediterranean Fever, the other with Multiple Sclerosis. Overall, this study confirms that the extreme variability of the clinical manifestations of FD is not entirely attributable to different mutations in the GLA gene and emphasizes the need to consider other factors or mechanisms involved in the pathogenesis of Fabry disease. Background Anderson-Fabry disease is a metabolic lysosomal storage disorder caused by the functional deficit of the enzyme α-galactosidase A (α-GAL A) [1]. This deficit alters the metabolism of some glycosphingolipids, mainly globotriaosylceramide (Gb3), which accumulate in the lysosomes of many cell types [2]. FD is an X-linked lysosomal enzymopathy caused by mutations in the GLA gene coding for α-GAL A, located on the long arm of the X Chromosome (Xq22.1) [3]. To date, more than 1000 mutations have been described in the exons and introns of the GLA gene, and discrimination between pathological and neutral mutations is difficult (http://fabry-database.org/, June 2013) [4,5]. Gb3 and other neutral glycolipids gradually accumulate in endothelial cells, smooth muscle cells in blood vessels, renal epithelial cells, pericytes, myocardial cells, neurons of the spinal cord, and neurons of the dorsal root ganglia; this leads progressively to cellular dysfunction, necrosis, apoptosis, inflammation, fibrosis, and poor target-organ perfusion. Clinical manifestations are more severe in male hemizygous subjects than in female heterozygous subjects, who are often asymptomatic, in accordance with Mary Lyon's hypothesis of random inactivation of the X Chromosome [6]. Recently it was found that women, too, can show severe manifestations of FD with irreversible organ damage, excluding the theory that women are exclusively carriers of the disease. For this reason, an accurate follow-up is required for female patients, independently of enzymatic activity levels. The clinical manifestations of the disease appear in childhood; renal, cardiac, and cerebrovascular complications usually arise in adulthood. The selective damage of tubular epithelial cells and glomerular epithelial cells in FD patients often causes chronic kidney disease that progresses to end-stage renal disease (ESRD) with age [7]. Enzyme replacement therapy (ERT) is the current treatment available for Fabry patients; it reduces Gb3 levels and prevents further accumulation [8].
Although ERT blocks the progression of the disease in most cases, it cannot limit some specific symptoms or eradicate the disease [9][10][11]. A reliable and early diagnosis of Fabry disease is still hard to achieve. Retrospective studies found a significant diagnostic delay in about 40% of males and 70% of females [12]. In particular, the time elapsing between the onset of the first signs and symptoms and the correct diagnosis is 13 years for male patients and 17 years for female patients [13]. The difficulties in diagnosis probably make the impact of the disease hard to assess; the incidence in the general population is estimated at 1:40,000, but recent neonatal screening initiatives found an incidence of 1:3,100 newborns in Italy [14] and 1:1,500 newborns in Taiwan [15]. FD is underestimated mainly because its clinical manifestations can be confused with those of other systemic diseases; for this reason, FD is often treated as a single-organ pathology. The literature reports cases in which the clinical indicators of FD overlap with those of some rheumatic disorders, such as Familial Mediterranean Fever (FMF). These two diseases share not only the first symptoms but also the broader clinical manifestations: the first signs appear during childhood, with recurrent episodes of fever, abdominal and joint pain, gastrointestinal disorders, and kidney damage. FD also often involves the central nervous system, causing micro- and macroangiopathy in the brain. Because of these features and the frequent presence of lesions on MRI scans, FD is often misdiagnosed as Multiple Sclerosis [16]. In this paper, we report the study of an interesting family group; our results confirm several critical issues related to FD and probably raise new questions.

Patients

Peripheral blood samples were collected, using EDTA as an anticoagulant, for genetic analysis and detection of α-galactosidase A activity.

DNA Isolation

DNA samples were isolated from whole blood by column extraction (GenElute Blood Genomic DNA Kit, Miniprep, Sigma-Aldrich, USA), and their concentrations were determined using a spectrophotometer.

HRM Analysis and DNA Sequencing

A presequencing screening was performed on DNA samples by High Resolution Melting (HRM) analysis to study the exons of the GLA gene and their flanking regions, using the LightCycler 480 system (Roche Applied Science, Germany). PCR products presenting melting curves differing in position or shape from those of the wild-type control were sequenced to identify the suspected mutations. Purified PCR products were sequenced on the automated DNA sequencer at BMR Genomics.

Results and Discussion

This study reports the clinical, biochemical, and molecular characterization of 15 members of the same family. The pedigrees of the family are shown in Figure 1 and the relevant enzymatic and molecular data are given in Table 1. The molecular analysis of the GLA gene revealed the known M51I mutation in 8 patients aged from 22 to 58 years (mean 34.6 ± 15.96) [14], as shown in Figure 2. A young 22-year-old female patient (case 4:1) came to our attention because during her childhood she had suffered from recurrent fever of unknown origin, burning pain in the hands and feet, and gastrointestinal disturbances. She underwent several instrumental and genetic tests that led to a clinical diagnosis of FMF. This diagnostic hypothesis was not supported by the genetic analysis of the MEFV gene, which showed only a single heterozygous mutation (A744S).
Therefore, Fabry disease was considered as a diagnosis. Genetic analysis of the GLA gene was performed and the M51I mutation was identified. Ten years after the appearance of the first clinical manifestations, the definitive diagnosis of Fabry disease was made. The patient, harboring a pathogenic mutation, showed an enzymatic activity of 2.6 nmol/h/spot, below normal values. Genetic and biochemical tests were extended to the proband's family members. These analyses showed that the proband inherited the M51I mutation from her mother (case 3:1), a 56-year-old woman with enzymatic activity slightly above normal values and with symptoms not clearly related to FD. The proband's father (case 3:2) showed neither mutations nor symptoms and had normal enzymatic activity (3.5 nmol/h/spot). The maternal aunt of the proband (case 3:3) was found to carry the same mutation, and her enzymatic activity was 2.9 nmol/h/spot, below the normal range. This 53-year-old woman, unlike her sister (case 3:1), showed cardiac involvement with dyspnea and arrhythmias. More severe symptoms affected the children of case 3:3, the proband's cousins. Among them, a young woman aged 21 (case 4:5) with the M51I mutation and low enzymatic activity (2 nmol/h/spot) showed angiokeratomas and moderate proteinuria; case 4:3 is a young man aged 28 with the M51I mutation, no enzymatic activity, and serious cerebrovascular involvement. Another member of this family with no enzymatic activity is case 3:7, the 58-year-old hemizygous man who is nevertheless asymptomatic.

Conclusions

Fabry disease is a lysosomal storage disorder in which glycosphingolipid metabolism is deeply compromised. The defect of α-galactosidase A leads to the progressive accumulation of globotriaosylceramide in the parenchymal cells of various organs and in endothelial cells. This pathology is believed to be rare, but according to the literature it should rather be considered uncommon and little known [13]. Fabry disease is still hard to diagnose because of its features and because its clinical manifestations overlap with those of other pathologies, giving rise to a wide range of differential diagnoses that involve several clinical specialties [18]. In particular, subjects affected by the atypical form of the disease are harder to diagnose than subjects with the classic phenotype. In this study we report the clinical, biochemical, and molecular study of 15 members of the same family. In addition to the proband, seven subjects showed alterations in the GLA gene. This highlights the importance of pedigree analysis in families with FD for identifying other possibly affected relatives [19]. Specifically, the genetic analysis revealed the M51I mutation, which is considered pathogenic and related to the atypical variants of the disease [14,20]. The M51I mutation is a G-to-A transition at codon 51 that causes a substitution between two nonpolar amino acids, methionine and isoleucine. Molecular models of the mutated protein showed that this mutation neither alters the active site of α-galactosidase A nor interferes with the enzyme's catalytic activity, but it could alter the enzyme's stability [21].
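Since the paper identifies the mutation at the codon level, the substitution can be checked against the standard genetic code. A minimal sketch follows (the codon assignments are standard; treating the third-position G>A change as the relevant transition is our reading of the description above, since ATG, the only methionine codon, contains a single G):

```python
# Minimal sketch: why a single G-to-A transition in the methionine codon
# ATG yields isoleucine (the M51I substitution described above). The
# standard genetic code is assumed; no GLA sequence data are used.
CODON_TABLE = {"ATG": "Met (M)", "ATA": "Ile (I)", "ATT": "Ile (I)", "ATC": "Ile (I)"}

wild_type = "ATG"              # codon 51 of GLA encodes methionine
mutant = wild_type[:2] + "A"   # G>A transition at the third position
assert mutant == "ATA"

print(f"codon 51: {wild_type} ({CODON_TABLE[wild_type]}) -> "
      f"{mutant} ({CODON_TABLE[mutant]})")
# codon 51: ATG (Met (M)) -> ATA (Ile (I))
```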
The analysis of this family confirmed that this disorder shows little genotype-phenotype correlation, even at the intrafamilial level, and that it can cause a constellation of symptoms, often overlapping with those of different pathologies. These critical features of Fabry disease frequently cause misdiagnosis, delay in the correct diagnosis, and difficulties in prognosis. The current pathogenic theory of FD is based on the fact that the GLA gene is the only one associated with this disorder and that the defect in α-galactosidase A is responsible for the accumulation of Gb3 in lysosomes and for the subsequent cell damage and clinical symptomatology. Symptom variability could be related to the residual enzyme activity in nonclassic FD patients and to the progression of FD, whose clinical manifestations worsen over time. Nevertheless, this theory is contradicted by some cases found in this and other studies [22,23]. Particularly striking is the case of the two male members of this family hemizygous for the M51I mutation. Though both have no enzymatic activity, they show different clinical manifestations: the youngest (28 years old) has more severe symptoms than the eldest (58 years old), who is asymptomatic. Therefore, considering the current pathogenic theory, it is hard to explain the absence of symptoms in a patient with no enzymatic activity, and his survival beyond the fifth decade of life. The penetrance of different GLA mutations could be influenced by the interindividual variability of other genetic and epigenetic factors or by differences in environmental conditions. It is still not clear whether the extreme phenotypic variability of the disease and the involvement of different target organs can be explained by these processes individually or taken together [24]. Further investigations are required to understand the reasons for this variability and to improve the accuracy of the prognosis and diagnosis of FD.

Consent

Written informed consent was obtained from the patient for publication of this report and any accompanying images. A copy of the written consent is available for review.
Bayesian inference of a spectral graph model for brain oscillations

The relationship between brain functional connectivity and structural connectivity has attracted extensive attention in the neuroscience community, and is commonly inferred using mathematical modeling. Among many modeling approaches, the spectral graph model (SGM) is distinctive in that it has a closed-form solution for the wideband frequency spectra of brain oscillations, requiring only global, biophysically interpretable parameters. While the SGM is parsimonious in parameters, determining them is non-trivial. Prior works on the SGM determine the parameters through a computationally intensive annealing algorithm, which provides only a point estimate with no confidence intervals for the parameter estimates. To fill this gap, we incorporate the simulation-based inference (SBI) algorithm and develop a Bayesian procedure for inferring the posterior distribution of the SGM parameters. Furthermore, using SBI dramatically reduces the computational burden of inferring the SGM parameters. We evaluate the proposed SBI-SGM framework on resting-state magnetoencephalography recordings from healthy subjects and show that the proposed procedure performs similarly to the annealing algorithm in recovering power spectra and the spatial distribution of the alpha frequency band. In addition, we analyze the correlations among the parameters and their uncertainty using the posterior distribution, which cannot be done with annealing inference. These analyses provide a richer understanding of the interactions among the biophysical parameters of the SGM. In general, the use of simulation-based Bayesian inference enables robust and efficient computation of generative-model parameter uncertainties and may pave the way for the use of generative models in clinical translation applications.

A.1. Spectral graph model

Notation. All vectors and matrices are written in boldface and scalars in normal font. The frequency of a signal, f, is specified in Hertz (Hz), and the corresponding angular frequency ω = 2πf is used to obtain the Fourier transforms. The connectivity matrix is defined as C = {c_jk}, where c_jk is the connectivity strength between regions j and k, normalized by the row degree.

Mesoscopic model

For region k out of N regions, we denote the local excitatory signal as x_{e,k}(t), the local inhibitory signal as x_{i,k}(t), and the long-range macroscopic signal as x_k(t).

Macroscopic model

Accounting for long-range connections between brain regions, the macroscopic signal is assumed to conform to an evolution model in which τ_G is the graph characteristic time constant, α is the global coupling constant, c_jk are the elements of the connectivity matrix, τ^d_jk is the delay in signals reaching from the jth to the kth region, and v is the cortico-cortical fiber conduction speed with which the signals are transmitted. The delay is calculated as τ^d_jk = d_jk / v, where d_jk is the distance between regions j and k, and x_{e,k}(t) + x_{i,k}(t) is the input signal determined from Eqs. (3) and (4). The neural impulse response f_Γ(t) is Gamma-shaped. The neural gain is kept at 1 to ensure parameter identifiability; therefore, the SGM includes only 7 identifiable parameters, as listed in Table 1.
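The two purely mechanical ingredients of the graph-level model, the row-degree normalization of C and the distance-based delays, can be sketched in a few lines of NumPy. The connectivity values, distances, and conduction speed below are made-up illustrative numbers, not code or data from the paper:

```python
import numpy as np

# Illustrative sketch: build the row-degree-normalized connectivity matrix
# C = {c_jk} and the inter-regional delay matrix tau_jk = d_jk / v used by
# the spectral graph model. All values below are hypothetical placeholders.
rng = np.random.default_rng(0)
n_regions = 4

raw_C = rng.random((n_regions, n_regions))      # raw connectivity strengths
np.fill_diagonal(raw_C, 0.0)                    # no self-connections

row_degree = raw_C.sum(axis=1, keepdims=True)   # degree of each region
C = raw_C / row_degree                          # normalize by row degree

d = rng.uniform(20.0, 150.0, (n_regions, n_regions))  # distances d_jk in mm
v = 10.0                                        # conduction speed, 10 m/s = 10 mm/ms
tau_d = d / v                                   # delays tau_jk in ms

print(np.allclose(C.sum(axis=1), 1.0))          # each row now sums to 1
print(tau_d.round(2))
```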
Closed-form model solution in the Fourier domain

A salient feature of the SGM is that it provides a closed-form solution for brain oscillations in the frequency domain. Let ℱ denote the Fourier transform at angular frequency ω = 2πf. Note that the mesoscopic models for different regions share the same parameters; therefore, without loss of generality, we can drop the subscript k. The solutions for x_e(t) and x_i(t) in the frequency domain, X_e(ω) and X_i(ω), then follow in closed form.

In the 10 repetitions with t(3) noise, the Pearson correlation between the reconstructed and empirical PSDs lies in [0.9049, 0.9066], which is very similar to the results with Gaussian noise ([0.905, 0.907] in Section 3.4). We also present, in Fig. S.1, the results from the representative experiment that yields a correlation closest to the mean level across the 10 repetitions for both noise types, including the reconstructed PSD, the PSD and spatial (alpha-band) correlations, and the posterior densities of the 7 SGM parameters. All the results are very similar under both noise types. The comparison between the two error types indicates that SBI-SGM is robust to the choice of noise distribution.
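The robustness comparison above reduces to a Pearson correlation between reconstructed and empirical power spectra. A minimal sketch follows; the spectra are synthetic placeholders, and the dB scaling is our assumption rather than a detail taken from the paper:

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch of the PSD comparison metric reported above: Pearson correlation
# between a reconstructed and an empirical power spectrum. Both spectra are
# synthetic stand-ins, not MEG data from the study.
freqs = np.linspace(2, 45, 200)  # Hz

def toy_psd(alpha_peak, noise_scale, rng):
    """A hypothetical 1/f background plus an alpha-band bump."""
    background = 1.0 / freqs
    bump = 0.5 * np.exp(-0.5 * ((freqs - alpha_peak) / 1.5) ** 2)
    return background + bump + noise_scale * rng.standard_normal(freqs.size)

rng = np.random.default_rng(1)
empirical = toy_psd(alpha_peak=10.0, noise_scale=0.02, rng=rng)
reconstructed = toy_psd(alpha_peak=10.3, noise_scale=0.02, rng=rng)

# Spectra are often compared on a dB scale; abs() guards against the rare
# negative sample produced by the additive toy noise.
r, _ = pearsonr(10 * np.log10(np.abs(empirical)),
                10 * np.log10(np.abs(reconstructed)))
print(f"PSD Pearson correlation: {r:.3f}")
```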
Structural representations: causally relevant and different from detectors

This paper centers around the notion that internal, mental representations are grounded in structural similarity, i.e., that they are so-called S-representations. We show how S-representations may be causally relevant and argue that they are distinct from mere detectors. First, using the neomechanist theory of explanation and the interventionist account of causal relevance, we provide a precise interpretation of the claim that in S-representations, structural similarity serves as a "fuel of success", i.e., a relation that is exploitable for the representation-using system. Then, we discuss crucial differences between S-representations and indicators or detectors, showing that, contrary to claims made in the literature, there is an important theoretical distinction to be drawn between the two.

Introduction

Antirepresentationalism has been one of the major recent trends in theorizing about the mind. Some modern antirepresentationalists employ a sort of trivializing argumentative strategy. That is, instead of (or in addition to) developing new theories of cognition that do without the notion of representation, they attempt to show that some of the most prevalent existing notions of mental representation are not suited to do the theoretical jobs expected of them. In particular, the idea that representations are covariance-based indicators or detectors has been subjected to this sort of trivializing attack. It has been argued that detectors functionally boil down to mere causal mediators, and thus are not representations (Ramsey 2007), or that they are not contentful (Hutto and Myin 2013). One way that representationalists might oppose attempts to trivialize representations is to defend the status quo, for example, by arguing that detectors are representations after all. But perhaps a different reaction is more justified and fruitful. Perhaps representationalists should treat the "trivializing" strand of antirepresentationalism as an opportunity to develop, strengthen, and indeed reform the mainstream understanding of what representations are, so that the resulting new notion is no longer subject to the trivializing arguments. In fact, something like this sort of reaction is starting to take shape in the literature. This is because, in parallel to antirepresentationalism, another trend in theorizing about representation has recently gained momentum, wherein people move away from seeing mental representations as indicators or detectors and towards construing them in terms of internal models (Bartels 2006; Grush 2004; Gładziejewski 2016a, b; Isaac 2012; O'Brien 2014; O'Brien and Opie 2004; Ramsey 2007; Rescorla 2009; Ryder 2004; Shagrir 2012; Shea 2014; for earlier treatments, see also Craik 1943; Grush 1996; Cummins 1996). Here, the model is understood as an internal structural representation, i.e., a representation grounded in structural similarity (see note 1) between the representation itself and its target. Here, we want to address two crucial issues that arise in the context of this approach to representation. Our first concern is with the idea that S-representations come into play when structural similarity can be actively exploited or relied upon by a cognitive system or mechanism. In other words, the similarity relations that give rise to S-representations are exploitable similarities (Godfrey-Smith 1996; Shea 2007, 2014).

Note 1. Suppose S_V = (V, R_V) is a system comprising a set V of objects and a set R_V of relations defined on the members of V. (…) We will say that there is a second-order [structural] resemblance between two systems S_V = (V, R_V) and S_O = (O, R_O) if, for at least some objects in V and some relations in R_V, there is a one-to-one mapping from V to O and a one-to-one mapping from R_V to R_O such that, when a relation in R_V holds of objects in V, the corresponding relation in R_O holds of the corresponding objects in O.
There are at least two reasons to think that the notion of exploitable similarity is of crucial importance for theorizing about S-representation. First, it demonstrates that S-representations do not have to be construed in terms of passive, pragmatically detached "mirrors of nature." Second, the notion can prove useful in resolving problems that seemingly plague similarity-based theories of representation. For example, one might claim that because of the ubiquity and "cheapness" of structural similarity, similarity-based theories imply a problematic panrepresentationalism. But this can be avoided if we cut down the class of representation-relevant similarities to exploitable similarities (see Ramsey 2007; Shea 2007, 2014). However, even though the very notion of similarity as an exploitable relation is by no means new in the literature, to our knowledge it has never been elaborated in detail. Some authors (e.g., Ramsey 2007) characterize the role of similarity by
appealing to explanatory considerations, i.e., to the fact that invoking similarity is sometimes necessary to understand the successful operations of cognitive agents. But as it stands, this approach seems compatible with views which treat representation talk as a purely instrumental "gloss" (Egan 2014), or ones that treat representational claims as fictions, i.e., as not literally true (see Sprevak 2013). In the present paper (the "Exploiting structural similarity: a mechanistic-interventionist account" section), we want to propose a precise as well as more metaphysically committed interpretation of exploitable similarity, one which results in a firmly realist treatment of S-representations. Using the neomechanist theory of explanation and the interventionist theory of causal relevance, we claim that exploitable similarity is causally relevant for the successful operation of a cognitive mechanism (the "Exploitable similarity as causally relevant similarity" section). Then we show that this causal role is indeed properly attributed to the relation of similarity as such and does not boil down to the role played by the representational vehicle alone (the "Is similarity really causally relevant?" section). The second aim of the paper is to critically examine the recent skepticism about whether S-representations are in fact distinct from detectors or indicators. In particular, Morgan (2014) has argued that, under scrutiny, S-representations turn out to be functionally equivalent to detectors. If this is true, it would mean that there is no theoretically significant distinction to be made between the two. In particular, it would turn out that it is a mistake to think that S-representations act as internal models, which, on the face of it, is importantly different from acting as an indicator or detector. Furthermore, if S-representations are in fact indistinguishable from detectors, then it could be argued that the former fall under the same trivializing arguments as the latter.

Nonetheless, we will argue (in the "S-representations versus detectors revisited" section) that there is an important distinction to be made here after all. We start by presenting the reasoning behind the criticism of the S-representation/detector distinction, and then lay out a way of delineating S-representations from detectors. Our proposal appeals to (1) the crucial role that the similarity of structures plays in S-representations but not in detectors, and (2) the fact that S-representing, as a strategy for guiding action and cognition, is not purely reactive (as is the case with mere detectors) but involves an endogenous source of control.

Exploiting structural similarity: a mechanistic-interventionist account

Exploitable similarity as causally relevant similarity

It is usually recognized that the mere existence of structural similarity between two entities is by no means sufficient to confer on one of those entities the status of representation. S-representations only come into play when a cognitive system depends, in some nontrivial sense, on the relation of similarity in its engagements with its representational targets. As Godfrey-Smith (1996) and Shea (2007, 2014) put it, the correspondence (here, the structural similarity) between a representation and its target should be understood as "fuel for success", a resource that enables organisms to "get things done" in the world. In other words, similarity should be understood as a relation that is exploitable for some larger representation-using system. We now want to address the question of what exactly it means for structural similarity to be exploitable. In particular, we will try to clarify this idea in the context of purely subpersonal S-representations of the sort that we could find inside a mechanical system such as a human brain. Let us start by taking a closer look at the basic, commonsense intuition that underlies the notion of exploitable similarity. Consider an external, artifactual S-representation such as a cartographic map. We can at least sometimes explain someone's success at navigating a particular territory by pointing to the fact that the person in question used an accurate map of this territory (and vice versa, we can explain someone's navigational failure by citing the fact that the person in question used an inaccurate map). Users of cartographic maps owe their success to the similarity that holds between the spatial structure of the representation and the spatial structure of the territory it represents (analogously, failures can be due to the lack of similarity between the representation and what is represented). This link between similarity and success generalizes to all S-representations, including, we claim, the ones that do not require interpretation by a human being. On the view we are proposing, explanations of success that invoke the similarity between the representation and its target can be true in virtue of similarity being causally relevant to success. That is, the structural correspondence can quite literally cause the representation-user to be successful at whatever she (or it) is using the representation for, and lack of structural correspondence can cause the user to fail at whatever she (or it) is using the representation for. Explanations that invoke S-representations should thus be construed as causal explanations that feature facts regarding similarity as an explanans and success or failure as an explanandum.
To exploit structural similarity in this sense is to use a strategy whose success is causally dependent on structural similarity between the representational vehicle (see note 2) and what is represented. Our treatment invokes two concepts that are in need of clarification, especially when applied to internal, subpersonal representations: the notion of success/failure (for which similarity is causally responsible), and the notion of causal relevance. We will now concentrate on each of these notions in turn.

Note 2. As one reviewer pointed out to us, there are two ways of understanding the vehicles of S-representations. On the first interpretation, the vehicle is the whole S-representation, e.g. a model or a map as such. On the second interpretation, endorsed by Ramsey (2007), only components of larger structures (say, symbols placed within a map) are treated as the S-representational vehicles. These components represent by serving as stand-ins for their targets within a larger structure. Now, these two approaches are closely related, as the possibility of treating components as stand-ins presupposes the existence of structural similarity between the larger representing structure and the represented structure. We think that the choice between the two interpretations should be a matter of one's explanatory or theoretical agenda. In the present paper, our concern is with the role played by structural similarity. The relation of interest for us holds between the relational structure of the representation as a whole and some represented target. Therefore, here, we choose to treat the whole representing structure as the vehicle of S-representation. We admit, though, that given different aims, the component-centered interpretation might be preferable.

Let us start with success and failure. The idea that human agents can succeed or fail at whatever they use S-representations for seems straightforward enough, and we will not dwell on it here. But how should we understand success/failure in the case of internal, subpersonal representations of the sort that are of interest to us here? We propose to look at the problem through the lens of the prominent neomechanistic theory of explanation, as applied to cognitive-scientific explanation (Boone and Piccinini 2015; Bechtel 2008; Craver 2007; Miłkowski 2013). Neomechanists see the cognitive system as a collection of mechanisms. A mechanism is a set of organized components and component operations which jointly enable the larger system to exhibit a certain phenomenon (often understood as a capacity of this system). Mechanisms in this sense are at least partly individuated functionally, that is, by reference to the phenomenon that they give rise to: they are essentially mechanisms of this or that cognitive function (mindreading, motor control, attention, perceptual categorization, spatial navigation, etc.). Components and operations derive their functional characterization from the function of the larger mechanism they are embedded in. That is, the function of a component is determined by an operation such that it is through the performance of this particular operation that the component in question contributes to a phenomenon for which the larger mechanism is responsible (see Craver 2007). This is why, say, the function of the heart as a component of a mechanism responsible for blood circulation lies in its pumping blood, and not in its emitting rhythmic sounds; it is through the former, and not the latter, operation that the heart contributes to blood circulation.
The vehicles of internal S-representations can be treated as components of cognitive mechanisms, and are targets of various cognitive operations. Each mechanism equipped with an S-representation as a component part underlies a certain phenomenon, i.e., some cognitive capacity. S-representations construed as mechanism components owe their functional characterization to how they contribute to the phenomenon that the larger mechanism is responsible for. What we mean by this is, essentially, that the structural similarity between the representation and what it represents is what contributes toward the mechanism's proper functioning. To put it more precisely, any mechanism responsible for some capacity C which includes an S-representation as its component can fail to realize or enable C as a result of the fact that the component in question is not (sufficiently) structurally similar to the representational target; and analogously, when the mechanism succeeds at realizing or enabling C, this is at least in part due to the fact that this component is (sufficiently) structurally similar to the target. So structural similarity is causally relevant to success/failure because the ability of any S-representation-involving mechanism to perform its function depends on the degree of structural similarity between the representational vehicle and the target. Success and failure are treated here as success or failure at contributing to some function or capacity of a mechanism. We now turn to the question of what it means for similarity to be causally relevant to success (or failure) thus understood. Here we aim to make use of James Woodward's (2003, 2008) popular interventionist theory of causal relevance. It is beyond the scope of the present discussion to present Woodward's theory in detail, so a rough sketch will have to suffice. The core idea behind the interventionist view is that claims of causal relevance connect two variables, say, X and Y. What it takes for X to be causally relevant to Y is that appropriate interventions into X (i.e., interventions that change the value of X) are associated with changes in Y (i.e., the values of Y):

(M) X causes Y if and only if there are background circumstances B such that, if some (single) intervention that changes the value of X (and no other variable) were to occur in B, then Y would change (see Woodward 2003, 2008).

The intervention in question can be helpfully understood as an experimental manipulation of X in controlled settings, although Woodward's theory does not require human agency to be involved in establishing causal relations: any change of the value of X could potentially count as an intervention, even one that is not dependent at all on human action. Importantly, there are certain conditions that an intervention must meet in order to establish a causal connection between X and Y. For example, the intervention must not change the value of Y through any causal route except the one that leads through X (e.g., it must not change the value of Y directly, or by directly changing the value of a variable that mediates causally between X and Y), and it must not be correlated with any causes of Y other than X or those that lie on the causal route from X to Y. By employing the interventionist view, we can now understand the causal relevance of similarity for success in the following way.
The structural similarity between the representational vehicle and the target is causally relevant for success by virtue of the fact that interventions in similarity would be associated with changes in the success of whatever capacity is based on, or guided by, the representation in question. That is, manipulations of similarity would also be manipulations of the ability of the representation-user, be it a human being or some internal cognitive mechanism, to be successful at whatever she or it is employing the representation for. To make this proposal more precise, let us apply (M) to the similarity-success relation. The variable X corresponds to similarity between the vehicle and what is represented. It would probably be a gross simplification if we treated X as a binary variable, with one value corresponding to the existence, and the other to the lack, of similarity. Luckily, structural similarity can easily be construed as a gradable relation, depending on the degree to which the structure of one relatum actually preserves the structure of the other relatum (see note 1; for other accounts that explicitly define similarity as coming in degrees, see Tversky 1977; Weisberg 2013). This way we can treat X as capable of taking a range of values {X_1, X_2, …, X_n}, where each increasing value corresponds to an increased degree of similarity between the vehicle and the target. Therefore, between the lack of any similarity and complete structural indistinguishability, there is a range of intermediate possibilities. What about Y, the variable that corresponds to success/failure? As far as we can see, S-representations could turn out to feature in a diverse set of mechanisms which give rise to a diverse set of cognitive functions, like motor control and motor planning, perceptual categorization, mindreading, decision making, etc. Now, cognitive systems can be more or less effective at realizing each such function: they can perform better or worse at motor control and planning, perceptually categorizing objects, attributing mental states, making decisions, etc. In this sense, we can treat the variable Y as corresponding to degrees of success of the mechanism in question at enabling an effective performance of a given capacity. Increasing values of Y = {Y_1, Y_2, …, Y_n} would correspond to increasing degrees of success thus understood. But what sorts of values can we have in mind exactly? Here we want to remain as open as possible. Any scientifically respectable way of measuring success will do. For example, success could be measured by the average frequency of instances of a certain level of performance at some cognitive task, or the probability of a certain level of performance at some task, or a distribution of probabilities of possible levels of performance at some task, etc. The details will always depend on the sort of function in question, as well as on the experimental paradigm used to test or measure it. We may now formulate our thesis as follows. For similarity to cause success, interventions into the value of X (which corresponds to the degree of structural similarity between the representational vehicle and what it represents) should result in systematic changes in the value of Y (which corresponds to the degree of success of the mechanism that makes use of an S-representation in performing its mechanistic function or capacity).
In particular, by intervening on X so that its value increases, we should increase the value of Y; and by intervening on X so that its value decreases, we should decrease the value of Y. Before we move on, it needs to be noted that the relationship between similarity and success is nuanced in the following way. Good S-representations resemble relevant parts of the world only partially. Maps never mirror the territory in all its detail; instead, they are intentionally simplified, selective, and even distorted. The same applies to subpersonal S-representations. There are at least two reasons to think that. First, S-representations that resemble the target too much become excessively complex themselves. We should then expect there to be a trade-off between a representation's structural complexity and the temporal or computational resources (costs) that real-life cognitive systems have at their disposal. It is doubtful that limited agents could generate S-representations that come even close to mirroring the structural complexities of the world. Second, in a world as complex as ours, generating maximally accurate S-representations tends to result in overfitting the data, which decreases the representation's predictive value (this latter point applies to S-representations that are statistical models of the environment). This general observation can be expressed using our preferred interventionist framework. Suppose that increasing values of variable X correspond to increasing structural similarity between the vehicle and what is represented, and increasing values of variable Y correspond to increasing success. Now, to accommodate our point, we may say that although in real-life cases of S-representation there is a positive causal relation between X and Y, it holds only within a limited range of values of X. For simplicity, we may suppose that the relation holds from the lowest value of X to some specific larger value, but it disappears when X exceeds this value. That is, once the value of X exceeds a certain level, then (e.g., due to low cost-effectiveness or overfitting) its relationship to Y breaks down, e.g., increasing the value of X may begin to decrease the value of Y. Crucially, the lesson to be drawn here is not that similarity is functionally irrelevant, but simply that too much similarity can render the S-representation inefficient at serving its purpose. Our proposal is therefore that structural similarity is causally relevant only within a certain range, and the exact range depends on the overall structural trade-offs of the similarity-based system. The following empirical illustration should illuminate our view. In the philosophical literature, hippocampal spatial maps in rats have been proposed as a good example of an internal S-representation (Ramsey 2016; Rescorla 2009; Shea 2014). The rat's hippocampus is thought to implement an internal map of the spatial layout of the environment, encoded in a Cartesian coordinate system. According to this hypothesis, the co-activation patterns of so-called place cells in the hippocampus correspond to the spatial structure of the rat's environment (Shea 2014). That is, the pattern of co-activation relationships between place cells (roughly, the tendency of particular cells to show joint activity) resembles the structure of metric relations between locations within the environment. This hippocampal map constitutes a component of a cognitive mechanism which underlies the ability to navigate the environment (Craver 2007).
The rat's capacity to find its way within the environment, even in the absence of external cues or landmarks, depends on the fact that it has an internal mechanism equipped with a map of the terrain. This capacity for navigation is usually tested by verifying the rat's ability to find a reward (food) within a maze in which the animal has no reliable access to external orientation points (see Craver 2007; Redish 1999 for reviews). As has already been argued in the literature, spatial navigation using hippocampal maps is an instance in which the structural similarity between the map and the territory is being actively exploited by the organism (Shea 2014). Similarity serves as a resource that the rat depends on in its dealings with problems that require spatial navigation. Our proposal provides what we think is a clear and precise interpretation of this claim. The map-world similarity is causally relevant to the rat's success at finding its way in the environment. This means that we could manipulate the rat's capacity to navigate in space by intervening in the degree to which its internal map structurally resembles (the relevant part of) the environment. We know, for example, that rats are quite efficient at constructing and storing separate maps for particular mazes (Alme et al. 2014). We may imagine an experiment in which we place the rat in a previously learned maze and then intervene on the co-activation structure of place cells in a way that distorts (i.e., decreases) the structural correspondence between the map and the maze to a particular degree. If the similarity is really being exploited, then an intervention of this sort should decrease the rat's ability to navigate the particular territory, and we should be able to observe and measure this decrease by investigating the change in the rat's performance at finding rewards in the maze. What is more, the rat's navigational capacity (variable Y) should be reduced to a degree which is in proportion to the degree to which we decreased the similarity (X) between its internal map and the spatial structure of the maze. And crucially, our intervention should change the rat's performance only insofar as it constitutes an intervention on similarity as such.
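The imagined place-cell intervention can be mimicked in a toy simulation: degrade the structural correspondence between an internal map and a maze (variable X) and record how routes planned on the map fare when executed in the true maze (variable Y). Everything below, the grid maze, the corruption procedure, and the success measure, is invented for illustration; it is not a model of hippocampal data.

```python
import random
from collections import deque

N = 6  # the maze is an N x N grid; cells are nodes, adjacent cells share an edge

def grid_edges():
    edges = set()
    for x in range(N):
        for y in range(N):
            if x + 1 < N: edges.add(((x, y), (x + 1, y)))
            if y + 1 < N: edges.add(((x, y), (x, y + 1)))
    return edges

def neighbors(cell, edges):
    out = []
    for a, b in edges:
        if cell == a: out.append(b)
        if cell == b: out.append(a)
    return out

def bfs_path(edges, start, goal):
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal: return path
        for nxt in neighbors(path[-1], edges):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def corrupt(edges, keep_fraction, rng):
    """Intervene on the internal map: delete a fraction of true edges and
    add the same number of false ones (degrading structural similarity)."""
    edges = list(edges)
    n_swap = round(len(edges) * (1 - keep_fraction))
    kept = set(edges) - set(rng.sample(edges, n_swap))
    all_pairs = [((a, b), (c, d)) for a in range(N) for b in range(N)
                 for c in range(N) for d in range(N) if (a, b) < (c, d)]
    fake = [e for e in all_pairs
            if abs(e[0][0] - e[1][0]) + abs(e[0][1] - e[1][1]) > 1]
    kept |= set(rng.sample(fake, n_swap))
    return kept

rng = random.Random(0)
true_maze = grid_edges()
start, reward = (0, 0), (N - 1, N - 1)

for keep in [1.0, 0.9, 0.7, 0.5, 0.3]:          # X: degree of map-world similarity
    successes = 0
    for _ in range(200):
        internal_map = corrupt(true_maze, keep, rng)
        plan = bfs_path(internal_map, start, reward)
        # The plan succeeds (Y) only if every planned step is walkable in
        # the *true* maze: success depends on map-world correspondence.
        ok = plan is not None and all(
            (plan[i], plan[i + 1]) in true_maze or (plan[i + 1], plan[i]) in true_maze
            for i in range(len(plan) - 1))
        successes += ok
    print(f"similarity {keep:.1f} -> success rate {successes / 200:.2f}")
```

As the interventionist reading predicts, lowering the map-maze similarity lowers the success rate, while the same plan evaluated against a different "maze" would fare differently, which is the ecological point made below.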
Is similarity really causally relevant?

The following issue might well be raised in the context of our mechanistic-interventionist treatment of the notion of exploitable similarity. One could wonder whether it is really similarity as such that is causally relevant to success. Notice that it is impossible to perform an intervention on the similarity relation in any way other than by intervening in the structure of at least one of its relata (here, the representational vehicle or the represented target). But this invites a worry. Would it not be much more parsimonious to simply state that what is causally relevant for success are the structural properties of the vehicle and/or the target? After all, it is by intervening in either of them that we manipulate success. Why bother attributing the causal role to similarity itself? For example, to change a rat's performance at navigating mazes, it will suffice to intervene on the hippocampal map. Why not simply say that it is the structure of the map (the representational vehicle) that is causally relevant to the rat's success at spatial navigation? Why treat the relation between the map and the environment as causally relevant?

To reply to this objection, we need to be careful to make the distinction between interventions that change the way some cognitive system acts (behaviorally or cognitively) and interventions that change the success of its actions. A change of action can, but does not have to, change the success of the organism at whatever it is doing. If the change in the way the system acts is accompanied by an appropriate change in the external environment, the success can stay at the same level (e.g., we could change the rat's behavior in a maze without changing its ability to find food if the maze itself changes accordingly). At the same time, the same manipulation of action can change the success of the organism either by increasing or decreasing it; again, the direction of influence will depend on properties of the environment (e.g., on the structure of the maze that the rat is traversing). So there is no context-free, one-to-one correspondence between action and success. The reason for this is that success and failure in the sense we are using are essentially ecological categories. They co-depend both on what a given system is doing and on the world within which it is doing it. Notice now that by concentrating solely on the properties of the representational vehicle, we would completely miss the point just made. Surely, interventions in the structural properties of the vehicle (e.g., the hippocampal map) would change the cognitive system's actions (e.g., the rat's behavior when placed in a maze). That much is not debatable. But manipulating actions is not the same as manipulating success. Because of this, the effect that the structure of the vehicle has on action does not imply that the same sort of relationship exists between the vehicle's structure and success. It is impossible to say how manipulating the vehicle's structure (and so the organism's action) will change success independently of facts about the target; or, more precisely, independently of the facts regarding structural similarity between the vehicle and the target. In other words, interventions on the vehicle's structure change success only insofar as they change the degree of similarity between the vehicle and the target. They increase success if they increase the structural fit between the vehicle and the target. They decrease success only if they decrease the structural fit. And they do not change success if they do not bring about any change in the structural fit. In any case, what success depends on is not just the vehicle, but also structural similarity. Of course, again, the only way to intervene on similarity is by manipulating the relata. But it is just wrong to conclude from this that similarity itself is not what is causally relevant here. Let us formulate our point using some technicalities of Woodward's account of causal relevance. Suppose that the independent variable X corresponds not to similarity between the vehicle and the target, but to purely vehicular-structural properties of the representation. More precisely, imagine that each value of X corresponds to a different potential structural pattern of the vehicle, regardless of its relationship to anything outside the mechanism. The dependent variable Y remains the same, i.e., it measures the degree of success at realizing some capacity. Now, there are certain constraints that Woodward (2003) puts on any scientifically respectable causal relationship. Two of them are relevant for our present purposes.
First, interventions should not simply effect some change or other in Y. Rather, the relation between X and Y should be systematic, in that we should be able to establish which values of X correspond to which values of Y. Second, the relationship between X and Y should be stable, viz. it should hold across a wide range of different background conditions. But notice that neither of those constraints is met on the interpretation of X and Y that we are now considering. First, for the reasons we mentioned above, there is no clear mapping from values of X to values of Y, which prevents the relationship between those variables from being systematic in the relevant sense. Setting X at some value could well increase the value of Y, decrease it, or not change it at all. Second, the relation between X and Y is by no means stable. In fact, it is fundamentally unstable because of how dependent it is on the states of the environment. It is not possible to say how a manipulation of X will change the value of Y independently of the state of the target. Again, the same manipulation of X (e.g., setting the structure of the spatial map in the hippocampus) could bring about drastically different results depending on external circumstances (e.g., depending on the spatial structure of the maze that the rat navigates). Both of Woodward's constraints are, however, met if we go back to our original view and consider the variable X to correspond to the degree of similarity between the representational vehicle and the target. The relation between X and Y is then both systematic and stable. It is systematic because we can map increasing values of X onto increasing values of Y. And it is stable at least in the sense that it cannot be broken down by changes in the target; after all, the value of X itself partially depends precisely on properties of the target. Overall, we think that these considerations provide strong reasons to think that in an S-representational mechanism, what is causally relevant to success is really the relation of structural similarity.

S-representations versus detectors revisited

Let us now turn our attention to the problem of distinguishing S-representations proper from mere detectors or indicators. Some authors challenge the very notion that there is a genuine distinction to be made here (Morgan 2014), because they think that one cannot differentiate systems or mechanisms that operate on the basis of covariance from ones that exploit structural similarity. It is claimed that when some system or mechanism operates by using a detector whose states reliably covary with states of the target, it is straightforwardly possible to show that the system or mechanism in question relies on the similarity that holds between the detector's structure and some target. Consider the notorious thermostat, equipped with a bimetallic strip whose shape reliably reacts to (hence, covaries with) variations in the ambient temperature and, in turn, switches the thermostat's furnace to keep the temperature at a certain level. It is usually claimed that, if it is even justified to treat it as a representation (which is far from uncontroversial in itself; see Ramsey 2007, Ch. 4), the bimetallic strip counts as, at most, a detector or an indicator of some state of affairs. However, on closer inspection, it turns out that reliable causal covariance is not the only relation that connects the strip to ambient temperature. They are also related by way of structural similarity (see also O'Brien 2014).
Namely, there exists a structure-preserving mapping between the pattern of the bimetallic strip's possible shapes and the pattern of possible variations in ambient temperature. Furthermore, it may seem that the thermostat would not have the ability to adapt its behavior to the changing environment without the existence of a mapping between the metal strip and ambient temperature. Perhaps, then, we should regard the thermostat as a device that makes use of an S-representation after all? Or is there a genuine theoretical distinction to be made here at all? Maybe the conclusion to draw is that detectors "just are" S-representations (Morgan 2014)? This is a serious challenge for anyone who wants to see construing representation in terms of exploitable similarity as significantly different from, and perhaps also deeper in some important ways than, construing it in terms of indication or detection. Nonetheless, we want to argue that there is, in fact, a principled way of drawing the distinction between S-representations and detectors. We claim that there are two fundamental differences between S-representations and detectors in virtue of which the distinction is justified. The first difference pertains to the fact that although detectors can exhibit structural resemblance to their targets, this relation is not relevant to their workings in quite the same way that it is relevant in the case of S-representations proper. The second difference relates to what distinguishes the functioning of S-representations from the way detectors function. We discuss those differences in turn. Regarding the first difference, let us reconsider the thermostat example. One can ascribe a structure to the bimetallic strip because of the relations between the different shapes that it can take depending on the surrounding temperature. But notice that for a system such as the thermostat, it is not necessary or essential for the relational structure of possible indicator states to replicate the relations between different variants of ambient temperature in any particular way. Of course, it is the case that, say, the strip curvature that indicates 33°C is closer to the one that indicates 34° than to the one that indicates 17°. But we can imagine an intervention in a thermostat which breaks this structural resemblance while leaving the thermostat's workings intact. We may imagine that following this intervention, the detector strip reacts to the temperature being 33° by taking a shape that is closer to the one that corresponds to 17° than to the one that corresponds to 34°. However, this fact is simply irrelevant as long as the 33-degree detector state is specific to the environmental circumstances, such that (1) the detector enters this state as a result of the temperature being 33°, and (2) it switches the furnace into a state that is appropriate or functional given the temperature being 33°. The relation that this state bears to other states is irrelevant or accidental. To generalize, in detectors or indicators, the relations between alternative detector states need not replicate the relational pattern of the target. In this sense, the relation of structural similarity is epiphenomenal in their case: as such, it does not play a role in enabling the detector system to work properly.
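The epiphenomenal status of the detector's relational structure can be made concrete in a few lines of code: a lookup-table "thermostat" keeps working even after an intervention that scrambles which detector state goes with which temperature, so long as each state remains specific to its temperature. All values below are invented for illustration.

```python
import random

# Toy illustration: the furnace behaves identically under an ordered and a
# scrambled detector, because only state-target specificity matters, not
# the relational pattern among detector states.
temps = list(range(15, 35))

def strip_state(temp_c):
    """Each temperature maps to its own detector state."""
    return f"shape-{temp_c}"

# Intervention: permute which shape is paired with which temperature.
shuffled = temps[:]
random.Random(0).shuffle(shuffled)
scrambled = {t: f"shape-{s}" for t, s in zip(temps, shuffled)}

def furnace_on(state, table):
    """Switch on for exactly the states paired with cold temperatures."""
    cold_states = {table[t] for t in temps if t < 20}
    return state in cold_states

for detector in (strip_state, scrambled.__getitem__):
    table = {t: detector(t) for t in temps}
    behavior = [furnace_on(table[t], table) for t in temps]
    print(behavior[:7])  # identical furnace behavior under both detectors
```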
By contrast, an S-representation cannot do its job (i.e., enable success) without being structurally similar to the target. Here, the pattern of relations between components of the S-representation plays a crucial role. For example, a map, be it an artifactual map or a neurally realized cognitive map, needs to stand in a structural resemblance relation to the terrain if it is to perform its S-representational job; and any figure placed within a map can act as an S-representational surrogate only insofar as it stands in certain relations to other figures or lines on the map. In other words, in S-representations, the structure (i.e., the relational pattern) of the vehicle, and the resemblance that it bears to the target structure, plays a major role which is missing in the case of detectors. The second difference that underlies the S-representation/detector distinction does not pertain to the nature of the relation that connects the representational vehicle to its target. Rather, it relates to what distinguishes the functioning of S-representations from the way detectors function. Let us start with a simple illustration. Consider two people facing the problem of navigating their way from one location to another in a city. Person A has traversed this route many times in the past, to the point that she does not need to elaborately plan how to reach her destination. All that she has to do is react, at appropriate times and in appropriate ways, to particular environmental cues and landmarks (say, by turning left upon seeing a church, then right at the second crossing, etc.). Her navigational choices are fully dictated or determined by the territory itself: all that she does is respond to it in ways which enable her to eventually reach the destination. Now, person B has no previous experience with the city and so traverses the same route using satellite navigation. In the case of person B, it is not possible to explain her success at reaching the destination by simply pointing to how she reacts to environmental cues. This is because her ongoing navigational decisions depend on what happens in the navigation system. What guides B's actions are the system-derived anticipations and instructions, not the world itself. There surely are purely "receptive" aspects to using satellite navigation: B obviously needs to interact with the environment itself in order to verify satellite-based suggestions, and the system's receiver must itself interact with the satellite to track B's current position. However, there is an important sense in which person B's actions are controlled by the satellite navigation itself, as opposed to being fully controlled by the terrain. Here is our point. The strategy employed by person A is representative of what is crucial, from a functional standpoint, to cognitive strategies that employ detectors. In detector-based strategies, the represented part of the world constitutes the locus of control of an action or a cognitive process. Detectors are functionally bound to their targets. This is because all there is to working as a detector or indicator is being causally selective in useful ways. Detectors tend to react exclusively to certain states of affairs and, in turn, generate cognitive or behavioral responses that are appropriate given the circumstances. This is the case with the thermostat as a detector-based mechanism: its bimetallic strip reacts or responds to the target in a way that is useful to the larger mechanism. The way the thermostat behaves is under the control of the environment itself. One important thing to note here is that the "causal selectivity" at issue can be realized in ways other than through a direct causal relation between the detector and its purported target.
It could be established by an indirect, mediated causal chain, or by the detector and the target sharing a common cause. In yet other cases, the detector may be causally related to a state which is merely spatiotemporally correlated with the target. Take, for instance, magnetosomes in magnetotactic bacteria. Magnetosomes react causally to the magnetic North; but given that the magnetic North is reliably correlated with the location of oxygen-free water, they can drive the bacteria towards their preferred environment. Here the detector is already causally disconnected from what it supposedly detects (the target). Still, its workings boil down to being reactive to the environment in useful ways. On the contrary, using S-representations is not a matter of simply selectively reacting to targets. In our toy example, this is apparent in person B's case in that it is not the environment itself, but rather what happens in the satellite navigation, that constitutes the locus of control of the navigation process. To generalize this point, what is characteristic of S-representation-based strategies is that they employ an internal or endogenous source of control over action or cognitive processes; they are, so to speak, active, and not simply reactive, strategies (for an illuminating discussion of endogenously active mechanisms in nature, see Bechtel 2008, 2013). Furthermore, what is also crucial is that processes or manipulations over S-representations exhibit a certain degree of functional freedom from their targets, a freedom which is absent in the case of detectors. That is, the way the S-representation gets manipulated or updated is endogenously controlled; it depends on the internal set-up of the S-representation itself and is not dictated (although it may be affected) by the causal coupling with the target. Again, what the satellite navigation system anticipates to be the case as one traverses the terrain is not (just) a matter of what happens in the world, but (also) of what is encoded in the map itself. One of the major consequences of S-representations' being endogenously controlled and functionally free in the sense described above is that they can naturally perform their duties even when the processes that they undergo do not correlate with any concurrent target or represented processes; that is, when the S-representation cannot be said to track anything that actually takes place. In other words, S-representations can perform their function even if they do not change "in response" to targets, at least on any useful or illuminating interpretation of what "in response" could possibly mean in this context. Take S-representations which function in a robustly off-line manner. In such cases, the represented entity could be so spatiotemporally distant from the representation user as to count as absent for it (Clark and Toribio 1994). Just think of a person who manipulates an interactive digital map in order to plan a future trip, or even to consider some route purely counterfactually. In this strong sense, deploying S-representations off-line consists of manipulating them for the purpose of representing things located in the distant past or future, or ones that are merely counterfactual.
Notice also how in the case of (S-)representing future and counterfactual states of affairs, there is no possibility (at least not without some serious metaphysical gymnastics) of saying that the representation is reactive to what is represented (after all, the latter is nonexistent at the time the representation is employed).8 [Note 8: Importantly, we are not claiming that the possibility of off-line use itself is what distinguishes S-representations from detectors. Rather, we treat this fact about S-representations as resulting from their being functionally disconnected from their targets. Our claim is simply that, from an engineering standpoint, S-representations are naturally poised to subserve off-line cognition. However, we do not want to wholesale deny that there may be some sense in which indicators could function off-line, e.g., when the causal chain that leads from the target appearing to the detector entering some state is so long or slow that once the detector enters this state, the target is no longer present in the environment.] One might raise the question of how this functional story about S-representations relates to the story about exploitable similarity. To answer this issue, we need to note that being an endogenously controlled process is not sufficient for this process to count as employing S-representation. Exploitable similarity needs to be involved. Let us elucidate this point by showing how structural similarity is exploited in cases of S-representations that are used off-line, this time concentrating on subpersonal representations of the sort that could feature in cognitive mechanisms. Three ingredients are involved in the off-line use of an S-representation thus construed. First, the S-representation is actively transformed or manipulated within the mechanism. That is, the S-representational vehicle undergoes an endogenously controlled process in which its structure changes over time. The structure of the vehicle is being effectively put to use. Second, manipulations of this sort are employed by the larger mechanism to perform a certain function. For example, the effects of manipulations could serve (for some consumer component) as a basis for a decision about which course of action, out of some possible range, to take. Third, and crucially, the degree to which the effects of such manipulations of the S-representational vehicle's structure are actually functional should depend causally on how well those manipulations and their outcomes resemble targets. That is, if they are to successfully guide action or cognition, the internal manipulations need to actually resemble or simulate how corresponding target processes would unfold. Take the rat's spatial navigation system again. First, it has been suggested that place cells in a hippocampal spatial map can fire in sequences in a purely off-line manner, e.g., when the rat is asleep or is planning a route (see Johnson and Redish 2007; Pfeiffer and Foster 2013; Shea 2014; Miłkowski 2015). The map is internally manipulated and the firing sequences correspond to routes that the rat could take when navigating an actual territory. Second, these manipulations are functional for the navigational mechanism in that they (presumably) serve as a basis for route planning. Perhaps alternative routes leading to a reward are simulated in order to select one that is the shortest (Johnson and Redish 2007; Shea 2014).
Third, this off-line planning is effective to the degree to which internal simulations can actually be projected onto real interactions with the environment. That is, we could manipulate the rat's ability to effectively plan short, cost-effective routes through the environment by intervening on the degree to which its hippocampal map (and processes or manipulations performed over it) structurally resembles the terrain (and possible routes that the rat could take in it). In other words, if the rat is to be successful at planning, the unfolding of simulated actions should resemble how corresponding actions would unfold if the rat were to actually engage in them. By concentrating on the off-line use, we do not mean to suggest that S-representations are restricted to the domain of off-line cognition (see also note 8). The satellite navigation example mentioned before is a case in point: here, the S-representation controls an ongoing, direct interaction with the world (if this case still counts as off-line, then only in some rather minimal sense). This point can be generalized to encompass purely subpersonal cases. Mechanisms can use S-representations to regulate on-line interactions with the environment. Imagine a cognitive system whose internal states change concurrently with changes in the external environment, and control behavior so that it is adaptive given the circumstances. Someone might, mistakenly, consider it to be a detector system, not that different from a simple thermostat. However, when we investigate the system's workings, it turns out that its internal machinery is cut off from the target; it has no sensory apparatus. What explains its successful behavior is that it has an internal structure that continuously simulates the changing environment. This simulation is not a matter of responding to the target. Rather, it is an endogenously controlled process whose unfolding resembles the relevant dynamics in the environment, enabling the system to behave in accordance with the world it inhabits. The best, and, in fact, the only way to explain how the system manages to cope with the environment is to point to the similarity between its internal processes and processes in the environment. Hence, despite working in a purely on-line manner, the system in question turns out to employ an S-representation of its environment. Lastly, it needs to be noted that no real-life S-representational system, even one whose cognitive processes unfold in a purely on-line manner, would work if it were completely unresponsive to changes in the external environment. It would be impossible for such an encapsulated agent to detect and correct errors in the endogenous simulation of the environment, which could lead to catastrophic consequences. It is much more reasonable to postulate a mixed strategy which combines detector-based and S-representation-based ways of dealing with the environment on-line (as is the case in satellite-based navigation). What we mean is a system that simulates the environment but is at the same time equipped with response-selective detectors. The internal model could make predictions about the way the detectors will be affected by states of the world, with the mismatch between that prediction and the feedback that is actually generated serving as a way of ''measuring'' the representational error. This sort of prediction-error-based cognitive strategy is postulated by recent predictive processing approaches to cognition (Clark 2013; Friston and Stephan 2007; Friston 2010; Hohwy 2013).
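A deliberately simplistic numerical caricature of this mixed strategy, written in R, may help fix ideas: an endogenous model simulates a drifting environmental quantity, a noisy detector supplies feedback, and the mismatch between prediction and feedback corrects the model. All names and numbers here are illustrative assumptions and are not drawn from the predictive processing literature.

# Toy sketch: an endogenous simulation corrected by prediction error.
set.seed(1)
world <- cumsum(rnorm(50, sd = 0.1))       # slowly drifting environmental state
estimate <- 0                              # model's current simulation of that state
lr <- 0.3                                  # how strongly prediction error corrects the model
for (t in seq_along(world)) {
  prediction <- estimate                   # top-down prediction of the sensory input
  sensed <- world[t] + rnorm(1, sd = 0.2)  # detector: noisy, causally driven by the world
  error <- sensed - prediction             # bottom-up prediction error
  estimate <- estimate + lr * error        # endogenous model corrected in light of the error
}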
According to the predictive processing story, on-line perception and action are underpinned by an internal generative model which encodes the causal-probabilistic structure of the organism-external environment (Gładziejewski 2016b). The model is constantly updated in a way that aims at simulating the ongoing changes in the environment, and it constantly predicts incoming sensory stimuli (hence the qualification ''generative''). Updating and prediction are endogenous in nature, as the updating crucially depends on pre-stored likelihood and prior probability distributions, and the predictions are trafficked in a top-down manner. Thus, the generative model constitutes an endogenous source of control of perception and action. However, as mentioned, the process of internal simulation could go catastrophically astray if it were impossible for it to get corrected in case of error. And here is where detectors come into play. In predictive processing, the sensory system is reliably causally dependent on the environment (hence, it acts as a detector of sorts), and the difference between its actual states and internally generated predictions results in a prediction error signal, which is propagated in a bottom-up manner. This way the internal model can be corrected in light of the prediction error. In other words, the S-representation (the generative model) and detectors (the sensory apparatus) work together. To generalize this point, although S-representations are not detectors, they will sometimes need detectors to help them with their representational duties. Conclusions In this paper, we attempted to clarify the claim that internal representations are S-representations. First, we proposed a mechanist-interventionist interpretation of the idea of similarity as an ''exploitable'' relation. This interpretation appeals to the causal role that similarity plays in enabling the successful operation of cognitive mechanisms. Second, we provided reasons for thinking that S-representations are indeed a separate type of representation, distinct from purported indicator or detector representations. On the view that we opted for, the key to this distinction lies in the fact that (1) S-representations' workings depend on structural similarity in a way that is not the case with detectors, and (2) they constitute an endogenous source of control that exhibits a degree of functional freedom from states of the environment. Overall, we hope that our proposals further pave the way that leads away from seeing representations as a matter of reacting to the world detector-style, and towards the idea that representing the world is a matter of actively modeling it. Before we close the discussion, there is one last issue that merits mention. Some authors have argued that the domain in which S-representations can be explanatory is restricted to low-level cognition and that S-representations are not quite suited to explain more sophisticated, human-level cognitive capacities (Morgan 2014; see also Garzón and Rodríguez 2009). Notice, however, that our approach assumes relatively minimal, empirically uncommitted criteria of what counts as an S-representation. Because of this, our criteria can be met by internal structures that vary, perhaps drastically, in terms of their cognitive sophistication. There are a couple of dimensions along which there could be such variance.
First, the vehicles of S-representations can vary in their relational complexity (and there should be corresponding variance in the complexity of their representational objects). Second, the manipulations performed over those vehicles can vary in their dynamic or computational complexity. Third, S-representations can differ in how decoupled from the environment they are; i.e., they can function in a way that is more or less off-line. Fourth, perhaps a case could be made that the flexibility and context-dependence of components that act as consumers of S-representations can vary (see Cao 2012). Now, if we agree that S-representations differ along those dimensions, what we end up with is a continuum of S-representations of increasing sophistication. If this is a workable position (and we see no reason to doubt this), then it should no longer be mysterious how S-representations could underlie both simple and phylogenetically old cognitive capacities, as well as complex capacities that are phylogenetically new and perhaps even human-specific, such as reasoning, imagery, or mental time travel. Roughly, more sophisticated cognitive functions are underpinned by more sophisticated S-representations. In fact, our own empirical bet is that human-level off-line cognition is largely a matter of being equipped with highly sophisticated S-representations, ones that actually earn the status of ''mental models.''
v3-fos-license
2020-06-24T13:06:55.407Z
2020-06-22T00:00:00.000
219984176
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://academic.oup.com/g3journal/article-pdf/10/8/2819/38825753/g3journal2819.pdf", "pdf_hash": "c2ab2aa205071019c687d1a0447fdb7ca42356fc", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42781", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "sha1": "70797c8a6c5d1778f90a154cf883b4690e131cd8", "year": 2020 }
pes2o/s2orc
Identification of Loci That Confer Resistance to Bacterial and Fungal Diseases of Maize Crops are hosts to numerous plant pathogenic microorganisms. Maize has several major disease issues; thus, breeding multiple disease resistant (MDR) varieties is critical. While the genetic basis of resistance to multiple fungal pathogens has been studied in maize, less is known about the relationship between fungal and bacterial resistance. In this study, we evaluated a disease resistance introgression line (DRIL) population for the foliar disease Goss's bacterial wilt and blight (GW) and conducted quantitative trait locus (QTL) mapping. We identified a total of ten QTL across multiple environments. We then combined our GW data with data on four additional foliar diseases (northern corn leaf blight, southern corn leaf blight, gray leaf spot, and bacterial leaf streak) and conducted multivariate analysis to identify regions conferring resistance to multiple diseases. We identified 20 chromosomal bins with putative multiple disease effects. We examined the five chromosomal regions (bins 1.05, 3.04, 4.06, 8.03, and 9.02) with the strongest statistical support. By examining how each haplotype affected each disease, we identified several regions associated with increased resistance to multiple diseases and three regions associated with opposite effects for bacterial and fungal diseases. In summary, we identified several promising candidate regions for multiple disease resistance in maize and specific DRILs to expedite interrogation. Keywords: multiple disease resistance; maize; Goss's wilt; quantitative disease resistance; Genetics of Immunity. Plants need to defend themselves from many pathogenic microbes present in their environment. Furthermore, the widespread cultivation of varieties with limited genetic diversity increases the risk of pathogen attack (Strange and Scott 2005). Crops are seldom attacked by just a single pathogen, and thus, breeding is usually conducted for resistance to multiple pathogens (Khush 1989; Ceballos et al. 1991). Multiple disease resistance (MDR) is defined as host plant resistance to more than one disease and is controlled by single to many genes (Wiesner-Hanks and Nelson 2016; Nene 1988). Despite the widespread need across many crops for multiple disease resistant varieties, little is known about the genetic determinants of MDR. A few cloned disease resistance quantitative trait loci (QTL) have been shown to provide protection against multiple diseases, including Lr34 and Lr67 in wheat (Krattinger et al. 2009; Moore et al. 2015) and GH3-2 in rice (Fu et al. 2011). Genes conferring resistance to multiple diseases include those that encode signaling pathways, pathogen recognition, hormone-associated defense initiation, antimicrobial peptides, sugar signaling and partitioning pathways, and cell death-related pathways (Wiesner-Hanks and Nelson 2016). A more thorough understanding of MDR in crops will facilitate the development of varieties resistant to multiple diseases. Maize is a staple cereal affected by over 32 major diseases that can cause substantial yield losses (Mueller et al. 2016; Munkvold and White 2016). Foliar diseases can cause significant production constraints, particularly in conducive environments. A survey from 2012 to 2015 showed that foliar diseases of maize led to the largest estimated yield losses in the northern U.S. corn belt in non-drought years (Mueller et al. 2016).
Pesticides are available to manage fungal foliar diseases but are costly and have environmental impacts (Paul et al. 2011; Bartlett et al. 2002). No effective labeled chemical control is available for the major bacterial foliar diseases. An effective and environmentally benign method of disease management is host plant resistance (Nelson et al. 2018). Heritabilities for foliar diseases are moderate to high, indicating that breeding to develop resistant varieties is possible (Dingerdissen et al. 1996; Lopez-Zuniga et al. 2019; Zwonitzer et al. 2010; Ceballos et al. 1991). Many MDR mapping studies in maize have focused on fungal diseases, and less is known about the relationship between resistance to fungal and bacterial diseases. In a synthesis study, Wisser et al. (2006) examined the relationship between fungal, bacterial, and viral resistance and identified loci that conferred resistance to fungal and bacterial diseases. Subsequent studies identified regions, and even genes, that confer resistance to the three most significant fungal foliar diseases: southern corn leaf blight (SCLB), northern corn leaf blight (NCLB), and gray leaf spot (GLS) (Lopez-Zuniga et al. 2019; Zwonitzer et al. 2010; Belcher et al. 2012; Yang et al. 2017b). Relatively few regions have been identified that confer resistance to both a fungal and a bacterial pathogen in maize (Chung et al. 2010; Chung et al. 2011; Qiu et al. 2020; Jamann et al. 2014; Jamann et al. 2016; Hu et al. 2018). In this study, we focused on two bacterial diseases, bacterial leaf streak (BLS) and Goss's bacterial wilt and blight (GW), as well as three fungal diseases: SCLB, NCLB, and GLS. Goss's wilt and bacterial blight is one of the most destructive foliar diseases of maize (Mueller et al. 2016) and is caused by Clavibacter nebraskensis (Li et al. 2018). The blight phase of the disease is characterized by water-soaked tan to gray linear lesions with irregular margins parallel to, but not bounded by, leaf veins. The bacteria colonize the xylem, and vascular wilt symptoms can develop in susceptible lines (Jackson et al. 2007; Schuster 1972). The bacteria usually enter the leaves through wounds, but can also enter through natural openings in the absence of wounding under high-humidity conditions (Mallowa et al. 2016). First identified in 1969, GW is now found throughout the midwestern United States and Canada (Malvick et al. 2010; Howard et al. 2015; Singh et al. 2015; Schuster 1972; Mueller et al. 2016; Jackson et al. 2007). Bacterial leaf streak, caused by Xanthomonas vasicola pv. vasculorum (Xvv), is an emerging disease in the Americas (Damicone et al. 2018; Jamann et al. 2019; Korus et al. 2017; Leite et al. 2019). The bacteria enter and exit through wounds and stomata to colonize intercellular spaces, but do not enter the vasculature (Ortiz-Castro et al. 2018). NCLB, GLS, and SCLB are among the most important fungal foliar diseases. NCLB is of global importance and is caused by the hemibiotrophic pathogen Setosphaeria turcica (syn. Exserohilum turcicum). In inoculated trials using susceptible germplasm, NCLB caused a 30-62% grain yield reduction (Perkins and Pedersen 1987; Raymundo and Hooker 1981). Humid conditions and moderate temperatures favor NCLB development. Gray leaf spot is also of global importance and is caused by the necrotrophic fungi Cercospora zeae-maydis and Cercospora zeina. Gray leaf spot can cause as much as a 50% yield loss (Ward et al. 1999) and develops quickly under high-humidity conditions.
Southern corn leaf blight, caused by Bipolaris maydis, is usually found in hot and humid regions and can cause up to a 40% yield loss if the varieties are susceptible and the conditions are favorable (Byrnes et al. 1989). All the diseases are favored by high-humidity environments. There is overlap between diseases in pathogenesis and in tissue-level pathogen localization. For example, the pathogens causing NCLB and GW both colonize the xylem (Chung et al. 2010; Mbofung et al. 2016), and for both BLS and GLS the pathogen enters through the stomata (Beckman and Payne 1982; Ortiz-Castro et al. 2018). We conducted linkage mapping for GW in a chromosome segment substitution line (CSSL) population, referred to as a disease resistance introgression line (DRIL) population. We selected a DRIL population because it was developed to study multiple disease resistance (Lopez-Zuniga et al. 2019). Data for BLS (Qiu et al. 2020), SCLB, NCLB, and GLS (Lopez-Zuniga et al. 2019) were combined with the GW data to examine MDR. We evaluated the DRIL78 population, an ideal population for this study, as the donor line NC344 is resistant and the recurrent parent Oh7B is susceptible for all the diseases studied (Cooper et al. 2019; Qiu et al. 2020; Lopez-Zuniga et al. 2019). Thus, we hypothesized that we could identify regions for resistance to fungal and bacterial pathogens in this population. Multivariate analysis was used to identify potential MDR loci. Multivariate analysis based on Mahalanobis distance (Md) has been used for genome scans in both human and plant studies (Luu et al. 2017; Tian et al. 2008; Lotterhos et al. 2017). In this study, we used Md to combine the mapping results from the five diseases. Md is not trait-specific; instead, it is a test for outlier markers across all traits and takes multiple mapping result datasets into consideration. The outlier markers, reported as putative MDR markers, are those that do not follow the pattern of the majority of the data point cloud (Rousseeuw and Van Zomeren 1990). The overall objective of this study was to compare the genomic basis of resistance to fungal and bacterial diseases in maize. Mapping was conducted for GW using phenotypic data collected in three environments and combined with previously published studies for BLS, NCLB, SCLB, and GLS (Lopez-Zuniga et al. 2019; Qiu et al. 2020). Here, we: 1) identify novel QTL associated with GW through linkage mapping; 2) explore the relationship between the five diseases in this population; and 3) estimate the effect of potential MDR haplotypes on the five diseases. Plant materials Disease resistance introgression line population DRIL78 is an ideal CSSL population for multiple disease evaluation, as the donor parent (NC344) is multiple disease resistant and the recurrent parent (Oh7B) is multiple disease susceptible (Lopez-Zuniga et al. 2019; Cooper et al. 2019; Qiu et al. 2020; Wisser et al. 2011). The population was developed by a cross between NC344 and Oh7B, three generations of backcrossing, and four subsequent generations of self-pollination via single-seed descent to obtain BC3F4:5 lines (Lopez-Zuniga et al. 2019). This population was selected because preliminary data showed significant differences between the parents of this population for all diseases examined. Phenotypic evaluation The DRIL78 population was evaluated in three environments: Urbana 2016, Monmouth 2017, and Urbana 2017.
The Urbana trials were conducted at the University of Illinois Crop Science Research and Education Center South Farms located in Urbana, IL. The Monmouth trial was conducted at the University of Illinois Monmouth Research Station located in Monmouth, IL. In Urbana 2016, 260 lines were evaluated for GW in one replication. In 2017, 229 and 233 lines were evaluated in Monmouth and Urbana, respectively, each with two replications. Differences in the number of lines evaluated were due to seed availability and not related to disease phenotype. For Monmouth and Urbana 2017, we generated an incomplete block design using the agricolae package in R (Version 3.5.1) (de Mendiburu and de Mendiburu 2019; R Core Team 2018). For Monmouth 2017 and Urbana 2017, Oh7B was included in each block, along with the resistant check line NC344 or NC258. NC344 was not included in every block due to seed availability. For Urbana 2016, we used an augmented incomplete block design with one replication. In this location, the parental lines NC344 and Oh7B were included in each block. Seed was machine planted with 20 kernels per plot. Plots were 3.2 m long with 0.76 m alleys between plots and 0.762 m row spacing. Fields were managed using standard agronomic practices for central Illinois. Disease evaluation Clavibacter nebraskensis isolate 16Cmn001 was used for the GW inoculations (Cooper et al. 2018). We inoculated the plants twice, once at the V4 stage and a second time at the V7 stage (Abendroth et al. 2011), as described by Cooper et al. (2018). Two inoculations improved the differentiation between lines. We assessed the extent of necrosis of inoculated plants using a visual percentage rating on a per-plot basis with 5% intervals, starting about two weeks after the first inoculation date. A rating of 0% represented no disease in the plot, while 100% indicated that all the foliage was necrotic (Poland and Nelson 2011). In Urbana 2016, two visual ratings were taken 17 days apart; in Urbana 2017, two ratings were taken 18 days apart; in Monmouth 2017, three ratings were taken with 8 and 9 days between ratings. We calculated the area under the disease progress curve (AUDPC) scores for each plot in R (Version 3.5.1) (R Core Team 2018) using the audpc function in the agricolae package (de Mendiburu and de Mendiburu 2019) (File S1). Statistical analysis Least Square Means (LSMeans) were estimated for GW for each environment (2016 Urbana, 2017 Urbana, and 2017 Monmouth) and for the combined multi-environment dataset using AUDPC values and the lmer function in the R package lme4 (Doran et al. 2007). Linear mixed models were constructed for each environment and for the combined dataset; in the notation defined here, the single-environment model was of the form Y = m + G + R + B(R) + e, and the combined model of the form Y = m + G + E + GE + R(E) + B(R(E)) + e, where Y is the response variable (AUDPC) as described above, m is the overall mean, G is the fixed genotype (introgression line) effect, B is the random blocking effect, R is the random replication effect, E is the random environment effect, GE is the random genotype-by-environment interaction effect, and e is the residual error. Blocks were nested within replications within environments. Only significant factors were included in the models. We examined the skewness of the data using the skewness function from the e1071 package (Dimitriadou et al. 2009). Heritability on both a plot and a family-means basis was calculated for GW with SAS (version 9.4) using PROC MIXED, as described by Holland et al. (2003).
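To make this pipeline concrete, a minimal R sketch of the steps just described (plot ratings to AUDPC to a single-environment mixed model) is given below. The data frame gw and its column names are hypothetical stand-ins for the per-plot data, and the model shown is only the generic single-environment form, since only significant factors were retained in the actual analyses.

library(agricolae)
library(lme4)

# AUDPC for one plot from repeated severity ratings (%) and rating days
ratings <- c(5, 20, 45)   # illustrative ratings, not real data
days <- c(0, 9, 17)
audpc(ratings, days)

# Generic single-environment model: genotype fixed, replication and
# block-within-replication random; `gw` is a hypothetical per-plot data
# frame with columns audpc, genotype, rep, and block.
fit <- lmer(audpc ~ genotype + (1 | rep) + (1 | rep:block), data = gw)
# genotype LSMeans are then extracted from `fit`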
We calculated LSMeans for the BLS data based on the raw measurements from Qiu et al. (2020). The model included genotype as a fixed factor, and replication and block nested within replication as random factors. We obtained LSMeans for SCLB, NCLB, and GLS from Lopez-Zuniga et al. (2019). Multiple comparison tests were conducted using the LSMeans calculated for each disease individually to identify the lines that were significantly different from the recurrent parent Oh7B. To perform the tests we used the package multcomp in R, specifically the function glht, with a Dunnett's p-value adjustment (Hothorn et al. 2016). Disease correlations We conducted Pearson's product-moment correlation tests among LSMeans for the diseases (ten total comparisons) in R using the cor.test function. The parent lines were not included. SCLB, NCLB, and GLS were rated using a 1-9 rating scale, where 1 indicated 100% leaf area affected by the pathogen and 9 indicated no disease; BLS phenotypes were lesion length measurements, where small values indicated shorter lesions; GW was rated using a percentage scale based on the severity of the disease, where 0% indicated no disease. To have a uniform scale for correlation analysis, we multiplied the BLS and GW LSMeans values by -1. With this modification, low values indicated more severe infections for all datasets. Linkage mapping A total of 190 lines, including the recurrent parent Oh7B, were shared across all five datasets. We used the LSMeans for 190 lines and 237 single nucleotide markers from Lopez-Zuniga et al. (2019) to conduct linkage mapping for each of the five diseases (File S2). The software ICIMapping 4.0.6.0 with the options "CSL" and "RSTEP-LRT-ADD" mapping was used to conduct QTL analysis (Meng et al. 2015). We conducted 1000 permutations with a 0.10 Type I error rate to determine the logarithm of odds (LOD) threshold. We recalculated the LOD threshold for each disease. The physical positions of markers with LOD values exceeding the threshold are reported based on B73 RefGen_v3 coordinates (Schnable et al. 2009). Multivariate analysis We conducted multivariate analysis to identify QTL associated with more than one disease using the methods described in Lopez-Zuniga et al. (2019). The five diseases each served as a variable, and the "robust Mahalanobis distance" method was used to combine the five variates to detect outlier markers. In this study, Mahalanobis distance (Md) was calculated based on the five negative log10 p-values of the LOD scores derived from the five single-disease mapping results. Outlier markers were detected based on p-values for Md. The detailed steps of the multivariate analysis are described below: (i) conduct linkage mapping analysis with ICIMapping for each trait in the population independently; (ii) obtain trait-specific, permutation-based LOD thresholds and trait-specific marker LOD values from the mapping results; (iii) calculate p-values for each marker for each disease from the marker LOD values, accounting for the variation in LOD significance thresholds between different mapping studies (Nyholt 2000); (iv) convert p-values into negative log10 p-values; (v) calculate Mahalanobis distance based on the negative log p-values (Md-p) for each of the diseases in R with the OutlierMahdist function in the rrcovHD package (Todorov 2018), as described by Lotterhos et al. (2017); (vi) calculate p-values for Md-p for each marker (Rousseeuw and Van Zomeren 1990).
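A minimal R sketch of steps (iii)-(vi) is given below, assuming a 237 x 5 matrix lod of marker LOD scores (one column per disease). Two substitutions are made for illustration: the LOD-to-p conversion shown is a generic chi-square approximation standing in for the paper's threshold-aware function, which is not reproduced here, and MASS::cov.rob stands in for the authors' rrcovHD::OutlierMahdist call. The final line anticipates the FDR control described next.

library(MASS)

p_single <- pchisq(2 * log(10) * lod, df = 1, lower.tail = FALSE)  # (iii), placeholder conversion
nlp <- -log10(p_single)                                            # (iv)

rob <- cov.rob(nlp, method = "mcd")            # robust center and covariance
md2 <- mahalanobis(nlp, rob$center, rob$cov)   # (v) squared robust distances

# (vi) p-values for Md-p: squared distances are referred to a chi-square
# distribution with df = number of diseases (Rousseeuw and Van Zomeren 1990)
p_md <- pchisq(md2, df = ncol(nlp), lower.tail = FALSE)

# Benjamini-Hochberg adjustment at a 1% false discovery rate
mdr_markers <- which(p.adjust(p_md, method = "BH") < 0.01)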
To control for multiple comparisons, the false discovery rate (FDR) was calculated by adjusting the p-values using the "BH" method (Hochberg and Benjamini 1990) with the p.adjust function in R. Markers were declared to be significant using a 1% FDR. Haplotype effect calculation The maize genome has previously been divided into 100 bins, which we used here to delineate disease resistance-associated segments of the genome (Davis et al. 1999). The chromosomal bin for each marker that passed the 1% FDR Md-p test and the single-disease linkage mapping analysis was recorded. We considered bins with at least three significant Md-p markers as candidate MDR regions. The selected MDR regions were delimited by the positions of the two flanking significant markers. To calculate the haplotype effect for each region, we identified lines with introgressions in the MDR regions and then calculated, using the raw AUDPC data, the difference between the mean AUDPC for those lines and the mean AUDPC for the recurrent parent Oh7B (Belcher et al. 2012). Because different scales were used for each disease and we wanted to compare between diseases, we standardized the haplotype effect by Oh7B. Finally, we conducted a t-test using the percentage change to determine whether there was a significant difference between the Oh7B phenotype and the introgression line effect. The null hypothesis was that there is no difference between Oh7B and the haplotype effect (percent change = 0). Characterization of germplasm As expected, the recurrent parent Oh7B was the susceptible parent for all the diseases we examined. Of the five diseases, the parents were the most phenotypically similar for BLS. Using a multiple comparison test, we detected significant differences between the donor and recurrent parent for all diseases except BLS. Similar to what has been reported previously for fungal disease phenotypes (Lopez-Zuniga et al. 2019), there was substantial transgressive segregation for the bacterial diseases (Figure 1). Like the fungal diseases, the DRIL78 population included lines with transgressive segregation for GW only in the direction of susceptibility, indicating NC344 may donate alleles for both resistance and susceptibility. In contrast, transgressive segregation for BLS occurred in both directions, suggesting that resistance to BLS in NC344 and Oh7B is conditioned by complementary sets of alleles. Using our data, we calculated the heritability for GW: heritability on a plot basis was 0.53 (s.e. = 0.03) and on a family-mean basis was 0.78 (s.e. = 0.02), indicating that progress can be made from inbred line evaluations in breeding for this disease. Using a multiple comparison test, we examined whether there were DRILs that were significantly more resistant or susceptible than the recurrent parent. For GW, 16 of the 258 lines, or 6.2% of the lines tested, were significantly different from Oh7B (Table 1). Despite the presence of transgressive segregants for susceptibility to BLS, none of the DRILs were significantly more susceptible than Oh7B; however, three lines were significantly more resistant than Oh7B. Correlation between diseases We tested pairwise correlations among the five diseases. A total of five of the ten pairwise correlation tests were significant (P < 0.05); the two bacterial diseases were not significantly correlated. Of the correlations that were significant, coefficients ranged from 0.15 to 0.31 (Table 2). The correlations for the three fungal diseases vary slightly compared to Lopez-Zuniga et al.
(2019), as fewer lines are in common across all five diseases than were included in the correlation analysis for the three fungal diseases. For the three fungal diseases, as previously reported, resistance to NCLB was significantly and positively correlated with resistance to SCLB and GLS, while the correlation between resistance to GLS and SCLB was positive but not significant (Lopez-Zuniga et al. 2019). Here, we found significant and positive correlations among pairs of bacterial and fungal diseases (GW and NCLB; GW and GLS; BLS and NCLB). These correlations suggest that loci conditioning MDR to bacterial and fungal diseases may exist in this population, although the correlations could also be due to morphological traits or other factors. Identification of multiple disease resistant lines The correlations between diseases suggested that MDR loci may exist in this population, so we tested whether the same DRILs were significantly more resistant or susceptible than the recurrent parent for multiple diseases. Only 5.3% of the lines (10 of 189 lines) were significantly different from Oh7B for more than one disease. The lines that were significantly different for more than one disease represented seven unique two-disease combinations. Not all possible two-disease combinations were represented. Only one line was significantly different from Oh7B for the combination of the two bacterial diseases. There were four bacterial/fungal disease combinations, all of which included GW, with seven lines that were significantly more resistant to the combination of a bacterial and a fungal pathogen. The remaining two lines were significantly different from Oh7B for a combination of two fungal diseases (SCLB and GLS; NCLB and GLS). For NCLB and SCLB there were lines that were resistant to the respective fungal disease, but susceptible to GW. No lines were significantly different from Oh7B for more than two diseases. GW linkage mapping The genotype-by-environment interaction accounted for some variance; thus, single-environment mapping analysis was also conducted for GW. We conducted linkage mapping for GW for the three individual environments, as well as for the combined dataset. A total of ten QTL on chromosomes 1 through 6 and 9 were detected (Table 3). Six of the QTL were stable, as they were consistently detected across multiple environments or in the combined dataset. The QTL detected in chromosomal bin 2.07 (qGW2.07; peak marker PHM14412-4) was detected in all three individual environments and the combined dataset. The QTL in chromosomal bins 3.06, 4.06, and 9.02 were detected in more than one environment, and the additive effect estimates and percentage of variance explained by these QTL were similar across datasets. We examined the additive effect estimates and percentage of variance explained by the significant markers. The GW QTL were of small effect, with the largest-effect QTL, referred to as qGW2.07, accounting for 8.96% of the phenotypic variation in the combined dataset. The other QTL explained from 3.73 to 8.84% of the phenotypic variance. The QTL detected on chromosomes 2, 3, and 9 had negative additive effect estimates, indicating that the NC344 allele confers resistance. The QTL with positive additive effect estimates on chromosomes 1, 3, 4, and 6 indicate that the Oh7B allele confers resistance. On chromosome 3, two QTL were identified within the same bin. NC344 conferred the resistant allele for both QTL in bin 3.04.
Multivariate multiple disease mapping Across all diseases, we detected 18 significant markers in the single-trait mapping, with two markers for BLS, five for GW, four for SCLB, three for NCLB, and six for GLS. The markers detected in the single-trait mapping were designated "single-trait markers." Among the 18 single-trait markers, two were shared by multiple diseases (GW and SCLB; GW and GLS) (Figure 2). Across the single-trait analyses, chromosomes 1 through 4 were all associated with more than one disease (Figure 2). Multivariate analysis was conducted to detect MDR regions using the robust Mahalanobis distance method (Rousseeuw 1985; Rousseeuw and Van Zomeren 1990). At a 1% false discovery rate, 54 out of 237 markers were detected as related to one or more diseases. The 54 significant markers included all 18 single-trait markers. Several regions emerged as likely MDR candidates. We identified the largest number of significant markers on chromosomes 1 (10 significant markers), 3 (8 significant markers), and 8 (9 significant markers). On chromosomes 4, 6, and 10, several markers exceeded the multi-trait threshold, indicating that even markers with relatively low LOD scores for individual diseases can have a high multi-trait Md value (Lopez-Zuniga et al. 2019). We observed four co-localized QTL in bin 8.03 and three in bin 9.02. The two regions with markers that were identified for more than one disease in the single-trait analysis, specifically bin 3.04 (GW and SCLB) and bin 4.06 (GW and GLS), were also detected in the Md test. In all, five regions with the strongest statistical support, and which have been examined in previous studies, were selected to examine their role in resistance to multiple diseases. (Table footnotes: a, both lines were more resistant to SCLB, but more susceptible to GW; b, both lines were more resistant to NCLB; of those, one line was more resistant to GW, while the other was more susceptible to GW.) Haplotype effect analysis We hypothesized that some haplotypes may have opposite effects on bacterial and fungal diseases, e.g., a region may confer resistance to a fungal disease but susceptibility to a bacterial disease. We selected the MDR regions located in bins 1.05, 3.04, 4.06, 8.03, and 9.02 to test this hypothesis. We estimated the effect of the haplotype at each of the selected regions, referred to as the haplotype effect, on disease severity for each of the diseases (Figure 3). The MDR region at bin 8.03 was associated with resistance to GW, NCLB, and GLS; bin 9.02 was associated with resistance to GW, SCLB, and GLS. While the introgressions conditioned resistance relative to Oh7B for these two bins, the effect sizes varied. These may be examples of uneven pleiotropy, whereby an MDR locus has varying effect sizes for different diseases (Wiesner-Hanks and Nelson 2016), or of tight linkage. Some regions conferred contrasting effects for the diseases examined: the haplotypes at bins 1.05, 3.04, and 4.06 had an opposite effect for GW as compared to the other diseases. The NC344 haplotype at bin 1.05 was associated with resistance to SCLB and GLS, but susceptibility to GW. The introgressions in bin 3.04 conferred resistance to all three fungal diseases, but susceptibility to GW. Lines with introgressions at bin 4.06 were more resistant to BLS and SCLB, but more susceptible to GW as compared to Oh7B.
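To make the haplotype-effect estimate concrete, a minimal R sketch for one region and one disease is given below. The data frame plots and the vector introgressed_lines are hypothetical names standing in for the raw per-plot severity data and the set of lines carrying the NC344 introgression in the region.

# `plots` has one row per plot, with raw severity (e.g., AUDPC for GW) in
# `severity` and the line name in `line`; `introgressed_lines` holds the
# names of the lines with the introgression. Both names are hypothetical.
line_means <- tapply(plots$severity, plots$line, mean)
carriers <- names(line_means) %in% introgressed_lines

# percent change relative to the recurrent parent (negative = more resistant)
pct_change <- 100 * (line_means[carriers] - line_means["Oh7B"]) / line_means["Oh7B"]

# one-sample t-test of the null hypothesis that the percent change is zero
t.test(pct_change, mu = 0)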
The examination of individual loci showed that the same region can confer opposite effects for different diseases, and suggests that in these cases multiple disease resistance may be due to linkage rather than pleiotropy. DISCUSSION The heritability of GW resistance in this population was relatively high and on par with previous GW studies (Ngong-Nassah 1992; Singh et al. 2016; Cooper et al. 2018). High heritability has been reported for the three fungal diseases in this population, namely 0.76 for SCLB, 0.75 for NCLB, and 0.59 for GLS (Lopez-Zuniga et al. 2019), indicating that progress can be made from inbred line evaluations in breeding for these diseases. Bacterial leaf streak had the lowest heritability of the diseases examined in this population: 0.42 (Qiu et al. 2020). The GW QTL we identified were relatively stable across multiple environments in the single-trait analysis. The QTL in bins 1.05, 2.07, and 9.02 were consistently detected and colocalized with previously identified QTL (Cooper et al. 2018; Singh et al. 2016). A central objective of this study was to investigate the relationship between resistance for multiple diseases in a mapping population. Previous studies demonstrated that resistance to the three fungal diseases, namely SCLB, NCLB, and GLS, is correlated. For instance, high positive (>0.5) genetic correlations were detected in a diversity panel of 253 inbred maize lines between resistance to all the pairwise fungal disease combinations (Wisser et al. 2011). The DRIL78 correlations for the fungal diseases are not as strong as in other populations, as no correlation was detected between resistance to SCLB and GLS (Lopez-Zuniga et al. 2019). Resistance to these two diseases is typically significantly and highly correlated (Zwonitzer et al. 2010). The lack of correlation in this population is likely due to the alleles segregating in this population. We previously reported a significant positive correlation between resistance to a bacterial (GW) and a fungal disease (NCLB) in a different population (Cooper et al. 2018). The significant correlations among diseases indicate the possibility of MDR in this population. Despite the differences between fungal and bacterial pathogens, some of the pathogens can infect the same tissue types, specifically the vasculature. SCLB and GLS are non-vascular diseases (Beckman and Payne 1982; Minker et al. 2018), while GW and NCLB are vascular diseases (Minker et al. 2018; Mbofung et al. 2016). Only one vascular/vascular (NCLB and GW) disease correlation combination was identified. Most combinations were of a vascular and a non-vascular disease (NCLB with BLS, NCLB with SCLB, NCLB with GLS, and GLS with GW), indicating that either resistance is linked but not pleiotropic or that there is another resistance mechanism at play that does not interfere with the pathogen's growth within specific plant tissues. We found evidence of regions conferring resistance to more than one disease from the single-disease analysis. This is consistent with previous reports across multiple species of clustering of regions conferring disease resistance (Wisser et al. 2005; McMullen and Simcox 1995; Miklas et al. 2000). (Table footnotes: b, the physical position (RefGen_v3) of significant markers; c, chromosomal bin location of significant QTL (Davis et al. 1999).) The same marker was effective for two disease combinations, specifically for the combination of GW and SCLB in bin 3.04 and for the combination of GW and GLS in bin 4.06.
This is consistent with the multiple comparison test, where lines effective against these two disease combinations were identified. The Pearson's product-moment correlation coefficients were significant for the combination of GW and GLS. Interestingly, in both instances, the QTL protect against a combination of a vascular bacterial disease and a non-vascular fungal disease. To examine MDR in the DRIL78 population, multi-disease post-mapping analysis based on Md was conducted. All 18 of the markers detected in the single-trait mapping analysis were significant in the Md analysis. One possible explanation for this is that significant Md values can arise due to only one trait, so that if a marker was highly significant for one disease, it would be identified as an MDR marker as well. The fundamental idea of the Md approach is to identify outliers in multivariate space, and outliers can occur in any one of the dimensions (the five disease-trait dimensions in our case). For the 36 novel markers from the multivariate analysis, LOD values were not high enough to exceed the LOD threshold in the single-trait mapping analysis. However, combining the five diseases together, creating a new variable Md-p, and testing for Md-p outliers led to the identification of the additional markers. Lopez-Zuniga et al. (2019) also noted this phenomenon when testing for MDR markers using an Md approach. We found that disease-associated QTL were distributed across all 10 chromosomes, but the QTL were not evenly distributed. This is consistent with previous synthesis studies on the genomic distribution of disease QTL in maize (Wisser et al. 2006). Based on the distribution of the single-trait and multi-trait QTL, we focused on five MDR regions to investigate further. Of these five regions, bins 1.05, 3.04, 8.03, and 9.02 have been reported previously to be related to multiple diseases in other populations (McMullen and Simcox 1995; Wisser et al. 2006; Ali et al. 2013; Lopez-Zuniga et al. 2019; Cooper et al. 2018). Lopez-Zuniga et al. (2019) identified bin 1.05 for resistance to SCLB, NCLB, and GLS, and bin 3.04 for SCLB and GLS. Another study in maize utilizing near-isogenic lines found that bins 3.03-3.04 and 9.02-9.03 were associated with SCLB, NCLB, and GLS resistance (Belcher et al. 2012). In addition to the three selected fungal diseases, bin 3.04 was also found to harbor QTL conferring resistance to European corn borer, Fusarium stalk rot, common rust, and maize mosaic diseases (McMullen and Simcox 1995). We hypothesized that allele effect sizes differed at each locus for each disease and that some QTL had contrasting effects for different diseases. We found that some regions were associated with resistance to one disease and susceptibility to another, which is consistent with previous findings in other studies (Belcher et al. 2012). The introduction of resistance for one disease might unintentionally introduce susceptibility for a second disease. Fine mapping is required to determine whether the same gene is conferring resistance to one disease and susceptibility to another.

Figure 2. Manhattan plot for multivariate analysis. The mapping results for the two bacterial diseases are represented with warm colors and the three fungal diseases in cold colors. The GW&SCLB and GW&GLS symbols indicate that the same SNP is significantly associated with both diseases. The MO symbol corresponds to the markers that were not significant in the single-trait mapping analysis but were significant in the multi-trait composite analysis. The dotted line indicates the 1% FDR for the Md statistic. The dashed line represents the Md value for the minimum LOD threshold for the five mapping analyses.

Figure 3. Estimation of haplotype effect. The x-axis indicates the selected genomic regions, and the y-axis indicates the percentage change in disease severity of lines with an introgression at that region. A negative percentage value indicates that lines with an introgression in this region were more resistant than Oh7B, and a positive value indicates that the lines were more susceptible. A t-test was conducted to examine the significance of the bin effect. * indicates the 0.05 significance level; ** indicates the 0.01 significance level; and *** indicates that the p-value was smaller than 0.001.

The mechanisms underlying MDR in this population remain elusive. Of the combinations of diseases identified using the multiple comparison and multivariate tests, there was no clear pattern of pathogen kingdom or pathogenesis process in the combinations observed. Thus, if there is a pleiotropic gene underlying these regions, the mechanism is not obviously associated with pathogen kingdom or with the growth of the pathogen in the vasculature. Resistance to all five of the diseases examined here is largely quantitative (Qiu et al. 2020; Cooper et al. 2019; Wisser et al. 2006), and thus it is conceivable that common quantitative disease resistance mechanisms could underlie the observed multiple disease resistance. Several mechanisms have been hypothesized to underlie quantitative disease resistance (Poland et al. 2009; Yang et al. 2017a), and some of these could have effects across pathogen kingdoms and pathogenesis strategies. It is important to note that this study does not have the resolution to resolve these QTL to single genes, and it is likely that several of these cases are due to linkage, not pleiotropy. Breakpoint analysis is needed to further dissect these loci. CONCLUSION In summary, a total of five QTL associated with resistance to GW were identified in the combined-environment mapping study, one of which was consistent across all individual environments and the combined-environment mapping analysis. By combining the GW mapping results with published data for NCLB, SCLB, GLS (Lopez-Zuniga et al. 2019), and BLS (Qiu et al. 2020), we identified genomic regions associated with multiple disease resistance. Two markers were identified in the independent single-trait mapping analysis as conferring effects for two diseases. A total of 36 MDR-related markers were identified in the multivariate analysis. Disease QTL were distributed across all ten chromosomes, and we focused on five regions with QTL clustering. We found strong support for multiple disease resistance QTL at bins 1.05, 3.04, 4.06, 8.03, and 9.02 across multiple analyses. We found evidence of QTL conferring contrasting effects for different diseases. This work deepens our understanding of multiple disease resistance in maize and of the relationship between fungal and bacterial disease resistance.
v3-fos-license
2019-04-20T13:12:28.971Z
2014-01-01T00:00:00.000
55415444
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.hrpub.org/download/20131215/UJPA5-18401473.pdf", "pdf_hash": "a0ca889f9fc6b77d17472beb3dc9052ea9d41efc", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42783", "s2fieldsofstudy": [ "Physics" ], "sha1": "6dfc6abd2807818b04f513eb5eb1553d110fcba5", "year": 2014 }
pes2o/s2orc
Higgs Boson Mass and Other Constants Estimated by Polymer Statistics Methods The Higgs boson mass, the fine structure constant, the Weinberg angle, and the critical exponents are estimated by polymer statistics methods. Friction forces have electroweak nature. A new phase transition, occurring on changing the even number of components of an ordering field, n, near the α-relaxation transition point in melt-crystallized polymers, is revealed. The transition from n = 0 to n = 2 is found. In the framework of the scaling theory of phase transitions and critical phenomena, the results obtained are in good agreement with experimental and theoretical data. Introduction Near the α-relaxation transition, the draw ratio at break, λ_br, has a maximum [1,2] in copolymers of low-pressure ethylene with acrylic acids containing a small amount of the latter comonomer (~0.1-0.3 mol% [1,3]), as well as in linear polyethylene (PE) (see [4] and refs. in it). At this point the expression λ_n²/λ_br² = P(N) (obtained for room temperature) should be changed as formulated in [2]. Here, λ_n is the neck draw ratio, P(N) = l_a/l_a^cr is the probability of collision of chain ends, l_a is the mean thickness of amorphous layers in isotropic material, l_a^cr is the same value at N = N_cr, N = M_w/μ is the polymerization degree, μ = 14 kg/kmol is the molecular mass of the repeated CH2 group, and ln N_cr ≈ 15.89. Figure 1 shows the dependence of λ_br on N at room [5] and elevated [4] temperatures. The theoretical curves λ_br = (N/N_cr)^((1−νd)/2) are also presented in Figure 1, where ν is the critical exponent of the correlation radius and d is the space dimension. It has been recently shown [6] that ν = 0.5(1 + t − 1/t), where t = 1 + 2π(n + 2)/f, f = 137, and n is the number of components of the ordering field; it should be underlined that n may only be an even number, otherwise the gauge symmetry breaks (see Appendix). The theoretical curves in Figure 1 seem to confirm our assumption that near the α-relaxation transition the phase transition from n = 0 to n = 2 occurs. Here, we have used the more precise value ln N_cr ≈ 15.69 in comparison with [2]. Two Phase Model of Polymer Structure; An Estimation of Density Ratio Let a polymer chain with the polymerization degree, N, walk over two lattices having a common boundary, both being simple cubic. In that case z is the number of neighbour monomers surrounding a site of the crystalline lattice (z = 2d = 6 for three-dimensional space). If we regard z* as the same parameter for the second, amorphous lattice and introduce the probability p = z*/z < 1, then we can write the partition function, Z: Z = pz = 1/(1 − p) (1), where p can be defined as the probability of discovering a monomer of the chain in the first phase while 1 − p is the same probability for the second phase. Assuming Z = z*, we find p = Z/z. From the solution of equation (1) we obtain p_+ = (3 + 3^(1/2))/6 and Z = 3 + 3^(1/2) ≈ 4.732. The other solution, p_− = 1/Z = 1 − p_+, has no interest for subsequent considerations. Taking the logarithm of expression (1) we obtain ln Z = ln p + ln z (2). Multiplying equation (2) by c/ln z, where c is the mean concentration of chain monomers in the first phase, we find c·ln Z/ln z = c·ln p/ln z + c = C, where C is a mean concentration.
We suppose that C is the concentration of monomers of the chain in the second phase. Thus, we can draw an important conclusion: for the two-phase model of polymer structure, the density ratio of the most dense phase to the second phase is a constant, equal to ln z/ln Z ≈ 1.153 for three-dimensional space. It should be underlined that this relation is correct within an error of less than 4% for a number of semicrystalline polymers. In Table 1 the experimental data [7] are presented for 12 polymers. The list may be expanded to a great extent. The value of Z ≈ 4.732 is very close to that of 4.68, which is characteristic of the polymer statistics for the excluded volume problem [8]. This enables us to solve our problems in the framework of the scaling theory of critical phenomena. This is the second important conclusion obtained with the help of the present model.

Figure 1. Experimental data are obtained at room temperature [5] and at 75 C [4]; they are marked by circles and by squares, respectively. The theoretical curves λ_br = exp(−0.5(νd − 1)ln(N/N_cr)) correspond to N_cr ≈ exp(15.69) and ν from Table 2 for n = 0 (dashed line) and n = 2 (solid line).

3D-exponents In Table 2 the theoretical values of the critical exponents ν, γ, β are presented. If n = 1 or n = 3, then such systems can be regarded as examples of materials where states with n = 0 and n = 2 (or n = 2 and n = 4) must have the same probability, while n = 6 seems to be realized in some percolation systems such as gels [8]. The experimental value of the exponent η is not determined exactly. The theoretical estimations [9,10] show its weak dependence on n. In this connection, we may attribute any reasonable meaning to η. Then for β and γ the following evaluations can be obtained in standard ways (γ + β = δβ) [9], if we suppose η = δ/f and use the identity δ = −1 + 2d/(d − 2 + η). Thus, η ≈ 0.0350, γ = νd(δ − 1)/(δ + 1), β = νd/(δ + 1). These values are presented in Table 2. They are in good accordance with recent theoretical data [10]. However, more accurate experimental checking could be made in the future. If 1/f = 0 and d = 4, then we obtain the Landau exponents. Thus, we can see that friction forces have electroweak nature. Firstly, λ_n is expressed with the help of the β-exponent and is connected with the energy dissipation during drawing [2,3]. Secondly, the exponent η is related to a dynamic exponent [2,3,8] and characterizes the viscosity of the swelling polymer coil [2,3]. Here m_e ≈ 0.511 MeV is the electron mass. The rest of the exponents are expressed in terms of the similarity laws [8][9][10]. Friction forces have electroweak nature. Appendix Following Edwards [11], the behaviour of a polymer chain with excluded volume can be described with the help of a function, ψ(r), which is the solution of the diffusion equation with the boundary condition ψ(r, r′) = δ(r − r′)a³ at N = 0, where a is the monomer diameter (the lattice constant); φ(r) is the potential connected with the excluded volume parameter, V. Let ψ(r) be an eigenfunction of the operator Ĥ(r). Let us assume also that there exists a uniform field H with a vector potential A(r). While the symmetry of the operator Ĥ(r) can be broken because the vector potential is not a periodic function of the coordinates, the physical translational symmetry of the polymer system does not change, as the lattice over which the macromolecule makes random walks is periodic. Further we will use the well-known solution for the symmetry of electron states in a lattice with a magnetic field [12].
Appendix

Following Edwards [11], the behaviour of a polymer chain with excluded volume can be described with the help of a function $\psi(r)$, which is the solution of the diffusion equation with the boundary condition $\psi(r, r') = \delta(r - r')\,a^3$ at $N = 0$, where $a$ is the monomer diameter (the lattice constant), and $\varphi(r)$ is the potential connected with the excluded-volume parameter $V$. Let $\psi(r)$ be an eigenfunction of the operator $\hat{H}(r)$, and let us assume that there exists a uniform field $H$ with a vector potential $A(r)$. Although the symmetry of the operator $\hat{H}(r)$ can be broken, because the vector potential is not a periodic function of the coordinates, the physical translation symmetry of the polymer system does not change, since the lattice over which the macromolecule performs random walks is periodic. In what follows we use the well-known solution for the symmetry of electron states in a lattice with a magnetic field [12].

Let us choose the gauge for the vector potential of the uniform field, and let the field $h$ be taken as in (A.1), where $f$ is a prime number, $a_3$ is one of the lattice periods, and $v$ is the volume of the elementary unit ($v = a^3$ for the primitive cubic lattice, which we use for the sake of simplicity). Then, for the translations (A.2), where $i$, $j$, $k$ are the unit vectors of the orthogonal basis and $m$, $n$, and $l$ are whole numbers, we have $g(L, L') \equiv 1$ in (A.2) and $\hat{S}_L \hat{S}_{L'} = \hat{S}_{L'} \hat{S}_L$.

Returning to the estimation of the critical exponents, we have found [13] that the wave vector $q$ gives the main contribution to the integral for the pair correlation function of the concentration fluctuations. The wave vector $q$ has a very important meaning for polymer statistics, since it is connected with the fluctuation interactions of the chain ends and, consequently, with the field $h$ corresponding to the chain ends [14,8]. Let $h$ designate $h \cdot k$ in (A.3), and let us choose $N = \exp[1/(ha^2)]$. Then $N < N_{cr}$ at $h > h_{cr}$, where $N_{cr}$ is the value of the degree of polymerization at which the transition from a swollen coil to an ideal one ($h \to h_{cr} > 0$) can be revealed [13]. This leads to

$$\nu = 0.5\left(1 + t - \frac{1}{t}\right), \qquad t = 1 + \frac{2\pi(n + 2)}{f}, \qquad \mathrm{(A.8)}$$

where $n$ is the number of components of the ordering field, and $f = 137$ is chosen to obtain the best agreement with the estimate $\nu \approx 1 - 6^{-1/2}$ at $n = 0$ [3,13,15]. We have used the relation $\nu - 0.5 \sim n + 2$ given by the Wilson ε-expansion [9,16]. Thus $n$ may be an even number only; otherwise $g(L, L')$ can take a purely imaginary value in (A.2). Formula (A.8) can be rewritten in another way by treating $2\pi(n + 2)/f$ as a small parameter.
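Carrying out that small-parameter expansion explicitly, with $\varepsilon = 2\pi(n + 2)/f$ so that $t = 1 + \varepsilon$ (a short derivation from (A.8) as stated, not reproduced from the original):

```latex
\nu = \tfrac{1}{2}\!\left(1 + t - \tfrac{1}{t}\right)
    = \tfrac{1}{2}\!\left(1 + 2\varepsilon - \varepsilon^{2} + O(\varepsilon^{3})\right)
    \approx \tfrac{1}{2} + \frac{2\pi(n + 2)}{f}
```

This makes the linear dependence $\nu - 0.5 \sim n + 2$ used above explicit.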
The role of the amygdala in the pathophysiology of panic disorder: evidence from neuroimaging studies

Although the neurobiological mechanisms underlying panic disorder (PD) are not yet clearly understood, an increasing amount of evidence from animal and human studies suggests that the amygdala, which plays a pivotal role in the neural network of fear and anxiety, has an important role in the pathogenesis of PD. This article aims to (1) review the findings of structural, chemical, and functional neuroimaging studies on PD, (2) relate the amygdala to panic attacks and PD development, (3) discuss the possible causes of amygdalar abnormalities in PD, and (4) suggest directions for future research.

Introduction

Panic disorder (PD) is characterized by repeated panic attacks, often accompanied by anticipatory anxiety, fear of losing control or sanity, or behavioral changes related to the attacks [1]. An epidemiological study conducted with a nationally representative sample estimated the lifetime prevalence of PD to be 4.5% [2]. A panic attack typically develops suddenly and reaches its peak within 10 minutes. Symptoms that accompany panic attacks include palpitations, chest pain, sweating, trembling, smothering sensations, abdominal distress, dizziness, and fear of dying. Panic attacks themselves are estimated to be highly prevalent, with up to 28.3% of people experiencing a panic attack at least once in their lifetime [2-4]. Panic responses, when proximal predatory threat is approaching or actually present, are adaptive in the sense that they prepare animals to fight vigorously or flee (the "fight or flight" response) [5]. During panic attacks, however, an intense fear response with aroused sympathetic activity is manifested in the absence of actual danger [1]. Animal model studies, genetic studies, and human neuroendocrinology and neuroimaging studies have provided important insights [5-9], although the neurobiological underpinnings of the panic attack and PD are not yet completely understood [10]. Neuroendocrinological studies have implicated dysfunction of the hypothalamic-pituitary-adrenal axis, although this perturbation occurs only later in the course of the disorder, after the development of anticipatory anxiety and associated distress [11,12]. Apart from a higher frequency of the COMT Val158Met polymorphism in PD patients, the role of genes in PD susceptibility has not yet been defined [7]. Neuroanatomical correlates that may be responsible for PD pathogenesis have been suggested [6,13]. The implicated brain areas are the amygdala, thalamus, hypothalamus, and brain stem regions including the periaqueductal gray, parabrachial nucleus, and locus ceruleus [6,13]. In particular, the amygdala has been suggested to have a critical role in PD [14], although a few studies indicate otherwise [15]. The aim of this review is to describe and discuss the neuroimaging findings and current hypotheses on the role of the amygdala in the pathophysiology of PD.

The amygdala

The amygdala is known to have 13 nuclei, which can be categorized into lateral, basal, and central subregions [16]. In humans, the nuclei of the amygdala are usually grouped into a laterobasal subgroup including both lateral and basal nuclei, a centromedial subgroup, and a cortical subgroup.
These subgroups deal with fear [17], a process which can be schematically illustrated as follows: the lateral subgroup receives information from cortical and subcortical areas; the basal subgroup interconnects the lateral and central subgroups and sends output to cortical areas; and the central subgroup conveys the information to brain regions including the hypothalamus and periaqueductal gray [17]. The laterobasal and central subgroups are also connected with the bed nucleus of the stria terminalis, which in turn projects to the hypothalamus, cerebellum, and brain stem areas [18]. Figure 1 shows the simplified inputs and outputs of the amygdala that have been reported to be associated with PD pathogenesis [13,18]. Theoretically, disruption of any of these brain areas or of the connections between them, or any imbalance in the network, can cause maladaptive and exaggerated fear responses such as panic attacks, increased basal anxiety or arousal [19], and excessive worrying [18,20]. Consistent with this postulation, structural, chemical, and functional alterations in these amygdalar areas have been reported in neuroimaging studies of patients with PD (Tables 1, 2, 3, 4, 5; Additional files 1, 2, 3). It has been suggested that the amygdala has a critical role in the development of panic attacks and the pathogenesis of PD [14,17,18].

Structural abnormalities of the amygdala in patients with PD

There have been only a handful of structural neuroimaging studies examining neuroanatomical alterations in patients with PD (Tables 1, 2; Additional file 1), relative to the number of studies in patients with other anxiety disorders such as post-traumatic stress disorder and obsessive-compulsive disorder [42-45]. In earlier studies using computed tomography (Table 1), it may not have been possible to evaluate amygdalar structural alterations due to insufficient spatial resolution and tissue contrast. All magnetic resonance imaging studies in which amygdalae were manually traced have consistently reported amygdalar volume reduction in patients with PD [46-48] (Additional file 1). In the report of Uchida and colleagues, there was a trend-level significance for bilateral amygdalar volume reduction [48], while two other studies reported a statistically significant bilateral amygdalar volume reduction [46,47]. In addition to the relatively small sample size, the fact that amygdalae were traced on 2 mm-thick reformatted magnetic resonance (MR) images in the study of Uchida and colleagues [48] may have resulted in a comparatively greater error range, which could have undermined the power to detect volume differences in small structures. In the report of Massana and colleagues, 1.2 mm-thick isocubically reformatted MR images were used for tracing [47]. Although the voxel size of the images used for tracing is not described in the report of Hayano and colleagues [46], it is likely that 1.5 mm-thick native images were used, according to the standard protocol of manual segmentation in 3D Slicer (http://www.slicer.org), the image analysis software adopted in the study. Among five studies that used the whole-brain approach of voxel-based morphometry (VBM) (Additional file 1), the report of Asami and colleagues noted amygdalar volume reduction [49]. It has been suggested that the VBM approach may have limited sensitivity for identifying a priori-specified structural differences unless analytical approaches such as small volume correction are applied.
This indicates that a large sample size may be required for reliable results in VBM studies [50-52]. Thus, it is not surprising that the study of Asami et al., which had the positive finding in the amygdala, had the largest sample size among these five VBM studies. The effect size for group differences was greater in the right amygdala than in the left amygdala across all four studies that reported amygdalar volume reduction [46-49]. In the study of Asami et al., a statistically significant amygdalar volume reduction was noted only in the right hemisphere. Reasons for the potentially greater deficits in the right amygdala than in the left amygdala in patients with PD may be found in a large body of literature on hemispheric organization for processing emotions such as fear. The right hemisphere has long been considered dominant over the left hemisphere with regard to emotional behaviors [54,55], and theories suggesting lateralized roles of the right and left hemispheres for different aspects of emotion have arisen [56]. Sackeim and colleagues proposed that the right hemisphere may be dominant especially in processing negative emotions [57]. Regarding fear processing and the amygdala, more recent neuroimaging studies indicated that the right amygdala is primarily involved in processing acquired fear, while the left amygdala is particularly involved in processing innate fear [58]. This lateralization has been interpreted in the context that innate fear may require more conscious and linguistic processing of the stimuli, whereas acquired fear may not require as much conscious elaboration of the stimuli, since the response would rather be automatic [58]. In PD, enhanced conditionability of fear (i.e., a tendency to acquire fear more easily) with resistance to extinction has been considered one of the core elements of the pathology [59,60]. Because of the cross-sectional nature of the studies, it is not appropriate to infer causality in the relationship between PD and reduced amygdalar volume. However, although speculative, some of the evidence indicates that amygdalar deficits, particularly in the right amygdala, may predispose to PD. Massana and colleagues found amygdalar volume reduction both in the subgroup of PD patients with a long duration of illness (>6 months) and in those with a relatively short duration of illness (<6 months). There were no associations between the magnitude of amygdalar volume reduction and clinical measures including panic symptom severity and illness duration [49]. In Uchida et al., correlational results between amygdalar volume and clinical measures were not reported [48]. In the study of Hayano et al., the right amygdalar volume of patients with PD had a negative correlation with the neuroticism score of the NEO Personality Inventory-Revised, a measure of the enduring tendency toward experiencing negative emotional states, while the left amygdalar volume showed a negative correlation with the state anxiety score from the State-Trait Anxiety Inventory, a measure of anxiety severity [46]. This conjecture is entirely speculative, however, and urges further studies. It is less likely that amygdalar deficits in patients with PD are due to the use of antidepressants: the total daily dose of antidepressant was not associated with the amygdalar volume reduction [49], and the study of Massana and colleagues was conducted in patients who were antidepressant-naive [47].
Also, a meta-analysis in patients with major depressive disorder has suggested that antidepressant medication increases amygdalar volume rather than causing amygdalar deficits [61]. Since the amygdala is composed of many different subnuclei with distinctive connections and functions [62], researchers are interested in whether there is subregional specificity of the amygdalar deficit. Hayano and colleagues investigated which subregion of the amygdala might show more deficits in patients with PD [46]. They employed optimized voxel-based morphometry with small volume correction using bilateral amygdalar masks. Volume deficits were noted in the amygdalar subregion that might correspond to the corticomedial subregional group. The central and medial subnuclei that are part of the corticomedial subregion have been implicated in autonomic responses to fear stimuli [63]. However, the laterobasal subregion of the amygdala has also been reported to be involved in the pathogenesis of PD [64]. Further studies adopting strategies that allow finer registration among inter-subject amygdalae [65-67] may be needed to confirm the subregional findings. Evaluating the structural connectivity between the amygdala and other brain regions on diffusion tensor images would provide important additional information [68]. Relative to healthy comparison subjects, patients with PD had a greater fractional anisotropy value in the left anterior and right posterior cingulate regions [23] (Table 1). These regions may have a role in maintaining visceromotor homeostasis through their interconnections with the amygdala [69]. Recently, with the use of high-quality diffusion tensor images and T1-weighted images, evaluating the connections among amygdalar subregions and other brain regions has become possible [70,71]. Studies with this approach would also enhance the understanding of the roles of amygdalar subregions.

Functional abnormalities of the amygdala in patients with PD

Functional neuroimaging techniques that have been used for studying PD include functional magnetic resonance imaging (fMRI), positron emission tomography (PET), single photon emission computed tomography (SPECT), electroencephalography (EEG), and near-infrared spectroscopy (NIRS). Among these, NIRS and EEG may not be suitable for assessing amygdalar activity due to a relatively superficial penetration depth [72] (Tables 2 and 4). There have been PET and SPECT studies that used neutral-state paradigms. Earlier studies with a region-of-interest approach did not find significantly different amygdalar activity [73-75] (Additional file 2). As De Cristofaro and colleagues commented [74], the image resolutions of SPECT and PET may not be sufficient for delineating the amygdala from adjacent structures. A neutral-state PET study that used a whole-brain approach [76] also did not find amygdalar metabolic differences, while in a more recent study [77], greater metabolism in bilateral amygdalar regions was noted in patients with PD compared with healthy comparison subjects. The authors conjectured that this discrepancy may have stemmed from differences in image resolution, sample size, and subjects' age [77]. Among three fMRI studies that captured brain regional activation during spontaneous panic attacks, two studies conducted in patients with PD or specific phobia found increased right amygdalar activation [78,79].
A study conducted in an individual with restless legs syndrome but without any history of psychiatric disorders noted a relationship between left amygdalar activity and heart rate [80] (Additional file 3). A PET study found no difference in amygdalar blood flow in a healthy volunteer who experienced a panic attack during fear conditioning in comparison with 5 healthy volunteers who did not experience panic attacks during the same fear conditioning sessions [81]. Symptom provocation paradigms were used both in healthy volunteers and in patients with PD (Additional files 2, 3). When challenged with cholecystokinin tetrapeptide (CCK-4) or procaine, healthy volunteers showed increased amygdalar activation [82] or increased regional cerebral blood flow [83-85]. In patients with PD, CCK-4 or doxapram intravenous injection did not elicit significantly greater amygdalar activation [86,87]. Due to relatively modest sample sizes, there is a risk of type II error, precluding the possibility of drawing any conclusions from these negative findings. In a report of Boshuisen et al., PD patients experiencing anticipatory anxiety showed significantly decreased regional cerebral blood flow in the right amygdala [88]. The authors commented on the possibility that cortical inhibitory effects in response to the intense anticipatory anxiety may have depressed amygdalar activity; it has also been suggested that this might reflect a functional impairment of the amygdala in coping with anticipatory anxiety [88]. Cognitive activation probes to evaluate amygdalar function have also been used in functional neuroimaging studies of PD [9]. In most studies using fMRI, patients with PD showed altered amygdalar activation levels (Additional file 3), except in a study with motor activation paradigms [89] and a few with relatively small sample sizes [90,91]. In addition, Maddock et al. suggested that the repeated presentation of threat-related stimuli in their study (20 words repeated eight times) may have habituated the amygdalar response [90], which may have resulted in the negative finding in this region. Among studies reporting altered amygdalar activation in the PD group relative to the comparison group, both higher and lower activation levels were found [92]. While most of the studies reported increased amygdalar activation, two studies [93,94] noted deactivation of the amygdala in patients with PD relative to healthy comparison subjects. Statistical activation maps comparing the threat condition with the safe condition in the study of Tuescher and colleagues showed less amygdalar activation in PD patients in the threat condition relative to post-traumatic stress disorder patients [93]. However, this was mainly due to amygdalar hyperactivation in PD patients in the safe condition relative to the threat condition, whereas post-traumatic stress disorder patients showed increased amygdalar activation in the threat condition. The fact that patients with PD were compared with post-traumatic stress disorder patients in this study, together with differences in the experimental paradigm from other studies, precludes direct comparison of this result with those from other studies [95]. In this study, patients were instructed that visual presentation of a square of a certain color meant that an electrodermal stimulation could occur at any time; no actual stimulation was delivered during scanning.
This may have caused anticipatory anxiety, as in the study of Boshuisen and colleagues [88], in which patients with PD also exhibited decreased regional cerebral blood flow in the amygdala. Deactivation of the amygdala in response to the presentation of fearful faces [94] may in part be due to the fact that most of the patients were taking antidepressants [92], since antidepressants have been reported to decrease amygdalar activity [96]. In addition, it should be considered that a relatively lenient statistical threshold (p < 0.05) was applied for the a priori-hypothesized regions of the anterior cingulate cortex and the amygdala [94]. It is possible that only certain subgroups of patients with PD exhibit amygdalar hyperactivation [92]. In one study, only women were responsive to fearful faces [97]; in another, patients with the COMT 158Val allele showed hyperactivation of the amygdala in response to fearful faces [98]. With a relatively small sample size, heterogeneity with regard to genetic polymorphism and sex may make it hard to detect increased amygdalar activation, if present [99]. Baseline anxiety and associated physiologic changes may also confound functional neuroimaging findings [19]. Findings regarding the laterality of amygdalar activation (Additional file 3) are less consistent than those of amygdalar volume (Additional file 1). The left and right amygdalae are known to be involved in different aspects of fear processing [58]. Direct comparison of results is not appropriate since the experimental paradigms differ slightly from each other [95], and in studies with a relatively small sample size, differences in the number of subjects with right- or left-handedness may confound the laterality findings [95]. There have been reports investigating amygdalar function in relation to treatment (Additional file 2). When untreated and treated patients were compared with healthy comparison subjects, patients treated with antidepressants showed no difference in amygdalar serotonin binding potential, while untreated patients showed significantly lower serotonin binding potential [100]. Treatment with paroxetine for 12 weeks in psychotropic medication-naive PD patients altered amygdalar glucose metabolism [101]. In the report of Prasko and colleagues, treatment with antidepressants did not change amygdalar glucose metabolism [102]; this may have stemmed from exposure to antidepressants prior to study participation [102]. Cognitive behavioral therapy did not change amygdalar glucose metabolism [102,103]. The number of studies regarding treatment-related changes in amygdalar function is small, and more studies are required before conclusions can be drawn.

Potential mechanisms underlying altered amygdalar function and structure in patients with PD

Structural and functional neuroimaging findings with regard to the amygdala in patients with PD are not perfectly consistent, but a pattern suggested by one of the neurocircuitry models may be noted [13]: increased reactivity of the amygdala together with structural deficits, similar to the findings in other anxiety disorders such as post-traumatic stress disorder [92]. Whether structural deficits of the amygdala cause its hyper-reactivity, or prolonged hyperactivation of the amygdala elicits overuse atrophy, is not known [104]. The brain physiological processes underlying altered amygdalar function and structure may be partly explained by findings from molecular imaging and chemical neuroimaging studies (Additional file 2, Table 5).
Decreased gamma-aminobutyric acid (GABA)-benzodiazepine receptor binding has been reported in the medial temporal lobes, in areas that may include amygdalar regions [105,106] (Additional file 2). In the amygdala, decreased 5-HT1A receptor binding has also been reported [100]. Both of these receptors are involved in inhibitory neurotransmission, and defective inhibition of amygdalar activity may result in paroxysmal elevations in anxiety [105]. Chemical neuroimaging studies have reported lower levels of GABA in patients with PD than in healthy comparison subjects [37,40], which were not reversed with acute benzodiazepine challenge [38]. Decreases in the GABA level may cause dysfunction in GABAergic inhibition of brain activity [107]. Moreover, experimentally lowering the GABA level caused panic-like behaviors in rats [108]. Findings from chemical neuroimaging studies suggest metabolic disturbances in the brains of patients with PD [19] (Table 5). A hypermetabolic state was suspected based on findings showing depletion of phosphocreatine and creatine [92,109] in patients with PD, consistent with the functional neuroimaging findings. Shioiri and colleagues reported a higher level of inorganic phosphate in patients with PD [41]; inorganic phosphate is known to accumulate when creatine phosphate is broken down during anaerobic metabolism [110]. A rapid rise of the brain lactate level in response to a physiological challenge (a vigilance task with visual stimuli) also suggests an intrinsic metabolic disturbance [19,36]. Genetic polymorphism and early-life experiences may also contribute to amygdalar abnormalities. Specifically, the COMT Val158Met polymorphism, which affects amygdalar structure, function, and receptor expression [98,111-113], has also been reported to be associated with PD development [7]. Data from the National Comorbidity Survey showed a positive association between early-life traumatic experiences and later development of PD [114], and PD patients who experienced early-life traumatic events had a more severe symptom profile [115]. Relationships between early-life traumatic experiences and amygdalar dysfunction have also been reported [116].

The role of amygdalar pathology in developing PD

Electrical or chemical stimulation of the amygdalar central nucleus causes a constellation of symptoms that are very similar to those of panic attacks [63]. Electrolytic lesions made in the central nucleus would also disrupt fibers connecting the laterobasal nucleus and the bed nucleus of the stria terminalis, which has outputs to the hypothalamus and brain stem [18], where the centers for autonomic and neuroendocrine response regulation are located. When the efferent fibers from the central nucleus are stimulated, similar effects are produced [117]. Abruptly blocking the tonic GABAergic inhibition in the laterobasal subregion of the amygdala induces symptoms mimicking panic attacks [64]. These animal studies suggest a role of the amygdala in the pathogenesis of PD. Earlier models of the neurobiology of PD underscored hypersensitivity to carbon dioxide, namely the "false suffocation alarm" theory [118,119], although it has since been revised [120]. The following neurocircuitry model [13] is well suited to explaining the role of the "hyperactive and smaller" amygdala [104] in the pathogenesis of PD, given the fairly consistent findings.
Findings of Ziemann et al. suggest a role of the amygdala in sensing central acidosis [121], which has been considered one of the core pathophysiological processes in PD [122]. In chemical neuroimaging studies in which neurometabolite changes were measured during induced panic attacks [31-33,102], the lactate/N-acetylaspartate level increased during panic attacks (Table 5). Hyperventilation or sodium lactate infusion leads to a metabolic alkalosis, which presumably augments neuronal activity [123] and shifts the redox state toward glycolytic metabolism, producing lactate [29,31]. The report of Shioiri and colleagues, which found an increased level of inorganic phosphate, a potential indicator of anaerobic metabolism, indirectly supports the possibility of this redox shift to glycolysis [41]. It is also possible that lactate builds up due to reduced cerebral blood flow [124]. An elevated brain lactate level, which would be associated with decreased brain pH or exaggerated alkalotic buffering [29], may trigger a panic attack by stimulating the amygdala through chemosensing ion channels [121].

The role of the amygdala in progression to PD

Panic attacks do not necessarily progress to PD [3]. Bouton and colleagues [60] proposed that, among all individuals who experience panic attacks, only those with increased conditionability would develop PD; as panic attacks recur, the conditioned association becomes even stronger [125,126]. Consistent with this hypothesis, enhanced conditionability [127] and resistance to extinction have been reported in patients with PD. The altered anxiety neurocircuitry, including the amygdala, in PD may in part be responsible for this enhanced conditionability and resistance to extinction. The amygdala projects to the striatum and cortical areas and induces behavioral changes [62]. Phobic avoidance and agoraphobia, often associated with PD, may reflect the amygdala's influence on these areas [128]. Contrary to the general belief that "cognition rules over emotion," there is evidence that emotion modulates cognition, from perception and attention [17] to the higher domains of judgment and reasoning [129]. Irrational worry about the implications of the attacks, one of the core PD symptoms, may also be partly attributed to amygdalar structural and functional abnormalities.

Summary and recommendations

The amygdala has been reported to have a crucial role in the pathophysiology of PD. In animal studies, behaviors similar to those of panic attacks were observed when the amygdala was stimulated. Increased amygdalar activity with volumetric deficits was noted in patients with PD, although findings were not always consistent. This altered function and structure of the amygdala may be partly due to dysregulated brain metabolism [31-33,121]. As potential causes of amygdalar abnormality in PD, the COMT Val158Met polymorphism and early-life traumatic experiences can also be suggested [112,130,131]. Enhanced conditionability and resistance to extinction are known risk factors for progression from panic attacks to PD. As an important element of the anxiety neurocircuitry, altered amygdalar function and structure may facilitate the progression to PD. Typical PD symptoms such as phobic avoidance and irrational worry about panic attacks may also be attributed to amygdalar structural and functional abnormalities. The partly inconsistent findings from human neuroimaging studies are possibly due to sample heterogeneity, small sample sizes, and the limitations of neuroimaging techniques.
More studies with larger sample sizes are warranted. In terms of technique, subregional analysis of the amygdala, which detects alterations in each subregion of the amygdala separately, could be a novel strategy for future research, since each nucleus might have a distinct role in the pathophysiology of PD. This approach has already shown its potential in structural neuroimaging studies [65,66] and in functional neuroimaging studies examining neural circuitry in healthy individuals [132,133]. A postmortem study investigating the association between pathology in amygdalar subdivisions of patients with Parkinson's disease [134] and premortem anxiety symptoms provides another insight for a new strategy.

Conclusions

The amygdala, the hub of fear-processing networks, is closely associated with the pathogenesis of PD as well as of panic attacks. Further studies in well-defined, larger samples, with more sophisticated research designs and advanced technologies, promise a better understanding of the role of the amygdala in the pathophysiology of PD.

Competing interests

IKL has received research support from AstraZeneca, Lundbeck, and GSK. All other authors declare that they have no competing interests.
Tinzaparin for Long-Term Treatment of Venous Thromboembolism in Patients With Cancer: A Systematic Review and Meta-Analysis

Patients with cancer are at increased risk of recurrent venous thromboembolism (VTE) and bleeding; thus, long-term treatment with anticoagulants for secondary prevention is challenging. The objective of this review was to evaluate the current evidence on the safety and efficacy of tinzaparin compared with other anticoagulants for long-term VTE treatment in patients with cancer. Based on a preregistered protocol, we identified randomized controlled trials (RCTs) comparing long-term tinzaparin (therapeutic dose: 175 IU/kg) versus other anticoagulants for at least 3 months after an acute episode of VTE that included adult patients with underlying malignancy. We extracted predefined, clinically relevant outcomes of patients with cancer and, using standard methodology, pooled the available data and assessed the risk of bias and quality of evidence for each study. Three open-label RCTs evaluating 1169 patients with cancer were included in the analysis. Tinzaparin was associated with a significantly lower risk of recurrent VTE at the end of treatment (relative risk [RR], [95% confidence interval] 0.67 [0.46-0.99]) and at the longest follow-up (RR: 0.58 [0.39-0.88]), and showed a lower risk of clinically relevant non-major bleeding at the end of treatment (RR: 0.71 [0.51-1.00]). No significant between-treatment differences were found for all-cause mortality (RR: 1.09 [0.91-1.30]) or fatal and non-fatal major bleeding events (RR: 1.06 [0.56-1.99]). The overall quality of evidence was deemed moderate, mainly due to the small sample size in 2 of the studies and the limited number of events in the meta-analyses. In conclusion, both short- and long-term treatment with tinzaparin was found to be superior to vitamin K antagonists for avoiding recurrences of VTE.

Introduction

Hemostasis and malignancy are strongly related,1 and patients with cancer are at increased risk of venous thromboembolism (VTE).2 Venous thromboembolism occurs 4 times more often in patients with cancer than in the general population,3 although there is wide variability related to cancer type and time since diagnosis.3 Multiple factors have been reported to increase the risk of venous thrombosis in patients with cancer, including chemotherapy, use of erythropoietin agents, and use of certain anticancer therapies such as thalidomide, high-dose steroids, and antiangiogenic therapy. In addition, the risk of VTE is higher in patients with coexisting chronic medical illnesses.4 The risk of recurrent VTE seems to be higher in patients with metastatic versus localized malignancy.5 Furthermore, it has been reported that the risk of recurrence is increased by factors such as interim hospitalizations, central venous catheters, and respiratory infections.6 A recently updated Cochrane review shows that primary thromboprophylaxis with low-molecular-weight heparin (LMWH) significantly reduces the incidence of symptomatic VTE in ambulatory patients with cancer treated with chemotherapy.7 Another recently published Cochrane review shows that LMWH for secondary prophylaxis, compared with vitamin K antagonists, reduces recurrent VTE events but not mortality.8 Tinzaparin sodium (tinzaparin) is an LMWH produced by the enzymatic degradation of porcine-derived unfractionated heparin.9
Tinzaparin acts as an anticoagulant by enhancing the inhibitory effect of antithrombin on coagulation factors, especially Factors Xa and IIa. The anti-Xa/anti-IIa activity ratio for tinzaparin is between 1.5 and 2.5 times the normal ratio. Subcutaneous tinzaparin increases anti-Xa and anti-IIa activities in the plasma in a dose-dependent fashion and stimulates the release of tissue factor pathway inhibitor, which contributes to its anticoagulant and potential anticancer effects;9,10 this could be advantageous for reducing the risk of VTE recurrence in patients with cancer. A meta-analysis of 5 randomized controlled trials (RCTs) in patients with and without cancer found that tinzaparin may be a valuable option for long-term VTE treatment in those who have a contraindication to vitamin K antagonists or when monitoring is difficult.11 That meta-analysis showed no difference in symptomatic VTE after treatment but did show superiority of tinzaparin over vitamin K antagonists with regard to recurrence in patients with cancer at 1 year of follow-up. Our objective in this systematic review is to provide an update regarding the clinical efficacy, safety, and potential side effects of tinzaparin for the treatment of VTE in patients with cancer.

Methods

This systematic review was based on a study protocol that was prospectively registered in the PROSPERO database (registration number CRD42016036024; available from http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016036024). The report was developed following guidance from the PRISMA statement.12

Eligibility Criteria

We included RCTs comparing head-to-head long-term (>3 months) tinzaparin (therapeutic dose 175 IU/kg; subcutaneous injection once daily) versus oral anticoagulants or any other heparin after an episode of acute deep vein thrombosis (DVT) or pulmonary embolism (PE), including adult patients with underlying malignancy of any type. Also included were RCTs evaluating non-selected adult patients, provided they included a well-defined subgroup of patients with cancer and it was possible to identify the relevant outcomes for this subgroup of participants. Objective confirmation of all episodes of thrombosis using standard imaging techniques was a prerequisite. Trial registries were also searched via the World Health Organization International Clinical Trials Registry Platform search portal to identify further ongoing or completed trials. When required, the authors of the included studies were contacted in order to obtain further details. Finally, the reference lists of all trials and identifiers were also assessed.

Outcome Measures

The primary outcome was the number of patients with at least 1 recurrent VTE event (a composite of DVT and PE; incidental and symptomatic [including fatal]) at the end of the treatment period.
Secondary outcomes included safety outcomes (all adverse events [AEs]; all AEs related to the interventions tested; all-cause mortality at the end of the treatment period and at any follow-up; major bleeding [fatal and non-fatal; defined according to International Society on Thrombosis and Haemostasis criteria]13 at the end of the treatment period and at any follow-up; minor bleeding [all bleedings not classified as major]; clinically relevant non-major bleeding [all non-major bleedings requiring a medical or surgical intervention]; and trivial bleeding [those not requiring medical or surgical intervention]); recurrent VTE at any follow-up; recurrent symptomatic DVT at the end of the treatment period and at any follow-up; recurrent incidental DVT at the end of the treatment period and at any follow-up; and recurrent incidental PE at the end of the treatment period and at any follow-up.

Study Selection and Data Extraction

All studies identified by the search strategies were independently assessed for inclusion by 2 review authors (M.M.Z. and A.G.M.). Data were also independently extracted by 2 review authors using a prespecified standardized form (M.M.Z. and A.G.M.). Disagreement in study selection or extraction was resolved through discussion and consensus.

Assessment of Risk of Bias

Two authors independently assessed the risk of bias for each included trial in accordance with the Cochrane Handbook.14 We assessed sequence generation, allocation concealment, blinding of patients and investigators, blinding of outcome assessors, incomplete outcome data, and selective reporting. Risk of bias for each of these domains was rated as low, high, or unclear. The overall risk was considered "high" if any of the domains were deemed high risk, "unclear" if any of the domains were deemed unclear risk and none high risk, and "low" if all domains were deemed low risk. We had planned to explore whether the review was subject to publication bias by means of a funnel plot; however, we could not conduct this analysis because the number of included studies was less than 10.15

Assessment of Heterogeneity

We used the I² statistic to measure statistical heterogeneity between trials in each analysis; this describes the percentage of total variation across trials that is due to heterogeneity rather than to sampling error.16 We considered statistical heterogeneity substantial if I² was greater than 75%.14 If substantial heterogeneity was detected, we explored its sources by prespecified subgroup analyses.

Data Synthesis

The effect of treatment with tinzaparin was estimated with pooled relative risks (RRs) and their corresponding 95% confidence intervals (CIs). Pooled estimates were computed with the Mantel-Haenszel method under a random-effects model.17 Analysis of the primary outcome, all recurrent VTE, was stratified by the type of anticoagulant; analysis of the secondary outcome, non-major bleeding, was also stratified (eg, minor bleeding, clinically relevant non-major bleeding, and trivial bleeding) due to the different definitions used in the literature, and the overall results were not pooled. We used Review Manager software (RevMan 5, Cochrane Community) to perform all statistical analyses.

Subgroup Analysis and Investigation of Heterogeneity

Subgroup analyses were restricted to the review's primary outcome. Three subgroup analyses were performed: by the type of oral anticoagulant; by the duration of treatment at 3, 6, and ≥12 months; and by the length of follow-up at 3, 6, and ≥12 months.
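To make the pooling mechanics concrete, the sketch below computes a Mantel-Haenszel pooled RR together with Cochran's Q and I². It is an illustration only: the counts are hypothetical placeholders rather than data from the included trials, and it uses fixed-effect Mantel-Haenszel weighting for brevity, whereas the review itself used RevMan's random-effects variant.

```python
# Mantel-Haenszel pooled relative risk plus Cochran's Q and I^2.
# Each tuple is (events_tinzaparin, n_tinzaparin, events_control, n_control);
# these counts are hypothetical placeholders, not data from the review.
import math

trials = [(30, 449, 44, 451), (7, 100, 12, 100), (2, 35, 4, 34)]

# Mantel-Haenszel pooled RR (fixed effect)
num = sum(a * nc / (nt + nc) for a, nt, _, nc in trials)
den = sum(c * nt / (nt + nc) for _, nt, c, nc in trials)
rr_mh = num / den
print(f"Mantel-Haenszel pooled RR = {rr_mh:.2f}")

# Heterogeneity: inverse-variance weights on the log-RR scale
log_rr = [math.log((a / nt) / (c / nc)) for a, nt, c, nc in trials]
w = [1.0 / (1 / a - 1 / nt + 1 / c - 1 / nc) for a, nt, c, nc in trials]
pooled_log = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
q = sum(wi * (yi - pooled_log) ** 2 for wi, yi in zip(w, log_rr))
df = len(trials) - 1
i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.0f}%")
```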
Sensitivity Analysis

We planned to conduct a sensitivity analysis excluding studies at high risk of bias; however, this was not possible, as all trials were at high risk of bias due to the open-label study design. We did perform a sensitivity analysis including only patients who complied with the protocol of the included studies.

Quality of Evidence

We used GRADE methodology18 to assess the quality of the body of evidence for the outcomes: all recurrent VTE, all-cause mortality, major (fatal and non-fatal) bleeding, clinically relevant non-major bleeding, recurrent symptomatic DVT, and recurrent (fatal and non-fatal) symptomatic PE. This approach assesses the quality of the body of evidence per comparison and outcome, taking into account the risk of bias across included studies, indirectness, inconsistency, imprecision, and publication bias. The GRADE Working Group classifies evidence in 4 grades: (1) high quality, further research is very unlikely to change our confidence in the estimate of effect; (2) moderate quality, further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate; (3) low quality, further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate; and (4) very low quality, there are many uncertainties about the estimate. We present the outcomes and quality assessments in table format, constructed using GRADEpro software version 3.0 (https://gradepro.org/).

Results

The search strategy identified 1044 relevant references, of which we retained 763 records after duplicates were removed. Three trials met the inclusion criteria and were reported in 13 articles (Figure 1).19-21

Included Studies

We included 3 RCTs.19-21 One RCT (the CATCH trial)19 was specific to patients with cancer, whereas the other 2 studies (the LITE trial and the Romera trial)20,21 also included patients without malignancy. The most frequent type of cancer was gynecologic, followed by colorectal, upper gastrointestinal, lung, genitourinary, hematologic, and breast cancer. More than 50% of patients in the CATCH trial19 had metastatic disease, as did 41.5% in the LITE trial20 and 24.6% in the Romera trial.21 The baseline distribution of cancer localization and metastasis in the studies that reported the primary cancer localization19,21 was similar between the tinzaparin and control groups. A total of 1169 patients with cancer were included in our review from the 3 studies: 900 patients in the CATCH trial,19 200 patients with cancer in the LITE trial,20 and 69 patients with cancer in the Romera trial.21 These studies assessed 175 IU/kg tinzaparin once daily by subcutaneous administration. The duration of tinzaparin treatment was 3 months in the LITE trial20 and 6 months in both the CATCH trial19 and the Romera trial.21 Two trials used warfarin as the control19,20 and the other used acenocoumarol.21 There were no major differences between arms in baseline characteristics (Table 1). All trials used parallel designs to compare 2 arms;19-21 whereas CATCH was a phase 3 trial,19 the LITE and Romera studies did not report the phase.20,21 One trial was conducted in Asia, Africa, Europe, and North, Central, and South America;19 1 in Canada;20 and 1 in Spain.21 The reviewed trials included 2 (Romera),21 30 (LITE),20 and 164 (CATCH)19 participating centers. Duration of follow-up varied across trials: 180 days19 and 1 year.20,21
Whereas the CATCH trial reported protocol registration and was sponsored by a pharmaceutical company (LEO Pharma),19 the LITE and Romera trials were investigator-initiated studies that received partial funding/support from LEO Pharma (provision of the study drug and drug safety monitoring in the LITE trial; duplex ultrasonography in the Romera trial).

Excluded Studies

In total, 27 studies were excluded for the following reasons: they were systematic reviews or meta-analyses; non-systematic reviews; non-RCTs; tinzaparin was not assessed; only short-term assessment of tinzaparin; patients with cancer were included but data were unavailable; early termination; clinical practice guidelines; or few patients with cancer were included.

Risk of Bias

All 3 included trials were rated as having low risk of selection bias regarding random sequence generation (Figure 2).19-21 Two trials had low risk of selection bias regarding allocation concealment,19,21 while one trial reported insufficient information on this item and was rated as having unclear risk of bias.20 All trials were reported as open label and therefore had high risk of performance bias:19-21 knowledge of the allocated treatment could have influenced compliance with the treatment and the implementation or not of different co-interventions during the study (eg, use of antiplatelets). However, blinding of the interventions is challenging (heparins are administered as fixed-dose subcutaneous injections, whereas vitamin K antagonists are administered orally with the dose adjusted by monitoring the anticoagulant effect) and could pose ethical concerns. In all 3 trials, outcome assessment was conducted by reviewers who were not involved in the study conduct and were blinded to the interventions; for this reason, we rated these trials as having low risk of detection bias. All trials were deemed at low risk of attrition and selective reporting bias.19-21 The risk of other potential sources of bias was rated low for 2 of the trials19,21 and unclear for the LITE trial, due to unreported sample size and inadequate data on the age of participants.20

In the stratified analysis of recurrent VTE by length of follow-up, results at 3 months were not statistically significant in the single trial providing data (RR: 0.60, 95% CI: 0.23-1.59).20 At 6 months of follow-up, a meta-analysis of the CATCH and Romera trials also showed a non-significant difference between tinzaparin and vitamin K antagonists (RR: 0.69, 95% CI: 0.45-1.05; I² = 0%).19,21 At 12 months of follow-up, a meta-analysis of the LITE and Romera trials found a statistically significant decrease in all recurrent VTE events in participants receiving tinzaparin versus vitamin K antagonists (RR: 0.39, 95% CI: 0.19-0.81; I² = 0%; Figure 4).20,21 We did not perform a sensitivity analysis by risk of bias, because all trials were at high risk of bias due to their unblinded study design. In the sensitivity analysis evaluating only studies that published per-protocol data, 1 trial was included:19 there was no statistically significant difference in all recurrent VTE events comparing tinzaparin with vitamin K antagonist therapy (RR: 0.65, 95% CI: 0.41-1.03).19

Secondary Outcomes

Pooled data from all included trials did not demonstrate any difference between tinzaparin and vitamin K antagonist therapy in all-cause mortality at the end of the treatment period (RR: 1.09, 95% CI: 0.91-1.30; I² = 0%, moderate-quality evidence).19-21
Similarly, no between-group difference was found in all-cause mortality at the longest follow-up (RR: 1.06, 95% CI: 0.91-1.25; I² = 0%).19-21 The pooled data of the 3 trials found a significant risk reduction (42%) in all recurrent VTE events at the longest follow-up in participants assigned to tinzaparin compared with those receiving vitamin K antagonist therapy (RR: 0.58, 95% CI: 0.39-0.88; I² = 6%).19-21 The pooled data of the CATCH and LITE trials found a statistically significant risk reduction (45%) in recurrent symptomatic VTE at the end of treatment in participants assigned to tinzaparin compared with those receiving vitamin K antagonist therapy (RR: 0.55, 95% CI: 0.31-0.99; I² = 0%, moderate-quality evidence; Figure 5).19,20 At the longest follow-up, the pooled results suggested a decrease in recurrent symptomatic VTE events with tinzaparin compared with vitamin K antagonist therapy (RR: 0.57, 95% CI: 0.32-1.00).19,20 However, the pooled data of the CATCH and LITE trials found no statistically significant reduction in recurrent (fatal and non-fatal) symptomatic PE at the end of treatment with tinzaparin versus vitamin K antagonists (RR: 0.98, 95% CI: 0.54-1.76; I² = 0%, moderate-quality evidence).19,20 At the longest follow-up, results were also not significant (RR: 0.46, 95% CI: 0.06-3.70; I² = 75%).19,20 In terms of recurrent incidental VTE, the CATCH trial did not find differences in the effect of tinzaparin compared with vitamin K antagonist therapy at the end of treatment (RR: 0.33, 95% CI: 0.01-8.20).19 At the longest follow-up, results were also not significant (0 of 449 [0%] vs 1 of 451 [0.22%]; RR: 0.33, 95% CI: 0.01-8.20).19 Furthermore, regarding recurrent incidental PE, no significant differences were found comparing tinzaparin with vitamin K antagonist therapy at the end of treatment (RR: 0.33, 95% CI: 0.01-8.20),19 and at the longest follow-up the results likewise showed no significant difference (RR: 0.33, 95% CI: 0.01-8.20).19 The pooled data of the CATCH and LITE trials found no differences in fatal and non-fatal major bleeding at the end of treatment between tinzaparin and vitamin K antagonist therapy (RR: 1.06, 95% CI: 0.56-1.99; I² = 0%, moderate-quality evidence).19,20 In the CATCH trial, which assessed clinically relevant non-major bleeding at the end of treatment, the tinzaparin arm showed a lower frequency of this event than the vitamin K antagonist arm (RR: 0.71, 95% CI: 0.51-1.00, moderate-quality evidence).19 One trial found no differences between tinzaparin and vitamin K antagonist therapy regarding minor bleeding at the end of treatment (RR: 1.18, 95% CI: 0.66-2.11).20 None of the trials reported data for trivial bleeding, all AEs in general, or AEs related to the interventions.

Quality of the Evidence

The overall quality of evidence was moderate, mainly due to the low number of events in the meta-analyses and the small sample sizes of the LITE and Romera studies (Table 2).20,21
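The zero-event comparison quoted above (0 of 449 vs 1 of 451, RR 0.33, 95% CI 0.01-8.20) can be reproduced with the standard Haldane-Anscombe 0.5 continuity correction. The sketch below assumes that convention, which matches the reported interval exactly, although the software actually used by the trial authors is not stated.

```python
# Relative risk with a 0.5 continuity correction for a zero-event arm:
# 0 of 449 (tinzaparin) vs 1 of 451 (vitamin K antagonist).
import math

a, n1 = 0, 449          # events / total, tinzaparin arm
c, n2 = 1, 451          # events / total, comparator arm

# Haldane-Anscombe correction: add 0.5 to every cell of the 2x2 table
a, c = a + 0.5, c + 0.5
n1, n2 = n1 + 1, n2 + 1

rr = (a / n1) / (c / n2)
se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)   # SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 0.33 (0.01-8.20)
```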
Discussion

This systematic review of tinzaparin for long-term treatment of VTE in patients with cancer identified 3 RCTs that included 1169 patients with different types of cancer. The percentage of patients with metastatic disease varied from 24% to 54% across the studies.19-21 The trials compared 175 IU/kg tinzaparin administered once daily by subcutaneous injection with vitamin K antagonist therapy; two trials assessed warfarin19,20 and 1 assessed acenocoumarol.21 This review found moderate-quality evidence suggesting that tinzaparin is associated with a reduced risk of all recurrent symptomatic VTE, and that tinzaparin and vitamin K antagonist therapy have similar effects on all-cause mortality, major (fatal and non-fatal) bleeding, clinically relevant non-major bleeding, minor bleeding, and recurrent symptomatic PE. None of the trials evaluated "any AEs" or "trivial bleeding" as outcomes. Of note, the difference in all recurrent symptomatic VTE was driven by a significant reduction in the risk of recurrent DVT; there was no between-treatment difference in the risk of recurrent symptomatic PE. When we stratified the analyses by time of follow-up, VTE recurrences were significantly reduced at 12 months of follow-up but not at 3 or 6 months. Caution is necessary in interpreting these results, since they are based on a low number of events and the largest included trial19 did not include 12-month follow-up. Our results are similar to those of a Cochrane review that focused on patients with cancer but included different LMWHs.8 The reviews by Akl et al8 and Laporte et al11 assessed LMWH versus vitamin K antagonists in participants with and without cancer; these, however, did not include the recently published CATCH 2015 trial. Furthermore, our review focused on tinzaparin and on studies in patients with cancer, and it has an overall better quality of evidence (moderate) than Akl et al's 2014 review22 (low to moderate), which was downgraded for imprecision and indirectness. Despite methodological differences from our review, the results of Laporte et al11 are similar regarding safety outcomes. Laporte et al also showed that tinzaparin, compared with vitamin K antagonists, significantly reduced the risk of all recurrent thromboembolic events, but only at 12 months of follow-up. In our review, we showed that tinzaparin versus vitamin K antagonists significantly reduced the risk of all recurrent thromboembolic events after 3 to 6 months of treatment and at 12 months of follow-up. We identified some limitations in our systematic review. First, all included trials had a high risk of performance bias because both patients and researchers were unblinded. However, considering that the outcomes were relevant, objective, and measured by diagnostic tests, and that the outcome assessors were blinded, we rated the risk of detection bias as low. Furthermore, concealing the interventions (oral tablets vs subcutaneous injections) is difficult to achieve and could pose ethical concerns, particularly in patients with cancer. Moreover, our results are limited by the primary studies. While adherence could be an important issue with an injectable medication such as tinzaparin, none of the included trials reported on patients' adherence to the assessed treatments, unlike trials observing the use of LMWH following surgery in patients with cancer.23 Furthermore, the number of events in all the meta-analyses we performed was low. Moreover, the LITE20 and Romera21 studies had small sample sizes, and our results were mainly driven by the CATCH trial,19 which represented 77.2% of the overall systematic review population. For these reasons, the quality of the available evidence was deemed moderate for all included outcomes. A strength of this review is that the results are consistent and based on a broad range of cancer types. A wide search strategy across different databases was implemented; therefore, detection bias is unlikely.
Only studies comparing tinzaparin with a vitamin K antagonist were identified; it would be of interest to conduct RCTs comparing the efficacy and safety of tinzaparin with the new oral anticoagulants in the future.

Footnotes to Table 2: The corresponding risk (and its 95% CI) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI). (c) Downgraded 1 level due to imprecision (low number of events). (d) Downgraded 1 level due to imprecision (CI overlaps no effect; cannot exclude important benefit or important harm). (e) Downgraded 1 level due to imprecision (low number of events and CI overlaps no effect; cannot exclude important benefit or important harm).

Conclusion

Our systematic review demonstrated a reduction in the risk of all recurrent thromboembolism in patients with cancer-associated thrombosis managed with long-term tinzaparin compared with vitamin K antagonist therapy. However, according to GRADE methodology, the quality of the available evidence was deemed moderate; this suggests the need for further confirmatory trials, especially with longer follow-up.
A Novel Ubiquitin-like Domain in IκB Kinase β Is Required for Functional Activity of the Kinase* Activation of NF-κB requires two highly related kinases named IKKα and IKKβ that share identity in the nature and positioning of their structural domains. Despite their similarity, the kinases are functionally divergent, and we therefore sought to identify any structural features specific for IKKα or IKKβ. We performed bioinformatics analysis, and we identified a region resembling a ubiquitin-like domain (UBL) that exists only in IKKβ and that we named the UBL-like domain (ULD). Deletion of the ULD rendered IKKβ catalytically inactive and unable to induce NF-κB activity, and overexpression of only the ULD dose-dependently inhibited tumor necrosis factor-α-induced NF-κB activity. The ULD could not be functionally replaced within IKKβ by ubiquitin or the corresponding region of IKKα, whereas deletion of the equivalent section of IKKα did not affect its catalytic activity against IκBα or its activation by NF-κB-inducing kinase. We identified five residues conserved among the larger family of UBL-containing proteins.

NF-κB describes a family of structurally and functionally related, ubiquitously expressed transcription factors that play pivotal roles in innate and adaptive immunity, inflammation, development, cell growth, and survival (1,2). A defining feature of most NF-κB responses is their inducibility. Thus, NF-κB proteins are normally sequestered inactive in the cytoplasm of resting cells through interaction with distinct members of a family of inhibitory proteins named the IκBs. Following appropriate stimulation, the IκB proteins are degraded, and NF-κB migrates to the nucleus where it binds to specific DNA motifs within the promoters of its target genes (1,2). The most intensely studied NF-κB activation pathway is that induced by the pro-inflammatory cytokine tumor necrosis factor (TNF)-α. In response to TNF stimulation, IκB proteins (typified by IκBα) become rapidly phosphorylated on two specific N-terminal serine residues, and this signals their subsequent ubiquitination and proteasomal degradation. Degradation of IκBα liberates NF-κB dimers that most commonly consist of the p65 (RelA) and p50 NF-κB subunits, and these p65:p50 heterodimers regulate the expression of a wide range of genes that include those of pro-inflammatory cytokines, leukocyte adhesion molecules, and anti-apoptotic proteins (2,3). Arguably, the most important intermediate signaling event in this pathway is the phosphorylation of IκB proteins, and tremendous effort from a number of laboratories has identified and characterized the signal-responsive kinase complex responsible for this crucial step (4). This catalytic activity resides in a complex of proteins named the IκB kinase (IKK) complex that consists of three core subunits named IKKα (IKK1), IKKβ (IKK2), and NF-κB essential modulator (NEMO), which is also known as IKKγ (5-9). A number of elegant genetic studies have clearly demonstrated that phosphorylation of IκBα in response to TNFα signaling is absolutely dependent upon IKKβ and NEMO (10-16), and this pathway, which results in the liberation of mainly p50:p65 heterodimers, is now named the "classical" NF-κB pathway (17). Activation of the classical pathway underlies the vast majority of the known functions of NF-κB in immune and inflammatory responses, cell survival, and development (17).
In contrast to the fundamental role for IKK␤ in the classical pathway, IKK␣ appears for the most part to be functionally redundant (12,13,16). Thus IB␣ degradation and activation of NF-B in response to TNF␣ remains intact in cells derived from IKK␣ Ϫ/Ϫ animals, whereas this is completely ablated in both IKK␤ Ϫ/Ϫ and NEMO Ϫ/Ϫ cells (10 -16). However, evidence exists that IKK␣ plays a role in mediating the classical pathway in response to a subset of inducers including receptor activator of NF-B ligand (RANKL) (18), and it is known to be able to phosphorylate IB␣ in vitro, albeit with a lower relative activity than IKK␤ (19). Recently, IKK␣ has been shown to migrate to the nucleus and to perform an epigenetic function by regulating histone phosphorylation in the vicinity of classical NF-B-dependent genes (20,21). In addition, IKK␣ may play a regulatory role within the IKK complex by trans-phosphorylating IKK␤ (22), and loss of IKK␣ appears to affect the expression of a number of genes activated in response to pro-inflammatory cytokines (23), although the mechanisms responsible for these effects remain unclear. Nevertheless, despite these separate lines of evidence, it remains that the classical NF-B pathway is absolutely dependent upon IKK␤ and that IKK␣ is dispen-sable for IKK␤-dependent IB␣ phosphorylation and subsequent NF-B activation. In contrast to its lack of function in classical NF-B signaling, IKK␣ is the key regulator of a recently described "noncanonical" NF-B pathway (24 -26). In this pathway IKK␣ specifically phosphorylates the C terminus of the NF-B p100 subunit (also known as NF-B2) inducing its ubiquitination and processing, and in a manner analogous with IB␣ degradation, p100 processing releases its N-terminal portion (p52) as a transcriptionally active heterodimer with RelB (24 -26). This series of events is completely independent of IKK␤ and NEMO and functions in IKK␤ Ϫ/Ϫ and NEMO Ϫ/Ϫ cells (24 -26). A critical component of the noncanonical pathway is the upstream kinase NIK (NF-B-inducing kinase) that phosphorylates and activates IKK␣ (27), and activation of this pathway is absent in aly/aly mice that carry a mutated NIK gene (24,27). Studies of aly/aly mice together with accumulated genetic evidence clearly demonstrate that the noncanonical NF-B pathway is critical for the maturation of B-cells and development of lymphoid organs. Consistent with this, p100 processing only occurs in response to signals from a subset of TNF receptor family members involved in lymphoid organogenesis and B-cell maturation as follows: B-cell activating factor receptor, CD40, and the lymphotoxin-␤ receptor (24,25,28). In turn, the known inducers of this pathway are the B-cell activating factor, CD40L, heterotrimeric LT␣1␤2, and LIGHT that also binds the lymphotoxin-␤ receptor. Only five genes have been definitively identified as targets of the noncanonical pathway: the cytokine B-cell activating factor and the chemokines SLC (CCL21), ELC (CCL19), BLC (CXCL13), and SDF-1␣ (CXCL12) (24). Consistent with the genetically defined function of the pathway, these are all involved in either B-cell maturation or the development and function of lymphoid organs. It is clear from these studies that although they physically interact within the IKK complex, IKK␣ and IKK␤ perform highly distinct functions. Although it has been suggested that IKK␣ functions in the noncanonical pathway via a separate IKK␣-alone complex, such a complex has yet to be described and molecularly characterized (24). 
Despite this potentially separate complex however, it remains that the IKKs share remarkable structural similarity, and consequently the precise mechanisms that underlie their divergent functions remain unknown. In this regard both kinases possess N-terminal catalytic domains, centrally located leucine zippers through which they interact, and a helix-loop-helix domain that is critical for their activation (29). In addition, IKK␣ and IKK␤ each contain a C-terminal NEMO-binding domain (NBD), which we have demonstrated to facilitate NEMO interaction with both kinases (30,31). In light of these similarities, we therefore sought to identify any novel domains or sequences within either of the IKKs that might account for their functional divergence. We describe here the identification and functional characterization of a novel ubiquitin-like domain (ULD) we have identified in IKK␤. Such a domain does not exist in IKK␣. We demonstrate that deletion of or mutations within the ULD profoundly affect the function of IKK␤, whereas loss of the identical segment of IKK␣ does not affect its activity. Furthermore, our findings strongly suggest that the IKK␤ ULD plays a fundamental role in regulating the interaction of the IKK complex with p65. We therefore conclude that the ULD is absolutely critical for the induced activity of IKK␤, and we further hypothesize that this novel IKK␤-specific domain contributes to the functional divergence of the IKKs. EXPERIMENTAL PROCEDURES Cell Culture and Reagents-HeLa, HEK293, and COS cells were obtained from ATCC (Manassas, VA) and maintained in Dulbecco's modified Eagle's medium (Invitrogen) supplemented with 10% fetal bovine serum, 2 mM L-glutamine, penicillin (50 units/ml), and streptomycin (50 g/ml). Mouse anti-FLAG (M2) and anti-FLAG-coupled agarose beads were purchased from Sigma. Mouse anti-Xpress was purchased from Invitrogen; rabbit anti-p65 (SA-171) was from Biomol (Plymouth Meeting, PA), and the horseradish peroxidase-conjugated secondary antibodies against either rabbit or mouse IgG were both from Amersham Biosciences. Anti-IB␣ and anti-glyceraldehyde-3-phosphate dehydrogenase were from Santa Cruz Biotechnology (Santa Cruz, CA), and anti-p100 (NF-B2) was from Upstate Biotechnology, Inc. Recombinant human TNF␣ was purchased from R & D Systems (Minneapolis, MN). Plasmids and PCR Mutagenesis-Full-length cDNA clones of human IKK␣ and IKK␤ were the generous gifts from Dr. Michael Karin (University of California, San Diego). All subcloning and mutagenesis procedures were performed by PCR using cloned Pfu DNA polymerase (Stratagene, La Jolla, CA). All PCR conditions and primer sequences are available upon request. Wild-type and mutated IKK␤ cDNAs were inserted between the KpnI and NotI restriction sites of pcDNA-3.1-Xpress (Invitrogen), and all IKK␣ cDNAs were inserted into the EcoRI and XhoI sites of the same vector. FLAG-tagged versions of wild-type and mutated IKK␣ and IKK␤ were constructed by subcloning into the EcoRI/XhoI and HindIII/NotI sites within pFLAG-CMV2 (Sigma), respectively. Human p65 in pcDNA3 was described previously as were the GST-p65N and GST-p65C constructs (32). Full-length cDNAs encoding human ubiquitin and NIK were obtained from HeLa cDNA by PCR cloning using Pfu DNA polymerase. Point mutations within the ULD were made using the QuickChange® site-directed mutagenesis kit from Stratagene. To make all of the domain substitution mutations in I⌲⌲␤ and IKK␣, an EcoRV site was inserted into the IKK␤-d.ULD or IKK␣-d.78 mutant across the join. 
The fragments to be inserted into the kinases were generated by using primers flanked by EcoRV sites, and these were then ligated into the EcoRV-cut IKK constructs. The GST-ULD fusion protein was constructed by inserting a PCRgenerated fragment encompassing the IKK␤ ULD (encoding amino acids Leu 307 to Met 384 ) between the EcoRI and NotI sites within pGEX-4T1 (Amersham Biosciences). A cDNA encoding human NEMO was obtained as described previously (31). GST-NEMO was constructed by subcloning the full-length cDNA into the EcoRI and XhoI sites of pGEX-4T1. The fusion of GST with the first 90 amino acids of human IB␣ (GST-IB␣-(1-90)) that was used as a substrate in the kinase assays was described previously (31). GST proteins were made in Escherichia coli (BL21) by treating transformed bacteria with 0.4 mM isopropyl-␤-D-thiogalactopyranoside (Sigma) and following the manufacturer's protocol for protein recovery provided with the vector. Interaction Analysis-For GST pull-down analysis, IKK␣ or IKK␤ in pcDNA3.1 were transcribed in vitro and translated in the presence of [ 35 S]methionine using the TNT-T7 Quick system from Promega (Madison, WI). Labeled proteins (1 l of reticulocyte lysate) were incubated with GST alone, GST-NEMO, or GST-ULD (1 g) in 100 l of TNT (50 mM Tris, pH 7.5, 200 mM NaCl, 1% Triton X-100) containing protease inhibitors (Complete Protease Inhibitor Mixture, Roche Diagnostics) at 4°C for 30 min, and 20 l of a 50% (v/v) slurry of glutathioneagarose beads (Amersham Biosciences) was then added and incubated a further 15 min. Proteins were then precipitated and washed extensively in TNT before addition of sample buffer (20 l). Samples were then separated by SDS-PAGE (10%), and the resulting gels were stained with Coomassie Blue, fixed, and then examined autoradiographically. For transient transfection studies, 1 ϫ 10 6 COS cells grown in 6-well trays were transfected with 1 g of total DNA using the FuGENE 6 transfection reagent (Roche Diagnostics). All DNA/FuGENE 6 incubations were performed at a ratio of 1 g of DNA per 3 l of FuGENE 6 according to the manufacturer's recommended protocol in Opti-MEM medium (Invitrogen). After 48 h, cells were lysed in 500 l of TNT, and then complexes were immunoprecipitated (IP) by using anti-FLAGcoupled agarose beads. A portion of each lysate taken prior to immunoprecipitation (5%) was retained for analysis (pre-IP). Precipitated proteins were analyzed by immunoblotting using epitope-specific (anti-FLAG or anti-Xpress) antibodies that were visualized using enhanced chemiluminescence (ECL) reagents from Amersham Biosciences. Luciferase Reporter Assay-Dual luciferase reporter assays were performed essentially as described previously (30,31). Briefly, 2.5 ϫ 10 5 HeLa cells grown on 12-well plates were transiently transfected using FuGENE 6 with the NF-B-dependent reporter construct pBIIx-luc (0.2 g/well) together with the Renilla luciferase vector (0.02 g/well). Total DNA concentration in each experiment (1.0 g/well) was maintained by adding the appropriate empty vector to the DNA mixture. Forty-eight hours after transfection, cells were lysed in passive lysis buffer (Promega), and luciferase activity was measured using the dual luciferase assay kit from Promega. In some experiments the levels of transfected proteins in 20 g of lysates were examined by immunoblotting by using appropriate epitope tag-specific antibodies. 
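As a side note on the arithmetic behind this readout: in a dual-luciferase assay, firefly counts from the NF-κB reporter are divided by Renilla counts to correct for transfection efficiency, and the ratios are expressed relative to the empty-vector control. A minimal sketch in R, with invented numbers rather than values from these experiments:

```r
# Dual-luciferase normalization: firefly reports NF-kB-dependent
# transcription (pBIIx-luc); Renilla controls for transfection efficiency.
# All counts below are invented placeholders.
firefly <- c(ctrl = 1.2e4, ikkb_wt = 2.1e5, ikkb_dULD = 1.4e4)
renilla <- c(ctrl = 3.0e4, ikkb_wt = 2.8e4, ikkb_dULD = 3.2e4)
ratio <- firefly / renilla       # normalized luciferase activity
fold  <- ratio / ratio["ctrl"]   # fold induction over empty vector
round(fold, 2)
```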
Immune Complex Kinase Assay-For immune complex kinases assays, 1 ϫ 10 6 HeLa cells grown on 6-well plates were transiently transfected with 1 g of the FLAG-tagged version of the kinase constructs using the FuGENE 6 reagent as described above. Forty-eight hours after transfection, the cells were either untreated or treated with TNF␣ (10 ng/ml) and then lysed on ice in 500 l of TNT for 15 min. Protein content in each lysate was determined by using a Bio-Rad protein assay kit (Bio-Rad) and then normalized among the samples. Proteins in lysates were immunoprecipitated using anti-FLAG (M2)coupled agarose beads for 1 h at 4°C, and the precipitates were washed extensively in TNT and then kinase buffer (20 mM HEPES, pH 7.5, 20 mM MgCl 2 , 1 mM EDTA, 2 mM NaF, 2 mM ␤-glycerophosphate, 1 mM dithiothreitol, 10 M ATP). Precipitates were then incubated for 15 min at 37°C in 20 l of kinase buffer containing appropriate GST-fused substrate proteins and 10 Ci of [␥-32 P]ATP (Amersham Biosciences). The substrate was then precipitated using glutathione-agarose (Amersham Biosciences) and washed extensively with TNT. Beads were then suspended in 20 l of sample buffer, and samples were separated by SDS-PAGE (10%). Kinase activity was determined by autoradiography. Sequence Alignment-All sequence alignments were performed using MacVector software from Accelrys (San Diego, CA). IKK␤ Contains a Novel Ubiquitin-like Domain- We performed a series of proteomic data base searches using various web-based profiles, motifs, and protein family analysis programs to identify any novel structural or functional domains in either IKK␣ or IKK␤. As expected, the previously described structural features of both kinases, including their N-terminal catalytic domains, central leucine zipper motifs, and C-terminal helix-loop-helix and NEMO-binding domains, were identified through a combination of separate approaches. Most surprisingly, however, analysis of the complete amino acid sequence of human IKK␤ using the ExPASy (Expert Protein Analysis System) molecular biology server and the PROSITE data base (us.expasy.org/cgi-bin/scanprosite) identified a region of 78 amino acids from Leu 307 to Met 384 corresponding to the Ubiquitin_2 (type 2 ubiquitin-like) profile (PROSITE document PDOC00271; PROSITE accession number PS50053). No such domain was detected in either IKK␣ or the related kinase IKKe/IKKi. Despite this sequence identity with ubiquitin, we noted that the PROSITE data base classified this region of IKK␤ as a false positive for the Ubiquitin_2 profile, suggesting that it does not belong to the larger family of ubiquitin-like domain-containing proteins per se (us.expasy.org/cgi-bin/nicesite.pl?PS50053). Therefore, we further investigated the IKK␤ sequence using the Profilescan Server (hits.isb-sib.ch/cgi-bin/PFSCAN) maintained by the Swiss Institute for Experimental Cancer Research, and this analysis identified the same region as a strong match with a normalized match score of 9.548 (statistical interpretation of match score significance is available at hits.isb-sib.ch/doc/motif_score.shtml). 
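For readers unfamiliar with such scores, the sketch below illustrates the simplest possible notion of sequence similarity, naive percent identity between two pre-aligned fragments, in R. This is not the profile-based PROSITE/Profilescan matching used here (which scores a sequence against a position-specific profile), and the fragments are invented placeholders, not the actual IKKβ or ubiquitin sequences.

```r
# Naive percent identity between two equal-length, pre-aligned fragments.
# Gap positions ("-") never count as matches.
pct_identity <- function(a, b) {
  a <- strsplit(a, "")[[1]]; b <- strsplit(b, "")[[1]]
  stopifnot(length(a) == length(b))
  100 * mean(a == b & a != "-")
}
pct_identity("LKIQ-AHSNE", "LRIQGAHTNE")  # toy aligned fragments -> 70
```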
Subsequent search analysis using the Protein Families (Pfam) data base of Alignments and Hidden Markov Models (www.sanger-.ac.uk/software/pfam/search.shtml) identified a shorter ubiquitin domain (Pfam document: PF00240) between residues 319 and 354 of IKK␤, and this was also recognized, together with the longer PROSITE Ubiquitin_2 profile, as a ubiquitin domain (document IPR000626) using the Interpro search analysis program of the European Bioinformatics Institute (www.ebi.ac.uk/interpro/index.html). Therefore, we conclude from this accumulated bioinformatic evidence that IKK␤ but not IKK␣ contains a domain located between residues 307 and 384 (more specifically between residues 319 and 354) that displays significant sequence identity with ubiquitin. Although this region strongly resembles both ubiquitin and the UBLs present in a family of unrelated proteins (33), the false positive classification from the original PROSITE search led us to name this region the UBL-like domain (ULD) of IKK␤. The position of the IKKb ULD and its sequence alignment with human ubiquitin are shown in Fig. 1, A and B. Similar alignment of the corresponding region of IKKa (Ile 307 -Val 384 ) (Fig. 1C) demonstrates the significantly lower degree of similarity and identity with ubiquitin. The IKK␤ ULD Is Required for Catalytic Function-Ubiquitin-like domains have been identified in an expanding family of proteins (33), which include the yeast DNA repair enzyme Rad23 and its human homologues HHR23A and -B, the yeast cell cycle control protein Dsk2, and its human homologues hPLIC-1 and -2, the causative gene of autosomal-recessive juvenile parkinsonism named Parkin, and the anti-apoptotic protein Bag-1 (33). Furthermore, some proteins (i.e. p59 OASL) contain two copies of the domain within their open reading frames (34,35). The precise role of the UBL within many proteins remains unknown; however, it has been directly demonstrated to be absolutely critical for the biological function of at least a subset of these proteins (36 -40). We therefore wished to determine whether the ULD was required for functional activity of IKK␤. The UBLs of these other proteins do not function as targets for ubiquitination nor are they ligated in a ubiquitin-like manner to separate target proteins (33). Consistent with this, the IKK␤ ULD does not contain lysine residues corresponding to Lys 48 or Lys 63 in ubiquitin that are required for ubiquitin chain formation. The only conserved lysine between the IKK␤ ULD and ubiquitin is at position 337 (see asterisk in Fig. 1B); when we mutated this to arginine (K337R), the resulting kinase did not differ from the wild type with respect to basal and induced catalytic activity and the ability to form complexes with IKK␣ and NEMO (data not shown). Therefore, it appears that like the wider family of UBL domain-containing proteins, the ULD does not function to facilitate IKK␤ ubiquitination or ubiquitinlike conjugation with other proteins. To investigate the function of the ULD, we constructed a deletion mutant of IKK␤ lacking the region between residues Leu 307 and Met 384 (inclusive) which we named IKK␤-d.ULD. As shown in Fig. 2A, IKK␤-d.ULD failed to activate NF-B when transiently overexpressed in HeLa cells, whereas similar levels of overexpressed wild-type IKK␤ induced robust NF-B activity in these cells. Furthermore, overexpression of IKK␤d.ULD dose-dependently reduced TNF␣-induced NF-B activity induced in HeLa cells (Fig. 2B). 
To determine the effects of deleting the ULD on the catalytic function of IKK␤, we overexpressed FLAG-tagged versions of wild-type IKK␤ and IKK␤d.ULD in HeLa cells, and then following incubation with TNF␣ for a range of times up to 120 min, we performed immunoprecipitation kinase assays using GST-IB␣-(1-90) as a substrate. As shown in Fig. 2C, catalytic activity of the wild-type kinase was rapidly induced by TNF␣, reaching a maximum at 5 min and then returning to basal levels after 30 min (see Fig. 2C, lanes 1-6). Remarkably, and despite being expressed to the same extent as the wild-type kinase, IKK␤-d.ULD exhibited no basal or TNF␣-induced catalytic activity against IB␣ (Fig. 2C, lanes 7-12). We also failed to detect catalytic activation of IKK␤-d.ULD in HeLa cells following interleukin-1␤ treatment (data not shown). Therefore, these findings demonstrate that deletion of the ULD renders IKK␤ catalytically inactive against IB␣ and refractory to pro-inflammatory cytokine-induced activation. Several studies (39, 41-43) have established that recombinant versions of only the ubiquitin-like domain of several proteins possess biological activity in cellular overexpression stud-ies and in vitro biochemical assays. To determine whether the IKK␤ ULD alone would similarly affect NF-B activation, we transiently transfected HeLa cells with just the ULD, and we performed luciferase reporter assays following incubation with pro-inflammatory cytokines. It should be noted that despite intense effort to immunoblot both untagged or epitope-tagged (i.e. FLAG, Xpress, or HA) versions of the ULD, we were unable to detect this protein in lysates of transfected HeLa, COS, or HEK293 cells (data not shown). We did not detect any toxicity associated with overexpressing the ULD in any of these cell types. Although we could not visualize the protein, we consistently observed that transfection with the ULD significantly reduced TNF␣- (Fig. 2D) and interleukin-1␤-induced (not shown) NF-B activity in HeLa and HEK293 cells. Taken together, the data presented in Fig. 2 clearly identify the ULD as a critical domain required for functional activity of IKK␤. The ULD Is Not Required for Assembly of the IKK Complex-Previous studies (8,9,30,31) have described the molecular mechanisms through which IKK␤ interacts with both NEMO and IKK␣. Nonetheless, it remains possible that the ULD plays an as yet unidentified role in maintaining either of these molecular interactions. Therefore, we performed GST pull-down analysis and, as shown in Fig. 3A, IKK␤-d.ULD interacted with GST-NEMO to the same extent as both wild-type IKK␤ and IKK␣ (compare lanes 6, 9, and 12). Furthermore, a protein composed of GST fused with the IKK␤ ULD (GST-ULD) did not associate with either of the IKKs (Fig. 3A, lanes 5, 8, or 11) or NEMO (not shown). Finally, when we co-expressed IKK␣-FLAG together with either wild-type IKK␤ or IKK␤-d.ULD in COS cells, we recovered both IKK␤ proteins from lysates by immunoprecipitation using anti-FLAG (Fig. 3B, lanes 4 and 5). We conclude from these findings that the ULD does not play a role in maintaining the interactions between IKK␤ and IKK␣ or NEMO and is therefore not required for assembly of the "core" IKK complex. Neither Ubiquitin Nor the Equivalent Region of IKK␣ Can Functionally Replace the IKK␤ ULD-In light of its sequence similarity with ubiquitin, we sought to determine whether the ULD could be functionally replaced within IKK␤ by ubiquitin as described previously for the yeast DNA repair protein Rad23 (44). 
We therefore constructed a panel of deletion and substitution mutants (Fig. 4A), and we noted that each of these kinases interacted with NEMO and with each other to the same extent as the wild-type IKK, verifying that these mutations do not affect the inter-molecular interactions within the IKK complex (data not shown). We first tested the ability of a FLAG-tagged version of IKKβ in which the ULD was replaced with human ubiquitin (IKKβ-Ub) to activate NF-κB-dependent luciferase activity. As shown in Fig. 4B, neither IKKβ-d.ULD nor the ubiquitin-containing mutant (IKKβ-Ub) could activate NF-κB in a luciferase reporter assay when compared with wild-type IKKβ. Furthermore, similar to IKKβ-d.ULD, IKKβ-Ub was basally catalytically inactive and was not activated following treatment of transfected HeLa cells with TNFα (Fig. 4C, lanes 5 and 6). We next questioned whether the IKKβ ULD sequence could be functionally interchanged with the equivalent region of IKKα, and we constructed an IKKβ mutant containing the 78 residues Ile 307 to Val 384 of IKKα in place of the ULD (IKKβ-α.78; Fig. 4A). Similar to the ubiquitin substitution mutant, IKKβ-α.78 failed to activate NF-κB-driven luciferase activity (Fig. 4B) and was catalytically inactive and refractory to TNFα-induced activation (Fig. 4C, lanes 7 and 8). To determine the effects of deleting the corresponding 78 amino acids in IKKα on its catalytic activity, we constructed a mutant that we named IKKα-d.78 (Fig. 4A). As shown in Fig. 4D, and in contrast to the effects observed with the similar IKKβ mutant, deletion of this entire region had no effect on the ability of FLAG-IKKα to phosphorylate IκBα in response to TNFα. However, despite its ability to phosphorylate IκBα in vitro, the major function of IKKα in NF-κB activation is as the critical kinase necessary for phosphorylating the NF-κB2 precursor protein p100 in the noncanonical NF-κB pathway (24-26). Activation of IKKα in this pathway is dependent upon NIK and results in phosphorylation-induced proteolytic processing of p100 to p52 (24-26). We therefore tested the effects of deleting the ULD-corresponding region of IKKα on its activation by NIK, and as demonstrated in Fig. 4E, IKKα-d.78 remained capable of mediating NIK-induced p100 processing to p52. Hence this region appears to be dispensable for the known physiological functions of IKKα. We conclude from these experiments that an intact ULD is exquisitely required for IKKβ activity and cannot be functionally substituted with ubiquitin or the corresponding region of IKKα. In contrast, catalytic activity of IKKα against both IκBα and p100 does not appear to require the presence of the equivalent region of that kinase.

[Fig. 2 legend fragment: … (lanes 1-6) or IKKβ-d.ULD (lanes 7-12) were treated with 10 ng/ml TNFα for the times indicated. Proteins were immunoprecipitated from lysates using anti-FLAG, and then half of each sample was subjected to kinase assay (KA) using GST-IκBα-(1-90) as a substrate (upper panel). The other half of each immunoprecipitate was immunoblotted using anti-FLAG (lower panel). D, HeLa cells were transiently transfected with pBIIx-luc together with either vector alone (first two bars and lanes) or 0.25, 0.5, or 1 µg/ml of the IKKβ ULD. Forty-eight hours later, cells were either untreated (−) or incubated with TNFα (10 ng/ml; +) for a further 4 h prior to lysis and measurement of luciferase activity.]
Mutational Analysis of the IKK␤ ULD-It is possible that the effects of deleting the entire ULD on IKK␤ activity might be due to gross structural disruption resulting from loss of a relatively large portion of the kinase. To address this issue we constructed a panel of five smaller subdomain deletion mutants lacking stretches of between 11 and 17 residues within the ULD. The positions of the subdomains that we named regions A to E are illustrated in Fig. 5A. As shown in Fig. 5B, versions of IKK␤ sequentially lacking regions A, B, C, or D failed to activate NF-B in a luciferase reporter assay. In contrast, the mutant lacking region E of the ULD (Del.E) induced NF-B activity to a similar level as the wild-type kinase (Fig. 5B). Consistent with these data, we found that only the IKK␤-Del.E mutant displayed TNF␣-induced catalytic activity against GST-IB␣, which resembled the activity of wild-type IKK␤ (Fig. 5C, lanes 11 and 12). Moreover, although we observed low levels of catalytic activity with each of the other deletion mutants, only IKK␤-Del.D consistently exhibited detectable inducibility in response to TNF␣ stimulation, although the magnitude of this activity was significantly less than either wildtype IKK␤ or IKK␤-Del.E (Fig. 5C, compare lanes 9 and 10 with lanes 1, 2, 11, and 12). These findings strongly suggest that maintenance of the overall integrity of the region between Leu 311 and Ala 367 of the ULD corresponding to the subdomains designated A-D in Fig. 5A is absolutely critical for the functional activity of IKK␤. In a further attempt to identify any potentially critical functional sites within the IKK␤ ULD, we performed sequence alignment analysis to determine whether the domain contained any residues at positions conserved among the family of UBL-containing proteins (33). As shown in Fig. 6A, alignment of the IKK␤ ULD (Leu 311 -Met 384 ) with the UBLs of Bag-1, BAT-3, OASL, HHR23A, HHR23B, as well as human ubiquitin identified a cluster of five residues that were conserved among all of the proteins. These residues in IKK␤ were specifically proline at position 347 (Pro 347 ), glutamine at 351 (Gln 351 ), leucine at 353 (Leu 353 ), glycine at 358 (Gly 358 ), and leucine at position 361 (Leu 361 ). When we extended the sequence alignment to over 20 distinct UBL-containing proteins from species ranging from yeast to human, these same residues were conserved among all proteins analyzed (data not shown). We there-fore surmised that the residues at these positions might play an important role in the function of this domain, and to test this we constructed a panel of single point mutants in which each residue was substituted with alanine. Consistent with our previous observations (Fig. 3), all of these IKK␤ point mutants interacted with NEMO and IKK␣ (data not shown). To test the effects of these alanine substitutions on the ability of IKK␤ to induce transcriptionally active NF-B, we performed a luciferase reporter assay, and as shown in Fig. 6B, the P347A and L361A mutants induced NF-B activity to the same level as wild-type IKK␤. Similarly, although the Q351A mutant tended to be less active, over the course of multiple experiments, its ability to induce NF-B activity did not significantly vary from that of the wild-type kinase (not shown). In contrast, NF-B activity induced by G358A was consistently reduced when compared with wild-type IKK␤, and more strikingly, the L353A mutant did not induce activity above the basal levels observed in vector-alone transfected control cells (Fig. 
6B). We were therefore surprised to find that despite its inability to activate NF-κB, L353A exhibited TNFα-induced catalytic activity against GST-IκBα (Fig. 6C). Similar catalytic activity was also observed for all of the other alanine mutants including G358A (data not shown). The failure of IKKβ-L353A to activate NF-κB (Fig. 6B) led us to question whether phosphorylation by the mutant kinase could lead to IκBα degradation. We therefore transfected HEK293 cells with wild-type IKKβ, IKKβ-L353A, or IKKβ-d.ULD, and we determined the effects on both basal and TNFα-induced levels of IκBα. As shown in Fig. 6D, consistent with our in vitro kinase assay, transfection of HEK293 cells with both the wild-type and L353A mutant kinases decreased the amount of basal IκBα in cells (Fig. 6D, compare lanes 1, 5, and 13). In contrast, IκBα levels in IKKβ-d.ULD-transfected cells were unchanged compared with control (Fig. 6D, lanes 9 and 13). Furthermore, IκBα was degraded following TNFα treatment in wild-type and L353A-transfected cells with similar kinetics as control cells, whereas in d.ULD-transfected cells, IκBα degradation was impaired (Fig. 6D, compare lanes 12 and 16). This is consistent with the lack of catalytic activity we observed for IKKβ-d.ULD (Fig. 2C). Taken together, these findings demonstrate that although IKKβ-L353A is capable of phosphorylating IκBα and causing its degradation in response to TNFα, this single point mutation prevents the overexpressed kinase from activating transcriptionally competent NF-κB. Previous studies (45,46) have demonstrated that IKKβ can phosphorylate the serine residue at position 536 within the C terminus of the NF-κB p65 subunit and that this phosphorylation is critical for its transcriptional activity. We therefore surmised that IKKβ-L353A might be unable to phosphorylate Ser 536 in p65, thereby accounting for its failure to induce transcriptionally active NF-κB (Fig. 6B) despite its ability to phosphorylate IκBα (Fig. 6C) and cause its degradation (Fig. 6D). To test this hypothesis, we transfected HeLa cells with either p65 alone or p65 in the presence of wild-type IKKβ, IKKβ-L353A, or dominant negative IKKβ (K44M), and we performed a luciferase reporter assay. Consistent with our hypothesis, we found that wild-type IKKβ enhanced p65-induced luciferase activity, whereas transcriptional activity of p65 that was co-transfected with either K44M or L353A was severely impaired (Fig. 7A).
We next performed immunoprecipitation kinase assays using either the N terminus (residues 1-313) or the C terminus (residues 314-550) of p65 fused with GST as substrates (32), and as expected, neither wild-type IKKβ nor IKKβ-L353A phosphorylated GST-p65N (Fig. 7B, lanes 3, 4, 7, and 8). In contrast, immunoprecipitated wild-type IKKβ phosphorylated the C terminus of p65, and this activity was basally maximal in our assay and was not further increased by TNFα stimulation. To our surprise, however, GST-p65C was also phosphorylated by IKKβ-L353A, and this phosphorylation could be increased to maximum following stimulation with TNFα. Therefore, our data clearly demonstrate that the IKKβ-L353A mutant can phosphorylate the C terminus of p65, strongly suggesting that the inhibition of overexpressed p65 observed in Fig. 7A is not due to defective C-terminal phosphorylation.

IKKβ-L353A and IKKβ-d.ULD but Not Wild-type IKKβ Form a Complex with the NF-κB p65 Subunit—In the course of our phosphorylation assays, we observed that L353A but not the wild-type kinase formed a stable complex with p65. To explore this further, we transiently transfected HEK293 cells with FLAG-tagged IKKβ, IKKβ-L353A, or IKKβ-d.ULD together with p65, and we immunoprecipitated IKKβ-associated complexes using anti-FLAG. As demonstrated in Fig. 7C, p65 was detected in immunoprecipitates associated with both the L353A and d.ULD IKKβ mutants, whereas it was not pulled down with the wild-type kinase. These results were recapitulated when we transfected HeLa cells with IKKβ and the ULD mutants and tested their ability to interact with endogenous p65. Thus, FLAG-tagged IKKβ-L353A and IKKβ-d.ULD associated with endogenous p65 (Fig. 7D, lanes 3 and 4), although no such interaction with the wild-type kinase could be detected (lane 2).

[Fig. 4 legend: The IKKβ ULD cannot be functionally substituted with ubiquitin or the equivalent region of IKKα. A, the structures of wild-type IKKβ and IKKα and the various deletion and substitution mutants are shown. B, HeLa cells were transiently transfected with pBIIx-luc together with either vector alone (pFLAG-CMV2: Control), FLAG-tagged IKKβ (WT), or the mutants indicated, and luciferase activity was measured in lysates 48 h after transfection. Expression levels of each construct were determined by immunoblotting (IB) using anti-FLAG (lower panel). C, HeLa cells were transiently transfected with the FLAG-IKKβ constructs indicated, and after 48 h, cells were either untreated (−) or treated (+) for 5 min with 10 ng/ml TNFα. Proteins were recovered from lysates using anti-FLAG and an immune complex kinase assay was performed using GST-IκBα-(1-90) as a substrate (lanes 1-8). Proteins in identical lysates from simultaneously transfected cells were immunoprecipitated and immunoblotted using anti-FLAG (lanes 9-12). D, HeLa cells were transiently transfected with the FLAG-IKKα constructs indicated and then processed for kinase assay (lanes 1-4) and immunoblotting (lanes 5 and 6) as described in C. E, HEK293 cells were transiently transfected with pFLAG-CMV2 (Control; lanes 1 and 2), FLAG-IKKα (WT; lanes 3 and 4), or FLAG-IKKα (d.78; lanes 5 and 6), either alone (−) or together with NIK (+), and the resulting lysates were sequentially immunoblotted with anti-p100 (upper panel) and anti-FLAG (lower panel). The positions of p100, p52, and a nonspecific band (n.s.) are indicated.]
These findings therefore suggest that the failure of L353A to induce transcriptional activity of NF-B is not due to a lack of catalytic activity against either IB␣ or the C terminus of p65 but may instead be related to its ability to physically interact with p65. DISCUSSION In this study we sought to identify regions of IKK␣ or IKK␤ that are unique for either kinase. By using a series of bioinformatic strategies, we identified a novel ubiquitin-like domain within IKK␤ that was not detected in IKK␣. Deletion and small mutations within the ULD profoundly affected IKK␤ function, demonstrating that it is a critical regulatory domain required for activity of the kinase. Furthermore, our domain-swap analysis (Fig. 4) strongly suggests that the domain is absolutely specific for IKK␤ and dispensable for catalytic function of IKK␣. In this regard versions of IKK␤ that either lacked the entire ULD or had this region replaced with either ubiquitin or the corresponding region of IKK␣ were completely catalytically inactive against IB␣ and were unable to induce NF-B-dependent gene expression. This is entirely consistent with a previous report (47) in which the catalytic domain of IKK␤ (residues 1-301) was fused with the C terminus of IKK␣ (residues 301-745), resulting in catalytic inactivity of that chimera against IB␣. It therefore appears that an intact ULD is absolutely critical for IKK␤ to phosphorylate IB␣. In contrast, loss of the ULD-corresponding region of IKK␣ did not affect its ability to both phosphorylate IB␣ and induce p100 processing in response to NIK. Therefore, we hypothesize that the pres-ence of the ULD in IKK␤ contributes significantly to the functional divergence of the IKKs. In this regard, it is possible that the presence of this domain, resembling the highly evolutionarily conserved ubiquitin protein, underlies the more ancient innate immune and inflammatory functions mediated by IKK␤ via the classical NF-B pathway (17). It is tempting to speculate further that the role of IKK␣ in mediating critical aspects of the adaptive immune response (i.e. lymphoid organogenesis and B-cell maturation) may be a result of evolutionary modification within the ULD, resulting in the functional divergence of the kinases. Clearly, however, a full understanding of the precise function of the ULD will be required before any conclusions can be drawn concerning its role in shaping the distinct biological functions of the IKKs. We have demonstrated that the ULD is not involved in formation of the core IKK complex composed of IKK␣, IKK␤, and NEMO. Thus we found that deletion of the ULD does not affect the ability of IKK␤ to heterodimerize with IKK␣. Furthermore, consistent with our prior identification of the NBD in the extreme C terminus of both IKKs (30,31), deletion of the ULD did not affect the interaction of IKK␤ with NEMO. Previous workers (48) have suggested that in addition to the NBD, a separate interaction domain for NEMO exists within IKK␤. Our data demonstrate that if such a region exists, it is not the ULD as the interaction with NEMO was clearly unaffected following its deletion. We also found that insertion of either ubiquitin or the corresponding region of IKK␣ into IKK␤, or deletion of the equivalent IKK␣ residues or insertion of the ULD into IKK␣ did not affect IKK heterodimerization or the ability of either IKK␤ or IKK␣ to interact with NEMO (data not shown). These findings therefore lead us to conclude that the ULD is not required for the maintenance of IKK complex architecture. 
It remains an intriguing possibility that the ULD functions as a protein-protein interaction domain that facilitates the association with unknown IKK␤-specific interacting proteins (see below). We considered the possibility that the ULD might play a role in ubiquitination of IKK␤ or might be required to physically conjugate IKK␤ with unknown target proteins. However, we have failed to detect such modifications throughout the course of our experiments. Furthermore, two lines of evidence suggest that the ULD does not play a role in facilitating ubiquitination or ubiquitin-mediated conjugation of IKK␤. First, we initially identified the ULD by using the PROSITE data base as having similarity with the type 2 UBL (UBIQUITIN_2) profile present in a large family of proteins (33). To date, however, no evidence exists to support a role for this type of domain in ubiquitination nor have such domains been reported to conjugate with target proteins. In contrast, the type 2 domain defines a family of proteins in which the UBL functions as a linear noncleavable insertion that appears to have a primary role in maintaining specific protein-protein interactions (33). The second piece of evidence against the role in ubiquitination is that the IKK␤ ULD does not contain lysines in either of the positions that correspond to Lys 48 and Lys 63 in ubiquitin. Lysines at these positions are required for ubiquitin conjugation and chain formation, yet the only lysine that is conserved between ubiquitin and the ULD is Lys 337 of IKK␤ (Fig. 1B). The corresponding lysine in ubiquitin (Lys 27 ) is not a target for ubiquitin chain formation. Nevertheless, to test the potential importance of this residue, we have constructed a lysine to arginine substitution mutant (K337R) of IKK␤, and we found that this mutation did not affect the basal or induced catalytic function or the complex forming capability of IKK␤ (not shown). We therefore believe that the ULD is not a target for IKK␤ ubiquitination or ubiquitin-like conjugation of IKK␤ with other proteins. As described above, the type 2 UBL is considered to function as a protein-protein interaction domain. Specifically, the domain has been shown to directly target UBL-containing proteins to the proteasome and to play a role in ubiquitin-independent proteasomal protein degradation. In this regard the UBL domains of proteins including the yeast protein RAD23 and its human orthologues HHR23A and -B, hPLIC-1 and -2, Parkin, and BAG-1 facilitate their direct interaction and stable association with components of the regulatory domain of the 26 S proteasome (33, 36 -40, 42, 43, 49 -53). Furthermore, this proteasomal localization does not lead to degradation of these proteins but enables the proteolysis of ubiquitinated cargo proteins that associate with the UBL-containing carrier. Therefore, at least some UBL-containing proteins appear to function as molecular chaperones that present ubiquitinated cargo proteins to proteasomal ATPases where they are subsequently unfolded and degraded, although leaving the UBL protein intact (33, 36 -40, 42, 43, 49 -53). This function of certain UBL-containing proteins presented a fascinating hypothesis regarding the potential role of the ULD in IKK␤. Thus we considered that the ULD might facilitate an interaction between the IKK complex and proteasome thereby bringing the kinase, its ubiquitinated substrate (i.e. IB proteins), and the degradation machinery into close context. 
Furthermore, association with the proteasome may explain our failure to detect overexpressed ULD alone as it might be rapidly degraded via this route. Nevertheless, despite intense effort using a wide range of available reagents, we have been unable to detect any interactions between either endogenous or overexpressed IKKβ and the proteasome. 2 It therefore appears that the IKKβ ULD does not function in a manner similar to the UBLs of HHR23A and -B, PLIC-1 and -2, Parkin, or Bag-1. This finding is perhaps not completely surprising as the UBL is located in the extreme N terminus of all of the UBL-containing proteins that interact with the proteasome, whereas the ULD is centrally positioned in IKKβ. Furthermore, the proteasome-interacting proteins also contain ubiquitin-associated domains through which they can interact with ubiquitinated proteins. In addition, none of these proteins are kinases, and they appear to function primarily as molecular adaptors or chaperones. Finally, our domain swap analysis demonstrated that the function of the ULD could not be replaced with ubiquitin, whereas insertion of ubiquitin into the UBL site in Rad23 has been reported to maintain the function of that protein (44). Thus, it appears that the proteasome-binding capability of ubiquitin is insufficient to maintain the function of IKKβ. It is therefore likely that the ULD represents a distinct type of ubiquitin-like domain that performs a function separate from proteasomal localization. The possibility therefore remains that the ULD plays a role in maintaining interactions with separate IKKβ-specific proteins. One particular set of candidates for such interacting proteins may be components of the COP9 signalsome, a distinct protein complex that exhibits similarities to the 26 S proteasome (54). Most intriguingly, catalytic activity specific for IκBα has been found associated with the COP9 signalsome.

[Fig. 7 legend fragment: … (lanes 1-4) or wild-type IKKβ (lanes 5-8) and then incubated in the absence or presence of TNFα (10 ng/ml) for 5 min. Following lysis and immunoprecipitation using anti-FLAG, an immune complex kinase assay was performed using either GST-p65C (lanes 1, 2, 5, and 6) or GST-p65N (lanes 3, 4, 7, and 8) as substrates. Samples were then separated by SDS-PAGE (10%) and visualized by autoradiography (upper panel). The gel was stained with Coomassie Blue (CB) to identify the substrate proteins (lower panel), and the relative position of GST-p65N on the autoradiograph is indicated (*). Amounts of IKKβ in each lane are shown in the middle panel. C, HEK293 cells were transfected with p65 together with either pFLAG-CMV2 (control), wild-type IKKβ, IKKβ-L353A, or IKKβ-d.ULD, and then immunoprecipitation (IP) was performed using anti-FLAG. Resulting precipitated material was immunoblotted (IB) using either anti-p65 or anti-FLAG as shown. A portion (5%) of the original lysate (Pre-IP) was immunoblotted using anti-p65 (middle panel). D, HeLa cells were transfected with the constructs indicated, and then anti-FLAG was used to immunoprecipitate complexes from lysates. Immunoprecipitated samples were immunoblotted using either anti-p65 or anti-FLAG as shown.]
v3-fos-license
2018-04-03T06:24:14.349Z
2016-08-24T00:00:00.000
18260955
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1002/ece3.2341", "pdf_hash": "af0f48f75385ddb10111a78c2490b8d04771f62b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42787", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "sha1": "af0f48f75385ddb10111a78c2490b8d04771f62b", "year": 2016 }
pes2o/s2orc
Landscape‐scale deforestation decreases gene flow distance of a keystone tropical palm, Euterpe edulis Mart. (Arecaceae)

Abstract Habitat loss represents one of the main threats to tropical forests, which have reached extremely high rates of species extinction. Forest loss negatively impacts biodiversity, affecting ecological (e.g., seed dispersal) and genetic (e.g., genetic diversity and structure) processes. Therefore, understanding how deforestation influences genetic resources is strategic for conservation. Our aim was to empirically evaluate the effects of landscape‐scale forest reduction on the spatial genetic structure and gene flow of Euterpe edulis Mart. (Arecaceae), a palm tree considered a keystone resource for many vertebrate species. This study was carried out in nine forest remnants in the Atlantic Forest, northeastern Brazil, located in landscapes within a gradient of forest cover (19–83%). We collected leaves of 246 adults and 271 seedlings and performed genotyping using microsatellite markers. Our results showed that the palm populations had low spatial genetic structure, indicating that forest reduction did not influence this genetic parameter for either seedlings or adults. However, forest loss decreased the gene flow distance, which may negatively affect the genetic diversity of future generations by increasing the risk of local extinction of this keystone palm. For efficient strategies of genetic variability conservation and maintenance of gene flow in E. edulis, we recommend the maintenance of landscapes with intermediate to high levels of forest cover, that is, forest cover above 40%.

Introduction Forest loss is the main threat to biodiversity, contributing to the current unprecedented rates of species extinction, particularly in tropical regions (Pimm et al. 2014). With the exception of a few wilderness areas, most extant species in the tropics currently occur in anthropogenic landscapes, where previously continuous forest has been reduced to smaller and increasingly isolated patches. Such changes in the amount of remnant forest, and the structural modifications of anthropogenic landscapes more broadly, affect genetic diversity within and among forest remnants (Willi and Fischer 2005; Pickup and Young 2008). As landscape-scale deforestation progresses, mean patch size is reduced and the likelihood of genetic erosion in smaller populations increases due to inbreeding depression and genetic drift; often losses are not offset by migrants as patches become more isolated. Considering that plant dispersers and pollinators are often sensitive to changes in landscape structure (Aizen and Feinsinger 1994; Breed et al. 2013; Morante-Filho et al. 2015), their mutualistic interactions may also be altered. Therefore, it is expected that tree populations in highly deforested and fragmented landscapes would experience high genetic divergence among isolated populations, reducing gene flow distance (Young et al. 1996) and thus increasing spatial genetic structure (SGS). Despite such expectations, some widespread tree species are apparently more resilient to such structural changes (Kramer et al. 2008), as they may benefit from long-distance gene flow via pollen or seeds among forest fragments (Gaiotto et al. 2003; Côrtes et al. 2013). In addition, the genetic response of recently segregated populations may rather reflect the genetic structure of the previously continuous population (Carvalho et al. 2015).
In particular, due to the time lag between generations of adults and seedlings (Rigueira et al. 2013), different ontogenetic stages may reflect different time frames in which gene flow was involved; that is, for long-lived trees the progeny are more likely than the adults to reflect recent impacts on genetic structure or gene flow. Therefore, a comprehensive view on how tree species are impacted by landscape changes should include the assessment of different ontogenetic stages as well as different genetic components. For instance, while SGS can be used as an indirect measure of both historical and contemporary gene flow (Vekemans and Hardy 2004), more recent gene flow estimates can be accessed through paternity tests (Ashley 2010; Harrison et al. 2013). Studies using a contemporary gene flow approach have shown that the response to forest fragmentation can vary among species (Kramer et al. 2008), but in general plants whose seeds and pollen are dispersed by animals are more sensitive than plants with abiotic dispersal systems. Our aim here was to empirically evaluate the effects of landscape-scale forest reduction on the spatial genetic structure and the contemporary gene flow of Euterpe edulis (Arecaceae). The species is widespread throughout the Brazilian Atlantic Forest (Leitman et al. 2015), a biome currently reduced to 12-16% of its original extension. This palm is considered a keystone species in the Atlantic Forest because it plays an important ecological role, providing food resources for several animal species, including 32 bird and six mammal species (Jordano et al. 2006; Galetti et al. 2013). Its flowers are pollinated by various insects, with the main pollinator being Trigona spinipes (Mantovani and Morellato 2000), a small stingless bee with an estimated maximum flight distance of <900 m (Zurbuchen et al. 2010). In addition, the appreciation of the apical meristem for human consumption has caused over-exploitation (Matos et al. 1999) that, in addition to habitat loss, has brought E. edulis to the status of endangered species (Martinelli and Moraes 2013). We conducted surveys in nine forest sites immersed in landscapes varying from 19% to 83% of remaining forest cover to test the following hypotheses: (1) spatial genetic structure is inversely related to the percentage of forest in the landscape for both ontogenetic stages, with a greater effect on seedling populations, reflecting the recent forest loss and fragmentation in the study area; (2) seedling populations in more forested landscapes have fewer assigned parents, reflecting the larger pool of pollen and seed sources; and (3) landscape-scale forest reduction negatively influences gene flow distance, meaning that the smallest gene flow distances should be found in more deforested landscapes.

Study area

The research was conducted in landscapes of the Atlantic Forest in southern Bahia (Fig. 1), where the largest forest remnant in northeastern Brazil is located. This region is considered a conservation priority due to its high species richness and endemism (Thomas et al. 1998; Martini et al. 2007). The forest is classified as tropical lowland rainforest (Thomas et al. 1998) with an average annual temperature of 24°C and rainfall of 1,500 mm, with no discernible seasonality (Mori et al. 1983).

Sampling design

Based on a series of freely available Landsat images (1990), we first identified large remaining forest tracts in southern Bahia with similar soil conditions, topography, and floristic composition (Thomas et al.
1998), located between the rivers Jequitinhonha and Contas. Using satellite images (QuickBird and WorldView, from 2011, and RapidEye, from 2009 to 2010) and after intensive ground-truthing, we produced a land-use map covering 3,470 km², which included the municipalities of Belmonte, Una, Santa Luzia, Mascote, and Canavieiras. This final map was used to identify and visually quantify different categories of land use at a scale of 1:10,000. The forest cover class includes not only native forests at different successional stages, but also shade cocoa plantations (Theobroma cacao), in addition to rubber and eucalyptus (Eucalyptus sp.) plantations. However, to estimate forest cover in this study, we considered only the amount of native forest in the landscape, excluding the other vegetation classes. From this map, we identified 58 sites located in forest patches and, using ArcGIS (10.2, QGIS Development Team 2016), we calculated the percentage of native forest within a 2-km radius of the central point of each site. We selected those sites farther than 1 km from each other, and from this group we randomly selected 16 sites in landscapes ranging from 6% to 83% of forest cover as sampling sites. In each site, we established a plot of 15 × 400 m for leaf collection from all adults, and a plot of 2 × 400 m in the center of the larger one to collect leaves of all seedlings ≤15 cm in height, in order to evaluate only recent gene flow, after the intensification of forest suppression in the region. This sampling design would theoretically allow us to find the female genitors of most of the sampled seedlings, whereas seedlings with no parent assigned would probably have been dispersed by animals. We only considered sites in which at least five individuals of E. edulis of each ontogenetic stage were present within the study plots. Therefore, we included only nine sampling sites, with forest cover percentages ranging from 19% to 83% (Fig. 1).

DNA extraction and genotyping with microsatellite loci

DNA extraction from E. edulis leaves was performed using the CTAB protocol (Doyle and Doyle 1987). We genotyped all E. edulis individuals using thirteen nuclear microsatellite loci (Gaiotto et al. 2001). The PCR reactions were performed in a multiplex system with the following composition of loci: (1) duplex with EE43 and EE45; EE48 and EE54; EE8 and EE23; and (2) simple PCR with loci EE32, EE15, EE8, EE3, EE47, EE59, and EE63. Electrophoresis was performed in a capillary system on an ABI3500 automated DNA analyzer (Applied Biosystems, Foster City, CA), based on three multiload systems: (1) pentaload I, composed of loci EE15, EE32, EE3, EE8, and EE23; (2) pentaload II, composed of loci EE43, EE45, EE48, EE54, and EE9; and (3) triload, composed of loci EE47, EE59, and EE63. Fragment size was estimated with the software GeneMarker version 2.2 (SoftGenetics, State College, PA).

Data analysis

The spatial genetic structure of E. edulis seedlings and adults was estimated separately through the average kinship coefficient (Fij; Loiselle et al. 1995) in Spagedi version 1.4 (Vekemans and Hardy 2004). To assess the spatial distance between individuals, we defined eight distance classes at 50-m intervals between 0 and 400 m. The 95% confidence interval for each distance class was obtained from 10,000 permutations of individual locations. We compared the average spatial genetic structure between the ontogenetic stages of seedlings and adults with a t-test for independent samples.
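A minimal sketch of the permutation logic behind this confidence envelope, in R: hold the pairwise kinship matrix fixed, shuffle individual locations, and recompute the mean kinship per 50-m class. Everything below is simulated toy data; this is not the Spagedi implementation or the actual E. edulis genotypes.

```r
# Permutation envelope for spatial genetic structure (toy data).
set.seed(1)
n   <- 40
xy  <- matrix(runif(n * 2, 0, 400), ncol = 2)      # toy coordinates (m)
kin <- matrix(rnorm(n * n, 0, 0.02), n, n)
kin <- (kin + t(kin)) / 2                          # symmetric toy F_ij matrix
off <- upper.tri(kin)                              # each pair counted once
brk <- seq(0, 400, by = 50)                        # eight 50-m classes
cls_of <- function(coords) cut(as.matrix(dist(coords)), breaks = brk)
obs <- tapply(kin[off], cls_of(xy)[off], mean)     # mean kinship per class
null <- replicate(999, {                           # permute locations only
  tapply(kin[off], cls_of(xy[sample(n), ])[off], mean)
})
ci <- apply(null, 1, quantile, probs = c(0.025, 0.975), na.rm = TRUE)
obs < ci[1, ] | obs > ci[2, ]                      # classes outside 95% CI
```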
We also evaluated the average spatial genetic structure of seedlings and adults in relation to the remaining forest in the landscapes by means of simple linear regression. These analyses were performed in R (R Core Team 2016).

We used all seedlings for paternity analysis, with each adult treated as a candidate parent. Paternity assignment was based on the Δ statistic (Marshall et al. 1998) and was performed in Cervus version 3.0 (Kalinowski et al. 2007). The critical Δ value for each confidence level was obtained from 10,000 simulations to determine the most probable parent of unknown sex (father or mother), allowing a genotyping error rate of 0.01 per locus, at a confidence level of 95%. We performed a further paternity exclusion analysis for seedlings whose assigned parents were located at a distance greater than 2 km. The exclusion analysis was performed by comparing the genotypes of the putative parent against its likely descendant seedling. The paternity index (PI) was estimated for each locus, and the combined paternity index (CPI) across loci was used to infer the paternity probability (PP) according to the method described by Stephenson (2010). We used the same a priori probability of 0.5 used in human paternity analysis, whereby each candidate parent has a 50% chance of being the true parent (Gjertson and Morris 1995); under this prior, PP = CPI/(1 + CPI). To estimate the PI of each locus, we used allele frequencies estimated from all E. edulis adults sampled for this research.

We assessed whether three types of response variables were related to forest cover: (1) the probability of a landscape containing at least one seedling with a parent assigned, as well as with a parent assigned in the same landscape and in another landscape; (2) the proportion of seedlings with assigned parents, and the proportions with assigned parents in the same landscape and in another landscape; and (3) gene flow distance. For the first analyses, we used generalized linear models (GLMs) with a binomial distribution, using the presence (1) or absence (0) of seedlings with assigned parents (anywhere, in the same landscape, and in another landscape) as response variables. For the second analyses, we also used GLMs with a binomial distribution, using the proportion of seedlings with parents assigned in each landscape as the response variable and the total number of seedlings as weights, to give greater importance to landscapes with more seedlings. For the third analyses, we used only the seedlings with assigned parents and fitted linear mixed-effects models (LMMs) relating the logarithm of gene flow distance to forest cover, with landscape identity as a random effect. The logarithm was used to reduce the dispersion of the data. For each analysis, we assessed the significance of the relationship with forest cover in two ways: parametrically, using z-ratio tests based on the normal distribution for the GLMs and t-ratio tests for the LMMs; and non-parametrically, using Monte Carlo randomizations. For the GLM randomizations, we (1) extracted the deviance of the observed relation between the response variable and habitat cover; (2) randomized the relation between the two variables; (3) fitted a GLM to the randomized data set; and (4) extracted the deviance of this GLM. Significance was calculated as the proportion of deviance values in the randomized data sets that were equal to or smaller than the observed deviance.
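As an illustration, this deviance-based randomization for the binomial GLMs can be written in a few lines of base R. The input names (`presence`, a 0/1 vector per landscape; `forest`, the percentage of forest cover) are hypothetical, and the sketch is our own minimal rendering of the published procedure, not the authors' code.

```r
# Randomization test: shuffle forest cover across landscapes and compare
# the deviance of the refitted GLM with the observed deviance.
randomize_glm_p <- function(presence, forest, n_rand = 4999) {
  obs_dev  <- deviance(glm(presence ~ forest, family = binomial))
  rand_dev <- replicate(n_rand,
    deviance(glm(presence ~ sample(forest), family = binomial)))
  # the observed deviance is included as one realization of the null model
  mean(c(rand_dev, obs_dev) <= obs_dev)
}
```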
When the proportion of seedlings was the response variable, we kept the total number of seedlings associated with each proportion in the randomizations. We assessed the significance of the relation between gene flow and forest cover by (1) performing a simple linear regression between gene flow and forest cover; (2) extracting its R² value; (3) randomly assigning a forest cover (from any of the nine landscapes sampled) to each landscape; (4) performing a linear regression on the randomized data set; and (5) calculating its R². Significance was calculated as the proportion of randomized R² values that were equal to or greater than the observed R². We did not use mixed models in these randomizations because the non-independence between seedlings located in the same landscape was accounted for by the randomization scheme. In both analyses, we used a total of 4999 randomizations and included the observed value in the comparison because it is one of the possible outcomes under the null model (Manly 2007). We performed the GLMs, LMMs, and randomizations in R (R Core Team 2016), with the aid of the nlme package (Pinheiro et al. 2016) for the LMMs and code written by us for the randomization tests (available at https://github.com/pdodonov/MonteCaRlo).

Results

Landscape-scale forest loss did not influence the spatial genetic structure of E. edulis seedlings or adults (R² = 0.11, P = 0.37 and R² = 0.00, P = 0.90, respectively). The spatial genetic structure of E. edulis populations showed low values of the average relatedness coefficient (F_ij) in both ontogenetic stages, and these values were not related to distance class (Fig. 2A and B). For the seedling population located in the forest site with 34% landscape-scale forest cover, we recorded only five individuals, which made the construction of the SGS figure impossible, as the observed average relatedness coefficient (F_ij = 0.0138) coincides with the minimum and maximum values of the confidence interval. The seedlings' average spatial genetic structure value is low (0.001), but significantly higher (P = 0.034) than the adults' average value (−0.003). Of the 271 seedlings analyzed, we assigned a parent (father/mother) to 21 individuals (7.75%; Table 1). Of these 21 individuals, 16 were assigned a parent within the same landscape (2 km radius) and five were assigned a parent located in another landscape (Table 1). The probability of a landscape having at least one seedling with an assigned parent was not related to forest cover (parametric P = 0.34, Monte Carlo P = 0.44; Fig. 3A). However, seedlings with assigned parents in the same landscape were more likely to be found under lower forest cover (Monte Carlo P = 0.03; Fig. 3B; no parametric P was available due to a perfect separation in the response variable), whereas those with parents in other landscapes were marginally more likely to occur under higher forest cover (parametric P = 0.12, Monte Carlo P = 0.08). The proportion of seedlings with an assigned parent decreased as forest cover increased (parametric P = 0.12, Monte Carlo P = 0.03; Fig. 3D). The proportion of seedlings with an assigned parent in the same landscape also decreased as forest cover increased (parametric P = 0.02, Monte Carlo P = 0.002; Fig. 3E), whereas the proportion of seedlings with an assigned parent in another landscape was not related to forest cover (parametric P = 0.29, Monte Carlo P = 0.18; Fig. 3F).
Euterpe edulis gene flow distance increased with forest cover (parametric P = 0.04, Monte Carlo P = 0.02; Fig. 4). In addition, the highest gene flow distances were recorded only in the more forested landscapes. Furthermore, our results demonstrate that long-distance gene flow (maximum distance of 13 km) only occurred between populations located in landscapes with intermediate and high percentages of forest cover (≥43%; Table 2).

Discussion

In this study, we did not find evidence that landscape-scale deforestation influenced the spatial genetic structure (SGS) of E. edulis populations in our study region. However, we did find an increase in seedlings with parental assignment and a decrease in gene flow distance in more deforested landscapes. The negative effects of habitat loss have been shown previously in theoretical and empirical studies on biodiversity patterns (Andrén 1994; Fahrig 2002), including ecological interactions (Valiente-Banuet et al. 2014). We emphasize that a previous study revealed a likely time lag between landscape change and genetic diversity loss in populations of E. edulis in the same study areas in southern Bahia (Santos et al. 2015). We now have an indication that gene flow is the genetic parameter that first experiences the negative influence of such anthropogenic disturbance, measured here as landscape-scale forest loss. Therefore, although the E. edulis populations evaluated still harbor high levels of genetic diversity, as described by Santos et al. (2015), these populations are experiencing a reduction in contemporary gene flow.

Our results showed that kinship values did not decrease with geographic distance as expected from the isolation-by-distance model (Wright 1943; Barbujani 1987). The SGS in E. edulis seedling and adult populations is considered low for plant species with pollen and seed dispersal over long distances (Loiselle et al. 1995). The low values of the average relatedness coefficient (F_ij) in both ontogenetic stages indicate that individuals are not closely related (low kinship levels) (Hamrick and Trapnell 2011). Such low SGS converges with the previous result reporting high genetic variability within these same populations of E. edulis (Santos et al. 2015). In addition, the low levels of SGS found in E. edulis seedling populations across a gradient of deforestation may arise from different ecological processes. In deforested landscapes, for instance, we expect limited food resources (Fahrig 2003) to force the few seed dispersers that are still present to move more widely in search of scarce resources. In this situation, seeds could be dispersed farther from the parent, resulting in low levels of kinship between nearby individuals and thus low SGS in landscapes with a low percentage of forest. By contrast, in more forested landscapes, the higher richness and abundance of seed dispersers could increase the rates of long-distance dispersal, also resulting in low levels of relatedness and SGS. This dispersal behavior, associated with the naturally high genetic diversity of individuals contributing pollen and seeds, can explain the absence of SGS found in all the analyzed landscapes. The low SGS found in adult populations may reflect older ecological processes shaped by the genetic structure and diversity of past generations, when landscapes contained more continuous forest (Aguilar et al. 2008; Metzger et al. 2009; Landguth et al. 2010).
Furthermore, the average SGS value for seedling populations was significantly higher than that for adult populations. Considering that the mean relatedness values of seedlings (0.001) and adults (−0.003) are low and very close to zero, the biological interpretation points to a similarity between the two ontogenetic stages. Indeed, such low values were expected since E. edulis, being a tree species, would require a long time period to accumulate genetic changes (Landguth et al. 2010). In addition, land-use change, and mainly deforestation, occurred in the region less than 50 years prior to the study (after 1960, according to Alger and Caldas 1994), and there was probably not enough time for a significant genetic change in E. edulis populations.

The low percentage of seedlings with an assigned parent indicates that, notwithstanding the patterns of forest loss and fragmentation, dispersal of pollen and seeds is still occurring within and between landscapes. Moreover, although forest loss may not affect the density of individuals (Santos et al. 2015), we found a lower proportion of seedlings with an assigned parent in more forested landscapes, indicating that gene flow involves parents located outside our sample plots. However, in landscapes with less than 19% forest cover, we found fewer than five individuals of E. edulis. At such low abundances, long-distance gene flow does not occur and we expect an increase in population vulnerability due to stochastic factors. Therefore, our results suggest that an increase in kinship between individuals may occur in the near future in the more deforested landscapes. As a consequence, we predict an increase in inbreeding and a reduction in genetic diversity over the generations that, in turn, would influence the long-term viability of populations in deforested landscapes.

We detected that gene flow distance increased with forest cover. In addition, the highest gene flow distances were recorded only in the more forested landscapes. The shorter gene flow distances found in deforested landscapes likely indicate a limitation of seed dispersal agents (Scheepens et al. 2012) or a lower availability of pollen and seed donors compared with forested landscapes. It is known that the richness of frugivorous birds, which are the main dispersers of E. edulis seeds in the region, decreases abruptly as deforestation progresses. A decrease in seed dispersal distances is therefore also expected, directly reducing gene flow in these landscapes. Furthermore, pollinators in deforested landscapes may occur in smaller numbers and be less effective than in forested landscapes (Dáttilo et al. 2015), representing a possible negative influence on the species' gene flow distances and possibly driving related processes at the local scale (Breed et al. 2013). Our results showed contemporary gene flow with a maximum distance of 13 km between landscapes. Long-distance dispersal is critical for species persistence, especially considering anthropogenic disturbances such as habitat modification (Trakhtenbrot et al. 2005) and climate change (McCallum et al. 2014). Thus, given the critical reduction and fragmentation of the Atlantic Forest, potential gene flow over long distances is vital for E. edulis persistence.
The sporadic gene flow among forest remnants may contribute to the maintenance of population dynamics, minimizing or preventing genetic differentiation between populations and a possible reduction of local genetic diversity (Hardesty et al. 2006; Kramer et al. 2008). Therefore, gene flow over long distances is important for maintaining functional (genetic) connections among populations (Colabella et al. 2014), minimizing the chances of inbreeding and genetic drift (McCallum et al. 2014). Hence, knowledge of long-distance gene flow allows efficient strategies for ecological connectivity to be proposed (Carlo et al. 2013). Our data did not allow us to detect whether the observed long-distance gene flow occurred via pollen or seed. However, considering a previous record of E. edulis gene flow over 22 km that was attributed to seed dispersal (Gaiotto et al. 2003), we can assume that the gene flow found in this study is probably due to seed dispersal by birds, which are the main seed dispersers of E. edulis (Galetti et al. 2013). Furthermore, the main pollinator (the bee species Trigona spinipes) has a short displacement distance (Roubik and Aluja 1983; Araujo et al. 2004; Zurbuchen et al. 2010). Additionally, the contemporary gene flow results converge with the results from F_ST analyses (Santos et al. 2015) and from the SGS. Our results indicated low spatial genetic structure, which is characteristic of species with long-distance dispersal, and little genetic differentiation among populations (F_ST), probably an indication of high rates of gene flow.

Long-distance gene flow was recorded only between landscapes with intermediate and high forest cover (≥43%). This result may reflect the greater richness of frugivorous birds in these landscapes compared with little-forested landscapes (Morante-Filho et al. 2015), as well as higher connectivity among fragments, allowing both seed dispersal and short-distance pollination. Another plausible hypothesis is that, in deforested landscapes, few suitable habitats for seed germination and seedling establishment remain. Even if seeds can reach different sites in deforested landscapes, recruitment may be limited by unsuitable local conditions (Lowe et al. 2005). In such situations, populations in deforested landscapes are more likely to become genetically differentiated from other populations, with changes in the gene pool over time (Santos et al. 2015).

In conclusion, for landscape-level conservation strategies that aim to maintain micro-evolutionary processes through gene flow, we recommend avoiding decreases in forest cover below 40%. Based on our genetic analysis of E. edulis, this recommendation is similar to that reported for other populations of the same species in southeastern Brazil (Carvalho et al. 2015). However, due to the older history of fragmentation in southeastern Brazilian landscapes (Carvalho et al. 2015), the effects of forest loss on genetic parameters there are much more evident than in southern Bahia (northeastern Brazil). Thus, we can predict that, with time, the negative effects of forest loss in southern Bahia will come to resemble those found in southeastern Brazil (São Paulo State), an example that reinforces the time lag of responses in tropical forests. Conversely, landscapes with forest cover below this threshold should be managed to maintain genetic diversity (Santos et al. 2015) and intrapopulation gene flow.
This recommendation provides time for long-term conservation strategies (e.g., landscape management, habitat restoration). Our study contributes to the understanding of functional connectivity between landscapes with different percentages of remaining forest. Furthermore, our results provide novel and relevant insight into the impacts of landscape-scale deforestation on the SGS and contemporary gene flow of a rainforest plant species. Moreover, our findings may assist in the efficient management and conservation of E. edulis (Keller et al. 2014), a keystone species highly threatened by harvesting and habitat modification.

Fig. 4. Gene flow distance of each seedling with an assigned parent as related to forest cover. The line represents the fit of a linear mixed model performed on the logarithm of gene flow distance and forest cover, with landscape included as a random factor (P < 0.05). Note that the y-axis is on a logarithmic scale.

Table 2. Gene flow between landscapes, giving the landscape forest percentage of the progeny (%1); the landscape forest percentage of the parent (%2); the paternity or maternity probability obtained using an a priori probability of 0.5 (P = 0.5); and the geographic distance (km) between the landscapes.
Satellite rainfall estimates: new perspectives for meteorology and climate from the EURAINSAT project

Satellite meteorology is facing a crucial period of its history, since recent missions have proven instrumental for quantitative rainfall measurements from space and newly conceived missions are at hand. International partnership is rapidly developing, and research projects keep the community focused on rapidly developing research and operational issues. A perspective is given through the structure of EURAINSAT, a project of the 5th Framework Programme of the European Commission. Its key objective is the development of algorithms for rapidly updated satellite rainfall estimations at the geostationary scale. The project is fostering international research on satellite rainfall estimation, building a bridge between Europe and the U.S. for present and future missions.

Introduction

The latest generation sensors on board the Geostationary Operational Environmental Satellites (GOES) (Menzel and Purdom, 1994) and the upcoming METEOSAT Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI) (Schmetz et al., 1998) significantly enhance the ability to sense cloud microstructure and precipitation-forming processes (Levizzani et al., 2000, 2001) from a geostationary platform. A potential exists for improved instantaneous rainfall measurements from space by combining infrared (IR) and visible (VIS) with passive microwave (MW) observations from a more global perspective.

IR and VIS satellite rainfall estimates have long been available and suffered from the difficulty of associating cloud-top features with precipitation at ground level. IR methods were used for climate purposes or combined with radar measurements for nowcasting (recent examples are Vicente et al., 1998; Porcù et al., 1999; Amorati et al., 2000), and multispectral approaches are starting to become operational (e.g., Ba and Gruber, 2001).

Physically based passive MW methods were developed mainly using data from the Special Sensor Microwave/Imager (SSM/I) and are based on several different physical principles (see for example Wilheit et al., 1994; Smith et al., 1998). Limitations of MW algorithms include the relatively large footprint and the low Earth orbits, which are not suitable for most operational strategies.

The value of combining MW and IR data for rainfall estimation was already recognized some time ago (Vicente and Anderson, 1993). Adler et al. (1993) used SSM/I data for monthly average rainfall estimations over wide areas, and global products such as those of the Global Precipitation Climatology Project (GPCP) were conceived (for recent advances see Huffman et al., 2001). However, the need for hourly and instantaneous combined estimations was clearly recognized (e.g., Vicente, 1994; Vicente and Anderson, 1994; Levizzani et al., 1996). Several methods exist (Turk et al., 1999; Sorooshian et al., 2000; Todd et al., 2001) that make use of IR and MW at various degrees of complexity, targeting different rainfall regimes. Some of these are running operationally, although their validation will require additional work in the years to come. In particular, global rapid-update estimates with near-real-time adjustment of the thermal IR co-localized with MW-based rainrates are operationally very promising (Turk et al., 1999).
The algorithms for the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) require a special mention given the novelty and potential of active instruments for future missions: an example is the TRMM algorithm 2A-25 (Iguchi et al., 2000). Combined MW and rain radar algorithms are relatively new and were developed for the TRMM Microwave Imager (TMI) and the PR (e.g., TRMM algorithm 2B-31, Haddad et al., 1997).

Finally, rainfall and humidity assimilation, and microphysical parameterizations for Numerical Weather Prediction (NWP) models, above all Limited Area Models (LAMs), and General Circulation Models (GCMs) open the road to very effective operational meteorological applications that incorporate the verification of model output (Turk et al., 1997).

EURAINSAT: the project

EURAINSAT is a project partially funded by the European Commission with the aim of developing new satellite rainfall estimation methods at the geostationary scale for operational use in short and very short range weather monitoring and forecasting. It will exploit the new channels in the VIS, NIR and IR of the MSG SEVIRI (see fig. 1), which will be launched in mid 2002. The project started in January 2001 and will last until December 2003.

The SEVIRI channels in the VIS and IR portions of the spectrum will provide better insight into the microphysical and dynamic structure of precipitating clouds, allowing for a more precise identification of precipitation levels. The method(s) will work as follows: 1) microphysical characterization of precipitating clouds with VIS/IR sensors; 2) creation of microphysical and radiative databases on cloud systems using cloud model outputs and aircraft penetrations; 3) tuning of MW algorithms on the different cloud systems (convective, stratiform, ...); 4) combination of data from the different algorithms and application to a rapid update cycle that makes use of the different sensors at the geostationary scale.

The consortium has two objectives in mind: 1) contribute to improving the knowledge of cloud and precipitation formation processes using meteorological satellite sensors, and 2) make available new precipitation products for weather analysis and forecasting. SEVIRI will in fact provide better multispectral measurements for the identification of the physical processes of cloud formation and evolution. The 15 min image repetition time is also more compatible with the time responses of cloud systems.

The following key geographic areas and major meteorological events are considered:
- Flood-producing episodes (e.g., northwestern and southern Italy).
- Several cases involving the presence/absence of ice, polluted air masses and maritime conditions.
- Sustained light rain and «insignificant» rain cases (very difficult to detect from satellite) in the U.K. and northern Europe.
- Tropical and sub-tropical cases over Africa, where the Niger catchment was selected given its relatively regular ground raingage network.

The project has gathered together a substantial part of the satellite rainfall community. The team actively participates in the development of scenarios and concepts for the future Global Precipitation Measurement (GPM) Mission and the International Precipitation Working Group of the Coordination Group for Meteorological Satellites (CGMS). More information on EURAINSAT and its findings can be found at the web site http://www.isac.cnr.it/~eurainsat.
Microphysical characterization of cloud processes

A method was developed by Rosenfeld and Lensky (1998) to infer precipitation-forming processes in clouds based on multispectral satellite data. The method was originally based on Advanced Very High Resolution Radiometer (AVHRR) imagery from polar orbiting satellites (Lensky and Rosenfeld, 1997). The forthcoming MSG SEVIRI is expected to enhance the capabilities of extracting cloud physical properties (Watts et al., 1998) more relevant to cloud genesis and evolution, no longer limited by an insufficient number of passages. The effective radius (r_e) of the particles and the cloud optical thickness are extracted and used for radiative transfer calculations that define the cloud type and improve its characterization (a toy numerical illustration is given at the end of this section).

Precipitation-forming processes are also inferred using data from the AVHRR, the TRMM VIS and IR Sensor (VIRS) and the MODerate-resolution Imaging Spectroradiometer (MODIS) on board NASA's Terra spacecraft. In particular, microphysical and radiative parameters from satellite sensors are instrumental in defining the characteristics of precipitating clouds. An example of global cloud effective radius values derived from MODIS and ready to be used for EURAINSAT is given in fig. 2. «Microphysically maritime» clouds grow in relatively clean air with low Cloud Condensation Nuclei (CCN) and droplet concentrations, which produce very efficient coalescence and warm rain processes. «Continental» clouds, on the contrary, normally grow in polluted air masses with abundant CCN and high droplet concentrations, i.e. the coalescence is relatively inefficient. Better knowledge of cloud microphysical structure and precipitation-forming processes will facilitate the development of a new generation of improved passive MW rainfall algorithms.

One more promising line of action is the potential use of lightning detection for discriminating between convective and stratiform regimes while estimating precipitation. Data from ground-based lightning detection networks and satellite sensors like the TRMM Lightning Imaging Sensor (LIS) are applied to IR rainfall estimations (Grecu et al., 2000) and show considerable potential for rapid-update applications. Above all, lightning detection represents a fast-response fundamental parameter for discriminating active convection, and quantitative relationships have been found between lightning discharges and other measurables of rainfall (Petersen and Rutledge, 1998). Dietrich et al. (2001) have recently shown that the use of concurrent data from TRMM PR and LIS gives unique information about the link between electrification and convection for the discrimination of convective and stratiform regimes. The authors hint at applications for rainfall retrieval using multispectral and MW techniques. The work is based upon the findings of Solomon and Baker (1998), who examined thunderstorm development and lightning flash rates in tropical maritime, subtropical continental, and midlatitude continental storms. The dependence of lightning occurrence and flash rate on cloud condensation nucleus concentration, primary and secondary glaciation mechanisms, liquid water flux, and updraft velocity is of fundamental importance.
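As a toy illustration of how such retrievals can feed a rain/no-rain screen, the R snippet below flags pixels whose retrieved effective radius exceeds the roughly 14 µm precipitation threshold reported by Rosenfeld and Lensky (1998). The function name and inputs are ours, and an operational classifier would use full r_e-versus-temperature profiles rather than a single threshold.

```r
# Toy screen, not an operational algorithm: flag pixels whose retrieved
# cloud effective radius (in microns) exceeds the ~14 micron precipitation
# threshold of Rosenfeld and Lensky (1998).
flag_precipitating <- function(re_um, threshold_um = 14) {
  !is.na(re_um) & re_um >= threshold_um
}
```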
IR methods

IR-based rainfall estimation methods were the first to be applied to a wide variety of scales and phenomena. With the advent of MW sensors, they became more and more confined to large-scale and climate applications. At the instantaneous time scale, methods based on thermal IR data are being integrated with ancillary information, such as data from radar, other VIS, IR and NIR channels, lightning detection, model output and other meteorological parameters.

For example, Porcù et al. (1999) conceived a simple method to ensure a more physical relationship between cloud structure and precipitation rates, as derived from the IR thresholding Negri-Adler-Wetzel (NAW) technique (Negri et al., 1984), by using SSM/I observations. A low-rainrate event with orographic forcing and small-scale precipitation in Piedmont (NW Italy) on 13-16 October 2000 was considered over a 25,000 km² area to document the impact of the calibration on low-precipitation NAW areas; nine SSM/I overpasses were available over the target basin (Porcù et al., 2000). The impact on the basin-averaged rainfall is rather low, resulting in a slight decrease of the satellite overestimation for higher rainrates. The effect of calibration is more evident over the directly calibrated locations, given the very low occurrence of high-precipitation areas during the event. The calibration mainly contributes to the higher rainrate peaks and does not reduce the marked underestimation of lower rainrates, since it has no effect on the rain/no-rain threshold. The scattering of the points also increases, showing an overall overestimation even for the highest rainrates.

MW passive and active methods

Many methods have been proposed for measuring rainfall from MW satellite sensors. Simple methods using polarization-corrected brightness temperatures (PCT) (e.g., Kidd, 1998) have been proposed together with more physical approaches that rely upon microphysical characterization by:
- stratifying clouds into different microphysical types and examining how much of the variability in the bias of MW rainfall estimation is explained by the microphysical characterization;
- developing a library of passive MW signatures from different cloud types, and
- using a microphysical cloud classification for improving cloud radiative transfer modeling based on statistical multivariate generators of cloud genera.

Figure 3 shows an example of MW rainfall retrieval over the Mediterranean and Western Europe using the algorithm of Turk et al. (1999) and data from SSM/I, TMI and the Advanced Microwave Sounding Unit-B (AMSU-B). More details can be found in Berg et al. (1998).

The scheme of Mugnai et al. (1993) and Smith et al. (1992) is a good example of such methods, especially in the very complex environment of severe storm microphysics. Cloud modeling and MW radiative transfer have recently been applied to stratiform rainfall by Bauer et al. (2000). Panegrossi et al.
(1998) have shown the importance of testing the physical initialization and the consistency between model and measurement manifolds. Research trends concentrate on improving the interpretation of active and passive MW measurements through better modeling of cloud processes such as the melting layer (Bauer, 2001a; Olson et al., 2001a,b). The TRMM satellite has set the path to new algorithms that still mostly work over oceans (e.g., Bauer, 2001b), but new developments are at hand over land (Grecu and Anagnostou, 2001). Polarization and texture information from passive radiometers together with PR data complete the scenario of new methods (Olson et al., 2001c). Ferreira et al. (2001) have recently contributed to the improvement of the TRMM 2A-25 algorithm (Iguchi et al., 2000), proposing the use of Γ (gamma) drop size distributions and R-k relations instead of R-Z (where R is the rainrate, k the attenuation coefficient, and Z the reflectivity).

The importance of separating convective and stratiform precipitation, and stratiform and transition regions, in the a priori cloud model database of MW algorithms is demonstrated by Kummerow et al. (2001). The authors document the latest improvements to the Goddard PROFiling algorithm (GPROF) as applied to the TRMM data. The new algorithm also uses emission and scattering indexes instead of individual brightness temperatures. These improvements, together with the elimination of some classification ambiguities over land, are general and apply to other algorithms as well.

Fundamental ancillary data are finally provided by active and passive MW radiometry for Cloud Liquid Water (LWC) profiling: measurements from ground-based MW radiometers are combined with Z profiles from cloud radars and cloud model statistics to lower the errors in LWC measurements by as much as 10-20% (Löhnert et al., 2001).

Combined multispectral and MW methods

Cloud microphysical information, when combined with MW measurements, can lead to improvements in satellite-based rainfall measurements, especially from clouds in the extratropics and over land (e.g., Bauer et al., 1998). EURAINSAT concentrates on exploiting SEVIRI data in the VIS, NIR and Water Vapor (WV) channels for cloud characterization and screening within a rapid cycle of rainfall estimation based on SSM/I, TMI and geostationary IR data. Data from MODIS simulate MSG SEVIRI data during the pre-launch phase. Moreover, the project shares in the cloud-related work of the MODIS team (King et al., 1997).

There are two main research lines:
- Develop new MSG-MW rainfall algorithms incorporating the observed cloud microstructure and precipitation-forming processes. State-of-the-art cloud and mesoscale models (Khain et al., 2000; Tripoli et al., 2001) and radiative transfer models will be instrumental for detailed cloud and rainfall type discrimination. The need for such frontier cloud models was recently shown by Khain et al. (2001), who demonstrated the weakness of existing cloud parameterizations while trying to account for highly supercooled water in convective clouds (1.8 g m⁻³ at −37.5°C), as found by Rosenfeld and Woodley (2000).
- Introduce such methods into rapid-update rainfall cycles for near-real-time rainfall estimations over oceans and land with the widest possible area coverage. Mid-latitude Europe, the Mediterranean basin, North Africa, the Middle East and equatorial and tropical African regions are the main targets for operational and climatological applications. Applications to the Mediterranean have been reported by Meneguzzo et al. (1998).
Applications

Applications embrace, among others, water availability, global change studies, nowcasting, hydrogeological disaster management, agriculture, famine reduction, and monitoring of remote areas.

Rainfall assimilation for NWP

Data assimilation procedures that improve cloud and humidity characterization in current analysis schemes for LAMs are at hand. Most important are the sensitivity to orography and the modeling of moist processes. The model BOLAM (Buzzi et al., 1998) is used in the project to conduct rainfall assimilation experiments that quantify the impact of satellite data on the forecasting chain. The nudging technique of Falcovich et al. (2000) is adopted. Motivations for using nudging techniques are that nudging to over-saturation is more gradual, and the reference profile is useful in tropical areas. The model is now applied to autumn rainfall episodes over the Mediterranean.

The model RAMS (Pielke et al., 1992) is also used and runs operationally in Tuscany with two two-way nested grids, with the highest horizontal spatial resolution of 4 km around Tuscany and the Arno river basin. The complex cloud microphysics scheme is fully activated (Walko et al., 1995), while the Kuo-type convection parameterization (Molinari, 1985) is activated only over the 20 km outer grid, and explicit (resolved) convection is allowed over the inner grid. A higher horizontal spatial resolution of 2 km will be reached to ensure a fully consistent explicit representation of the convection. Quantitative Precipitation Forecasts (QPFs) are produced hourly over the inner grid. Diabatic initialization and rainfall assimilation will be conducted in the operational chain before the end of the project.

Rainfall and climate change

The importance of multispectral cloud characterization methods has recently been demonstrated by observing and documenting the inhibiting effects of forest fires (Rosenfeld, 1999), urban pollution (Rosenfeld, 2000) and desert dust aerosols (Rosenfeld et al., 2001) on precipitation formation processes. It is very likely that these effects on rainfall processes have been substantially underestimated relative to, for example, greenhouse gases when evaluating possible causes of climate change. A more quantitative assessment is, however, necessary. In particular, Ramanathan et al. (2001) argue that manmade aerosols produce brighter clouds with reduced precipitation efficiency and rainfall suppression. This can lead to a weaker hydrological cycle, which connects directly to water availability and quality, a major environmental issue of the 21st century.

Satellite rainfall data are also used to assess the impact of particular weather systems on the geographical, seasonal and interannual distribution of total rainfall. Rodgers et al. (2000) have concentrated on the impact of tropical cyclones on the North Pacific climatological rainfall. The same authors have repeated the study for the North Atlantic (Rodgers et al., 2001). Such studies are crucial for the quantification of climatic effects and their relationships with indicators such as the El Niño Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). Databases of global products such as those of the GPCP (Huffman et al., 2001) will be very useful for supporting the definition of new local indexes that better describe regional variations observed over smaller basins like the Mediterranean.
Attenuation of satellite communications due to rainfall

Over the last few years, there has been an increasing demand for large-bandwidth information services coupled with high-availability and low-fade-margin communication systems (Watson and Hu, 1994). This scenario has prompted the exploration of channel frequencies at Ka band and above, and the development of sophisticated countermeasure techniques to mitigate outage periods (Jones and Watson, 1993). The implementation of the most advanced adaptive countermeasure techniques is tied to the possibility of monitoring in quasi real time the beacon attenuation in a given region and period. Spatial and frequency diversity methods, power link control, and data rate and error correction on the downlink/uplink can be effectively adopted only if the propagation conditions are known in real time.

Exploitation of remote sensors and their products represents a natural way to optimize the performance of a satellite communication system with low power margins, specifically while applying fade mitigation techniques. MW signatures of precipitation, as given by a spaceborne multi-frequency radiometer, have been shown to be the basis for estimating the path attenuation in K-band satellite communications (Marzano et al., 2000). A general approach should attempt to estimate rainfall intensity and attenuation by polar-orbiting MW radiometers and to track the rain system in time by means of geostationary IR radiometers. A statistical approach can be used to derive a prediction model of path attenuation from MW brightness temperature and surface rainrate (Crone et al., 1996).

Fig. 3. 3-h average rainrate using a combined SSM/I, TMI and AMSU-B algorithm (29 September 2001). Note the storm system stretching from the Tyrrhenian Sea to Tunisia. The black triangle indicates lack of data coverage after the mosaic of the satellite overpasses. Image courtesy of J.F. Turk (http://www.nrlmry.navy.mil/sat_products.html).
Risk Model Validation: An Intraday VaR and ES Approach Using the Multiplicative Component GARCH

In this paper, we employ 99% intraday value-at-risk (VaR) and intraday expected shortfall (ES) as risk metrics to assess the competency of the Multiplicative Component Generalised Autoregressive Conditional Heteroskedasticity (MC-GARCH) models based on 1-min EUR/USD exchange rate returns. Five distributional assumptions for the innovation process are used to analyse their effects on modelling and forecasting performance. The high-frequency volatility models were validated in terms of in-sample fit based on various statistical and graphical tests. A more rigorous validation procedure involves testing the predictive power of the models. Therefore, three backtesting procedures were used for the VaR, namely, the Kupiec test, a duration-based backtest, and an asymmetric VaR loss function. Similarly, three backtests were employed for the ES: a regression-based backtesting procedure, the Exceedance Residual backtest and the V-Tests. The validation results show that non-normal distributions are best suited for both model fitting and forecasting. The MC-GARCH(1,1) model under the Generalised Error Distribution (GED) innovation assumption gave the best fit to the intraday data and the best results for the ES forecasts. However, the asymmetric Skewed Student's-t distribution for the innovation process provided the best results for the VaR forecasts. This paper presents the results of the first empirical study (to the best of the authors' knowledge) in: (1) forecasting the intraday Expected Shortfall (ES) under different distributional assumptions for the MC-GARCH model; (2) assessing the MC-GARCH model under the Generalised Error Distribution (GED) innovation; (3) evaluating and ranking the VaR predictability of the MC-GARCH models using an asymmetric loss function.

Introduction

Since the financial crisis of 2008, there has been an ever-growing need for financial entities to accurately assess their exposure to financial risks. With risk commonly characterised by increasing volatility in the financial market, the modelling and forecasting of volatility have become a very important research area among academics and practitioners in the last decade. According to Poon and Granger (2001), volatility can be viewed as a 'barometer for the vulnerability of financial markets and the economy'. It is, therefore, important to forecast volatility accurately. Volatility is also an essential input in the computation of other risk metrics such as Value-at-Risk (VaR) and Expected Shortfall (ES).

Value-at-Risk (VaR) is a mandatory risk management tool in the insurance and banking industries as per the regulatory norms of the Solvency II framework (Solvency II European Directive (2009/138/EC)) and the Basel committee (BCBS 2010), respectively. However, it was observed that under the stress conditions of the global financial crisis, VaR forecasts were exceeded multiple times. In the Basel Committee on Banking Supervision BCBS (2016) report, it was concluded that during times of significant financial market stress, ES will ensure that tail risk and capital adequacy are captured in a more prudent manner. Interest in ES has grown considerably ever since the Basel Committee on Banking Supervision (BCBS) announced its intention to replace VaR with ES (BCBS 2012).
For a long time, researchers and academics have made use of low-frequency data in financial time series analysis to forecast risk metrics such as volatility, Value-at-Risk (VaR) and Expected Shortfall (ES). However, low-frequency data misses out on precious information, as addressed by Engle and Russell (2004): "Like the view from the airplane above, classic asset pricing research assumes only that prices eventually reach their equilibrium value, the route taken and speed of achieving equilibrium is not specified". Low-frequency data lacks important details on the price adjustment process compared with high-frequency data. High-frequency data is defined as observations made over a short period of time, usually a day or less.

As mentioned by Zivot (2005), the unique characteristics of high-frequency data render econometric and statistical analysis even more complicated. This in turn makes the forecasting of intraday VaR and ES quite challenging. For instance, econometric models or the modelling process should be able to take into account the intraday periodicity and the high excess kurtosis of the data to provide reliable forecasts of the risk metrics. Moreover, the number of observations in high-frequency financial datasets can be overwhelming at times, and these observations may also be irregularly spaced in time.

Although forecasting VaR and ES using high-frequency data is challenging, it is also meaningful. As frequently mentioned in the literature, the volatility model is a fundamental ingredient influencing the measurement of both VaR and ES. It has been shown that the use of high-frequency data provides much more accurate estimates of volatility (Giot 2000). Due to today's intense trading activity, firms are forced to constantly build and devise strategies with the aim of beating the market. As mentioned by Müller (2000), it is no longer adequate to analyse these risk metrics based on daily data only. Today, more and more intraday price movements can be observed. Therefore, intraday VaR and ES estimates might be very beneficial to short-term traders involved in algorithmic and high-frequency trading, since the real-time market risk is quantified.

Despite the growing amount of research in the field of high-frequency financial data analysis, few studies have focused on model validation and high-frequency risk measures. This study contributes to the literature in the following ways: (1) A rigorous model validation, both in terms of in-sample fit and out-of-sample performance, of the Multiplicative Component Generalised Autoregressive Conditional Heteroskedasticity (MC-GARCH) model under five error distributions is provided. Statistical and graphical tests are conducted to validate the models. (2) One component of the MC-GARCH model is the daily variance forecast. For this purpose, the GARCH(1,1) and EGARCH(1,1) under the five error distributions are compared, and the best model among the 10 GARCH models is used to forecast the daily variance. (3) The modelling and forecasting performance of the MC-GARCH model under different distributional assumptions is assessed. (4) The 99% intraday VaR is forecasted and three backtesting procedures are used. This is the first study to assess the VaR predictive ability of the MC-GARCH models by using an asymmetric VaR loss function.
(5) This is the first study to forecast the intraday expected shortfall under different distributional assumptions for the MC-GARCH model. Again, three backtests are used, including the recently proposed ES regression backtest of Bayer and Dimitriadis (2018).

Due to the high importance of risk management, the results of this study may contribute to many fields. This study is highly relevant to the banking industry, since banks are required to calculate risk metrics on a daily basis for internal control purposes and for determining their capital requirements. Risk measurement is also essential to the insurance industry, from the pricing of insurance contracts to determining the Solvency Capital Requirement (SCR), and therefore the results of this study might be useful. Any other organisation with exposure to some kind of financial risk might benefit from this study. For instance, as mentioned by Culp et al. (1998), an airline company might use these intraday risk metrics to assess its exposure to jet fuel prices.

The rest of this paper is organized as follows. Section 2 provides a brief literature review on the MC-GARCH model, followed by Section 3, which details the various methodologies employed in this study. Section 4 presents the application of the MC-GARCH models and the various backtesting results. Finally, Section 5 sums up the entire research outcome and provides recommendations for further study.

Past Studies on the MC-GARCH Model

The literature on Autoregressive Conditional Heteroscedasticity (ARCH) and Generalised Autoregressive Conditional Heteroscedasticity (GARCH) models has grown impressively since they were first introduced by Engle (1982) and Bollerslev (1987), respectively. As noted in Andersen and Bollerslev (1997), since GARCH models are associated with a geometric decay in the autocorrelation structure of returns, they cannot take into account the pronounced intraday seasonal pattern present in high-frequency financial returns. Over the years, to circumvent this limitation, researchers have come up with different solutions by augmenting the basic GARCH family of models. For instance, Andersen and Bollerslev (1997, 1998) and Andersen et al. (1999) took a novel approach by first deseasonalising the absolute returns prior to model fitting. The year 2011 saw the introduction of the MC-GARCH model of Engle and Sokalska (2011), a more sophisticated model designed specifically for high-frequency financial time series data. In this model, the variance is decomposed into three multiplicative components: a daily component, a diurnal component and a stochastic volatility component. What makes the MC-GARCH model different from other typical GARCH models is that it includes a component which independently takes into account the intraday seasonality.

Previous studies have shown that the MC-GARCH model is indeed well capable of forecasting intraday volatility and risk metrics. The MC-GARCH model was applied to three equally spaced intervals of 1 min, 5 min and 10 min intraday data of Australia's S&P/ASX-50 stock market by Singh et al.
(2013). The model yielded satisfactory results for intraday VaR forecasting. Their results were supported by another study by Diao and Tong (2015), who found that the MC-GARCH model performed well in forecasting the intraday VaR in the Chinese stock market. The dataset used consisted of 5-min intraday returns of the CSI-300 index. In both studies, the innovation process of the variance equation was assumed to follow a Gaussian distribution.

Narsoo (2016) applied the MC-GARCH model under four innovation distributions, namely the Gaussian, the symmetric Student's-t, the skewed Student's-t and the reparametrised Johnson SU (JSU) distribution, to intraday 1-min EUR/USD exchange rate data to forecast the 99% VaR. Based on the Kupiec test, it was concluded that the Skewed Student's-t MC-GARCH model delivered the best VaR forecast.

However, there are still many open research areas on the MC-GARCH model. For instance, there is no study dealing with the model validation of the MC-GARCH model under various distributional assumptions and assessing the performance both in terms of model fitting and forecasting. Also, there is no study on the expected shortfall (ES) forecasting performance of the MC-GARCH model under different error distributions. This paper therefore contributes to the high-frequency trading and backtesting literature by forecasting the intraday Value-at-Risk (VaR) and intraday Expected Shortfall (ES) at the 99% confidence level using the MC-GARCH model under five distributional assumptions: the Normal, the Student's-t, the Skewed Student's-t, the reparametrised Johnson SU (JSU) and the Generalized Error Distribution (GED). After model fitting, the models will be validated in terms of in-sample fit based on a series of statistical and graphical tests. Due to the low statistical power of the Kupiec test, two other backtests are also employed to rigorously assess the competency of the MC-GARCH models in predicting the intraday VaR. Three backtesting procedures will also be used to test the ES forecasting ability of the models.

Methodology

This study focuses on forecasting the intraday Value-at-Risk (VaR) and intraday Expected Shortfall (ES) at the 99% confidence level using the MC-GARCH model under five distributional assumptions. This section explains the various models used to model both daily and intraday data. The backtesting procedures used to assess the intraday VaR and ES forecasts are also presented.

GARCH(1,1)

The standard GARCH(1,1) model can be specified by the following set of equations:

r_t = \mu_t + \varepsilon_t,
h_t = \omega + \alpha \varepsilon_{t-1}^2 + \beta h_{t-1},

where \mu_t is the conditional mean process, made up of both autoregressive (AR) and moving average (MA) terms, and r_t represents the daily log returns. The error term \varepsilon_t can be decomposed as \varepsilon_t = \sqrt{h_t}\, z_t. The second equation is the variance equation, and h_t is the volatility process to be estimated. The innovation terms z_t are i.i.d. variables. In the variance equation, \omega > 0, \alpha > 0, \beta > 0 and \alpha + \beta < 1 to satisfy wide-sense stationarity.

EGARCH(1,1) model

The Exponential GARCH (EGARCH) model of Nelson (1991) is also employed. It captures the asymmetric effects between positive and negative asset returns and models the logarithm of the conditional variance h_t. The EGARCH(1,1) specification has the following form:

\ln(h_t) = \omega + \alpha z_{t-1} + \gamma \left( |z_{t-1}| - E|z_{t-1}| \right) + \beta \ln(h_{t-1}).

To ensure non-negative variance, the model is an AR(1) on \ln(h_t) instead of h_t.
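For concreteness, the daily GARCH(1,1) and EGARCH(1,1) specifications above can be set up with, for example, the rugarch package in R. The paper does not name its software, so this is one plausible rendering, and `daily_returns` is a hypothetical input series.

```r
library(rugarch)

# Distribution codes in rugarch: "norm", "std", "sstd", "ged", "jsu"
# correspond to the five innovation assumptions used in the paper.
spec_garch <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model     = list(armaOrder = c(1, 1)),
  distribution.model = "std")          # Student's-t innovations

spec_egarch <- ugarchspec(
  variance.model = list(model = "eGARCH", garchOrder = c(1, 1)),
  mean.model     = list(armaOrder = c(1, 1)),
  distribution.model = "std")

fit_garch <- ugarchfit(spec_garch, data = daily_returns)  # ML estimation
```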
MC-GARCH(1,1) model

The Multiplicative Component GARCH (MC-GARCH) model is a variant of the GARCH model specifically designed to model and forecast the intraday returns of financial assets. In this model, the conditional variance is specified as a multiplicative product of a daily volatility component, a diurnal volatility component and a stochastic/intraday volatility component. For the sake of clarity, let r_{t,i} be the continuously compounded return series for a particular financial asset, where t represents a particular day and i the regularly spaced intraday time period. In the MC-GARCH model, the intraday return process may be represented as follows:

r_{t,i} = \sqrt{h_t \, s_i \, q_{t,i}} \; \varepsilon_{t,i}, \qquad \varepsilon_{t,i} \sim \text{i.i.d.}(0,1),

where
- h_t denotes the daily variance component;
- s_i denotes the diurnal/calendar variance component in each intraday period;
- q_{t,i} denotes the intraday variance component;
- \varepsilon_{t,i} is an error term following a specified distribution.

This study employs GARCH and EGARCH models to forecast the daily variance component h_t, based on the paper by Andersen and Bollerslev (1997). The choice of model is based on the best-performing one among the GARCH and EGARCH models under five error distributions: the normal distribution, the Student's-t distribution, the Generalised Error Distribution (GED), the skewed Student's-t and the Johnson SU (JSU) distribution.

The diurnal volatility component s_i is estimated as the variance of the intraday returns in each regularly spaced intraday time period:

s_i = \frac{1}{T} \sum_{t=1}^{T} \frac{r_{t,i}^2}{h_t}.

Using the daily variance and the diurnal variance, the returns are normalized in the following way:

z_{t,i} = \frac{r_{t,i}}{\sqrt{h_t \, s_i}}.

After the normalization of the returns by both the daily and diurnal variances, the next step consists of modelling the stochastic intraday variance component q_{t,i} as a GARCH(1,1) process:

q_{t,i} = \omega^* + \alpha^* z_{t,i-1}^2 + \beta^* q_{t,i-1},

where \omega^* > 0, \alpha^* \geq 0, \beta^* \geq 0.

Parameter Estimation

In this paper, all the parameters of the various GARCH models employed are estimated using maximum likelihood estimation (MLE), since it is the most popular method for estimating GARCH-type models. Moreover, this method yields asymptotically efficient parameter estimates for the GARCH models.

Value-at-Risk and Expected Shortfall Evaluation

Value-at-Risk Evaluation: According to McNeil et al. (2005), the Value-at-Risk (VaR) of a portfolio at time t for a given confidence level \alpha \in (0,1) is given by the smallest number l such that the loss at time t+1, denoted by L_{t+1}, will be less than l with probability \alpha:

\mathrm{VaR}_{t+1}(\alpha) = \inf \{ l \in \mathbb{R} : P(L_{t+1} \leq l) \geq \alpha \}.

The one-step-ahead VaR is computed as

\mathrm{VaR}_{t+1}(\alpha) = \mu_{t+1} + \sigma_{t+1} F_z^{-1}(\alpha),

where F_z, the probability distribution function of the return innovations z_t, is strictly monotone or has a generalised inverse of the cumulative distribution function. In this paper, z_t is assumed to follow five probability distributions, namely the Normal, Student's-t, Skewed Student's-t, JSU and GED.

Expected Shortfall Evaluation: The Expected Shortfall (ES) at a given level \alpha is defined as the expected value at time t of L_{t+1}, the loss in the next period, conditional on the loss exceeding \mathrm{VaR}_{t+1}(\alpha):

\mathrm{ES}_{t+1}(\alpha) = E\left[ L_{t+1} \mid L_{t+1} > \mathrm{VaR}_{t+1}(\alpha) \right].

According to García Jorcano (2018), the one-step-ahead ES can be further simplified using the properties of the expectation operator:

\mathrm{ES}_{t+1}(\alpha) = \mu_{t+1} + \sigma_{t+1} \, E\left[ z \mid z > F_z^{-1}(\alpha) \right], \quad \text{where} \quad E\left[ z \mid z > F_z^{-1}(\alpha) \right] = \frac{1}{1-\alpha} \int_{\alpha}^{1} F_z^{-1}(u) \, du.

Backtesting

After forecasting the risk metrics VaR and ES, a backtesting procedure is employed to assess the accuracy of the forecasts. In the backtesting procedure, actual profits and losses are compared to the estimates of VaR and ES in a systematic manner.
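To make the location-scale formulas above concrete, the R sketch below computes a one-step-ahead 99% VaR and ES from a fitted rugarch model under Student's-t innovations. It works in the return (left-tail) convention, i.e. with a tail probability of 0.01 rather than the loss convention used in the equations; `fit` is a hypothetical ugarchfit object, and the tail expectation is evaluated by numerical integration of the quantile function.

```r
library(rugarch)

alpha <- 0.01                              # left-tail probability (99% level)
fc    <- ugarchforecast(fit, n.ahead = 1)
mu    <- as.numeric(fitted(fc))            # conditional mean forecast
sig   <- as.numeric(sigma(fc))             # conditional volatility forecast
shape <- coef(fit)["shape"]                # Student's-t degrees of freedom

# VaR: mu + sigma * alpha-quantile of the standardized innovation
VaR <- mu + sig * qdist("std", p = alpha, shape = shape)

# ES: mu + sigma * mean of the standardized innovation below its
# alpha-quantile, i.e. (1/alpha) * integral of the quantile function
ES <- mu + sig * integrate(
  Vectorize(function(u) qdist("std", p = u, shape = shape)),
  lower = 0, upper = alpha)$value / alpha
```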
Kupiec's Unconditional Coverage Test

Kupiec's test, developed by Kupiec (1995), is the best-known VaR test based on failure rates; it is also known as the proportion of failures (POF) test. Under the null hypothesis, the number of exceptions is assumed to follow a binomial distribution. The null hypothesis is

$$H_0: p = \hat{p} = \frac{x}{T},$$

where $T$ is the number of observations and $x$ is the number of exceptions. The test is in fact a likelihood ratio test with statistic

$$LR_{POF} = -2 \ln\!\left[\frac{(1-p)^{T-x}\, p^{x}}{\left(1-\tfrac{x}{T}\right)^{T-x} \left(\tfrac{x}{T}\right)^{x}}\right].$$

Under the null hypothesis, $LR_{POF}$ is asymptotically chi-square distributed with one degree of freedom.

A Duration-Based Approach to VaR Backtesting

According to Christoffersen and Pelletier (2003), a more robust test of the adequacy of a risk model considers the durations between VaR violations. Ideally, the durations between VaR violations should be independent of one another and should not cluster. The null hypothesis of this test is that, under a correctly specified risk model, the VaR violations are memoryless, so the duration variable $D$ follows an exponential distribution:

$$f_{\exp}(D; q) = q \, e^{-qD}.$$

Under the alternative hypothesis, a Weibull distribution is used for the duration variable, since it embeds the exponential distribution as a restricted case:

$$f_W(D; a, b) = a^b \, b \, D^{b-1} \exp\{-(aD)^b\}.$$

The hypotheses are $H_{0,IND}: b = 1$ and $H_{0,CC}: b = 1,\ a = p$, where IND denotes independence and CC denotes conditional coverage.

Asymmetric VaR Loss Function

Even though the two VaR backtesting procedures discussed above are highly relevant for testing model adequacy, they fail to judge a model by its predictive accuracy; in other words, they provide no statistical evidence of differences in forecasting performance between the models employed. González-Rivera et al. (2004) therefore proposed an asymmetric VaR loss function for comparing the performance of the different models. The loss function is defined as

$$Q_{loss}^{(m)} = \frac{1}{N} \sum_{t} \left(\alpha - d_{t+1}^{(m)}\right)\left(r_{t+1} - VaR_{t+1|t}^{\alpha,(m)}\right),$$

where $N$ is the length of the backtesting period, $m$ is the model indicator, $r_{t+1}$ denotes the return at time $t+1$, $VaR_{t+1|t}^{\alpha,(m)}$ denotes the VaR at $t+1$ given the information set up to time $t$, and $d_{t+1}^{(m)} = \mathbb{1}\left(r_{t+1} < VaR_{t+1|t}^{\alpha,(m)}\right)$ is the violation indicator of the quantile loss function. Since it is an asymmetric loss function, it penalises observations below the quantile level more heavily than observations above it. The best model is the one that minimises this loss function.

Model Confidence Set Procedure

Hansen et al. (2011) proposed the model confidence set (MCS) procedure, whereby a sequence of statistical tests is carried out with the objective of building a "Superior Set of Models" (SSM). The equal predictive ability (EPA) test statistic is calculated for an arbitrary loss function satisfying general weak stationarity conditions; in this procedure, the loss function employed is the asymmetric VaR loss function of González-Rivera et al. (2004). For a chosen confidence level, models for which the EPA null hypothesis is rejected are eliminated, and the surviving models form the SSM. This procedure is implemented to rank the models, in ascending order, according to their VaR forecasting power.

The Bivariate ES Regression Backtest

The first ES backtest considered is the very recently proposed bivariate ES regression backtest of Bayer and Dimitriadis (2018). They show that this backtest has far more power than other ES backtests. It is also more convenient for regulators, since it is the only backtest method in the literature which uses only the ES forecasts for the backtesting of the risk metric.
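A direct implementation of the POF likelihood ratio is short enough to state. The helper name and the handling of the zero- and full-violation corner cases are assumptions of this sketch:

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, p=0.01):
    """Kupiec (1995) proportion-of-failures LR test.
    violations: boolean array (True = realised return breaches the VaR);
    p: expected violation rate, 1% for a 99% VaR."""
    T, x = len(violations), int(np.sum(violations))
    loglik_h0 = (T - x) * np.log(1 - p) + x * np.log(p)
    pi = x / T
    # under the MLE, a zero count contributes zero to the log-likelihood
    loglik_h1 = ((T - x) * np.log(1 - pi) if x < T else 0.0) \
              + (x * np.log(pi) if x > 0 else 0.0)
    lr = -2.0 * (loglik_h0 - loglik_h1)
    return lr, chi2.sf(lr, df=1)  # asymptotically chi-square with 1 df
```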
The bivariate ES regression backtest tests whether a series of ES forecasts, denoted $\{\widehat{ES}_t,\ t = 1, \ldots, T\}$, from a forecasting model is specified correctly with respect to the series of realised returns $\{r_t,\ t = 1, \ldots, T\}$. The returns are regressed on the ES forecasts $\widehat{ES}_t$, including an intercept term, in a regression designed specifically for the ES functional:

$$r_t = \gamma_1 + \gamma_2 \widehat{ES}_t + u_t, \qquad (1)$$

where $ES(u_t \mid \mathcal{F}_{t-1}) = 0$. Since $\widehat{ES}_t$ is generated using the same filtration $\mathcal{F}_{t-1}$, the condition on the error term can equivalently be written as $ES(r_t - \gamma_1 - \gamma_2 \widehat{ES}_t \mid \mathcal{F}_{t-1}) = 0$. The null hypothesis

$$H_0: (\gamma_1, \gamma_2) = (0, 1)$$

is tested against the alternative $H_1: (\gamma_1, \gamma_2) \ne (0, 1)$. The null hypothesis states that the ES forecasts are specified correctly, since under it $\widehat{ES}_t = ES(r_t \mid \mathcal{F}_{t-1})$. This backtest is called bivariate because the parameters $\gamma_1$ and $\gamma_2$ are tested simultaneously within the regression framework. Equation (1) is estimated by semiparametric estimation of a joint system in which the ES regression is coupled with an auxiliary quantile (VaR) regression.

Bayer and Dimitriadis (2018) also showed that the backtest procedure has even greater power when combined with bootstrapping. The backtest is therefore also carried out using bootstrapping, where B = 1000 bootstrap Wald statistics are computed. The bootstrap p-value is simply the share of the 1000 bootstrap test statistics greater than or equal to the test statistic for the original sample.

Exceedance Residual (ER) Backtest

McNeil and Frey (2000) were among the first to propose an expected shortfall backtesting procedure. The procedure analyses the difference between the next period's return and the expected shortfall at time $t$, conditional on the return exceeding the VaR at time $t$:

$$e_t = r_t - \widehat{ES}_t \quad \text{whenever } r_t \text{ violates } VaR_t.$$

Under the null hypothesis $H_0$, the resulting series of exceedance residuals should be i.i.d. with mean 0 and variance 1. To test $H_0$, the non-parametric bootstrap is applied to the observations $e_t$ against the alternative hypothesis that the mean of the excess violations of VaR is greater than 0. The bootstrap methodology was devised by Efron and Tibshirani (1994).

V-Test for the Expected Shortfall

Several methods to evaluate the performance of ES estimates were proposed by McNeil et al. (2005). These methods are based on the relative size of test statistics and are regarded more as a diagnostic tool than as a formal statistical test, since no null hypothesis testing is involved.

The first statistic, $V_1$, averages the difference between the forecasted ES and the actual return whenever a VaR violation occurs; a correctly specified model should yield a value of $V_1$ close to 0. For a given probability $q$, with $D_t = r_t - \widehat{ES}_t$,

$$V_1 = \frac{\sum_{t=1}^{N} D_t \, \mathbb{1}(r_t < VaR_t)}{\sum_{t=1}^{N} \mathbb{1}(r_t < VaR_t)},$$

where $N$ denotes the total number of ES estimates. The main drawback of $V_1$ is that it depends too strongly on the VaR estimates. McNeil et al. (2005) therefore proposed a second statistic, $V_2$, which averages $D_t$ over the observations in which $D_t$ falls below its own empirical $q$-quantile $Q$:

$$V_2 = \frac{\sum_{t=1}^{N} D_t \, \mathbb{1}(D_t < Q)}{\sum_{t=1}^{N} \mathbb{1}(D_t < Q)}.$$

A third measure combines $V_1$ and $V_2$ to strike a balance between $V_1$, which relies too heavily on theory, and $V_2$, which is more practically oriented:

$$V = \frac{|V_1| + |V_2|}{2}.$$

A good model would therefore bring the test statistics $V_1$ and $V_2$ close to 0.
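Since the V-statistics are diagnostics rather than formal tests, they are easy to compute directly. The sketch below follows the formulas reconstructed above, works in return space (lower tail), and uses illustrative names:

```python
import numpy as np

def v_statistics(returns, var_f, es_f, q=0.01):
    """Diagnostic V-statistics in the spirit of McNeil et al. (2005).
    Values near zero are good; under this sign convention, positive values
    indicate that the ES risk measure is being overestimated."""
    D = returns - es_f
    hit = returns < var_f                 # VaR violations
    v1 = D[hit].mean() if hit.any() else np.nan
    Q = np.quantile(D, q)                 # empirical q-quantile of D
    tail = D <= Q
    v2 = D[tail].mean() if tail.any() else np.nan
    v = 0.5 * (abs(v1) + abs(v2))
    return v1, v2, v
```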
Data Description

The intraday 1-min EUR/USD exchange rate price dataset consists of 28,290 observations for the month of February 2016, equivalent to 21 days of intraday logarithmic returns. The daily returns of the EUR/USD exchange rate are also used, since a GARCH model is employed to forecast the daily variance component of the MC-GARCH model. The daily EUR/USD exchange rate prices span from 2 December 2003 to 29 February 2016 and consist of 3160 observations. The intraday dataset is split into two samples: 20 days are used for estimating the models and 1 day is used to assess their forecasting ability.

The daily log return is calculated as below; the same principle applies to the intraday log return process $r_{t,i}$:

$$r_t = \ln\left(\frac{P_t}{P_{t-1}}\right),$$

where $P_t$ is the exchange rate price at time $t$ and $P_{t-1}$ is the exchange rate price at time $t-1$. Taking log returns transforms the financial price series into a stationary series; the Augmented Dickey-Fuller (ADF) test results presented in Appendix A confirm the stationarity of both the 1-min and the daily exchange rate log returns.

Heteroskedasticity and Normality Tests of the Return Series

Figure 1 shows the return series plot for the 1-min EUR/USD exchange rate returns. Figure 2 is the correlogram of the absolute returns of the 1-min EUR/USD returns for February 2016. A strong pattern repeating approximately every 1500 observations, corresponding to one trading day, is clearly visible, and volatility is high around the opening and closing hours. This reflects the strong intraday seasonality documented in the high-frequency literature. The descriptive statistics and normality tests of the EUR/USD exchange rate returns, for both the high-frequency 1-min returns and the daily returns, are presented in Appendix A.

Identifying the Conditional Mean Equation

The first step in implementing GARCH-type models for the conditional variance is to identify a suitable model for the conditional mean of the data. The literature suggests the implementation of an Auto Regressive Integrated Moving Average (ARIMA) model for the conditional mean. Since both return series are stationary, the order of the differencing parameter $d$ in the ARIMA($p,d,q$) model is 0. The next step consists of determining the orders $p$ and $q$ for the two return datasets. A graphical analysis of the Auto Correlation Function (ACF) and Partial Auto Correlation Function (PACF) of the two return series is first carried out to visually determine the orders of their ARMA($p,q$) models; the ACF and PACF of both return series are plotted in Figure 3.

From the ACF and PACF plots, an ARMA(0,0) model appears appropriate for both the 1-min returns and the daily returns. To further confirm the order of the mean equation, several ARMA($p,q$) models are estimated, and the best model is chosen on two criteria: the minimum Akaike information criterion (AIC) value and the maximum log-likelihood value. As stated by Mondal et al. (2014), the Box-Jenkins methodology requires the values of $p$ and $q$ in an ARIMA($p,d,q$) model to be at most 2, or the total number of parameters to be no more than 3.
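The return transformation and the stationarity check are one-liners with numpy and statsmodels; the file and column names in the usage comment are hypothetical:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def log_returns(prices):
    """r_t = ln(P_t / P_{t-1}); the first price is consumed by the difference."""
    return np.diff(np.log(np.asarray(prices, dtype=float)))

# Hypothetical usage (requires pandas for the CSV step):
# prices = pd.read_csv("eurusd_1min.csv")["close"]
# r = log_returns(prices)
# adf_stat, pval = adfuller(r)[:2]
# print(f"ADF = {adf_stat:.3f}, p = {pval:.4f}")  # p < 0.05 -> reject unit root
```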
Therefore, the AIC and log-likelihood values are checked only for ARMA models with $p$ and $q$ values of 2 or less. The ARMA(0,0) model provided the lowest AIC value and the maximum log-likelihood value for both the 1-min and the daily return series, and therefore outperforms the other ARMA specifications for the conditional mean.

Model Checking for the Mean Equation

According to Tsay (2005), any significant correlation in the return series must be eliminated before fitting a GARCH-type model. The residuals of the mean equation were therefore tested for autocorrelation using the Ljung-Box Q test. All p-values were greater than 5% at 10 and 20 degrees of freedom, implying that the residuals of the mean equation are not serially autocorrelated for either return dataset.

At this stage, since the two return datasets exhibit stylised features such as excess kurtosis and volatility clustering, and given the adequacy of the ARMA specification for the mean equation, the specification of GARCH models for the return datasets is analysed.

Estimation of the Daily Variance Forecast

As stated by Engle and Sokalska (2011), implementing the MC-GARCH model first requires a model for the daily variance component. The GARCH(1,1) and EGARCH(1,1) models are implemented under the five error distributions, and the best model is retained for the daily variance component. The parameter estimates of the GARCH-type models for the daily variance forecast are statistically significant. Since the parameter $\gamma$, the indicator of asymmetric volatility, was significant across all innovations for the EGARCH(1,1) model, an asymmetric GARCH may be preferred over a symmetric GARCH model. The persistence parameter $\beta$ being positive irrespective of the error distribution implies that shocks to the daily EUR/USD returns, whether good news or bad news, affect volatility for a long period into the future.

To choose the best model for the daily variance component, three criteria are used: the AIC value, the Bayesian information criterion (BIC) value, and the log-likelihood value. The best model is the one minimising both the AIC and BIC scores while maximising the log-likelihood value. The results are presented in Tables 1 and 2 (the reported log-likelihoods are 11,788.4; 11,804.14; 11,804.15; 11,804.8; and 11,810.2). It can be observed that the asymmetric EGARCH(1,1) model outperforms the GARCH(1,1) model under all error distributions, since the former yields the minimum AIC and BIC scores and higher log-likelihood values. This can be explained by the ability of the EGARCH(1,1) models to capture the leverage effect in the daily return series. However, the best-performing model is clearly the EGARCH model under the GED innovation assumption (EGARCH-GED), since this model yields the minimum AIC and BIC values while maximising the log-likelihood. Hence, this model specification is used for the daily variance forecast.
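A minimal grid search reproducing the selection logic above (minimum AIC, with log-likelihood as tie-breaker) might look as follows, using statsmodels; the helper name and the tuple-key trick are assumptions of this sketch:

```python
from statsmodels.tsa.arima.model import ARIMA

def select_arma(r, max_pq=2):
    """Grid-search ARMA(p,q) with p,q <= 2 (the Box-Jenkins bound used here),
    preferring minimum AIC and, on ties, maximum log-likelihood."""
    best = None
    for p in range(max_pq + 1):
        for q in range(max_pq + 1):
            res = ARIMA(r, order=(p, 0, q)).fit()
            key = (res.aic, -res.llf)      # lexicographic: AIC first, then -llf
            if best is None or key < best[0]:
                best = (key, (p, q))
    return best[1]                         # (p, q) of the preferred mean equation
```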
Fitting Performance

The MC-GARCH models are now fitted to the complete dataset of 28,289 1-min EUR/USD observations. Table 3 displays the results of the MC-GARCH parameter estimation, with the corresponding p-values provided in parentheses. All parameter estimates are statistically significant at the 5% level, except for the conditional mean, which is insignificant at the 5% level across all innovations, and the skewness parameter, which is insignificant for the JSU innovation. That almost all parameter estimates are statistically significant indicates that the MC-GARCH models are correctly specified.

The statistical significance of the ARCH parameter $\alpha$ and the GARCH parameter $\beta$ for all innovations of the MC-GARCH model suggests that the lagged conditional variance and the lagged squared disturbance affect the current conditional variance; in other words, news about volatility from previous periods has explanatory power for current volatility. Moreover, the high significance of $\beta$ validates the presence of volatility clustering in the dataset. The shape parameter $\nu$ being highly statistically significant — greater than 4 for the Student's-t and skewed Student's-t error distributions and less than 2 for the GED innovation — confirms the presence of thick tails, consistent with the excess kurtosis of the 1-min EUR/USD returns. Moreover, the highly significant skewness parameter of the skewed Student's-t innovation confirms the skewness of the return series, consistent with the negative sample skewness of the dataset. These results suggest that a non-normal innovation is a more suitable candidate for the MC-GARCH model.

To determine the best-fitting model, three criteria are used, namely the AIC value, the BIC value and the log-likelihood; the results are displayed in Table 4. The model yielding the worst results is the MC-GARCH model under the normal innovation: being a symmetric distribution with a kurtosis of 3, the normal error distribution fails to capture features such as the leptokurtic nature of the 1-min EUR/USD returns. The best model is clearly the MC-GARCH model under the GED innovation, since it yields the highest log-likelihood value of 212,976.5 while simultaneously yielding the lowest AIC value of −15.057 and BIC value of −15.055.

Model Validation: In-Sample Fit

In this section, the chosen GARCH model is validated. The estimated standardised residuals of the MC-GARCH model under the GED innovation should be independent and identically distributed; to check this, the ACF of the standardised residuals is analysed. Figure 4 shows no significant lags, so the residuals are not serially correlated and behave as a white noise process. The ARCH LM test was performed on the residuals of the MC-GARCH models at various lag lengths, and the null hypothesis of no ARCH effects could not be rejected. This suggests that the conditional heteroskedasticity present in the raw series was successfully removed, thereby validating the MC-GARCH model. This result was confirmed by the Ljung-Box test on the residuals.
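The two residual diagnostics used here, the Ljung-Box Q test and Engle's ARCH-LM test, are both available in statsmodels; a small wrapper, with illustrative naming, could be:

```python
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

def residual_diagnostics(z, lags=(10, 20)):
    """Ljung-Box Q test for serial correlation and Engle's ARCH-LM test for
    remaining heteroskedasticity in the standardised residuals z.
    A well-specified model should leave all p-values above 0.05."""
    lb = acorr_ljungbox(z, lags=list(lags), return_df=True)
    lm_stat, lm_pval, _, _ = het_arch(z, nlags=max(lags))
    return lb[["lb_stat", "lb_pvalue"]], lm_stat, lm_pval
```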
The empirical density of the standardised residuals is plotted to check whether the GED distribution gives the best fit. Indeed, from Figure 5, it can be seen that the GED assumption fits the residuals well compared with the other distributions. The MC-GARCH model is also valid in that it includes a component capturing the intraday seasonality (sigma (Diurnal)).

Intraday VaR Forecast

The 99% intraday VaR is forecasted using the MC-GARCH models on the 1-min intraday return series. A rolling backtest procedure is then undertaken on the out-of-sample period, using a moving window of one day in the VaR backtesting procedure. The backtesting period is one day, which corresponds to 1500 1-min datapoints.

Kupiec's Test

The first backtest used is Kupiec's unconditional coverage test, in which the 1500 intraday VaR forecasts are compared against the actual intraday returns. The results (Table 5) speak in favour of the MC-GARCH model: all models except the MC-GARCH under the normal distribution passed this test, since the p-values, being greater than the 5% significance level, indicate that the null hypothesis cannot be rejected.

Duration-Based Backtest

Turning to the duration-based backtest (Table 6), the second column 'b' reports the estimated Weibull parameter for the different models. Since the p-values for all models are greater than the 5% significance level, there is evidence that the durations between VaR violations are memoryless and do not cluster. All models passed the VaR duration-based backtest.

Backtesting VaR Using an Asymmetric Loss Function

A more rigorous backtesting procedure is then carried out. As stated in Bernardi et al. (2014), although Kupiec's test can compare the VaR violations of several competing models, it fails to rank the models by the predictive accuracy of their VaRs. Moreover, many models satisfy the unconditional coverage test, as observed in this study, so the risk manager cannot select a unique model. Lopez (1998) suggested measuring the accuracy of VaR forecasts with a loss function and ranking the models accordingly.

To present results that are less sensitive to the low number of theoretical violations, and to address this limitation of Kupiec's test, the Model Confidence Set (MCS) procedure of Hansen et al. (2011) is applied together with the asymmetric VaR loss function of González-Rivera et al. (2004). The results of the MCS procedure are presented in Table 7; only those models which passed both Kupiec's test and the VaR duration test are considered. The best-performing model according to this procedure is the MC-GARCH under the skewed Student's-t distribution, since it minimises the loss function. The sigma forecast plot and the VaR backtesting plot for the MC-GARCH(1,1) model under the skewed Student's-t distribution are displayed in Figures 7 and 8, respectively.

Intraday ES Forecast

The backtesting of the Expected Shortfall (ES) is now conducted, and three backtests are implemented to determine the accuracy of the ES forecasts.
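The ranking criterion itself is a one-line tick loss. A sketch, with alpha denoting the tail probability (0.01 for the 99% VaR) and hypothetical model names in the usage comment:

```python
import numpy as np

def asymmetric_var_loss(returns, var_f, alpha=0.01):
    """Tick loss of Gonzalez-Rivera et al. (2004): violations are weighted
    1 - alpha and non-violations alpha, so returns below the quantile are
    penalised more heavily. Lower is better."""
    d = (returns < var_f).astype(float)
    return np.mean((alpha - d) * (returns - var_f))

# Hypothetical ranking over a dict of per-model VaR forecast arrays:
# losses = {m: asymmetric_var_loss(r_out, v) for m, v in var_forecasts.items()}
# ranking = sorted(losses, key=losses.get)   # smallest loss first
```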
A Regression-Based ES Backtesting Procedure: The Bivariate ES Regression Backtest

The results of this backtest, both with and without bootstrapping, are shown in Table 8. All p-values are greater than the 5% significance level, so the null hypothesis that the ES forecasts are correctly specified is not rejected; the bootstrap p-values lead to the same conclusion. It can therefore be concluded that the MC-GARCH models are able to forecast the risk measure ES accurately.

Exceedance Residual Backtest

The exceedance residual (ER) backtest results are reported in Table 9. The null hypothesis that the mean of the excess violations of VaR is equal to zero is not rejected, since all p-values are greater than the 5% significance level. Based on this backtesting procedure, it can be ascertained that the MC-GARCH models succeed in accurately predicting the ES estimates. Although the actual ES exceedances are comparable across the four MC-GARCH specifications, the MC-GARCH model under the GED error distribution yields the fewest exceedances.

V-Tests

The V-test statistics can be regarded more as a diagnostic tool than as a formal statistical testing procedure, since no null hypothesis is involved. Table 10 displays the results for the $V_1$, $V_2$ and $V$ test statistics. The first observation is that the signs of the $V_1$, $V_2$ and $V$ statistics are positive, implying that all the models are, on average, overestimating the ES risk measure. Moreover, since the magnitudes of the statistics are very close to zero, the models are only slightly overestimating the ES. These results speak in favour of the MC-GARCH models, since risk managers are less concerned about an overestimation of the risk metric than an underestimation. Furthermore, the magnitude of the $V_1$ statistic is smaller for the MC-GARCH model under the GED innovation assumption than under the other innovation assumptions, indicating that it performs relatively better; the same observation holds for the other two statistics, $V_2$ and $V$. The MC-GARCH model under the GED error distribution is therefore the best model for ES under this backtest.

Figure 9 displays the ES forecasts for the MC-GARCH model under the GED innovation process. Once more, it can be seen that the MC-GARCH models are able to forecast the ES adequately.

Conclusions

A typical question that sparks much interest in the high-frequency trading literature concerns which GARCH model is best at forecasting intraday risk metrics such as Value-at-Risk (VaR) and Expected Shortfall (ES). This paper therefore focuses on the performance of the MC-GARCH model in forecasting 1-min VaR and 1-min ES.

The first objective of this study was to determine which GARCH-type model gives the best in-sample fit to the daily EUR/USD returns for the daily variance forecast of the MC-GARCH model. It was found that the EGARCH(1,1) models were generally preferred over the GARCH(1,1) models, with the EGARCH model under the GED innovation assumption yielding the best results.
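A bootstrap version of the ER test can be sketched in a few lines; the centring step imposes the null hypothesis, and the loss-space sign convention is an assumption of this illustration:

```python
import numpy as np

def er_backtest(returns, var_f, es_f, n_boot=1000, seed=0):
    """One-sided bootstrap ER test in the spirit of McNeil and Frey (2000).
    Residuals are taken in loss space (ES forecast minus realised return on
    violation days), so a significantly positive mean flags ES underestimation."""
    rng = np.random.default_rng(seed)
    hit = returns < var_f
    resid = es_f[hit] - returns[hit]
    if resid.size == 0:
        return np.nan, np.nan          # no violations in the backtest window
    t_obs = resid.mean()
    centred = resid - t_obs            # impose the null of a zero mean
    boots = np.array([rng.choice(centred, size=resid.size).mean()
                      for _ in range(n_boot)])
    return t_obs, float(np.mean(boots >= t_obs))
```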
The second aim of the study was to analyse the effect of different distributional assumptions for the innovation process of the GARCH models on both model fitting and forecasting. Overall, it was found that the non-normal specifications outperformed the Gaussian one: the GED provided the best in-sample fit and the best ES forecasts, while the skewed Student's-t delivered the best-ranked VaR forecasts under the MCS procedure.

Appendix A

The negative skewness values imply that the return distributions are more sensitive to negative shocks and that there is a higher probability of obtaining a negative return. All kurtosis values being greater than 3, the kurtosis of any univariate normal distribution, imply that the return distributions have thicker tails and sharper peaks at the centre than a normal distribution. Comparing the kurtosis and skewness of the 1-min returns (18.31108, −0.38839) with those of the daily returns (4.90965, −0.08208), both values are much higher in magnitude for the 1-min returns. This suggests that both the kurtosis and the degree of skewness increase with the frequency at which the data are recorded, confirming the findings of Andersen and Bollerslev (1998). The high kurtosis value for the 1-min returns is yet another stylised fact of high-frequency financial returns. The minimum value of the daily returns occurred during the global financial crisis. These observations further confirm the presence of several stylised facts in the return series.

To further demonstrate that both return series deviate from normality, the Jarque-Bera (JB) test is carried out and kernel estimates of the densities are inspected. The results are presented in Table A2. The p-values of the JB normality test being equal to 0 for both the 1-min and the daily return series allows us to conclude, at a 5% significance level, that these distributions do not follow a normal distribution. The same conclusion can be drawn from the kernel density estimates in Figure A1 for the 1-min returns (left) and the daily returns (right), which clearly display leptokurticity.

To determine whether the series are stationary, the Augmented Dickey-Fuller (ADF) test is carried out on both return series. If the ADF test detects a unit root, the series is non-stationary and needs differencing. Table A3 shows the ADF results: the p-values for both return series are less than 5%, allowing rejection of the null hypothesis that a unit root is present. Both return series are therefore stationary, and the order of the differencing parameter $d$ in the ARIMA($p,d,q$) model is 0 for both series.

Figure 1. Return series plot for the 1-min EUR/USD exchange rate returns.
Figure 2. Correlogram of the absolute returns for the 1-min EUR/USD returns. ACF: Auto Correlation Function.
Figure 3. ACF and PACF plots for the intraday and daily return series. PACF: Partial Auto Correlation Function.
Figure 5. Empirical density of the standardised residuals.
Figure 10. Forecasts of both VaR and ES at 99% for the MC-GARCH-GED model.
Figure A1. Kernel density estimates for the 1-min returns (left) and daily returns (right).
Table 6. Intraday VaR forecast: duration-based VaR backtesting results.
Table 7. Intraday VaR forecast: MCS results and ranking.
Table 8. Bivariate ESR backtest results.
Table 9. Exceedance residual (ER) backtest of McNeil and Frey (2000): results.
The study aims to investigate the translation of ellipsis and event reference in JK Rowling's Harry Potter and the Goblet of Fire. In this study, a qualitative content analysis method was employed. In translating ellipsis and event reference, semantic and syntactic referents should be taken into account. Concerning reference to eventualities, three forms of referents, namely verb phrase ellipsis, so anaphora and pronominal event reference, are analysed. Some adjustments, such as literal translation, explicitation, omission, and the like, are made.

I. INTRODUCTION

The term 'translation' means the reproduction in the receptor language of the closest natural equivalent of the source-language message, first in terms of meaning, and secondly in terms of style (Nida: 1982). The definition implies that in translation, the most important thing to remember is to find the nearest equivalent between the ST (Source Text) and the TT (Target Text) in terms of meaning, content or message, so that the TT reads as if it were not a translation. Style should also be taken into account. In short, meaning should be given priority, since the main purpose of translation is to convey the content of the source text into the target text with appropriate style.

In literary translation, the terms 'meaning' and 'style' play a pivotal role. Translators are required to produce translations which are as beautiful and forceful as the original, with good accuracy. Literary translations, therefore, can be said to be the works of the literary translators: their imaginative, intellectual and intuitive writings should not be lost to the abstraction which is often described as 'translation' (Bassnett: 2002). Literary translation is not only replicating a text in another verbal system of signs, but rendering an ordered sub-system of signs within a given language in a corresponding ordered sub-system within a related language. In other words, translation is not the act of transposing signs, but a revitalization of the original in another verbal order and temporal space, since when the translation has been finished, the original work still stands in its original position (Ibid, 2006: 86-87).

Therefore, in the translation of prose, including novels, there are six general rules the translator should obey in order to produce a good translation (Belloc in Bassnett, 2002): (1) the translator should not translate word by word or sentence by sentence, but should consider the work as an integral unit; (2) the translator must render idiom by idiom; (3) the translator must render intention by intention, where intention means the weight a given expression may have in a particular context in the SL that would be disproportionate if translated literally into the TL; (4) the translator should be cautious about les faux amis, those words or structures that may appear to correspond in both SL and TL but actually do not; (5) the translator is advised to 'transmute boldly', where the essence of translating is 'the resurrection of an alien thing in a native body'; and (6) the translator should never embellish.

In this matter, Belloc stresses the need for the translator to consider the prose text as a structure while taking into account its stylistic and syntactic forms.
If these six general rules are adopted by literary translators, high-quality translations are produced. In general, a high-quality translation exhibits accuracy, clarity and naturalness (Barnwell: 1984). Accuracy means that the content of the ST is properly rendered into the TT. Clarity implies that although there are several ways to express a message or idea, the best way is used, so that the translation is easily understood by the readers. Naturalness means that the message of the ST is conveyed naturally and properly, so that the translation does not feel like a translation.

In this study, the object of analysis is JK Rowling's Harry Potter and the Goblet of Fire. This novel was chosen because it is one of JK Rowling's famous works: it has been translated into many languages, including Indonesian, and has even been filmed. The study focuses on the translation of the ellipsis and event reference found in this novel. Ellipsis and event reference were chosen because, to the best knowledge of the writer, they are difficult to translate, since they involve deep interpretation.

In natural languages, a wide variety of linguistic devices are available to speakers for referring to things in a discourse (Kehler and Ward: 2006). The use of referring expressions is constrained by three sources of information: the speaker's beliefs about the knowledge of the hearer, the state of the hearer's discourse model (representing the entities and eventualities introduced), and the situational context of the discourse (including entities and eventualities within the interlocutors' perceptual spheres).

In analysing these constraints, three properties are distinguished in this study. The first distinction is based on the level of representation, namely whether a referential expression needs an antecedent of a syntactic form or merely a semantic referent. Hankamer and Sag (1976), in Kehler and Ward (2006), proposed the notions of deep and surface anaphora: surface anaphora are syntactically controlled, requiring a linguistic antecedent of a particular syntactic form, whereas deep anaphora are pragmatically controlled, the referent being induced semantically without any linguistic introduction. For example, consider the sentence A peace agreement in the Middle East needs to be negotiated, followed by (a) An agreement between India and Pakistan does too, or (b) Colin Powell volunteered to do it. The does too in (a) is surface anaphora, while the do it in (b) is deep anaphora.

The second distinction deals with semantic referents, namely whether they are old or new relative to the speaker's and hearer's beliefs. For instance, in I bought a book at the bookstore today and I bought The Handbook of Pragmatics, the phrases a book and The Handbook of Pragmatics represent discourse-new referents. But in the sequence I bought a book at the bookstore today. The book had been marked down to 99 cents, the phrase The book is discourse-old, since it was previously introduced.

The third distinction in discourse reference is salience. According to Prince (1981a), Ariel (1990), Gundel et al. (1993) and Lambrecht (1994), in Kehler and Ward (2006), salience concerns discourse-old referents and serves as an important factor in explaining the cognitive status of referents. Gundel et al. proposed a givenness hierarchy containing six categories distinguishing levels of salience that determine the forms of definite reference (of which just four are used here): (1) in-focus referents (pronouns); (2) activated referents (demonstratives: that, this, this N); (3) familiar referents (that N); and (4) uniquely identifiable referents (the N).
So far, referents have been classified in terms of three properties: the level at which they are represented in the discourse model (syntax or semantics), the information status of semantic referents (discourse-old or discourse-new), and, for discourse-old referents, their relative level of salience. Now, concerning reference to eventualities, there are four forms: gapping, verb phrase ellipsis, so anaphora and pronominal event reference.

Gapping construction. Hankamer and Sag (1976), in Kehler and Ward (2006), categorize the gapping construction as a form of surface anaphora, for example in George won the electoral vote, and Al, the popular vote. The sentence consists of two parts: the first part is called the source clause (antecedent clause) and the second the target clause (containing the elliptical or referential form). Levin and Prince (1986) differentiated between symmetric and asymmetric readings, which may result in ambiguity, for instance in Sue became upset and Nan became downright angry versus Sue became upset and Nan — downright angry. The first sentence (symmetric interpretation) shows that Sue and Nan both expressed independent emotions (from the same provocateur), whereas the second sentence (asymmetric interpretation) shows that Nan became angry because of Sue's becoming upset.

Verb phrase ellipsis construction. Hankamer and Sag (1976) categorized verb phrase ellipsis as a form of surface anaphora, licensed only if an antecedent of an appropriate syntactic form is available. In the sentence George claimed he won the election, and Al did too, the pronoun in the source clause produces two possible interpretations for the target clause: either Al claimed that George won the election or Al claimed that Al won the election. However, there are examples showing that verb phrase ellipsis is felicitous even when no appropriate syntactic antecedent is available, as in George expected Al to win the election even when he didn't, where the ellipsis he didn't refers to expect Al to win the election.

So anaphora construction. The anaphoric so can appear in preverbal and post-verbal positions, as shown in the following:

1. "… and with complete premeditation resolved that this Imperial Majesty Haile Selassie should be strangled because he was head of the feudal system." He was so strangled on Aug. 26, 1975 in his bed most cruelly.
2. Section 1 provides the examples to be derived by Gapping, and a formulation of Gapping capable of doing so. (The ellipsis doing so represents deriving the examples.)

From the two examples, the anaphor so and the form do so are treated as surface anaphors. However, the ellipsis do so can be used even when the voice of the source and target clauses is mismatched. For instance, in the sentence Section 1 provides the examples to be derived by Gapping, and a formulation of Gapping capable of doing so, no active-voice syntactic representation of the verb phrase "deriving the examples" is available in the source clause, yet doing so represents deriving the examples.
Moreover, the ellipsis do so can be used when its referent is evoked from a nominalization, for example: The defection of the seven moderates, who knew they were incurring the wrath of colleagues in doing so, signaled that… Here, doing so refers to defecting, which corresponds to the noun the defection.

Pronominal event reference construction. Pronominal event reference is represented here by do it, do this and do that, as in the following examples:

1. As they said about Ginger Rogers: "She did everything Fred Astaire did, and she did it backwards and in high heels."
2. Writing is a passion, and a film about the genesis of a writer should delve into the mind and heart of its subject; that "Becoming Colette" tries to do this is irrelevant, because it doesn't succeed.
3. So off he goes, writing in his diary the whole 3 day trip and complaining about the food and the runs I suppose, like all English people do when they go abroad, but he writes very well considering he's riding on a bumpy train, I mean he even smears his ink "once" (I'm bitter, I can't do that with 'my' fountain pens.).

The verb do in these constructions functions as the main verb, rather than as an auxiliary, and it is transitive. These forms are full verb phrases in which nothing is elided: their anaphoric properties come from the pronouns occupying the object position of the transitive main verb do in order to specify an event. The expressions are clearly forms of deep anaphora, which do not need a syntactic antecedent.

Moreover, the word that can be used to refer to different types of referents, as shown in these examples:

1. That's my brother-in-law.
2. That is a lie.
3. That's false.
4. That's a funny way to describe the situation.
5. When did that happen?

The referents in each case are (1) an entity, (2) a speech act, (3) a proposition, (4) an expressed description and (5) an event. Therefore, although the accessibility of the referent of that may be constrained by the discourse context, the type of referent is in fact relatively unconstrained.

II. METHODOLOGY

The study is conducted using a qualitative content analysis method and analyses the translation of verb phrase ellipsis, so anaphora and pronominal event reference in JK Rowling's Harry Potter and the Goblet of Fire, in order to determine whether the translator rightly identifies the proper referents.

To obtain the data, the researcher read the source text many times, identified the ellipses and references, and then found their translations in the Indonesian version. These were classified into the forms above, and any shifts or deviations were then explained.

III. FINDINGS AND DISCUSSION

As stated above, people use a wide variety of linguistic devices in their natural languages in order to refer to things. They do not use them randomly: their choices depend on their beliefs about whether the hearers have prior knowledge of the referent, whether it has been mentioned before, and whether it is situated in the surroundings of the participants (Kehler and Ward, 2006).

The findings and discussion are presented below in terms of three (of the four) forms: verb phrase ellipsis, so anaphora, and pronominal event reference.
Verb Phrase Ellipsis

Hankamer and Sag (1976) proposed a distinction between deep and surface anaphora. Surface anaphora are syntactically controlled, meaning that a linguistic antecedent of a particular syntactic form is required, whereas deep anaphora are pragmatically controlled: such an antecedent is not needed and the referent is evoked situationally. This can be seen in the following example:

(Example 1) The Hanged Man, the village pub, did a roaring trade that night; the whole village seemed to have turned out to discuss the murders. They were rewarded for leaving their firesides when the Riddles' cook arrived dramatically in their midst and announced to the suddenly silent pub that a man called Frank Bryce had just been arrested. "Frank!" cried several people. "Never!" (p.1)

The utterance "Never" in the example does not refer back to a linguistic antecedent of a particular syntactic form but depends on the situation previously described; this is deep anaphora. According to Kehler (2000a, 2002), this may be explained by the interaction between properties of verb phrase ellipsis and the inference processes underlying the establishment of coherence in discourse. The example shows that the people said "Never" because they were astonished that Frank Bryce had been arrested for the murder. The literal rendering of the verb phrase ellipsis he did it would be dia melakukan pembunuhan itu, but for the sake of naturalness, as suggested by Nida and Taber (1984), the translation is dia pelakunya, which is appropriate given that the verb phrase ellipsis occurs in an oral context. It also seems that the translator properly rendered the utterance "Never" as "Mana mungkin" instead of "Tidak pernah", showing that the translator realized the people were surprised at the arrest of Frank. Compare the referent above with the utterance presented in Example 2.

The following example shows verb phrase ellipsis of the surface anaphora type, because the ellipsis is formed on the basis of the existing appropriate syntactic antecedent, kill Bertha. In Example 3 below, the referent is clearly represented by the antecedent.

(Example 3) "Wormtail, Wormtail," said the cold voice silkily, "why would I kill you? I killed Bertha because I had to." (7)
"Wormtail, Wormtail," kata suara dingin itu licin, "buat apa aku membunuhmu? Aku membunuh Bertha karena terpaksa." (5)

As in the above cases, the translator makes a slight shift in meaning by translating I had to not as aku harus (membunuhnya) but as terpaksa (membunuhnya). Here, terpaksa means that the doer actually did not want to kill him, but circumstances forced the killing. Had the translator rendered had to as harus membunuhnya, it would imply that someone else had asked him to kill the person.

In Example 4, the verb phrase ellipsis is syntactically controlled, i.e., surface anaphora. The translation of I really don't is benar-benar tidak tahu, with omission of the subject I because of a systemic difference: English is a subject-prominent language.

Surface anaphora is also shown by the ellipsis of that + be below:

(Example 5) "Krum's one decent player, Ireland has got seven," said Charlie shortly. "I wish England had got through. That was embarrassing, that was." (39)
"Krum cuma satu pemain hebat, Irlandia punya tujuh," kata Charlie singkat. "Sayang sekali Inggris tidak berhasil lolos. Memalukan benar." (29)

The that was of the target clause is omitted in the translation; the translator presumably judged that the omission would not reduce the meaning or the message.

The following verb phrase ellipsis has a similar form to that of Example 4, but it is semantically controlled, i.e., deep anaphora.

(Example 6) "Wow — hope it does this time!" said Harry enthusiastically. "Well, I certainly don't," said Percy sanctimoniously. "I shudder to think what the state of my in-tray would be if I was away from work for five days." (40)
"Wow — mudah-mudahan kali ini juga!" kata Harry antusias. "Kuharap tidak," kata Percy sok rajin. "Aku bergidik memikirkan tumpukan surat-masukku kalau aku meninggalkan kantor selama lima hari." (29)

The translator renders I certainly don't as Kuharap tidak, a literal translation. The ellipsis can be translated literally because the reader recovers the implicit information from the antecedent "Wow — hope it does this time!"
(Example 8) "Aye," he said thoughtfully. "People from all over. Loads of foreigners. And not just foreigners. Weirdos, you know? There's a bloke walking 'round in a kilt and a poncho." "Shouldn't he?" said Mr. Weasley anxiously. (p.48)
"Ya," katanya sambil menerawang. "Orang dari segala tempat. Banyak sekali orang asing. Dan bukan cuma orang asing. Orang-orang aneh, Anda tahu? Ada yang berkeliaran memakai kilt dan ponco." "Apa itu aneh?" tanya Mr Weasley ingin tahu. (p.36)

This verb phrase ellipsis is of the deep anaphora type, referring to the previous utterance There's a bloke walking 'round in a kilt and a poncho. The ellipsis "Shouldn't he?" asks whether it is strange for a person to wander around wearing a kilt and a poncho, which is why it is translated as "Apa itu aneh?" (from Apa itu aneh kalau ada orang yang berkeliaran memakai kilt dan ponco).

Dealing with deep anaphora, the negative answer form is employed, as shown in Example 9.

(Example 9) "…down about a hundred sacks of Galleons a year!" one of them shouted. "I'm a dragon killer for the Committee for the Disposal of Dangerous Creatures." "No, you're not!" yelled his friend. "You're a dishwasher at the Leaky Cauldron… but I'm a vampire hunter, I've killed about ninety so far—" (p.78)
"Aku pembunuh naga untuk Komite Pemunahan Satwa Berbahaya." "Bohong!" teriak temannya. "Kau pencuci piring di Leaky Cauldron... tapi aku pemburu vampir, sejauh ini aku sudah membunuh sembilan puluh..." (p.57)

The verb phrase ellipsis "No, you're not" derives from the complete utterance "No, you're not a dragon killer for the Committee for the Disposal of Dangerous Creatures", so the ellipsis is categorized as surface anaphora. The translator renders "No, you're not" not as "Bukan, anda bukan" but as "Bohong", in order to give emphasis.

The last case of verb phrase ellipsis is the form Subject + will + too, as shown in Example 10.

(Example 10) Ron and Hermione looked curiously at Harry. With a meaningful look at both of them he said, "All right if I go and dump my stuff in your room, Ron?" "Yeah… think I will too," said Ron at once. "Hermione?" (93)
Ron dan Hermione memandang Harry ingin tahu. Dengan pandangan penuh arti kepada mereka, Harry berkata, "Boleh aku taruh barang-barang di kamarmu, Ron?" "Yeah... kurasa aku juga mau menaruh barang-barangku," kata Ron segera tanggap. (68)

This ellipsis is of the surface anaphora type, where the antecedent is derived from the syntactic form of the previous sentence. The verb phrase ellipsis "I will too" means I will go and dump my stuff in your room too. In the translation, the translator gives a full sentence, Yeah... kurasa aku juga mau menaruh barang-barangku, instead of "Saya juga mau", which would be an unnatural utterance in bahasa Indonesia.

So Anaphora

In terms of position, the do so construction comes in two kinds, preverbal and post-verbal. In the preverbal position, so is followed by a verb, as in Example 11, where so is followed by well protected.

(Example 11) "Laying hands on Harry Potter would be so difficult, he is so well protected—" (p.5)
"Yang Mulia, itu masuk akal," kata Wormtail, sekarang terdengar lega sekali. "Menangkap Harry Potter akan sulit sekali, dia dilindungi amat ketat..." (4)

The translation of the so anaphora so well protected is literal: the passive form is translated into a passive as well, dilindungi amat ketat in bahasa Indonesia.

Besides the pre-verbal construction shown in Example 11, a post-verbal construction of the form V + so also exists.

(Example 12) "Hope it's Angelina," said Fred as Harry, Ron, and Hermione sat down. "So do I!" said Hermione breathlessly. "Well, we'll soon know!" (170)
"Mudah-mudahan Angelina," kata Fred ketika Harry, Ron, dan Hermione duduk. "Aku juga berharap begitu!" kata Hermione menahan napas. "Yah, kita akan segera tahu!" (p.122)

The translation of So do I is Aku juga berharap begitu. The translator rendered the phrase as such a full sentence because Hermione understood that Fred really hoped something good would happen to Angelina.

In the following case, deep anaphora is adopted, where the intended referent must be inferred from the source clause.

(Example 13) He hadn't thought of that. How were the Weasleys going to pick him up? They didn't have a car anymore; the old Ford Anglia they had once owned was currently running wild in the Forbidden Forest at Hogwarts. But Mr. Weasley had borrowed a Ministry of Magic car last year; possibly he would do the same today? "I think so," said Harry. (p.25)
Harry tidak memikirkannya. Bagaimana caranya keluarga Weasley akan menjemputnya? Mereka tak lagi punya mobil. Ford Anglia tua yang pernah mereka miliki sekarang berkeliaran di Hutan Terlarang di Hogwarts. Tetapi Mr Weasley meminjam mobil Kementerian Sihir tahun lalu, mungkin hari ini juga begitu? "Kurasa begitu," kata Harry. (p.19)

The so anaphora think so is semantically controlled: its meaning derives from the preceding sentences. The translator also renders the construction literally, as Kurasa begitu.

Deep anaphora in this construction can also be seen in Example 14.
(Example 14) They did it in groups today; Harry, Ron, and Hermione (the most conspicuous, since they were accompanied by Pigwidgeon and Crookshanks) went first; they leaned casually against the barrier, chatting unconcernedly, and slid sideways through it… and as they did so, platform nine and three-quarters materialized in front of them. (102)

From the translation of they did so as berhasil masuk, it seems that the translator explicitates the so anaphora into the intended meaning of the source language, derived from its semantic sense.

The next example shows that the anaphora construction is also used to give an opinion about what has been stated by the interlocutor.

Pronominal Event Reference

Pronominal event references here take the forms that is true, that was a big thing, they can't do that, and that was a lie. These show the wide range of referent types that that can be used to refer to. In Examples 15 and 16, that refers to a speech act and an expressed description, respectively (Weber, 1991 in Kehler and Ward, 2006).

(Example 15) "That is true," said the second man, sounding amused. "A stroke of brilliance I would not have thought possible from you, Wormtail — though, if truth be told, you were not aware how useful she would be when you caught her, were you?" (p.6)
"Itu betul," kata pria yang kedua, kedengarannya geli. "Itu tindakan brilian, tak pernah terpikir olehku kau bisa melakukannya, Wormtail... meskipun, kalau mau jujur, kau tidak sadar betapa bergunanya dia ketika kau menangkapnya, kan?" (p.5)

The translation of That is true, a speech act, is Itu betul. No shift happens here, because the English utterance and its Indonesian rendering share the same intention.

(Example 16) "He'll be all right," said Mr. Weasley quietly as they marched off onto the moor. "Sometimes, when a person's memory's modified, it makes him a bit disorientated for a while… and that was a big thing they had to make him forget." (p.91)
"Dia akan baik-baik saja," kata Mr Weasley pelan sementara mereka berjalan ke tanah kosong. "Kadang-kadang, kalau memori orang dimodifikasi, untuk dia jadi bingung selama beberapa waktu... apalagi yang harus dilupakannya hal besar." (p.67)

In this example, that was a big thing is translated as hal besar. Here, that refers to a description to be conveyed, and that was is omitted in the translation; the omission does not change the meaning conveyed.

(Example 17) Dumbledore sat down again and turned to talk to Mad-Eye Moody. There was a great scraping and banging as all the students got to their feet and swarmed toward the double doors into the entrance hall. "They can't do that!" said George Weasley, who had not joined the crowd moving toward the door, but was standing up and glaring at Dumbledore.

The pronominal demonstrative that in "They can't do that!" is of the deep anaphora type. As mentioned above, it is semantically controlled, meaning that the meaning of that is derived from the previous sentences. The translation, "Tidak bisa begitu", contains no subject at all, yet this form still conveys the complete meaning of the original.
IV. CONCLUSION

From the findings and the analysis above, it can be stated that translating verb phrase ellipsis and event reference, especially in JK Rowling's Harry Potter and the Goblet of Fire, is not easy. This is because, before translating them, translators must derive meanings from the antecedents, taking into account surface and deep anaphora, new and old information, salience, and the like. Despite such difficulties, the translations of ellipsis and reference in this novel are good enough: the findings show that the translator is successful in conveying the content and message of the expressions in bahasa Indonesia with good readability and acceptability.
The Hydrothermal Breccia of Berglia-Glassberget, Trøndelag, Norway: Snapshot of a Triassic Earthquake. The quartz-K-feldspar-cemented breccia of Berglia-Glassberget in the Lierne municipality in central Norway forms an ellipsoid structure 250 m × 500 m in size. The hydrothermal breccia is barren in terms of economic commodities but famous among mineral collectors for being a large and rich site of crystal quartz of various colours and habits. Despite being a famous collector site, the mineralization is rather unique with respect to its geological setting. It occurs within Late Palaeoproterozoic metarhyolites of the Lower Allochthon of the Norwegian Caledonides, regionally isolated from any other contemporaneous hydrothermal or magmatic event. In order to better understand the formation of the Berglia-Glassberget breccia, the chemistry, fluid inclusion petrography and age of the breccia cement were determined. Structural features indicate that Berglia-Glassberget is a fault-related, fluid-assisted, hydraulic breccia which formed by single-pulse stress release during a seismic event. 40Ar-39Ar dating of the K-feldspar cement revealed a middle Triassic age (240.3 ± 0.4 Ma) for this event. The influx into the fault zone of an aqueous CO2-bearing fluid triggered the sudden fault movement. The high percentage of open space in the breccia fractures, with cavities up to 3 m × 3 m × 4 m in size, fluid inclusion microthermometry, and the trace element chemistry of quartz suggest that the breccia formed at depths between 4 and 0.5 km (1.1 to 0.1 kbar). The origin of the breccia-cementing, CO2-bearing Na-HCO3-SO4 fluid may have been predominantly metamorphic, resulting from decarbonation reactions (T > 200 °C) of limestones of the underlying Olden Nappe. The decarbonation reactions were initiated by deeply derived, hot fluids channelled to sub-surface levels by a major fault zone, implying that the breccia is situated on a deep-seated structure. Regionally, Berglia-Glassberget occurs at a supposed triple junction of long-lived fault zones belonging to the Møre-Trøndelag, Lærdal-Gjende and Kollstraumen fault complexes. These fault systems and the associated Berglia-Glassberget earthquake are an expression of rifting and faulting in northern Europe during the middle/late Triassic.

Introduction

Breccias are fragmented rocks which are commonly found in the highest, most fluid-saturated part of the crust, where brittle deformation is dominant, e.g., [1,2]. They occur across a wide range of settings: sedimentary breccia, impact breccia, fault breccia (gouge, cataclasite, pseudotachylite), hydrothermal breccia, hydrothermal-magmatic breccia, and purely magmatic breccia. The study of mineralized hydrothermal and hydrothermal-magmatic breccias has been of major interest in ore deposit research due to their potential to host economic mineralization, e.g., [3-6], whereas studies of non-mineralized breccias are scarcer. However, understanding the nature and genesis of breccias is important not only economically but also in the context of regional tectonics and earthquake prediction. Brecciated fault zones, for example, preserve a rich historical record of seismic faulting; a record that is yet to be fully studied and understood, e.g., [2,7-11]. Breccia bodies exhibit diverse features and are generally characterized by: (1) the chemistry, mineralogy and texture of the breccia cement; (2) the nature (lithology, frequency, size, habit, etc.)
of the fragments; and (3) the geometry and dimension of the breccia bodies. Genetic aspects such as the degree of involvement of magmatic processes and the depth of formation may provide additional information to define hydrothermal and hydrothermal-magmatic breccias in particular. Classifications of these breccia types commonly include both partly genetic and purely descriptive nomenclature, as discussed by [3,12,13]. In this contribution, the hydrothermal breccia of Berglia-Glassberget in Trøndelag, Norway, is studied. The Berglia-Glassberget breccia is barren in terms of economic commodities but famous among mineral collectors for being a large and rich site of high-quality crystal quartz of various colours and habits found in open cavities [14–16]. The mineralization is rather unique with respect to its geological setting: it occurs within Late Palaeoproterozoic rocks of the Lower Allochthon of the Norwegian Caledonides, regionally isolated from any other contemporaneous hydrothermal or magmatic activity. The breccia formation post-dates the Caledonian deformation, and a hydrothermal mineralization of such young age (<390 Ma) has not previously been described from central Norway. The aims of this study are to better understand the formation of the Berglia-Glassberget breccia in terms of pressure-temperature-composition (P-T-X) conditions, the origin of the breccia-cementing fluids, the age of the breccia, and the circumstances which led to the breccia formation. Finally, the results are discussed and evaluated to place the breccia-forming event in a regional context.

Regional Geology and Characteristics of the Berglia-Glassberget Breccia

Geographically, the quartz-K-feldspar-cemented breccia of Berglia-Glassberget is situated in the Lierne municipality in the Trøndelag county of central east Norway (Figure 1). The breccia is hosted by Late Paleoproterozoic mylonitic (very fine-grained), greyish to pinkish metarhyolite of the Formofoss Nappe Complex. The upper greenschist facies Formofoss Nappe Complex represents the upper unit of the Lower Allochthon of the Norwegian Caledonides, which overlies the lower greenschist facies Olden Nappe [17,18]. Both nappes form the Grong-Olden Culmination [19], with the Berglia-Glassberget breccia situated at its eastern edge. The breccia lies in the hanging wall of a gentle SE-plunging anticline formed by rocks of the Olden Nappe, a few meters above the thrust fault which separates the Olden Nappe and the overlying Formofoss Nappe (Figure 2).

(Figure 2 caption: geological map after [20], with breccia extension and sample locations.)

The breccia forms a c. 250 × 500 m, ellipsoid structure (Figure 2) comprising a dense network of randomly orientated, breccia-filled, mainly quartz-cemented and subordinately K-feldspar-cemented fractures (3 cm to 4 m wide). The breccia structure crops out at altitudes ranging from 550 m above sea level (a.s.l.) in the SW to 600 m a.s.l. in the NE. Most of the area is covered with post-glacial soil, woods and swamps. The upper part of the deposit is close to the tree line. The area of most intense brecciation is found in the SW of the structure and is referred to as the breccia center in the following. In the center, the fragmented metarhyolite is strongly silicified and dark grey to black in colour instead of pinkish grey. The borders of the breccia structure are transitional: the fractures become thinner and less common with increasing distance from the breccia center.
The randomly orientated fractures hosting the hydrothermal breccias are mainly matrix-supported, except in parts of the breccia center, with 0 to 75 vol % clasts, 25 to 100 vol % matrix, and 0 to 80 vol % open space (cavities). The lithology of the breccia fragments is exclusively metarhyolitic (monomictic), corresponding to the closest wall rock (Figure 3A,B). The size of the clasts is highly variable, ranging from millimeter- to meter-scale. Most clasts are angular, often platy-shaped due to preferential rock-splitting along foliation planes. Most of the larger cavities (>0.1 m³) collapsed shortly after formation and are filled with clay. The cavities contain quartz crystals of varying quality, mostly milky quartz with common crystal sizes of 0.5 to 5 cm. The deposit has produced large quantities of collector-quality crystal quartz specimens of different colour and habit over a period of about 100 years [14–16,29–35]. The northeastern part of the mineralization has been known by local mineral collectors since its discovery. In the 1980s and 90s, collectors such as Inge Rolvsen, Egil Skaret and Harald Kvarsvik started to take out quartz crystals for the mineral collector market. In the late 1990s, Lars Jørgensen leased the area for systematic collection of specimens. In 2005, large cavities (up to 3 m × 3 m × 4 m in size) with smoky quartz were discovered in the southwestern part of the breccia structure (Figure 3C,D and Figure 4A). Since then there has been much collecting activity by amateurs. It is still possible to find good quartz specimens in the area. Before sampling, one of the landowners, Arne Jostein Devik (adevik@online.no), has to be contacted. Despite the intense collection activity there is unfortunately very little documentation and literature about the mineralization [16]. The volume proportion of flesh-coloured K-feldspar cement (var. adularia) in the breccia veins increases from 5 vol % in the breccia center to up to 100 vol % towards the NE and SE of the mineralized area. K-feldspar crystallized prior to quartz and covers the fracture and cavity walls and breccia fragments (Figure 3A,B). In open cavities, K-feldspar may form euhedral crystals up to 1 cm in size (Figure 4B). Most of the cavities contain clear whitish to greyish crystal quartz. Smoky quartz of different shades occurs in the central part of the mineralization (Figure 4C). The largest smoky quartz crystals found were about 30 cm × 10 cm in size, and the largest clear quartz crystals are up to 10 cm × 2 cm in size. The largest quartz crystal cluster on a metarhyolite fragment, recovered from the large cavity in 2005, weighs 1.8 tonnes and is now part of the collection of the Natural History Museum of Oslo (NHM collection nr. KNR 43853). Scepter quartz and one Japanese twin crystal (Figure 4D) have been found, as well as smaller pockets filled with double-terminated morion crystals. Calcites of different shapes and colours have been found in some cavities. In addition, albite, galena, rutile and laumontite have been recorded. Albite is one of the last phases to have crystallized (up to 3 mm in size) and forms crystal lawns on quartz crystal faces. A single find of galena grown on smoky quartz has been described. According to the exposed textures, the process responsible for the formation of the Berglia-Glassberget breccia can be described as fluid-assisted hydraulic brecciation initiated by a single-pulse stress, e.g., [36].
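For scale, it is worth noting what the mapped outline implies about the fragmented rock volume. The sketch below is purely illustrative: the 250 m × 500 m ellipse is from the text, but the ~300 m vertical extent is an assumption made here, chosen only to show how the "at least 30 million m³" figure cited later in the discussion can be reproduced.

```python
# Order-of-magnitude estimate of the fragmented rock volume at Berglia-Glassberget.
# The 250 m x 500 m ellipsoid outline is from the text; the ~300 m vertical extent
# is a hypothetical value used here purely for illustration.
import math

semi_axis_a = 250.0 / 2    # m, short semi-axis of the mapped ellipse
semi_axis_b = 500.0 / 2    # m, long semi-axis of the mapped ellipse
assumed_depth = 300.0      # m, assumed vertical extent (not from the paper)

ellipse_area = math.pi * semi_axis_a * semi_axis_b   # ~9.8e4 m^2
volume = ellipse_area * assumed_depth                # ~2.9e7 m^3

print(f"ellipse area ≈ {ellipse_area:,.0f} m², volume ≈ {volume:.2e} m³")
# -> ~2.95e+07 m³, i.e., the "at least 30 million m³" order of magnitude
```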
The Berglia-Glassberget breccia has not been affected by Caledonian deformation and thus post-dates the Caledonian orogeny. Of particular interest is the fact that there is no other known tectonic, hydrothermal or magmatic activity in the area which can be related to the breccia formation.

Sampling

The sampling was performed in 2012 and covered the entire brecciated area (Figure 2). The samples include breccia-related quartz crystals and K-feldspar, bulk rock samples of the host rock, and samples exhibiting representative breccia textures. The quartz samples were prepared as: (1) double-polished wafers (c. 250 µm) for fluid inclusion microthermometry; and (2) polished thick sections (c. 300 µm thick) glued onto standard 2.8 cm × 4.8 cm glass slides for laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) analysis. The bulk compositions of host rocks and K-feldspar were determined at ACME laboratories in Vancouver, Canada [37].

Scanning Electron Microscope Cathodoluminescence

The quartz crystals were studied with scanning electron microscope cathodoluminescence (SEM-CL) to visualize intra-granular growth zoning and alteration structures and to choose areas for LA-ICP-MS analysis. SEM-CL imaging reveals micro-scale (1 µm to 1 mm) growth zoning, alteration structures and different quartz generations which are not visible with other methods. Grey-scale contrasts visualized by SEM-CL are caused by the heterogeneous distribution of lattice defects (e.g., oxygen and silicon vacancies, broken bonds) and lattice-bound trace elements, e.g., [38]. Although the physical background of quartz CL is not fully understood, the structures revealed by CL give information about crystallisation, deformation and fluid-driven overprinting. The CL detector used was a Centaurus BS bialkali type attached to a LEO 1450VP analytical SEM based at the Geological Survey of Norway in Trondheim, Norway. The applied acceleration voltage and current at the sample surface were 20 kV and 2 nA, respectively. The SEM-CL images were collected in one scan of 43 s photo speed at a processing resolution of 1024 × 768 pixels and 256 grey levels. The brightness and contrast of the collected CL images were improved with Photoshop software.

Fluid Inclusion Microthermometry

Fluid inclusion wafers were prepared from 2–3 cm long, euhedral quartz crystals originating from different cavities across the deposit. Microthermometric measurements of 23 fluid inclusions were performed on a calibrated Linkam MDSG 600 heating/freezing stage at NTNU Trondheim, Norway, using the software Linksys 32. All inclusions were rapidly cooled to −196 °C and held for one minute, then heated at a rate of 30 °C/min and held for one minute to ensure sufficient undercooling. A stepwise heating procedure was then performed, during which the temperature was held for one minute at −50.0 °C and raised at 1 °C/min until ice melting was imminent. The inclusion was then cooled a few degrees to recrystallize the ice in order to observe the final melting temperature more accurately. In cases where Tmice was lower than the holding temperature, the inclusion was frozen again and the holding temperature was adjusted.

Laser Ablation Inductively Coupled Plasma Mass Spectrometry

Concentrations of Li, Be, B, Mn, Ge, Rb, Sr, Na, Al, P, K, Ca, Ti, and Fe were determined by LA-ICP-MS at the Geological Survey of Norway in Trondheim.
Five quartz crystals (1 to 3 cm in length) were cut centrally, parallel to the c-axis, and prepared as surface-polished, 300-µm-thick sections mounted on standard glass slides or embedded in epoxy in 25.4 mm diameter sample mounts. The analyses were undertaken on a double-focusing, high-resolution sector field ICP-MS (model ELEMENT XR, Thermo Scientific) at the Geological Survey of Norway in Trondheim, Norway. The instrument is linked to a New Wave Excimer UP193FX ESI laser probe. The 193 nm laser had a repetition rate of 15 Hz, a spot size of 75 µm, and an energy fluence of about 5 to 6 J/cm² on the sample surface. A continuous raster ablation over an area of approximately 150 µm × 300 µm was applied. The approximate depth of ablation was between 10 and 50 µm. A Hitachi CCD video camera (type KP-D20BU) attached to the laser system was used to observe the laser ablation process and to avoid micro-scale mineral and fluid inclusions. The carrier gas for transport of the ablated material to the ICP-MS was He mixed with Ar. The isotope 29Si was used as the internal standard, applying the stoichiometric concentration of Si in SiO2. External multi-standard calibration was performed using silicate glass reference materials produced by the National Institute of Standards and Technology, USA (NIST SRM 610, 612, 614, and 616). In addition, the applied standards included the NIST SRM 1830 soda-lime float glass (0.1% m/m Al2O3), the certified reference material BAM No. 1 amorphous SiO2 glass from the Federal Institute for Materials Research and Testing in Germany, and the Qz-Tu synthetic pure quartz monocrystal provided by Andreas Kronz from the Geowissenschaftliches Zentrum Göttingen (GZG), Germany. Certified, recommended and proposed values for these reference materials were taken from Jochum et al. [39] and from the certificates of analysis where available. For the calculation of P concentrations, the procedure of Müller et al. [40] was applied. Each measurement comprised 15 scans of each isotope, with the measurement time varying from 0.15 s/scan for K in medium mass resolution mode to 0.024 s/scan for, for example, Li in low mass resolution mode. An Ar blank was run before each reference material and sample measurement to determine the background signal. The background was subtracted from the instrumental response of the reference material/sample before normalization against the internal standard, in order to avoid effects of instrumental drift and memory effects between samples. A weighted least squares regression model, including several measurements of the six reference materials, was used to define the calibration curve for each element. Ten sequential measurements on the BAM No. 1 SiO2 glass were used to estimate the limits of detection (LOD), which were based on 3 × the standard deviation (3sd) of the 10 measurements.

40Ar/39Ar Dating of K-Feldspar

The K-feldspar sample (12071216; for origin see Figure 2) was crushed, ground and subsequently sieved to obtain a 180–250 µm fraction. The fraction was washed in acetone and deionized water several times and finally handpicked under a stereomicroscope. Mineral grains with coatings or inclusions were avoided. The sample was packed in aluminium capsules together with the Taylor Creek Rhyolite (TCR) flux monitor standard along with pure (zero-age) K2SO4 and CaF2 salts. The sample was irradiated at IFE (Institutt for Energiteknikk, Kjeller, Norway) for c.
140 h with a nominal neutron flux of 1.3 × 10¹³ n cm⁻² s⁻¹. The correction factors for the production of isotopes from Ca were determined to be (39Ar/37Ar)Ca = (3.07195 ± 0.00784) × 10⁻³ and (36Ar/37Ar)Ca = (2.9603 ± 0.026) × 10⁻⁴, and (40Ar/39Ar)K = (1.3943045 ± 0.0059335) × 10⁻¹ for the production from K (errors quoted at 1sd). The sample was put in a 3.5 mm pit-size aluminium sample disk and step-heated using a defocused 3.5 mm laser beam with a uniform energy spectrum (Photon Machines Fusions 10.6, at the Geological Survey of Norway, Trondheim). The extracted gases from the sample cell were expanded into a two-stage, low-volume extraction line (c. 300 cm³), both stages equipped with SAES GP-50 (st101 alloy) getters, the first running hot (c. 350 °C) and the second running cold. They were analyzed with an automated Mass Analyzer Products Limited (MAP) 215-50 mass spectrometer in static mode, installed at the Geological Survey of Norway. The peaks and baseline (AMU = 36.2) were determined during peak hopping for 10 cycles (15 integrations per cycle, 30 integrations on mass 36Ar) on the different masses (41–35Ar) on a Balzers electron multiplier (SEV 217, analogue mode) and regressed back to zero inlet time. Blanks were analyzed every third measurement. After blank correction, corrections for mass fractionation, 37Ar and 39Ar decay, and neutron-induced interference reactions produced in the reactor were applied using the in-house software AgeMonster, written by M. Ganerød. It implements the equations of McDougall and Harrison [41] and the newly proposed decay constant for 40K of Renne et al. [42]. A 40Ar/36Ar ratio of 298.56 ± 0.31 from Lee et al. [43] was used for the atmospheric argon correction and mass discrimination calculation using a power law distribution. We calculated J-values relative to an age of 28.619 ± 0.036 Ma for the TCR sanidine flux monitor [42]. We define a plateau according to the following requirements: at least three consecutive steps overlapping at the 95% confidence level (1.96σ) using the strict test, ≥50% cumulative 39Ar released, and a mean square of weighted deviates (MSWD) less than the two-tailed Student's t critical value for n − 1 degrees of freedom. Weighted mean ages were calculated by weighting on the inverse of the analytical variance.

Secondary Ion Mass Spectrometry

We conducted δ18O profiles on the three quartz crystals 12071206, 12071215 and 12071217; each crystal was embedded in epoxy in its own individual 25.4 mm diameter sample mount, and the given crystal was sectioned by polishing. Surface quality was judged using an optical microscope at high magnification, and the total roughness of the sample surfaces was judged to be no more than a couple of micrometers. Thus, along with a separate sample block for calibration materials, four such mounts were employed for our oxygen isotope determinations. Prior to conducting the δ18O analyses, each crystal was imaged in monochromatic cathodoluminescence, after which the sample mount was ultrasonically cleaned in high-purity ethanol, coated with a 35 nm thick, high-purity gold film, and placed in a low magnetic susceptibility sample holder, held in place using brass tension springs. Our secondary ion mass spectrometry (SIMS) analyses were conducted using the Cameca 1280-HR instrument in Potsdam, using a ~2 nA Gaussian 133Cs+ primary beam that was focused to a ~5 µm diameter on the sample's surface. Each analysis was preceded by a 70 s preburn using a 20 µm raster.
Low-energy electron flooding was used for charge compensation, with the total electron current <1 µA. During data acquisition, our primary beam was rastered over a 10 µm × 10 µm area in order to reduce isotopic drift during the analysis; this rastering was compensated using the dynamic transfer capability of the 1280-HR's extraction optics. Our mass spectrometer was operated in static multi-collection mode with a mass resolution of M/∆M ≈ 1900 at 10% peak height, which is adequate to remove all significant isobaric interferences. The 16O− count rate was measured using our L2' Faraday cup in conjunction with a 10¹⁰ Ω resistor, whereas the 18O− count rate was measured using our H2' Faraday cup in conjunction with a 10¹¹ Ω resistor. An analysis consisted of 20 integrations each lasting 4 s; hence a single δ18O analysis, including preburn and automated centring routines, took nearly 3 min. The total sampling mass consumed during data collection was ~260 pg, as determined by volume estimates based on white-light profilometry. In Figure 5, a 3D surface topographic map of a sputtering crater is shown. All data were collected during a single session lasting 6.3 h, during which we acquired 37 determinations on our quartz samples in addition to 47 calibration runs. We used the quartz sand reference material NBS28 in order to calibrate our absolute δ18O values; this material has a published value of δ18O(VSMOW) = 9.57 ± 0.10‰ (1sd) [44]. We used an absolute value for the zero-point on the Vienna Standard Mean Ocean Water (VSMOW) delta scale set at 18O/16O = 0.00200520 [45]. A total of 31 determinations were made on NBS28. We also checked for the presence of analytical drift by analysing a piece of NIST SRM 610 silicate glass a total of 16 times; this glass is believed to be homogeneous in its δ18O composition at the sub-nanogram scale. The piece of polished NIST SRM 610 was embedded in epoxy alongside the NBS28 quartz reference material. No analytical drift could be detected during the course of our run, reflected by an analytical repeatability for the determined 18O−/16O− ratio of ±0.095‰ (1sd) on the glass. Our analytical series consisted of multiple analyses on the reference sample mount, followed by a full profile on the 12071206 quartz, followed by multiple analyses again on the reference mount, and so on, until all three quartz crystals had been analyzed. Within-run uncertainties based on the standard error of the 20 four-second integrations were typically around ±0.05‰, whereas the analytical repeatability on the NIST SRM 610 glass was ±0.095‰. An additional source of uncertainty in the δ-values obtained on our three quartz crystals is the assigned uncertainty of ±0.10‰ (1sd) given on the NBS28 reference sheet (IAEA, 2007). Combining all of these uncertainty sources, we estimate the total analytical uncertainty on our individual δ18O determinations to be circa ±0.14‰ (1sd). The repeatability of the N = 31 determinations on the NBS28 quartz sand was ±0.32‰ (1sd), which is consistent with the level of heterogeneity reported previously for this material when evaluated at the sub-nanogram sampling scale [46].
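The quoted total uncertainty follows from a simple root-sum-of-squares of the three independent sources listed above. The minimal sketch below, using only the values stated in the text, reproduces the circa ±0.14‰ figure to within rounding of the inputs.

```python
# Quadrature combination of the three uncertainty sources quoted for a single
# SIMS delta-18O determination (all values in permil, 1sd, taken from the text).
import math

within_run    = 0.05   # standard error of the 20 four-second integrations
repeatability = 0.095  # analytical repeatability on NIST SRM 610 glass
reference     = 0.10   # assigned uncertainty of the NBS28 reference value

total = math.sqrt(within_run**2 + repeatability**2 + reference**2)
print(f"total ≈ ±{total:.2f} permil (1sd)")
# -> ≈ ±0.15 permil, consistent with the "circa ±0.14‰" quoted given rounding
```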
Chemistry of the Breccia Host Rock

The pinkish grey, very fine-grained metarhyolite which hosts the breccia shows a strong mylonitic foliation. Macroscopically, feldspar, quartz and biotite can be identified. Recrystallized biotite is strongly elongated and stretched parallel to the foliation plane. Five samples of the metarhyolite were selected for major and trace element analysis, performed at ACMELabs [37] (Table 1). On the total alkali oxide vs. silica (TAS) diagram (Figure 6A), the unaltered metarhyolites fall in the rhyolite field, but sampling over a larger area has shown that some of the felsic rocks fall in the trachyte field [23]. Compositions overlap with the undeformed Blåfjellhatten granite exposed in the adjacent (2 km to the west) Olden Nappe [24]. Two metarhyolite samples (12071212, 12071213) have very high SiO2 (>93 wt %) due to intense silicification caused by infiltration of the breccia-cementing fluids. The occurrence of silicified metarhyolite is limited to the breccia center (Figure 2). On the basis of molar ratios, the unaltered samples are classified as weakly peraluminous I-type granitoids, as is the spatially related Blåfjellhatten granite [24] and the associated felsic volcanic rocks [23] (Figure 6B). The high-K, alkali Berglia-Glassberget metarhyolite has an A-type composition according to Whalen et al. [47] (Figure 6C). The Nb-Y and Rb-(Y + Nb) diagrams (not shown) of Pearce et al. [48] both show that the Berglia-Glassberget metarhyolite plots in the field for within-plate granites. Within-plate granites are equivalent to the A-type, that is, anorogenic granites, e.g., [47].
The relatively high large-ion-lithophile and high-field-strength element abundances in the metarhyolite analyses confirm the A-type characteristics. For further chemical subdivision of the anorogenic granitoids, the diagrams devised by Eby [49] are useful in pointing to the likely source of the magmas. The Rb/Nb vs. Y/Nb plot (not shown), for example, classifies the Berglia-Glassberget metarhyolite in the field (A2) for magmas derived by partial melting of a continental crust that had probably been through a cycle of subduction-zone magmatism [49]. In the upper-continental-crust-normalized diagram of selected incompatible elements (Figure 6D), K, Ba and Rb in the silicified samples are extremely depleted together with Th, Nb and Ta, indicating that mainly K-feldspar was leached out by the breccia-forming fluid. These elements are enriched in the unaltered samples compared to the average upper crust composition. The adjacent Blåfjellhatten granite has a similar incompatible element signature, suggesting a genetic relationship; the Berglia-Glassberget metarhyolite possibly represents the volcanic expression of the Blåfjellhatten granite. In summary, the geochemical data show that the Berglia-Glassberget metarhyolite magma derived from a fairly, but not markedly, evolved crustal source in a continental, extensional and probably anorogenic setting, which is typical for TIB granites.

Chemistry and Age of the K-Feldspar Cement

The average composition of the K-feldspar cement of the Berglia-Glassberget breccia is given in Table 2. The K-feldspar has a relatively pure orthoclase composition, An1.6Ab0.7Or97.7, with high Rb (792 µg/g), Ba (1937 µg/g) and Th (77 µg/g) and relatively low Sr (70 µg/g). The trace element concentrations are generally higher than those of low-temperature K-feldspar from other hydrothermal vein deposits [52,53]. The composition is, however, similar to that of K-feldspar found in moderately evolved granitic rocks, e.g., [54]. The main results of the 40Ar/39Ar dating of the K-feldspar cement, together with the age spectrum and inverse isochron plots, are displayed in Table 3 and Figure 7. The raw degassing experiments, corrected for blanks, can be found in the Supplementary Material Table SM01. The degassing pattern shows climbing apparent ages in the first part of the heating experiment and stabilizes into a plateau from steps 19–25 (Figure 7A). From those steps, an inverse-variance weighted mean plateau age (WMPA) of 240.3 ± 0.4 Ma (MSWD = 0.589, P = 0.739) is calculated. The same steps yield an inverse isochron age (Figure 7B) of 240.2 ± 3.9 Ma (MSWD = 0.765, P = 0.575) with a trapped 40Ar/36Ar ratio of modern atmospheric value [43]; thus, an excess 40Ar component in the spectrum can be ruled out. Given the similarity between the apparent ages from the WMPA and the inverse isochron determinations, we interpret the WMPA as the best age estimate for these K-feldspars.
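For readers unfamiliar with the plateau-age statistics described in the methods, the sketch below shows how an inverse-variance weighted mean age and its MSWD are computed from a set of plateau steps. The step ages used here are made-up placeholders for illustration only, not the actual degassing data of Table SM01.

```python
# Minimal sketch of an inverse-variance weighted mean plateau age and MSWD.
# The (age, 1sd) pairs below are hypothetical placeholder steps, not real data.
import math

steps = [(240.1, 0.6), (240.4, 0.5), (240.2, 0.7), (240.5, 0.6), (240.3, 0.5)]  # Ma

weights = [1.0 / s**2 for _, s in steps]                       # inverse variance
wmean = sum(w * a for (a, _), w in zip(steps, weights)) / sum(weights)
wmean_sd = math.sqrt(1.0 / sum(weights))                       # 1sd of the mean

# MSWD: chi-square of the residuals divided by n - 1 degrees of freedom;
# values near 1 indicate scatter consistent with the analytical errors.
mswd = sum(((a - wmean) / s) ** 2 for a, s in steps) / (len(steps) - 1)

print(f"WMPA = {wmean:.1f} ± {wmean_sd:.1f} Ma, MSWD = {mswd:.2f}")
# -> a weighted mean near 240.3 Ma with MSWD well below 1 for these placeholders
```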
SEM-CL and Chemistry of the Quartz Cement

In SEM-CL images, quartz crystals of the breccia cement show complex intra-granular growth structures (Figure 8). All investigated crystals have a relatively homogeneous, brightly luminescent core with only a few oscillatory zones of low contrast. Oscillatory growth zones with less intense CL become increasingly prominent towards the grain margin. The outermost growth zones are almost non-luminescent and appear black in SEM-CL images. Sector zoning is typically developed (Figure 8E). Sector zoning refers to a compositional difference between coeval growth sectors in a crystal and results from differences in fluid-crystal element partitioning between nonequivalent faces of the crystal. The growth and sector zoning, generally considered primary structures, are superimposed by dull to non-luminescent healed microfractures associated with patchy domains of recrystallized quartz. The patchy domains are abundant in sample 12071217, occupying about one third of the crystal volume (Figure 8D). These superimposing textures are considered secondary quartz and formed after the primary quartz crystallization. Trace elements of five quartz crystals (1 to 3 cm in length) originating from five different cavities were analyzed by LA-ICP-MS. Six analyses were performed on each crystal (Table 4, Figure 8). Aluminium concentrations show an extreme data spread, ranging from <4 to 2471 µg/g. The spread within one crystal can be as large as the concentration range of the entire data set (sample 12071218). Aluminium concentrations >1000 µg/g have been described previously from hydrothermal quartz but appear uncommon in the published data [55–59]. In addition to Al, the studied quartz is strongly enriched in Li. Concentrations vary from <1 µg/g to 319 µg/g; the highest Li is observed in sample 12071218. The Li enrichment is stronger than, for example, in quartz of Li-enriched granites of the Land's End pluton in Cornwall, UK [60] and of Li-rich pegmatites of the Tres Arroyos granite-pegmatite suite, Spain [61] (Figure 9A). In the Al versus Li plot of Figure 9A, the data show a weak positive correlation due to the substitution Si4+ ↔ Al3+ + Li+, where Al replaces Si in the tetrahedral site and Li+ enters an interstitial lattice position, e.g., [62]. Higher Li values plot close to the 2:1 Al:Li atomic ratio line, indicating that about half of the Al3+ defects are charge-compensated by Li+; alternative charge compensators are H+, Na+, or K+.
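Because the reported concentrations are mass-based (µg/g) while the 2:1 line refers to an atomic ratio, the comparison requires a molar conversion. The short sketch below makes that conversion explicit for the maximum values quoted above.

```python
# Converting Al and Li mass concentrations (ug/g) in quartz into an atomic
# Al:Li ratio, to compare the highest reported values with the 2:1 line.
M_AL, M_LI = 26.98, 6.94   # molar masses, g/mol

def atomic_ratio(al_ppm: float, li_ppm: float) -> float:
    """Atomic Al/Li ratio from mass concentrations in ug/g."""
    return (al_ppm / M_AL) / (li_ppm / M_LI)

# Maxima quoted in the text (sample 12071218):
print(f"Al/Li (atomic) ≈ {atomic_ratio(2471, 319):.1f}")
# -> ≈ 2.0, i.e., right on the 2:1 Al:Li line discussed above
```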
Non-luminescent secondary quartz, which is most prominent in sample 12071217, is strongly depleted in both Al and Li, whereas growth zones with bright CL have high concentrations of both elements. The correlation of high Al and Li with bright CL has been described previously from hydrothermal quartz by Ramseyer and Mullis [63] and Jourdan et al. [59]. Manganese concentrations vary between <0.2 and 0.6 µg/g and do not correlate with Al or Li. Strontium concentrations range from <0.03 to 0.18 µg/g. Germanium is mostly below the LOD of 0.07 µg/g, except for sample 12071215, where Ge values rise up to 15 µg/g. Titanium concentrations are consistently below the LOD of 0.5 µg/g. For illustration, the Ti concentrations (≤0.5 µg/g; hatched area in Figure 9B) are plotted in the logarithmic Ti versus Al diagram of Rusk [58] for comparison with data from c. 30 porphyry-type, orogenic Au, and epithermal deposits (colour-shaded fields in Figure 9B). The diagram is used to fingerprint the type of ore deposit based on the trace element composition of quartz. The data from Berglia-Glassberget overlap with epithermal deposits, which confirms the hydrothermal nature of the Berglia-Glassberget breccia.

(Figure 9 caption: (A) Al versus Li plot of quartz of the Berglia-Glassberget breccia; the grey-shaded field shows data from Li-rich pegmatites of the Tres Arroyos suite [61] and the blue-shaded area data from the Li-bearing Land's End pluton in Cornwall (UK) [60]. (B) Logarithmic Ti versus Al plot of quartz of the Berglia-Glassberget breccia compared with data from Rusk [58], including c. 30 porphyry-type (Cu-Mo-Au) deposits, orogenic Au deposits, and epithermal deposits; the hatched field corresponds to the Berglia-Glassberget dataset because the Ti contents are below the detection limit of 0.5 µg/g. For symbol explanation see (A).)

Petrography and Microthermometry of Fluid Inclusions

About 150 inclusions in quartz crystals from five different cavities of the Berglia-Glassberget breccia were examined. Microthermometric measurements were performed on 23 representative fluid inclusions, based on the observation that only three genetically related types of pseudosecondary fluid inclusion were identified (Table 5; secondary inclusions are not considered):

Type I: Inclusions occur in small three-dimensional clusters or in short trails, preferentially in the cores of euhedral crystals. For these inclusions a pseudosecondary origin is suggested [64]. The aqueous fluid of the inclusions contains two-phase bubbles (liquid and gaseous CO2) occupying 25–40 vol % of the inclusion at room temperature (Figure 10A). The presence of CO2 was confirmed by Raman spectrometry performed at the German Research Centre for Geosciences (GFZ) in Potsdam, Germany (Rainer Thomas, personal communication). Type I inclusions contain several types of relatively large solids (5–25 µm), most of which have been identified as nahcolite and alkali sulphates by Raman spectroscopy. The group shows a narrow range of ice melting temperatures, from −0.4 to −0.6 °C, and of liquid homogenization, from 242 to 248 °C. First melting was observed at −24 to −22 °C, implying some K+ in addition to Na+ [65]. Salinities are low, 0.7 to 1.0 wt % NaCl equivalent.
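The conversion from final ice melting temperature to salinity follows standard H2O-NaCl systematics. The paper used the HokieFlincs_H2O-NaCl spreadsheet [66]; the stand-alone sketch below uses the widely cited Bodnar (1993) polynomial instead, which reproduces the reported salinities to within about 0.1 wt % for CO2-free behaviour (it does not account for clathrate effects).

```python
# Salinity (wt % NaCl equivalent) from the final ice melting temperature,
# using the Bodnar (1993) equation for the H2O-NaCl system. This is an
# approximation for inclusions without clathrate complications.

def salinity_wt_pct_nacl(tm_ice_c: float) -> float:
    """Salinity (wt % NaCl equiv.) from final ice melting T (deg C, <= 0)."""
    t = abs(tm_ice_c)  # freezing-point depression
    return 1.78 * t - 0.0442 * t**2 + 0.000557 * t**3

for tm in (-0.4, -0.6, -4.6):
    print(f"Tm(ice) = {tm:+.1f} °C -> {salinity_wt_pct_nacl(tm):.1f} wt % NaCl eq.")
# -> ~0.7, ~1.1 and ~7.3 wt %, matching the ranges reported for the
#    type I and type II inclusions to within rounding
```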
Assuming a pressure during breccia formation of 1.1 to 0.1 kbar, the trapping temperatures of these fluid inclusions would be between 247 and 329 °C (utilizing the HokieFlincs_H2O-NaCl spreadsheet of Steele-MacInnis et al. [66]).

Type II: Inclusions occur in short trails or in lineations ending at intra-granular growth zones. For these inclusions an early pseudosecondary origin is suggested [64]. These inclusions contain smaller bubbles than type I inclusions and one or two solids at room temperature (Figure 10B). The solids were identified by Raman spectrometry as nahcolite and sulphates (Rainer Thomas, personal communication). Th and Te vary between 203 and 214 °C and between −29 and −24 °C, respectively. The ice melting temperatures range from −4.6 to −0.4 °C, and the calculated salinities (according to Steele-MacInnis et al. [66]) vary from 0.4 to 7.3 wt % NaCl equivalent. One inclusion displayed clathrate formation, with a melting temperature of 2.2 °C.

Type III: Inclusions contain aqueous liquid and vapor, and occasionally needle-like solids identified as nahcolite (Figure 10C). They occur in short trails ending at intra-crystal growth zones. These inclusions are also interpreted to be pseudosecondary [64]. Type III has Th from 136 to 189 °C. Tm shows a wide range, from −8.9 to +19.2 °C, due to clathrate formation in some inclusions. The ice melting temperatures for non-clathrate-forming inclusions range from −8.9 to −0.2 °C, and the clathrate melting temperatures range from 2.2 to 19.2 °C. Salinities of non-clathrate-forming inclusions vary from 0.4 to 12.7 wt % NaCl equivalent. Two clathrate melting temperatures exceed the maximum temperature of 10 °C in the H2O-CO2 system; these may reflect clathrate metastability or the presence of other gases, e.g., CH4 [67].

Summarizing, the breccia was cemented by K-feldspar and quartz which crystallized from an aqueous, CO2-bearing Na-HCO3-SO4 fluid with very minor Cl. However, primary fluid inclusions could not be identified. The crystallization temperature consistently decreases, and the salinity consistently increases, from type I to type III inclusions.

Table 5. Summary of petrographic and microthermometric data of pseudosecondary fluid inclusions in quartz crystals from the Berglia-Glassberget breccia. Te: temperature of first melting; Tmice: temperature of final ice melting; Th: temperature of homogenization.

Oxygen Isotope Geochemistry of Quartz

The results of the SIMS oxygen isotope measurements are provided in Table 6, and the full SIMS data set, including the results of the calibration runs, is given in the electronic Supplementary Materials Table S1. Profiles of δ18O values across three selected quartz crystals (12071206, 12071215, 12071217) are plotted in Figure 11A–C, and the locations of the measurement spots are indicated in Figure 8. Figure 11D–F illustrate the δ18O variation in the form of histograms. The measured δ18O values display a large variation, from 0.9 to 12.2‰. The highest δ18O values are observed across the entire crystal 12071215 from the breccia center (mean 10.8 ± 1.1‰; n = 12) and in the late secondary, non-luminescent quartz overprinting crystal 12071217 (mean 9.8 ± 0.5‰; n = 6).
Average isotope ratios of crystals from the breccia margin are much lower: 3.8 ± 2.4‰ δ18O for crystal 12071206 and 2.4 ± 0.7‰ δ18O for 12071217 (excluding the δ18O values of secondarily formed quartz). Thus, the δ18O values in quartz decrease considerably from the center towards the margin of the breccia. In contrast to the relatively consistent isotope ratios of primarily precipitated quartz in crystals 12071215 and 12071217 (not considering the secondary quartz in crystal 12071217), the δ18O values in 12071206 display two compositional steps (Figure 11C). The first step (in the growth direction), from 5.6 ± 0.4‰ to 1.3 ± 0.7‰, corresponds to the transition from the homogeneous crystal core to the crystal margin rich in oscillatory growth zoning, as documented in the CL images (Figure 8B). The second step is marked by the outermost, non-luminescent growth zone, in which the isotope ratio increases abruptly to 6.6 ± 0.3‰. In addition to the documented large-scale δ18O variations controlled by the quartz-forming fluid, there is a subordinate systematic difference between adjacent crystal growth faces. Different growth faces are visualized in CL in the form of sector zoning (Figure 8). Examples of this variation are displayed at the bottom of crystal 12071215 (Figure 8C), where δ18O values in one zone are 10.5 ± 0.5‰ (n = 3) and in the adjacent zone 11.9 ± 0.1‰ (n = 2), and at the top of crystal 12071217 (Figure 8D), where one sector zone has 1.3‰ and the neighboring zone 2.6 ± 0.7‰ (n = 2). The observed δ18O differences between adjacent crystal faces are between 0.6 and 2.1‰, in the same range as the δ18O differences described from hydrothermal quartz by Onasch and Vennemann [68] and Jourdan et al. [59]. No correlation was found between δ18O values and quartz trace element contents in the three quartz crystals that were analyzed.

Processes Responsible for the Formation of the Berglia-Glassberget Breccia

The structural characteristics described above classify the Berglia-Glassberget mineralization genetically as a fault-related, fluid-assisted hydraulic breccia formed by a single-pulse stress, typical of upper crustal levels, e.g., [1,36]. Such fault rocks represent implosion breccias, formed by the 'sudden creation of void space and fluid pressure differentials at dilational fault jogs during earthquake rupture propagation' [2]. This implosion commonly forms fitted-fabric to chaotic fault breccias, cemented by hydrothermal fluids, e.g., [36,69–74]. The implosion hypothesis envisages that faulting-induced voids are transient and filled coseismically by a dilated mass of rock fragments. The dilation breccias probably formed during a seismic event of strike-slip displacement on an irregular fault. The influx of fluids into fault zones can trigger short-term weakening mechanisms that facilitate fault movement and earthquake nucleation by reducing the shear stress or the frictional resistance to slip, e.g., [75]. Crustal fluids can be trapped by low-permeability sealed fault zones or stratigraphic barriers. The development of fluid overpressure at the base of the fault zone can help to facilitate fault slip, which was likely the cause of the Berglia-Glassberget breccia formation. Once the barrier is ruptured by an earthquake, permeability increases and fluids are redistributed from high- to low-pressure areas. In this model, a fault is considered to act as a valve [76,77].
The seismic energy released by brittle failure led to the rapid, seconds-long fragmentation and dilation of at least 30 million m³ of metarhyolitic rock at Berglia-Glassberget. Hydraulic fracturing was mainly responsible for the rock fragmentation. The breccia was resealed soon after the seismic event by quartz and K-feldspar cement. The very high Al and Li concentrations in the quartz cement suggest disequilibrium growth, meaning that the quartz crystallized very fast, perhaps within hours or days, so that impurities were incorporated in large quantities as defect clusters during rapid crystal growth. The resealing strengthened the fractured rock, so that rebrecciation of the same volume is uncommon, except where repeated dilational strains are focused at fault bends or jogs [11,78]. The Berglia-Glassberget breccia does not show evidence of rebrecciation or persistent refracturing and was formed by a single seismic event. The single event of brecciation is evident from the occurrence of only one quartz cement generation, as documented by similar trace element chemistry, CL structures and fluid inclusion assemblages. However, the nature of the regional fault structure remains unclear because of the overburden covering large areas of the studied locality (see also the discussion below). The proportion of open space (cavities) in the brecciated fractures is up to 80 vol % in the breccia center. This high percentage of open space indicates that the hydrostatic pressure was much greater than the lithostatic pressure. In fact, the lithostatic pressure was so low that large cavities, up to 3 m × 3 m × 4 m in size, remained stable after the seismic event. The characteristics described above suggest that the brecciation took place at a maximum depth of about 4 km [1].
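The depth bracket maps onto the pressure bracket quoted elsewhere in the paper through the usual lithostatic relation P = ρgh. The sketch below assumes a mean overburden density of 2700 kg/m³ (an assumption made here; the paper only quotes the resulting pressures) and recovers the rounded 0.1 to 1.1 kbar range.

```python
# Rough lithostatic pressure for the quoted depth bracket, assuming an
# average crustal density of 2700 kg/m^3 (assumed, not stated in the paper).
RHO = 2700.0   # kg/m^3, assumed mean overburden density
G = 9.81       # m/s^2

def lithostatic_kbar(depth_km: float) -> float:
    """Lithostatic pressure in kbar at a given depth (1 kbar = 1e8 Pa)."""
    return RHO * G * depth_km * 1000.0 / 1.0e8

for d in (0.5, 4.0):
    print(f"{d:>3} km -> {lithostatic_kbar(d):.2f} kbar")
# -> ~0.13 and ~1.06 kbar, matching the rounded 0.1-1.1 kbar
#    (0.01-0.11 GPa) range used in the text
```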
However, post-late-Triassic erosion in the study area did not exceed 2 km, as indicated by regional fission track analyses, e.g., [79]; thus, the formation depth of the Berglia-Glassberget breccia was presumably much less than 4 km.

Characteristics and Origin of the Breccia-Forming Fluids

The observed fluid inclusion assemblage characterizes the breccia-forming fluid as an aqueous, CO2-bearing Na-HCO3-SO4 fluid with very minor Cl. The minimum temperature of the breccia-cementing fluid was in the range of 247 to 329 °C, considering type I inclusions as representative of the fluid pumped by the seismic event into the brecciation level and assuming a formation depth of 0.5 to 4 km (0.01 to 0.11 GPa). The crystallization of paragenetic, almost Na-free, low-temperature K-feldspar (var. adularia) and the low Ti concentrations (<0.5 µg/g) in the quartz indicate formation temperatures of circa 350 °C or less [80–82]. Type II and type III inclusions record gradually decreasing crystallization temperatures and increasing salinity of a single-source fluid during progressive cement crystallization. The fluid leached K, Ba, Rb, Th, Nb and Ta from the metarhyolite in the SW part of the shattered area (breccia center). The dissolved K, Ba, Rb and Th were preferentially transported to the periphery of the breccia structure, where they precipitated mainly as breccia-cementing adularia. This element transport is recorded by the increasing K-feldspar/quartz cement ratio in the breccia fractures towards the margin of the breccia structure. Such leaching explains the composition of the K-feldspar cement; its composition is inherited from the leached, rock-forming K-feldspar of the metarhyolite. The oxygen isotope ratios of the breccia-cementing quartz display a large variation (0.9 to 12.2‰ δ18O), which can in general result from: (1) variations in temperature; (2) variations in the δ18O of the parent fluid; (3) disequilibrium effects; or (4) a combination of these three. Disequilibrium partitioning of oxygen isotopes (cause 3) produces δ18O variations in the range of 0.6 to 2.1‰ and thus has only a subordinate effect considering the overall variation of about 12‰. The high and relatively consistent δ18O values of crystal 12071215 from the breccia center (10.8 ± 1.1‰) presumably reflect the initial oxygen isotope ratio of the breccia-cementing fluid. Such high δ18O values could be produced by any of several reservoirs: granitic rocks, metamorphic rocks or sedimentary rocks, e.g., [83]. In any case, meteoric water can be excluded as the primary source of this oxygen. The average δ18O values in quartz decrease from the center towards the margin of the breccia, which is interpreted as progressive dilution of the heavy-18O signature of the breccia-cementing fluid during fluid migration and accompanying quartz precipitation. A minor decrease in fluid temperature might also have contributed to the δ18O decrease. The sudden drop in δ18O in the overgrowth of crystal 12071206 is explained by the influx of "local" meteoric water during the final quartz growth. However, this seems to be a special feature of this particular cavity, because similar drops are not developed in the other two investigated crystals. The authors have no plausible explanation for the high δ18O values (9.8 ± 0.5‰) observed in the secondary quartz replacing primary quartz of crystal 12071217 and in the outermost growth zone of crystal 12071206 (6.6 ± 0.3‰).
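As a back-of-envelope check on the exclusion of meteoric water, one can invert the quartz δ18O for the fluid δ18O at the estimated trapping temperatures. The sketch below, which is not part of the authors' workflow, uses the commonly cited quartz-water fractionation of Clayton, O'Neil and Mayeda (1972); the resulting positive fluid values are consistent with a metamorphic or rock-buffered fluid rather than an unexchanged meteoric one (which would be strongly negative).

```python
# Hypothetical check: fluid delta-18O in equilibrium with the ~10.8 permil
# quartz of the breccia center at the estimated trapping temperatures, using
# the quartz-water calibration of Clayton, O'Neil & Mayeda (1972):
#   1000 ln(alpha) = 3.38e6 / T^2 - 3.40, with T in kelvin.
import math

def fluid_d18o(quartz_d18o: float, t_celsius: float) -> float:
    """Approximate fluid delta-18O (permil, VSMOW) from quartz delta-18O."""
    t_k = t_celsius + 273.15
    frac = 3.38e6 / t_k**2 - 3.40   # 1000 ln(alpha), quartz minus water
    return quartz_d18o - frac

for t in (250, 300, 330):
    print(f"{t} °C -> fluid ≈ {fluid_d18o(10.8, t):+.1f} ‰")
# -> roughly +1.9 to +4.9 permil across the 247-329 °C trapping bracket
```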
In general, the origin of the CO2 recorded in fluid inclusions of hydrothermal mineralization is still a matter of debate. Fluid inclusion and breccia characteristics indicate a deep-seated source for the CO2, which could be mantle, magmatic or metamorphic in origin. The source of CO2 in hydrothermal mineralization in metamorphic terranes has usually been considered a relic of mantle and/or lower crustal (magmatic and/or metamorphic) fluids, e.g., [69,84,85], that might have been channelled along through-going faults toward the Earth's surface [86] and/or transported to higher levels in the crust by magmas [87,88]. Devolatilisation of supracrustal sequences during prograde metamorphism is the widely accepted hypothesis for the origin of CO2 for a range of hydrothermal deposit types [69,89]. Alternatively, CO2-dominated fluids may exsolve from felsic magmas formed at depths greater than 5 km in the crust; they are a typical feature of intrusion-related deposits [88]. Furthermore, such fluids can also be associated with granulite facies metamorphism and with charnockitic magmatism [90,91]. In the case of the Berglia-Glassberget breccia, the CO2 most likely has a metamorphic origin, derived from decarbonation reactions (T > 200 °C) of limestones in a thin autochthonous succession, the Bjørndalen Formation, lying unconformably upon the granitic and felsic volcanic rocks of the Olden Nappe. The Berglia-Glassberget breccia is situated immediately above a thrust fold anticline formed by the limestone-bearing autochthonous sequence (Figure 2). Thus, decarbonation reactions of this limestone are the possible source of the CO2. This implies that low-grade metamorphism or hydrothermal activity initiating thermo-metamorphic reactions involving silicate and carbonate rocks took place in the middle Triassic. However, there was definitely no low-grade regional metamorphism in the area in the middle Triassic. The most plausible explanation is therefore that the CO2 was produced from limestone of the Bjørndalen Formation by hydrothermal reactions initiated by deeply derived, hot fluids channelled to sub-surface levels by a major fault zone. The high SO4 content probably relates to the oxidation of sulfides in the wall rocks through which the fluids were channelled.

The Formation of the Berglia-Glassberget Breccia in the Regional Context

Tectonic, hydrothermal or magmatic activity of middle Triassic age that could be directly related to the breccia formation has not been recorded in the vicinity of the Berglia-Glassberget mineralization. With respect to Caledonian structures, the Berglia-Glassberget breccia lies at the eastern edge of the Grong-Olden Culmination, which is a Caledonian nappe structure [19]. The breccia is situated in the hanging wall, above the fold axis of a gently SW-plunging anticline of the Olden Nappe, a few meters above the thrust fault (Figure 2). This thrust fault separates the Olden Nappe from the overlying Formofoss Nappe Complex. The close position of the breccia above the thrust fault may have had an effect on the location of the breccia formation. However, these flat-dipping Caledonian structures are not known to have been reactivated during Triassic time. The eastern limb of the thrust fold and the foliation of the metarhyolite dip 30 to 35° NE, but the breccia structures are discordant (generally sub-vertical) and transect these Caledonian structures. On a global scale, the middle/late Triassic boundary (230 ± 5 Ma) marks the incipient dispersal of Pangea by the onset of continental rifting [92].
In NW Europe, including southern Norway, the Triassic was a period of major rifting and faulting [93–98], involving many long-lived fault zones, such as the Great Glen Fault [99] and the Møre-Trøndelag Fault Complex (MTFC) [79,100–105] (Figure 12). The MTFC is the northernmost of several important regional structures identified in southern Norway and is widely agreed to have played an important role in the development of the Norwegian margin, e.g., [106,107]. The MTFC is a large-scale, SW-NE-striking tectonic zone which extends onshore for about 350 km from Ålesund in the SW to Snåsa and further towards the NE. It is a long-lived tectonic feature, possibly with Precambrian roots [103,108,109], and principally comprises the Hitra-Snåsa and Verran faults (Figure 12). The faults were multiply reactivated during the late Paleozoic, Mesozoic and late Cretaceous [79,104]. Fission-track dating of apatite, zircon and titanite by Grønlie et al. [79] along the NE part of the Verran fault revealed several Triassic reactivations overlapping in time with the formation of the Berglia-Glassberget breccia. According to Redfield et al. [104,105], the major Verran fault peters out about 30 km W of the breccia. However, a dense network of SW-NE-striking structures transecting the Grong-Olden Culmination [19,20] documents MTFC-related faults in the E and NE extension of the major Verran fault, immediately adjacent to the Berglia-Glassberget breccia. Considering the regional context of these long-lived fault zones, the Berglia-Glassberget breccia is situated at the NE end of the Verran fault, probably forming a triple junction with the N-S-striking faults related to the northern end of the Lærdal-Gjende fault system (coming from the S, e.g., [110]) and the SE end of the Kollstraumen detachment (coming from the NW, e.g., [111]) (Figure 12). Based on this geotectonic position, the Berglia-Glassberget breccia can be considered a seismic expression of the Triassic reactivation of the MTFC. However, the specific nature of the major fault along which the seismic rupture occurred cannot be defined due to the heavy overburden, but it might be resolved by future studies.

(Figure 12 caption: regional fault pattern after [112,113], updated with onshore elements from Redfield et al. [104,105].)
Summary

The results of the study can be summarized as follows:

• The Berglia-Glassberget mineralization is a fluid-assisted, hydraulic breccia (250 m × 500 m in lateral dimension) which formed by a single-pulse stress released by a seismic event during the middle Triassic (240.3 ± 0.4 Ma).

• The influx of aqueous, CO2-bearing Na-HCO3-SO4 fluids into the fault zone triggered a short-lived weakening mechanism that facilitated fault movement by reducing the shear stress.

• The intruding fluid, pumped by the seismic event to higher levels, leached K, Ba, Rb, Th, Nb and Ta from the metarhyolitic host rock and simultaneously silicified the host rock and its breccia fragments in the SW part of the shattered area (the breccia "center"). The dissolved K, Ba, Rb and Th were precipitated mainly as breccia-cementing, low-temperature K-feldspar (var. adularia), followed by quartz. The CO2-bearing, breccia-cementing fluids may be of predominantly metamorphic origin, derived from decarbonation reactions (T > 200 °C) of limestones in an autochthonous sequence above the felsic magmatic rocks of the underlying Olden Nappe. The decarbonation reactions were possibly initiated by deeply derived, hot fluids channelled to sub-surface levels by a major fault zone.

• In the regional context, the Berglia-Glassberget breccia is interpreted to be situated at a triple junction of long-lived fault zones belonging to the Møre-Trøndelag, Lærdal-Gjende and Kollstraumen fault complexes. These fault systems are the expression of major rifting and faulting in northern Europe during the middle/late Triassic. By this means, the Berglia-Glassberget breccia contributes to a better understanding of the extensional tectonics of the Norwegian mainland during that period.
Distal C Terminus of CaV1.2 Channels Plays a Crucial Role in the Neural Differentiation of Dental Pulp Stem Cells

L-type voltage-dependent CaV1.2 channels play an important role in the maintenance of intracellular calcium homeostasis and influence multiple cellular processes. C-terminal cleavage of CaV1.2 channels has been reported in several types of excitable cells, but its expression and possible roles in non-excitable cells are still not clear. The aim of this study was to determine whether the distal C-terminal fragment of CaV1.2 channels is present in rat dental pulp stem cells and what role it may play in their neural differentiation. We generated stable CaV1.2 knockdown cells via short hairpin RNA (shRNA). Rat dental pulp stem cells lacking the distal C-terminus of CaV1.2 channels lost the potential to differentiate into neural cells. Re-expression of the distal C-terminus of CaV1.2 rescued the effect of knocking down the endogenous CaV1.2 on the neural differentiation of rat dental pulp stem cells, indicating that the distal C-terminus of CaV1.2 is required for neural differentiation of rat dental pulp stem cells. These results provide new insights into the role of voltage-gated Ca2+ channels in stem cells during differentiation.

Introduction

Dental pulp stem cells (DPSCs) are a part of the dental mesenchyme and are derived from cranial neural crest cells [1,2]. In vivo and in vitro studies have shown that dental stem cells have neural differentiation capacity under proper culture conditions [3-6]. It was recently reported that DPSCs demonstrate better neural and epithelial stem cell properties than bone marrow-derived mesenchymal stem cells [7-11]. These studies suggest potential future uses of dental stem cells in the treatment of neurodegenerative and oral diseases. However, the molecular regulation of the differentiation of dental pulp stem cells is not well understood. Changes in intracellular Ca2+ concentration ([Ca2+]i) play a central role in neuronal differentiation. Ca2+ influx into cells can generate biological signals, which can modulate the expression of genes involved in cell proliferation and neuronal differentiation. Among the ten different types of voltage-gated calcium channels, L-type Ca2+ channels (LTCs) are particularly effective at inducing changes in gene expression [12-16]. Studies in neurons and cardiac myocytes have suggested that the C terminus of CaV1.2 is proteolytically cleaved, yielding a truncated channel and a cytoplasmic C-terminal fragment, also called the distal C-terminus (DCT). In neurons, DCT regulates the transcription of a variety of genes, interacts with nuclear proteins, and stimulates neurite outgrowth [17-20]. DCT has also been reported to repress the expression of CaV1.2, suggesting that it is an important factor in an auto-feedback regulatory pathway [21]. However, the expression of DCT and its possible role in dental pulp stem cells are still unclear. We hypothesized that the DCT of CaV1.2 channels plays a significant role in orienting DPSC differentiation toward the neuronal phenotype. We thus investigated the neural differentiation of rat DPSCs (rDPSCs) in vitro to determine whether the DCT of CaV1.2 channels regulates the differentiation properties of DPSCs. We generated stable CaV1.2 knockdown cells via short hairpin RNA (shRNA). These cells were then used in neural differentiation experiments.
Our results showed that neural differentiation was significantly decreased in CaV1.2 knockdown rDPSCs, and that re-expression of DCT rescued the effect of knocking down the endogenous CaV1.2 on the neural differentiation of rDPSCs, indicating that the DCT of CaV1.2 channels is required for neural differentiation of DPSCs.

Ethical approval

All animal studies were approved by the Institutional Animal Care and Use Committee of Tongji University, Shanghai, China.

Isolation and culture of rDPSCs

rDPSCs were harvested from the incisor dental pulp of postnatal week-3 Sprague-Dawley rats (Harlan Sprague Dawley, Shanghai, China) and cultured according to published methods [3]. Special care was taken to prevent microbial contamination as well as contamination by other dental cell populations. Briefly, the rat's mandible was removed and all soft tissue was blunt-dissected away to reveal the incisor insertion. The incisors were then extracted from the mandible. Any loose tissue on the root ends of the teeth was trimmed off, and the external portions of the teeth were sterilized via immersion in 1% povidone-iodine for 2 min, followed by immersion in 0.1% sodium thiosulphate for 1 min and a final rinse in sterile PBS. The pulp was removed from each tooth and placed in an enzymatic bath consisting of a mixture of 3 mg/ml collagenase type I and 4 mg/ml dispase (Sigma). After a 40 min incubation at 37°C, the enzymes were neutralized with 10% serum in culture medium, and the pulp digest was centrifuged at 500 × g for 5 min to yield a cell pellet, which was then re-suspended in fresh culture medium and passed through a 70 μm strainer (Falcon, Gibco) to obtain a single-cell suspension. Cells were seeded at a density of 10^4 cells/cm^2 in culture dishes. Cells were grown in α-MEM supplemented with 100 mM ascorbic acid 2-phosphate, 2 mM GlutaMAX (Gibco), 100 U/ml penicillin, 100 mg/ml streptomycin, and 10% fetal bovine serum (FBS, Gibco), and maintained in 5% CO2 at 37°C. Experiments were performed with cells from passages 3 through 5.

Antibody Generation

A rabbit polyclonal antibody against the distal C terminus (anti-DCT) was generated against residues 2106-2120 in the distal C terminus of CaV1.2 channels and characterized as previously described [20]. A peptide with the sequence DPGQDRAVVPEDES was synthesized, coupled to KLH, injected into rabbits, and affinity-purified by GL Biochem.

Plasmid Construction, Lentivirus packaging and transduction in rat DPSCs

The recombinant DCT sequences encompassing amino acids 1642-2143 of CaV1.2 (accession # AAA18905) were constructed by inserting PCR-amplified portions of the CaV1.2 C-terminal tail into the EcoRI/BamHI sites of the pLVX-IRES-ZsGreen1 vector (Clontech). The lentivirus shRNA was constructed in the BamHI/EcoRI sites of the pLVX-shRNA2 vector (Clontech) as previously described [22,23]. Restriction sites were added to the ends of the sense and antisense oligos, which contained a loop and a termination signal. The designed oligos were synthesized by Invitrogen with a 5' phosphate and PAGE-purified. The oligo format is as follows. Sense oligo: 5'-GATCCGCCATTTTCACCATTGAAATTTCAAGAGAATTTCAATGGTGAAAATGGTTTTTTG-3'. Antisense oligo: 5'-AATTCAAAAAACCATTTTCACCATTGAAATTCTCTTGAAATTTCAATGGTGAAAATGGCG-3'. The luciferase shRNA serves as a negative control. The oligo format is as follows. Sense oligo: 5'-GATCCGATATTGCTGCGATTAGTCTTCAAGAGAGACTAATCGCAGCAATATCTTTTTTACGCGTG-3'.
Antisense oligo: 5'-AATTCACGCGTAAAAAAGATATTGCTGCGATTAGTCTCTCTTGAAGACTAATCGCAGCAATATCG-3'. Lentivirus packaging, purification, and titer determination were performed as described previously [23]. The DCT, shRNA, or luciferase vectors were co-transfected with the lentiviral packaging vectors pRSV-REV and pMDLg/RRE and the vesicular stomatitis virus G glycoprotein (VSVG) plasmid into HEK293T cells by Ca2+ phosphate transfection using standard protocols to obtain the recombinant viruses. At day 3 post-culture, rDPSCs were incubated with the different lentiviruses for 3-5 days. Cell transduction efficiency was determined by FACScan flow cytometry (BD Biosciences, Franklin Lakes, NJ) to sort GFP+ cells. Stably transduced cells were used for functional experiments.

Induction and Detection of Cell Differentiation

Neural differentiation of rDPSCs was performed as previously described [3]. Briefly, cells were reseeded at a density of 1×10^5 per well in 2 ml/well growth medium in 6-well plates. After 3 days, cells were washed with phosphate-buffered saline (PBS) and then cultured for 2 weeks in Neurobasal medium (Invitrogen, Carlsbad, CA) consisting of 100 U/ml penicillin, 100 U/ml streptomycin, 1× B27 supplement, 20 ng/ml epidermal growth factor (EGF, 400-15; PeproTech), and 40 ng/ml basic fibroblast growth factor (FGF, 400-29; ProspecTech). Control samples were maintained in growth medium for the duration of the neuronal inductive assay. The media for all conditions were replaced every 3 days. Following the final incubation, the coated chamber slides were fixed with 4% paraformaldehyde (PFA), and cells in 6-well plates were lysed with TRIzol (Invitrogen) and stored for RNA isolation and real-time polymerase chain reaction (RT-PCR) analysis.

Isolation of RNA and real-time RT-PCR

Total RNA was extracted using the TRIzol method. RNA samples were quantified by spectrophotometer (Eppendorf, Hamburg, Germany), and RNA integrity was checked on 1% agarose gels using a deionized formamide-based loading buffer. Reverse-transcription reactions were performed using Superscript III reverse transcriptase (Invitrogen). cDNA samples were diluted to a uniform concentration of 50 ng/μl. Real-time PCR was performed on an ABI 7500 real-time PCR system (Applied Biosystems) using the SYBR Premix Ex Taq (Perfect Real Time) kit (TaKaRa, Japan). After the real-time PCR procedure, a Ct value was obtained for each sample; the Ct value indicates how many PCR cycles are necessary to reach a given level of fluorescence. Amplification of the different genes was determined relative to GAPDH as an internal control under the following cycling conditions: denaturation at 95°C for 30 s, followed by 45 cycles of 95°C for 5 s and 60°C for 34 s. The fold change in gene expression relative to the control was calculated as 2^-ΔΔCt. Statistical analysis was performed by unpaired t-testing. As indicated, p < 0.05 was considered significant. Error bars represent means ± SD. The primer sets used in real-time PCR are listed in Table 1.

Protein extraction and Western blots

rDPSCs were lysed in RIPA buffer (50 mM Tris-Cl, pH 7.4, 150 mM NaCl, 1% NP40, 0.25% Na-deoxycholate, 1 mM PMSF) with protease inhibitors (in mmol/l: leupeptin 0.1 and phenylmethylsulfonyl fluoride 0.3) and kept for 30 min on ice with vortexing every 5 min. The supernatant was collected following centrifugation at 14,000 × g for 15 min at 4°C. The protein concentration was determined in triplicate with a Bio-Rad DC protein assay kit.
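Referring back to the real-time RT-PCR analysis above, the 2^-ΔΔCt method reduces to a short calculation. A minimal Python sketch of that computation, with GAPDH as the reference gene; the Ct values used here are illustrative placeholders, not data from this study:

def fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression of a target gene versus the control condition,
    normalized to GAPDH, via the 2^-ddCt method."""
    d_ct = ct_target - ct_gapdh                  # dCt of the treated sample
    d_ct_ctrl = ct_target_ctrl - ct_gapdh_ctrl   # dCt of the control sample
    dd_ct = d_ct - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target reaches threshold two cycles earlier
# (relative to GAPDH) in induced cells than in controls -> ~4-fold increase.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # prints 4.0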
The same amount of protein (50 μg) was loaded per lane of standard 4-12% SDS-polyacrylamide gels. After electrophoresis, proteins were transferred to a nitrocellulose membrane, and the membranes were blocked in PBS containing 0.1% Tween and 5% skimmed milk. Membranes were then incubated overnight at 4°C in primary antibody (1:200).

Immunofluorescence

Chamber slides were fixed with 4% PFA for 30 minutes at room temperature and then washed with PBS and 0.1% Tween 20 (Sigma-Aldrich) (PBS-T). Cultures were blocked (10% horse serum in PBS-T) for 30 minutes at room temperature and then incubated with primary antibody (1:100 anti-DCT; 1:100 Map2, Epitomics; 1:200 β-III tubulin, Epitomics) in blocking solution overnight at 4°C. Goat and rabbit controls were treated under the same conditions. After washing, the secondary antibodies (1:200 goat anti-rabbit) were added in blocking solution for 2 hours at room temperature in the dark. Finally, the slides were washed and coverslipped with ProLong Gold antifade with 4',6-diamidino-2-phenylindole dihydrochloride (DAPI, P36931; Invitrogen).

DCT was detected in rDPSCs

It has been reported that DCT translocates to the nucleus of brain neurons and directly regulates the transcription of a variety of genes. To investigate the possible roles of DCT in neural differentiation of rDPSCs, we first developed an antibody to a 14-amino acid peptide in the C terminus of CaV1.2 (aa 2106-2120) and used it to probe rDPSCs expressing CaV1.2. The C-terminal antibody (anti-DCT) recognized a 75 kDa short cleavage product that corresponds to the C-terminal fragment. Our result confirmed that a C-terminal fragment of CaV1.2, closely related to that found in neurons, was present in rDPSCs (Figure 1A).

Location of endogenous DCT in dental pulp stem cells

Previous studies in cardiac myocytes and neurons have indicated that the DCT fragment translocates to the nucleus and acts as a nuclear transcription factor [20,24]. The intracellular distribution of the DCT fragment in non-excitable cells is unclear. To further investigate the cellular distribution of endogenous DCT in rDPSCs, rDPSCs were stained with anti-DCT antibodies and DAPI, a nuclear label. The anti-DCT antibody stained the cell body of rDPSCs, but surprisingly no DCT staining was observed in the nucleus (Figure 1B).

Lentivirus transfected rDPSCs

To investigate the functional significance of DCT in neural differentiation of rDPSCs, shRNA and lentiviral vectors were used to generate stable CaV1.2 knockdown cells. Recombinant DCT lentiviral vectors were used to transfect CaV1.2 knockdown cells in order to generate DCT-rescued CaV1.2 knockdown cells. We used qPCR and western blot, respectively, to detect the transcription and expression of CaV1.2 and the recombinant DCT gene. The results showed that CaV1.2 knockdown significantly reduced the mRNA level and protein expression of DCT, whereas the expression of the DCT fragment was significantly increased in rescued CaV1.2 knockdown cells. Figure 2A and Figure 2B show the mRNA and protein levels of recombinant DCT at day 5 after transfection, respectively.

Role of DCT in Neural Induction of rDPSCs in Vitro

In the present study, we used Neurobasal A medium to induce neural differentiation of rDPSCs as previously described [3]. Following 2 weeks of induction, morphological changes of the differentiated cells were observed by phase-contrast microscopy.
rDPSCs exposed to neuronal inductive media acquired a bipolar and stellate morphology (Figure 3A), whereas the control culture predominantly consisted of spindle-shaped cells, indicating that rDPSCs cultured in neuronal inductive media had acquired a phenotype resembling mature neurons. To investigate the functional significance of DCT in rDPSCs, we used shRNA and lentiviral vectors to generate stable CaV1.2 knockdown cells. In CaV1.2 knockdown rDPSCs, neural differentiation was lost (Figure 3A). To determine whether the inhibitory effects of CaV1.2 shRNAs on the neural differentiation of rDPSCs are due to reduction of DCT, we constructed a version of DCT that is insensitive to the rat CaV1.2 shRNA and expressed it in CaV1.2 knockdown cells. Expression of DCT rescued the effect of knocking down the endogenous CaV1.2 on the neural differentiation of rDPSCs. We further confirmed the effect of DCT on neural differentiation of rDPSCs by assessing the expression of neuronal markers. After 14 days of culture in inductive medium, there was a significant increase (p < 0.005 and p < 0.004, respectively; Student's t test) in the transcripts of the Map2 and β-III-tubulin genes in rDPSCs cultured in neural inductive media, compared to cells maintained in control media (Figure 3B). At the same time, positivity was noted for the neuronal markers Map2 and β-III-tubulin (Figure 3C). The transcripts and expression of neural markers were greatly reduced in CaV1.2 knockdown rDPSCs grown in neural inductive media but were restored to levels similar to control rDPSCs after rescue with recombinant DCT. These changes in the transcripts and expression of neural markers further confirmed the effect of DCT on neural differentiation of rDPSCs. This suggests that endogenous CaV1.2 modulates neural differentiation of rDPSCs and that this regulation of differentiation depends on the production of the DCT fragment from the C terminus of CaV1.2.

Discussion

Ca2+ entry across the plasma membrane is a main pathway for Ca2+ signaling. LTCs contribute only a minority of the overall Ca2+ entry but exert a dominant role in controlling gene expression. In excitable cells, LTCs are known to play an important role in Ca2+ entry across the plasma membrane. However, it is not yet clear which functions LTCs perform in non-excitable cells, especially in undifferentiated stem cells. The critical role of LTCs in neural stem/progenitor cell differentiation has been attributed to LTC-mediated Ca2+ influx. It has been reported that Ca2+ entry through LTCs mediates hypoxia-promoted proliferation of neural progenitor cells [25]. Calcium influx via CaV1 channels supports sustained phosphorylation of cAMP response element-binding protein (CREB) and CREB-dependent gene expression in neurons [13-16]. In the present study, knockdown of CaV1.2 channels significantly decreased neural differentiation of rDPSCs, and restoring expression of DCT in CaV1.2 knockdown rDPSCs rescued neural differentiation of these dental stem cells. This result implies that the marked inhibition of neural differentiation in CaV1.2 knockdown DPSCs is not a result of decreased Ca2+ influx through the CaV1.2 channel, but rather of the reduction of DCT derived from CaV1.2. Ca2+ oscillations are mainly determined by two sources: Ca2+ entry across the plasma membrane and Ca2+ release from intracellular stores.
Although the physiological functions of Ca2+ oscillations in stem cells are still unknown, studies of the Ca2+ signaling pathway in human mesenchymal stem cells showed that, unlike in excitable cells, it is not Ca2+ entry through the plasma membrane but Ca2+ release from intracellular stores that plays the important role in [Ca2+]i oscillations [26,27]. Zahanich et al. also found that an L-type Ca2+ channel blocker did not influence alkaline phosphatase activity, [Ca2+]i, or phosphate accumulation in human mesenchymal stem cells during osteogenic differentiation, suggesting that osteogenic differentiation of human mesenchymal stem cells does not require L-type Ca2+ channel function [28]. The results of our study are consistent with those of previous studies on [Ca2+]i oscillations in human mesenchymal stem cells. In the present study, other types of Ca2+ channels and/or internal Ca2+ stores may contribute significantly to [Ca2+]i elevation in the CaV1.2 knockdown DPSCs, but this speculation needs further examination in a future study. Proteolytic cleavage of LTCs at their C terminus has been reported in excitable cells, such as cardiac and skeletal muscle cells and neurons [20,24,29,30]. In heart and skeletal muscle, the C-termini of CaV1.2 and CaV1.1 are proteolytically cleaved to produce a ~45 kDa fragment, which plays an important role in regulating channel properties and trafficking the channel to the plasma membrane. Dolmetsch's laboratory reported that the entire C-terminal fragment of CaV1.2, appearing as a ~75 kDa band in western blots, translocated to the nucleus of neurons and regulated the transcription of a variety of genes [20]. In the present study, a ~75 kDa fragment was detected in rDPSCs, suggesting that the full-length C-terminal fragment of CaV1.2 is proteolytically cleaved in rDPSCs in the same manner as in neurons. To our knowledge, this is the first study reporting that cleaved C-termini of LTC channels are present in non-excitable cells. It is still not known where the cleavage site is or how this process is regulated. It is also possible that alternative splicing of the CaV1.2 gene independently generates DCT [31]. These questions are important areas for future studies. We also found in the present study that the level of DCT was low in rDPSCs and that no DCT was detected in the nuclei of these cells before neural induction. This result is contrary to Dolmetsch's studies in neurons [20], which showed that the cleaved CaV1.2 C-terminus translocated to the nucleus and acted as a transcription factor. However, CaV1.2 knockdown rDPSCs show a nearly complete loss of neural differentiation, and re-expression of DCT in CaV1.2 knockdown rDPSCs rescues their neural differentiation, suggesting a general requirement for DCT in the neural differentiation of rDPSCs. It is possible that in the present setting DCT from CaV1.2 regulates neural differentiation of rDPSCs in an indirect way rather than acting as a transcription factor itself, for example by activating transcription factors such as CREB and the nuclear factor of activated T-cells (NFAT), thereby indirectly regulating the expression of genes related to cell differentiation. It will also be interesting in future studies to determine the cellular distribution of DCT in DPSCs and the molecular mechanisms of its action in neural differentiation of dental stem cells.
Our study provides strong evidence that the distal C-terminal fragment of CaV1.2 is present in rDPSCs and acts as an important regulatory factor in their neural differentiation. Further studies on how DCT is produced and how it regulates cell differentiation will be of utmost importance for realizing the use of these stem cells in nerve regeneration.
Floating Carbon Nitride Composites for Practical Solar Reforming of Pre-Treated Wastes to Hydrogen Gas

Abstract

Solar reforming (SR) is a promising green-energy technology that can use sunlight to mitigate biomass and plastic waste while producing hydrogen gas at ambient pressure and temperature. However, practical challenges, including photocatalyst lifetime, recyclability, and low production rates in turbid waste suspensions, limit SR's industrial potential. By immobilizing SR catalyst materials (carbon nitride/platinum, CNx|Pt, and carbon nitride/nickel phosphide, CNx|Ni2P) on hollow glass microspheres (HGM), which act as floating supports enabling practical composite recycling, such limitations can be overcome. Substrates derived from plastic and biomass, including poly(ethylene terephthalate) (PET) and cellulose, are reformed by floating SR composites, which are reused for up to ten consecutive cycles under realistic, vertical simulated solar irradiation (AM1.5G), reaching activities of 1333 ± 240 μmol H2 m−2 h−1 on pre-treated PET. Floating SR composites are also advantageous in realistic waste where turbidity prevents light absorption by non-floating catalyst powders, achieving 338.1 ± 1.1 μmol H2 m−2 h−1 using floating CNx versus non-detectable H2 production with non-floating CNx and a pre-treated PET bottle as substrate. Low Pt loadings (0.033 ± 0.0013% m/m) demonstrate consistent performance and recyclability, allowing efficient use of precious metals for SR hydrogen production from waste substrates at large areal scale (217 cm2), taking an important step toward practical SR implementation.

Introduction

Recycling of waste plastic and the production of CO2-neutral fuels are two critical contemporary environmental challenges. Currently, recycling is suboptimally implemented, with an estimated global recycling rate of only 9% and less than 1% of plastics being recycled more than once. [1,2] Hydrogen, often considered the frontrunning energy vector to replace carbon-rich fossil fuels, [3] is most commonly produced using "grey" (CO2-releasing) methods, which account for almost the entirety of global production and released ≈900 Mt of CO2 in 2020. [4] Solar reforming (SR) offers a potential pathway to circular carbon use and clean production of hydrogen through dual-functional photocatalysis that makes use of both oxidation and reduction half-reactions, a critical requirement for an efficient heterogeneous photocatalytic process. [5,6] The oxidation half-reaction has the potential to valorize diverse waste streams, including plastics and biomass, into useful materials, while the reduction half-reaction offers a "green" alternative to "grey" hydrogen. Other, more conventional technologies, such as gasification (partial thermal oxidation to produce syngas), liquefaction (direct biomass conversion using higher-pressure H2), or pyrolysis (thermal conversion in the absence of O2), also seek to use waste as feedstock to generate fuel and other useful chemicals, but often require high temperatures and pressures and provide poor product purity, restricting economic viability. [6,7] SR distinguishes itself through critical advantages such as its ability to be performed at ambient temperatures (20-60°C) and the possibility of controlled catalytic oxidation for selective product synthesis. [7,8]
In this way, SR can be more easily implemented and can help mitigate plastic waste or use abundant waste biomass while contributing to an energy transition with H2 as the energy carrier. SR starts with a photocatalyst that absorbs solar energy, promoting electrons from the valence to the conduction band. The photoexcited conduction-band electrons are then transferred to a hydrogen evolution co-catalyst (e.g., Pt, Ni2P), and the valence-band holes directly oxidize organic substrates, such as soluble plastic or biomass monomers, thereby replenishing the electrons and providing protons for reduction. [6] The theoretical overall reaction coupling the complete oxidation of organics to CO2 and the reduction of water to H2 is described by Equation (1) [9]:

CxHyOz + (2x − z) H2O → x CO2 + (2x − z + y/2) H2   (1)

Substrate oxidation occurs at approximately the H2 evolution potential, as opposed to water oxidation, which has a high thermodynamic (H2O → H2 + ½O2, ΔG° = 237 kJ mol−1; E0 = 1.23 V vs RHE) and kinetic energy barrier. [7] While, in principle, SR produces CO2 as the end oxidation product, in practice the reaction does not proceed to completion, and >50% of the carbon remains as small organic products (e.g., formate) dissolved in solution, with the remaining mineralized CO2 fixed as CO3 2− or HCO3 − under alkaline conditions, allowing SR to sequester carbon and concentrate CO2 for future processing. [10] Photoreforming was first demonstrated using a UV-light-absorbing RuO2/TiO2/Pt photocatalyst in a sugar, starch, or cellulose substrate solution, [11] with a more recent focus on using solar light and application to a wider range of realistic waste materials, including lignocellulose, [12,13] polymers such as PET, polylactic acid, and polyurethane, [9,14] and food waste. [15] Further materials development has focused on utilizing visible-light-absorbing photocatalysts, such as CdS/CdOx, [14] carbon nitride (CNx), [9,12,16] or carbon dots, [17] and on inexpensive co-catalysts as alternatives to noble metals, such as Ni2P, [9] MoS2, [13] and NiO. [18] Currently, waste solar reforming offers H2 production with a smaller carbon footprint than standard steam methane reforming, but a variety of improvements are needed to achieve cost-competitiveness and market-readiness, such as optimized photocatalyst durability, reuse, and application to a variety of real waste materials and real wastewaters. [10,19,20] Among these improvements, photocatalyst durability is a critical consideration, with catalyst reuse lifetimes of at least 1 year proposed as necessary for cost-effective H2 production. [10] CNx recycling has been deployed using three methods: magnetization, immobilization, and photocatalytic membranes. [21] In the former two, CNx is directly exposed to the substrate solution for direct oxidation and reduction, whereas in the latter, the substrates are first removed from the solution by adsorption and then subsequently oxidized by CNx photocatalysis. [21] In general, immobilization techniques and membranes are preferred over magnetic recovery due to lower maintenance, better stability, and ease of efficient recovery from remaining waste solutions or solids. Membranes also present challenges such as fouling, oxidation damage from the photocatalyst, and difficulty in scaling due to their complicated production processes. [21] Catalyst immobilization can be performed on a variety of support materials, ranging from static, macroscopic supports, such as glass plates, tubes, or reactor vessel walls, to free supports such as glass beads, activated carbon, or sand. [22,23]
For example, large-scale reactors using immobilized photocatalysts have demonstrated consistent H2 production over months, with solar-to-hydrogen efficiencies of up to 0.76% (SrTiO3:Al water splitting) [24,25] and 0.12% (mesoporous CNx with a sacrificial electron donor). [26] In the context of SR, CNx has already been immobilized on flat glass panels as part of a flow-through reactor, reaching activities of 52 ± 3 μmol m−2 h−1 with a Ni2P co-catalyst and pre-treated PET substrate in 0.5 mol L−1 KOH. [16] Though significant recent progress has been made on photocatalytic panel systems, slurry-based systems may present a more scalable and economic approach, enabled by improved light harvesting and mass transfer allowing higher photon conversion efficiency. [27] Among free supports suitable for application in a slurry system, floating materials have gained attention due to their enhanced light harvesting, gas exchange, and recovery resulting from catalyst localization at the surface of the solution. [23,28,29] Already explored for water treatment, floating composites make use of robust, low-density support materials such as expanded perlite, [30,31] fly ash cenospheres, [32] hollow glass microspheres (HGMs), [29] and polymer beads. [33] The integration of fuel-generating catalysts with floating platforms allows for versatile and decentralized deployment scenarios, such as efficient use with turbid waste (TW) streams or fuel production over open waters. [34] To date, the scope of SR with reusable photocatalyst composites has been limited to panel immobilization, and exploring floating SR composites is a promising next step for further scaling this technology. Here, we introduce the deposition of CNx on low-density HGMs, which enable floating in aqueous media, and couple this composite with a benchmark noble-metal H2 evolution co-catalyst (platinum; HGM/CNx|Pt), as well as noble-metal-free nickel phosphide (HGM/CNx|Ni2P), for SR (Figure 1). We demonstrate that such floating SR composites can be easily recovered and reused over ten consecutive SR trials using realistic illumination conditions and pre-treated model wastes, including biomass (cellulose) and plastic (PET). We also track the change in SR activity in both co-catalyst systems and show the advantage of floating materials in TW environments in a large areal-scale SR system using real and model wastes. Floating photocatalyst composites offer a practical method for scaling SR without loss in areal activity in a solar-light-driven process.

Synthesis of Floating Carbon Nitride

Several candidate floating support materials, including two varieties of perlite and five varieties of HGMs with different crush strengths, particle diameters, and densities, were screened for floating and durability; iM30k (3M Company; 18 μm average particle diameter) was selected for all subsequent experiments as it had the highest floating mass fraction following washing procedures (Table S1, Supporting Information). In addition to CNx, P25 TiO2 and nitrogen-doped graphitic carbon dots [17] were also assessed as potential SR photocatalysts anchored to the HGMs using a sodium metasilicate-based binder, but neither was selected for further study due to poor composite stability and co-catalyst incompatibility (see Supporting Information, "alternative composite compositions", for additional details).
Floating CNx composites were prepared by pyrolysis of melamine in the presence of HGMs at a temperature of 550°C. The resulting yellow cake was then extracted from the crucible, milled with a mortar and pestle, and washed by gravimetric separation in a separatory funnel to isolate the floating fraction of the composite. Synthesis conditions varying the amount of melamine (2-8 g) while maintaining a constant HGM content (2 g) were tested to assess how the yield and CNx content of the floating composite changed (Table 1). Decreasing the HGM:melamine ratio introduced a composite yield versus quality trade-off, whereby loading the HGMs with more CNx resulted in a smaller fraction of the product having a density low enough to float on water.

Figure 1. Floating solar reforming catalyst composite structure and mechanism. The floating composite (1 cm scale, in a small reactor with a surface area of 5 cm2 and volume of 46 mL) gravimetrically separates to the surface of an aqueous system. The structure of the floating composite (100 μm scale) comprises continuous CNx particles with interspersed glass bubbles. A hydrogen evolution co-catalyst (Ni2P or Pt; 10 μm scale) facilitates the reduction of protons to H2 gas, while organics oxidation is performed over CNx. The floating composite was also applied at a larger scale in a large reactor (5 cm scale) with a surface area of 217 cm2 and volume of 1.5 L.

By using a 1:1 ratio of HGM:melamine, 50% of the final product was a useable floating composite with a CNx content of 32%, while using a 1:4 ratio of HGM:melamine produced a floating fraction of only 12% in the final product, but with a 43% CNx content (Figure 1). The HGM:melamine ratio also influenced the size and morphology of the floating composite (Figure 2). At an HGM:melamine ratio of 1:1, the composite appeared to consist primarily of individual HGMs (Figure 2G) coated with a thin shell or small particles of CNx (Figure 2A). As the mass of melamine was increased with respect to the mass of HGMs, the CNx mass fraction of the composite also increased, and at an HGM:melamine ratio of 1:3 or above, the composite morphology became continuous clusters of CNx with embedded HGMs (Figure 2C,D). The elemental distribution of N and Si from EDS elemental mapping in Figure 2E,F supports this interpretation, demonstrating high N content throughout the rough, continuous phase (yellow; CNx), while the spherical HGM particles are evidenced by their high Si content (blue; SiO2). The energy dispersive X-ray (EDX) maps in Figure 2 show floating CNx composites prepared from an HGM:melamine ratio of 1:3 with deposited Ni2P (Figure 2E) or Pt (Figure 2F), but the co-catalyst elemental content is low enough (≈0.9% Ni; ≈0.1% Pt) that it is not easily discernable in the overlay mapping. Individual elemental maps can be found in the Supporting Information (Figures S4 and S5). ICP-OES characterization of the composite confirmed the low Ni and Pt loadings, with an average Ni loading in HGM/CNx|Ni2P (1:3 HGM:melamine) of (1.26 ± 0.07)% m/m and a Pt loading in HGM/CNx|Pt (1:3 HGM:melamine) of (0.033 ± 0.0013)% m/m. Though the Pt loading is remarkably low, Pt is nonetheless present on the composite surface. The HGM/CNx composites showed remission spectra characteristic of CNx, with absorbance band edges at ≈450 nm, though all HGM/CNx composites showed higher remission by diffuse-reflectance UV/vis spectroscopy than CNx in the <400 nm range.
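The yield-versus-quality trade-off can be rationalized with a simple buoyancy estimate: the composite floats only while its effective density remains below that of water. A minimal Python sketch, assuming typical literature densities (≈0.6 g cm−3 for iM30k HGMs, ≈2.3 g cm−3 for bulk carbon nitride) and ideal volume mixing; these density values are assumptions for illustration, not measurements from this work:

RHO_HGM = 0.6    # g/cm^3, assumed true density of the hollow glass microspheres
RHO_CNX = 2.3    # g/cm^3, assumed density of bulk carbon nitride
RHO_WATER = 1.0

def effective_density(w_cnx: float) -> float:
    """Effective density (g/cm^3) of a CNx/HGM composite with CNx mass
    fraction w_cnx, assuming ideal mixing of the component volumes."""
    return 1.0 / (w_cnx / RHO_CNX + (1.0 - w_cnx) / RHO_HGM)

for w in (0.32, 0.43, 0.60):
    rho = effective_density(w)
    print(f"CNx {w:.0%}: {rho:.2f} g/cm^3 -> {'floats' if rho < RHO_WATER else 'sinks'}")

Under these assumptions the composite crosses neutral buoyancy at a CNx mass fraction of roughly 54%, consistent with the observation that pushing the CNx content higher shrinks the floating fraction of the product.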
The addition of Pt or Ni2P co-catalysts to the HGM/CNx (1:3 HGM:melamine) composite was also evidenced by a color change (HGM/CNx: yellow; HGM/CNx|Ni2P: grey; HGM/CNx|Pt: grey-yellow) and increased remission (Figure S7, Supporting Information). These changes in color and reflectance likely result from the introduction of continuous energy bands in the Pt or Ni2P co-catalysts, which function as electron collectors and promote the proton reduction half-reaction, and therefore do not contribute to photo-charge generation. [35]

Solar Reforming Performance of Floating Carbon Nitride

The various HGM/CNx composites prepared with different HGM:melamine ratios were assessed for their activity in small-vial (7.9 mL) SR trials. An amount of HGM/CNx was added to each vial based on the CNx content of the composite (Table 1), such that the final concentration of CNx in each experiment was 1.5 mg mL−1. A substrate solution containing 1 mol L−1 KOH and 25 mg mL−1 EG was added to each vial containing HGM/CNx (experiments) or CNx (control), along with 1.6 μL of H2PtCl6 (8 wt%) solution as an in situ photodeposited co-catalyst. Horizontal irradiation with a stirred catalyst system was used to allow a direct comparison between the SR activity of the floating composites and non-floating CNx, as well as a comparison with previous literature using similar experimental setups. [9,14-16] Despite normalizing the amount of CNx in each vial, the specific (mass-normalized) activity of the different HGM/CNx samples generally increased as the HGM:melamine synthesis ratio decreased. The 1:4 HGM:melamine sample demonstrated specific activity comparable to the 1:3 HGM:melamine sample, and both samples exceeded the activity of CNx with photo-deposited Pt (Figure 3A and Table S2, Supporting Information), showing that the incorporation of CNx into a floating composite does not incur a trade-off between floating and catalytic activity when relying on a Pt co-catalyst photo-deposited from H2PtCl6. Exclusion control experiments demonstrated no SR activity in the absence of the co-catalyst or substrate. SR activity was highest over chemically deposited Pt on CNx (CNx|Pt(chem)), which displayed the highest Pt loading (0.91 ± 0.017 wt%), with 25-40% lower activity at lower Pt loadings (CNx|Pt(photo): 0.19 ± 0.058 wt%, HGM/CNx|Pt(photo): 0.13 ± 0.018 wt%, HGM/CNx|Pt(chem): 0.033 ± 0.0013 wt%; Table S3, Supporting Information). SR activity may also be affected by the type of deposition, in addition to the Pt surface loading, with chemical reduction producing stable metallic deposits that are not deactivated over time (as photo-deposited Pt can be), possibly contributing to higher sustained activities. [36] The HGM/CNx composites were then applied under vertically irradiated SR conditions using small glass reactors (SA = 4.9 cm2; V = 46 mL; Figure S2, Supporting Information) at different composite loadings with both Ni2P and Pt co-catalysts (chemically deposited) to determine appropriate conditions for scaled-up catalyst reuse trials (Figure 3B and Table S4, Supporting Information). As the HGM/CNx (1:3 HGM:melamine) concentration increased from 1.5 mg mL−1 (1.53 mg cm−2) to 48 mg mL−1 (49 mg cm−2), the specific activity of the composite declined from a maximum of 19.8 to 4.9 μmol H2 gCNx−1 h−1 with a Pt co-catalyst and from 10 to 2.2 μmol H2 gCNx−1 h−1 with a Ni2P co-catalyst.
The areal activity, which reports the time-normalized amount of H2 produced per area, regardless of the amount of composite in the reaction, was found to increase with composite loading from 119 to 1109 μmol H2 m−2 h−1 at 1.5 and 24 mg mL−1, respectively, with a Pt co-catalyst, and from 68 to 490 μmol H2 m−2 h−1 at 1.5 and 24 mg mL−1, respectively, with a Ni2P co-catalyst. This highlights the trade-off between the efficient use of photocatalyst material and maximum hydrogen production. By using a high concentration of floating composite, high areal activities can be achieved, but beyond a composite concentration of 12 mg mL−1, the benefit of adding additional catalyst is small. For example, in the case of HGM/CNx|Pt, doubling the composite concentration from 3 to 6 mg mL−1 increases the H2 yield by a factor of 2.5, while doubling again from 6 to 12 mg mL−1 increases the yield by only a factor of 1.5. Based on these findings, a composite loading of 12 mg mL−1 was selected for further trials to allow for efficient use of material and to balance specific and areal activities. The mechanism of SR over HGM/CNx|Pt and HGM/CNx|Ni2P is expected to proceed as for SR over CNx|Pt or CNx|Ni2P in the absence of the inert silica support material, as described in previous work. [9] Briefly, photo-generated electrons and holes drive proton reduction and substrate oxidation, respectively, with the Pt or Ni2P co-catalyst serving as an electron collector and hydrogen evolution co-catalyst. Substrate oxidation proceeds non-selectively through direct electron transfer to CNx rather than an OH·-mediated pathway and produces a wide variety of intermediate products, with a final mineralized product of CO3 2− in alkaline media. [9] Though the mechanism should remain the same, CNx or HGM/CNx light absorbers may be affected by altered exposure to light and decreased kinetics, likely from reduced mass transfer between the substrate and the photocatalyst in the absence of stirring, particularly if the catalyst is not uniformly distributed across the solution surface in a floating system. In this study, a lower specific activity was therefore observed under such conditions (Figure 3B). Evaluating the effect of light orientation on photocatalytic activity is challenging, as gravity remains a factor that affects heterogeneous catalyst distribution, particularly for floating systems, and liquid geometry. Nevertheless, it is anticipated that the overall effect of light orientation would be minimal.

Recyclability of Floating Carbon Nitride

The recyclability of the HGM/CNx composites was assessed in the same small reactors (surface area [SA] = 4.9 cm2; V = 46 mL) under vertical illumination (AM1.5G) using three different substrates: EG, PET, and cellulose. The EG and PET solutions were prepared in 1 mol L−1 KOH, reflecting the pre-treatment conditions for solid PET, while the cellulose was prepared by enzymatic pre-treatment and was a pH 5 solution containing 50 mmol L−1 sodium acetate. The same HGM/CNx composite (1:3 HGM:melamine) was applied in ten consecutive 2 h SR trials, and each set of ten runs was performed in triplicate using different HGM/CNx samples. The floating HGM/CNx mass recovery after ten trials averaged (58.9 ± 6.4)% in the small reactors, with no significant difference in recovery between the HGM/CNx|Pt (61.4 ± 3.7%) and HGM/CNx|Ni2P (56.5 ± 8.3%) samples (Table S5, Supporting Information).
Separation was rapid, with visual clarity of the aqueous phase achieved within 10 min of separation time (Figure S8, Supporting Information). The SR activity of the HGM/CNx|Pt was found to decrease by a small degree with each cycle in each substrate solution (Figure 4 and Table S6, Supporting Information), reaching a final areal activity of (82.3 ± 1.0)% in EG (Figure 4A), (31.1 ± 12.1)% in PET (Figure 4B), and (67.5 ± 11.4)% in cellulose (Figure 4C). Except for PET, the percent decrease in activity approximates the percent mass loss of the HGM/CNx composite. Compared with a non-floating composite (CNx), the HGM/CNx showed much greater reusability. The non-floating composite could not be separated from the reforming solution without assistance (e.g., filtration or centrifugation), resulting in significant catalyst loss and a decrease in activity between experiments (Figure S10, Supporting Information). Overall, the consistent activity of the HGM/CNx|Pt samples points to the CNx|Pt system as capable of maintaining SR performance over multiple trials, with the loss in activity attributable to loss of the composite material itself (either through aspiration between trials or through breakage and sinking of the composite). In the small reactor trials, a meniscus effect in the 2.5 cm diameter cylindrical reactor caused the floating composite to move into a ring formation in the absence of stirring, clinging to the walls of the reactor over the 2 h irradiation period. The speed of this separation may have affected the H2 yield and may itself have been affected by solution characteristics such as surface tension. The HGM/CNx|Ni2P composites showed a substantial decrease in areal activity over the ten cycles, reaching minimum activities of (31.1 ± 12.2)% in EG (Figure 4A), (19.6 ± 1.2)% in PET (Figure 4B), and complete loss of activity (0%; H2 not detected) in cellulose (Figure 4C). The individual plots of HGM/CNx|Ni2P areal activity in the small reactor recyclability trials can be found in the Supporting Information (Figure S11), with scales that more clearly show the decline in activity from run to run. The decrease in activity exceeds the composite mass loss over ten cycles, pointing to a decline in the SR activity of the Ni2P co-catalyst. This is especially evident in the cellulose substrate solution (Figure 4C), where the activity rapidly drops to zero over only three runs. This is likely due to dissolution of the Ni2P co-catalyst, which has poor stability in acidic environments such as the pH 5 pre-treated cellulose solution. [37] Measuring the Ni content of the composite by ICP-OES after ten cycles in the pre-treated cellulose solution yielded (0.056 ± 0.005)% m/m, compared to (1.56 ± 0.01)% after ten cycles in EG or pre-treated PET solution, confirming the difference in stability between the different pH environments. Despite Ni still being present in a higher concentration than Pt in the HGM/CNx|Pt samples (0.03% m/m), the activity is far lower, and this is expected to be a consequence of the degradation of the catalytically active Ni2P species. This result also highlights the versatility and robustness of the Pt co-catalyst, as it can be applied in various pH environments and would therefore be applicable to a wider range of substrates requiring different pre-treatment conditions.
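As a rough quantitative reading of these retention figures, one can back out an average per-cycle retention by assuming a constant multiplicative loss per run, an assumption the data only approximately support. A minimal Python sketch using the end-of-series HGM/CNx|Pt percent activities quoted above:

def per_cycle_retention(final_fraction: float, n_runs: int = 10) -> float:
    """Geometric-mean retention per cycle over the (n_runs - 1)
    cycle-to-cycle transitions in a series of n_runs trials."""
    return final_fraction ** (1.0 / (n_runs - 1))

for substrate, frac in [("EG", 0.823), ("PET", 0.311), ("cellulose", 0.675)]:
    print(f"HGM/CNx|Pt in {substrate}: {per_cycle_retention(frac):.1%} per cycle")

# Prints roughly 97.9% (EG), 87.8% (PET), and 95.7% (cellulose) per cycle.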
The performance of the HGM/CNx composites in a more realistic system mimicking real-world conditions was assessed using a large poly(vinyl chloride) (PVC) reactor (217 cm2; 1.5 L) with a Plexiglas window to allow for vertical irradiation. The volumetric and areal concentrations of the floating composite from the small reactor experiments were maintained at 12 mg mL−1 and 12 mg cm−2 in this larger system, with a fluid volume of 217 mL and a depth of 1 cm. The areal activity of the HGM/CNx|Pt composite declined only marginally over ten consecutive trials in both EG (Figure 5A) and pre-treated PET (Figure 5B). The composite recovery was consistent across all triplicates over ten cycles, with an average of (68.2 ± 6.0)% (Table S5, Supporting Information). The HGM/CNx|Ni2P composite demonstrated a much more significant decline in activity, reaching minimum values of (31.1 ± 12.1)% in EG and (16.0 ± 2.9)% in PET (Figure 5A,B and Figure S12, Supporting Information). The Ni content in the composite was measured before and after ten solar reforming cycles to test whether loss of Ni2P might be causing the decline in activity, but it was found that the Ni content of the floating fraction increased by (0.31 ± 0.16) wt% (Figure S13, Supporting Information). The decrease in the activity of the HGM/CNx|Ni2P catalysts over multiple cycles in basic conditions is therefore not directly related to the dissolution or loss of Ni but may be due to surface modification of the Ni2P, for example, the formation of a passivating aerial oxidation layer that may block the catalytically active sites. [38] This finding is also consistent with previous investigations of CNx|Ni2P, which determined that the Ni content remains similar before and after photocatalysis, while the activity and P content decrease significantly as Ni-P and NiOx species are replaced with Ni(OH)2. [9] Following ten reuse cycles, the composite appeared to retain its original morphology. Large masses of CNx with embedded HGMs were evident throughout the sample, and EDS qualitative analysis largely showed the same elemental distribution, with Si and O localized in the spherical HGMs and C and N distributed in the interstitial material (Figures S14-S17, Supporting Information). Post-catalysis, the composite also exhibited a strong potassium signal, likely residual K+ adsorbed to s-triazine rings in the carbon nitride. [39] The surface of the composite after ten reuse cycles showed a slightly different appearance, with the carbon nitride appearing somewhat more connected and "web-like" in comparison to the "before" images. Previous studies have reported the exfoliation and etching of carbon nitride under alkaline conditions (5 mol L−1 KOH) at mild temperatures (room temperature to 80°C). [39] Combustion microanalysis of the composites before and after the ten reuse cycles determined that the carbon nitride content of the composites was not substantially changed, with initial CNx contents of 46.6 ± 0.1% and 44 ± 0.1% for HGM/CNx|Pt and HGM/CNx|Ni2P, respectively, and final CNx contents of 44.6 ± 0.1% and 42.8 ± 0.1%. FTIR analysis of the composites before and after ten reuse cycles showed the appearance of a new shoulder peak at 970 cm−1 (Figure S18, Supporting Information) that could be attributed to the formation of amine oxide (N-O stretching) or epoxide (C-O-C stretching) groups through autooxidation of CNx. [40,41]
Although the composite showed consistent morphology and stability upon repeated use under SR conditions, mechanical abrasion under stirred conditions resulted in substantial composite breakage, with only 75-85% of the floating material recovered after 30-60 min of stirring (Table S8, Supporting Information), and in substantial morphological change, with many broken microspheres evident (Figure S19, Supporting Information). As long as the composite was not subjected to direct abrasion, it appeared stable and retained its activity and floating behavior. The areal activity of floating HGM/CNx composites under the SR conditions studied compares favorably to similar conditions previously explored for immobilized CNx on glass panels. The maximum activity of the HGM/CNx|Pt composite (217 cm2, 2 h, 1 mol L−1 KOH, 25 mg mL−1 EG, ≈5.4 mg CNx cm−2) was observed to be 2558 ± 197 μmol H2 m−2 h−1, whereas previously reported CNx|Pt panels (1 cm2, 20 h, 0.5 mol L−1 KOH, 25 mg mL−1 EG, 1.92 mg CNx cm−2) were found to exhibit 280 μmol H2 m−2 h−1 areal activity. [16] Possible reasons for the increased activity in the floating system may include the increased surface density of CNx, greater surface area availability of CNx for SR reactions, or improved light harvesting. Initially, the floating HGM/CNx|Ni2P system also outperformed its panel-immobilized counterpart, with a maximum areal activity of 370 ± 158 μmol H2 m−2 h−1 (217 cm2, 2 h, 1 mol L−1 KOH, 25 mg mL−1 PET, ≈5.4 mg CNx cm−2) compared to 52 ± 3 μmol H2 m−2 h−1 (25 cm2, 20 h, 0.5 mol L−1 KOH, 25 mg mL−1 PET, ≈1.92 mg CNx cm−2), though the activity of the floating HGM/CNx|Ni2P composite was found to decline substantially over 20 h (ten consecutive cycles), while the panel system maintained steady activity over 20 h. This may be due to the nature of the recycling process, which routinely exposes the catalyst to oxygen after every recycle to introduce fresh substrate solution, allowing a passivation layer to form, while the panels were operated continuously in an anoxic environment. Nevertheless, clear benefits in areal activity were realized in using the floating HGM/CNx system on free supports compared to CNx on static panel supports. Scaling up the SR system had clear benefits in the case of HGM/CNx|Pt, with increases of 728 ± 179 and 562 ± 152 μmol H2 m−2 h−1 over the small-scale SR system when using EG or PET, respectively. The benefits were less obvious when using HGM/CNx|Ni2P, but an increase of 180 ± 81 μmol H2 m−2 h−1 was observed using PET as a substrate. The increased areal activity may be due to a more consistent distribution of the floating catalyst across the large reactor surface. In the small reactors, surface tension effects and menisci caused floating particles to separate to the edges of the reactor, effectively decreasing the light-harvesting area of the system. Two clear trends were established from the up-scaled floating composite application: floating SR reactions can be scaled with area without a loss in specific or areal activity, or in composite recovery, and Pt is a stable co-catalyst material suitable for application in multi-use SR composites. A significant benefit of the floating composite material was its SR capability in TW.
To simulate a TW stream, mixed waste from a flotation separation process was pre-treated in 1 mol L−1 KOH for 24 h at 80°C and coarsely filtered through glass wool to produce a dark brown suspension, which was then loaded with 25 mg mL−1 EG as a substrate (Figure S20, Supporting Information). In the absence of added substrate, the pre-treated TW did not produce a detectable quantity of H2 over 24 h using HGM/CNx|Pt. Vertically irradiated SR experiments (in triplicate) using 217 mL of the TW+EG solution in the large reactor (1 cm deep, 217 cm2, 2 h, 1 mol L−1 KOH, 25 mg mL−1 EG, 12 mg mL−1 composite) demonstrated that non-floating carbon nitride (CNx|Pt) was unable to produce a detectable quantity of H2 in a 2 h period, while floating HGM/CNx|Pt produced 728.5 ± 2.7 μmol H2 m−2 h−1 under the same conditions (Figure 5C). Small-vial control experiments under well-mixed, side-illumination conditions, where the light exposure to both catalysts was equal, demonstrated that the CNx|Pt catalyst had approximately the same performance as HGM/CNx|Pt in both clear (EG only) and TW conditions (Figure S21, Supporting Information). Realistic application of HGM/CNx|Pt was further demonstrated using pre-treated PET from a plastic bottle in turbid waste, with similar results; the floating composite produced H2 at a rate of 338.1 ± 1.1 μmol H2 m−2 h−1, while the CNx|Pt catalyst produced no detectable H2 under vertical illumination. Under well-mixed, horizontally illuminated conditions, the CNx|Pt produced H2 at a higher rate than HGM/CNx|Pt, demonstrating that the material is catalytically active when suitably exposed to simulated sunlight (Figure S21, Supporting Information). Thus, the floatability of the HGM/CNx|Pt composite can be an asset in static, vertically illuminated environments with non-transparent waste streams that are more realistic for the practical application of solar reforming.

Conclusion

We have reported floating carbon nitride photocatalysts for upscaled solar reforming of model and waste substrates using both Pt and a noble-metal-free co-catalyst. HGM/CNx|Pt and HGM/CNx|Ni2P composites, in which the HGMs serve as an inert, low-density support conferring floating in water, were prepared by a simple pyrolysis procedure followed by thermal or chemical reduction to deposit the co-catalyst. Such floating composites and devices for solar fuel production enable versatile deployment scenarios, such as in turbid waste streams or on open waters, in addition to their advantages in terms of recyclability and scalability. [34] Composite preparation and application conditions were optimized for H2 production and shown to generate H2 gas when coupled with ethylene glycol, pre-treated PET, and pre-treated cellulose. The floating composite was evaluated at small (5 mL, 4.9 cm2) and large (217 mL, 217 cm2) scales under vertical solar irradiation to simulate realistic application conditions, and it was found that the activity could be maintained over up to ten cycles (2 h per cycle). Estimated continuous catalyst reuse required for scaled practical solar reforming is >1 year, [10] necessitating long-term evaluation of this composite in continuously operating systems before floating catalysts can be considered a solution to reuse, though the development of this floating platform is an important first step toward reaching such a goal.
It was found that the noble-metal co-catalyst was substantially more robust than Ni2P under the application conditions tested and that only a very small Pt loading was required to achieve H2 production rates comparable to conventional CNx|Pt (0.033 ± 0.0013% m/m Pt in HGM/CNx|Pt vs 0.91 ± 0.017% m/m Pt in CNx|Pt). Recent work has estimated the costs of the slurry catalyst materials TiO2|Pt (1% m/m Pt; $640 USD kg−1) and CNx|Ni2P (2% m/m Ni2P; $224 USD kg−1), [27] and using the same cost basis, a CNx|Pt composite containing 0.04% m/m Pt would cost $224 USD kg−1. This brings the cost of a precious-metal-containing composite in line with the cost of a Ni2P co-catalyst while offering higher solar reforming performance, and it greatly improves the cost compared to a conventional CNx|Pt composite (1% m/m Pt; $800 USD kg−1). Another significant cost of solar reforming lies in the pre-treatment process; the cost of NaOH for alkaline hydrolysis outweighs all other factors, [10] so alternative pre-treatment methods must be developed (e.g., neutral hydrothermal or saline hydrolysis) [42] or solar reforming should initially be applied to waste streams that do not require pre-treatment. [19] Overall, the findings point toward the CNx composites in this work as a promising, scalable material for practical solar reforming that addresses the scaling challenges of reuse and of light absorption by the waste solution, by using small quantities of noble metals as co-catalysts on a reusable floating platform.

… pre-treated cellulose, 1 h) compared to an external standard (2% CH4 in N2) and found to be negligible (59 nmol L−1 and 2.7 μmol L−1, respectively) compared to the concentration of the internal standard (816 μmol L−1) used in other tests.

Treatment of Data: All gas chromatography measurements were performed in triplicate unless otherwise stated and are presented as the unweighted mean of the three measurements ± standard deviation (σ). SR activity was calculated from the molar concentration of H2 present in the headspace gas, normalized by time and by reactor area or catalyst mass, and is reported as "areal activity" (μmol H2 m−2 h−1) or "specific activity" (μmol H2 gCNx−1 h−1). Percent activity (%), used for comparisons within recyclability studies, is given by Ai/A0, where Ai is the SR activity of run i and A0 is the activity of the first run. σ was calculated as σ = √(Σi (xi − x̄)² / (n − 1)), where n is the number of replicates, xi is the value of a single measurement, and x̄ is the unweighted mean of the measurements. The standard deviation for bar charts is presented as a black error bar with flat caps, shown as the calculated standard deviation or 5% of the value of the bar, whichever is larger, while for line graphs it is represented as a shaded area surrounding the line and data points.

Supporting Information: Supporting Information is available from the Wiley Online Library or from the author.
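A minimal Python sketch of the data treatment just described, reducing triplicate GC measurements to a mean ± sample standard deviation and converting to areal and percent activities; the H2 amounts below are illustrative placeholders, and the (n − 1) denominator follows the reconstruction of the σ formula above:

import math

def mean_sd(values):
    """Unweighted mean and sample standard deviation (n - 1 denominator)."""
    n = len(values)
    m = sum(values) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in values) / (n - 1))
    return m, sd

def areal_activity(n_h2_umol, area_m2, hours):
    """Areal SR activity in umol_H2 m^-2 h^-1."""
    return n_h2_umol / (area_m2 * hours)

triplicate = [1.10, 1.25, 1.18]            # hypothetical umol H2 per 2 h run
m, sd = mean_sd(triplicate)

a0 = areal_activity(m, 217e-4, 2.0)        # first cycle, 217 cm^2 reactor
ai = areal_activity(0.85, 217e-4, 2.0)     # hypothetical later cycle
print(f"areal activity: {a0:.0f} +/- {areal_activity(sd, 217e-4, 2.0):.0f} umol m^-2 h^-1")
print(f"percent activity A_i/A_0: {100 * ai / a0:.0f}%")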
Inflation Based on the Tsallis Entropy

We study the inflationary scenario in the Tsallis entropy-based cosmology. The Friedmann equations in this setup can be derived by using the first law of thermodynamics. To derive the relations of the power spectra of the scalar and tensor perturbations in this setup, we reconstruct an $f(R)$ gravity model which is thermodynamically equivalent to our model in the slow-roll approximation. In this way, we find the inflationary observables, including the scalar spectral index and the tensor-to-scalar ratio, in our scenario. Then, we investigate two different potentials in our scenario, including the quadratic potential and the potential associated with natural inflation, in which the inflaton is an axion or a pseudo-Nambu-Goldstone boson. We examine their observational viability in light of the Planck 2018 CMB data. We show that although the results of these potentials are in tension with the observations in the standard inflationary setting, their consistency with the observations can be significantly improved within the setup of the Tsallis entropy-based inflation. Moreover, we place constraints on the parameters of the considered inflationary models by using the current observational data.

I. INTRODUCTION

The first inflationary model was proposed by Starobinsky [1] in 1980, and it was based on the addition of the R² term to the Einstein-Hilbert action, with the motivation to include semi-classical quantum effects in the gravity theory. It is interesting to point out that although this model is the first inflationary model, it is in very good agreement with the current observational data. Then, in 1981, Sato [2,3] suggested a scenario in which the Universe has undergone a rapid acceleration in the early stages of its evolution and afterward turned into a fireball with a very high temperature. Subsequently, in the same year, Guth [4] showed that by including an accelerating phase before the radiation-dominated era, one can resolve the hot Big Bang cosmology problems, such as the flatness problem, the horizon problem, and the magnetic monopole problem. Guth's model [4] is known as the old inflation, and it is based on a scalar field that goes from a false vacuum towards a true vacuum through a first-order phase transition in the form of a quantum-tunneling process. The old inflation [4] suffers from some fundamental problems, and to resolve them, other inflationary models were suggested later [5-9]. For a nice review of the history of the first 30+ years of inflation, see [10].
Along with the evolution of the inflationary models, the theory of cosmological perturbations was developed too [11-16]. The calculation of the primordial tensor perturbations in the early de Sitter stage was first elaborated by Starobinsky [11] in 1979, in terms of the related quantity which is the spectrum of the primordial GW background after the Hubble radius crossing at the radiation-dominated stage. His original motivation was physically sound, with the aim to investigate the initial state of the Universe. Then, Mukhanov and Chibisov [12] calculated the spectrum of the perturbations in the range of observable scales in the context of the Starobinsky R² inflation [1]. In addition, the quantitatively correct expression for the GW spectrum produced in the Starobinsky model [1] was first presented in [16]. The quantum fluctuations of the scalar field during inflation lead to the generation of the perturbations whose growth can seed the Large-Scale Structure (LSS) formation and the anisotropies observed in the Cosmic Microwave Background (CMB) radiation [11-16].

Thus, from the observations corresponding to the LSS formation and CMB anisotropies, we can obtain valuable information about the physics of the early Universe. Important observational results about inflation have been presented by the Planck collaboration [17], which are obtained from measurements of the CMB anisotropies in both temperature and polarization. Using these observational results, one can discriminate between different inflationary models.

The simplest inflationary model is based on a single scalar field minimally coupled to gravity [18,19]. The scalar field responsible for inflation is known as the inflaton. If the potential energy of the inflaton dominates over its kinetic energy, then the inflaton rolls down its potential slowly. In the chaotic inflationary scenarios [7], the slow-roll motion of the inflaton is provided by the Hubble friction term in the equation of motion of the scalar field, which is predominant in the early stages of inflation and is then suppressed as inflation goes towards its end. In the new inflation models [5], however, the slow-roll phase of inflation is provided by a plateau-like potential for the inflaton field.
Inflation has occurred in the GUT energy scale, which is about two orders of magnitude less than the Planck energy scale. Since inflation has occurred in the regime of high-energy physics, it is expected that quantum gravitational effects had a decisive role in the dynamics of the early Universe. In those energy scales, the gravity theory may be modified due to the quantum gravitational effects. One way to determine the corrections to the gravity theory is the use of the gravity-thermodynamics conjecture, implying that there is a deep connection between gravity and thermodynamics [20-28]. This conjecture implies that using the thermodynamic laws for the Universe as a thermodynamical system, one can derive the gravitational equations governing the evolution of the Universe. In particular, the Friedmann equation can be derived from the first law of thermodynamics [29-34]. If the entropy is assumed to be the Bekenstein-Hawking entropy [35], one can derive the standard Friedmann equation. If we consider some modifications to the entropy of the system, then the gravitational equations will be modified accordingly. In the early Universe, the entropy is expected to be different from the standard Bekenstein-Hawking entropy [35] due to the quantum gravitational effects that appear in the regime of high-energy physics.

One form that can be considered for the entropy of the Universe in its primordial stages is the Tsallis entropy $S_h \propto A^{\beta}$ [36], in which A is the horizon area and β is a constant parameter known as the Tsallis parameter. This entropy is a generalization of the Boltzmann-Gibbs entropy [36,37], and it has been suggested to solve a thermodynamic puzzle. The Boltzmann-Gibbs statistics is not capable of describing systems with divergent partition functions, such as gravitational systems, and requires a non-additive generalization of the entropy definition [38-40]. Accordingly, Tsallis and Cirto [36,37] introduced an entropy expression leading to non-extensive statistics. The Tsallis entropy has attracted a high level of research interest over the years, and so far remarkable results have been found based on this entropy in a lot of complex systems, such as self-gravitating stellar systems [41,42], black holes [36,37], background radiation [43], neutrinos [44,45], holographic dark energy [46-50] and dark matter [51], thermodynamic gravity [33,52,53], low-dimensional dissipative systems [39], and polymer chains [54]. In the special case where the Tsallis parameter is taken as β = 1 + Δ/2 with 0 ≤ Δ ≤ 1, the Tsallis entropy reduces to the Barrow entropy [55].

In this paper, we investigate the implications of the Tsallis entropy for the inflationary phase of the early Universe. To provide the accelerated expansion of the Universe in the inflationary phase, we assume the matter-energy content of the Universe follows the form of a canonical scalar field that plays the role of the inflaton. We take the inflaton potential to be in the form of a simple quadratic potential $V(\phi) = m^2\phi^2/2$, which leads to a chaotic inflationary scenario [7]. In the framework of standard inflation, this potential is not favored in light of the current CMB data provided by the Planck 2018 collaboration [17]. We show that the results of this potential for the inflationary observables can be improved significantly in the context of the Tsallis entropy-based inflation.
Another elegant inflationary scenario that provides a sensible mechanism to generate a flat potential is natural inflation [68]. In this class of models, the scalar field φ enjoys a shift symmetry φ → φ + const., which is slightly broken, explicitly or due to nonperturbative quantum effects, to a discrete symmetry φ → φ + 2π [68]. This feature gives rise to a periodic potential, as appropriate for inflation [68,69]. Note that the presence of this symmetry protects the potential from radiative corrections. The scalar field with a flat potential originating from the shift symmetry is known as an axion. In this regard, the inflaton in natural inflation is an axion or a pseudo-Nambu-Goldstone boson. The original natural inflation is described by a cosine-type periodic potential in the framework of a standard inflationary scenario based on the Einstein gravity, where the entropy of the horizon obeys the Bekenstein-Hawking area law [35]. In this setup, natural inflation is not very favored by the latest observations of the Planck 2018 collaboration [17], and its results can satisfy only the 95% CL constraint of the Planck 2018 data [17]. This point motivates us to study natural inflation in the setup of the Tsallis inflation and compare its predictions with the Planck 2018 constraints [17].

Although the Friedmann equations of the Friedmann-Robertson-Walker (FRW) Universe in the framework based on the Tsallis entropy have already been derived in [33], the study of inflation in this context also requires the equations of the power spectra of the primordial scalar and tensor perturbations. To this aim, we need the action of the gravity theory. In the absence of a unique action for our Tsallis inflation model, we construct an f(R) gravity model which is thermodynamically equivalent to the Tsallis gravity in the slow-roll regime of inflation. The modified f(R) gravity is a conceivable generalization of general relativity in which f(R) is an arbitrary function of the Ricci scalar R. Over the years, the f(R) gravity models have been extensively studied in the literature (see, e.g., [70-80]). One of the most important motivations that led us to this choice is that it is possible in the f(R) gravity to construct a model whose entropy is proportional to powers of the horizon area, like the Tsallis entropy. Moreover, the f(R) gravity models have phenomenological effective features that can describe inflation in the regime of high-energy physics. On one side, f(R) actions are general enough to cover some basic features of higher-order gravity; on the other side, they are sufficiently simple to be easy to work with [71]. Besides, there is one other important reason that convinces us that the f(R) theories of gravity are good candidates to help us evaluate the inflationary observables based on the Tsallis entropy. These models can avoid serious problems such as negative energies and the related instabilities that are called the Ostrogradski instabilities [71,81,82]. This feature makes the f(R) theories unique compared to other higher-order gravity theories.
The paper is structured as follows. In Sec. II, we briefly study the thermodynamics of the Tsallis cosmology and the background equations derived from the first law of thermodynamics. In Sec. III, we review the inflationary dynamics and spectra of primordial perturbations in the f(R) gravity. The thermodynamic behavior of the field equations in the f(R) gravity will be discussed in Sec. IV. Then, in Sec. V, we reconstruct an f(R) model which is thermodynamically equivalent to the Tsallis model. Using this equivalence, we derive the equations of the inflationary observables in our Tsallis inflation scenario. In Sec. VI, we examine the observational compatibility of the quadratic and natural potentials with the Planck 2018 data in the framework of the Tsallis inflation. Finally, we summarize our concluding remarks in Sec. VII.

II. THERMODYNAMICS OF THE TSALLIS COSMOLOGY

In [33], it has been shown that by starting from the first law of equilibrium thermodynamics, $dE = T_h\,dS_h + W\,dV$, at the apparent horizon of the FRW Universe, and taking the entropy associated with the apparent horizon in the form of the Tsallis entropy [36], one can derive the modified Friedmann equations. Here, E is the total energy content of the Universe, T_h is the temperature of the apparent horizon, W is the work density, and V is the volume inside the apparent horizon. For these quantities, we have [33,83,84]

$E = \rho V$, (1)
$T_h = \frac{1}{2\pi \tilde{r}_A}$, (2)
$W = \frac{1}{2}(\rho - p)$, (3)
$V = \frac{4}{3}\pi \tilde{r}_A^3$, (4)

where ρ and p represent, respectively, the energy density and pressure of all matter components in the Universe, and they satisfy the continuity equation

$\dot{\rho} + 3H(\rho + p) = 0$. (5)

In the above equations, H ≡ ȧ/a denotes the Hubble parameter, $\tilde{r}_A$ indicates the radius of the apparent horizon, and the overdot indicates the derivative with respect to the cosmic time t. The horizon entropy is denoted by S_h, and it will be taken in the form of the Tsallis entropy, which is a non-extensive generalization of the Boltzmann-Gibbs entropy [36-38],

$S_h = \gamma A^{\beta}$. (6)

Here, $A = 4\pi \tilde{r}_A^2$ is the area of the apparent horizon, and β is a real parameter known as the Tsallis parameter that measures the degree of non-extensivity [36]. In addition, γ is an unknown constant, and several definitions have been presented for it in the literature for convenience [33,85-88]. In our investigation, however, this parameter will be treated as an unknown parameter. It is clear that for β = 1 and γ = 1/(4G), where G is the Newton gravitational constant, the Tsallis entropy (6) reduces to the Bekenstein-Hawking entropy [35].

On the flat FRW background, the modified Friedmann equations based on the Tsallis entropy (6) can be derived as follows [33]:

$H^{2(2-\beta)} = \frac{8\pi G_{\rm eff}}{3}\,\rho$, (7)
$(2-\beta)\,H^{2(1-\beta)}\,\dot{H} = -4\pi G_{\rm eff}\,(\rho + p)$, (8)

where the effective gravitational constant is defined as

$G_{\rm eff} \equiv \frac{2-\beta}{4\beta\gamma}\,(4\pi)^{1-\beta}$. (9)

In this paper, we use the unit system in which c = ħ = κ_B = 1, where c is the speed of light, ħ is the reduced Planck constant, and κ_B is the Boltzmann constant. We define $m_P \equiv 1/\sqrt{G}$, where m_P is the Planck mass, with a reduced value $M_P = m_P/\sqrt{8\pi} = (8\pi G)^{-1/2}$, which throughout this paper we take equal to unity, M_P = 1.
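As a quick consistency check on Eqs. (7)-(9), note that differentiating the first Friedmann equation (7) with respect to cosmic time and using the continuity equation (5) reproduces the second Friedmann equation (8):

$\frac{d}{dt}\,H^{2(2-\beta)} = 2(2-\beta)\,H^{3-2\beta}\,\dot{H} = \frac{8\pi G_{\rm eff}}{3}\,\dot{\rho} = -8\pi G_{\rm eff}\,H(\rho+p) \;\Longrightarrow\; (2-\beta)\,H^{2(1-\beta)}\,\dot{H} = -4\pi G_{\rm eff}\,(\rho+p)$.

Moreover, for β = 1 and γ = 1/(4G), Eq. (9) gives $G_{\rm eff} = G$, so that Eqs. (7) and (8) reduce to the standard flat-FRW relations $H^2 = \frac{8\pi G}{3}\rho$ and $\dot{H} = -4\pi G(\rho + p)$.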
From Eqs. (7), (8), and (9), one can easily show that the standard Friedmann equations based on the Einstein gravity, in which the entropy associated with the horizon follows the form of the Bekenstein-Hawking entropy [35], are recovered for β = 1 and γ = 1/(4G), as expected. It is worth mentioning that although the author of [33] has defined γ as γ ≡ (2 − β)(4π)^{1−β}/(4βG), we do not follow this convention in the present work. For a spatially flat Universe, we have $\tilde{r}_A = 1/H$, and so one can rewrite the area of the apparent horizon A in terms of the Hubble parameter H as

$A = \frac{4\pi}{H^2}$. (10)

We assume the matter-energy content of the Universe to be a scalar field φ in the form of a perfect fluid with the energy-momentum tensor $T^{\mu}_{(\phi)\,\nu} = {\rm diag}(-\rho_\phi, p_\phi, p_\phi, p_\phi)$. Here, ρ_φ and p_φ denote the energy density and pressure of the scalar field, respectively, and they are given by

$\rho_\phi = \frac{1}{2}\dot{\phi}^2 + V(\phi)$, (11)
$p_\phi = \frac{1}{2}\dot{\phi}^2 - V(\phi)$. (12)

In these equations, V(φ) is the potential energy of the scalar field. The energy density ρ_φ and pressure p_φ of the scalar field fulfill the continuity equation

$\dot{\rho}_\phi + 3H(\rho_\phi + p_\phi) = 0$. (13)

From this equation, together with Eqs. (11) and (12), the equation of motion of the scalar field is obtained as

$\ddot{\phi} + 3H\dot{\phi} + V_{,\phi} = 0$, (14)

where (,φ) indicates the partial derivative with respect to the scalar field φ. As we see, Eq. (14) is the same as the one that is valid in the standard scenario based on the Einstein gravity.

To study the inflationary epoch in the Tsallis entropy-based scenario, we need to derive the fundamental relations governing the theory of cosmological perturbations. For this purpose, we need the action of the model. But we do not know the action of the model, and to overcome this problem, we try to reconstruct an f(R) gravity model that is equivalent to our model from the thermodynamic point of view. In the f(R) gravity, the horizon entropy is given by [72,89]

$S_h = \frac{A F}{4G}$, (15)

where F ≡ df(R)/dR. In the following, we review the basic formulas governing the theory of cosmological perturbations in the f(R) gravity, as well as the thermodynamic behavior of the field equations in this theory. Then we apply the obtained results to our Tsallis entropy-based model and find the necessary relations to study inflation.

III. INFLATIONARY DYNAMICS AND SPECTRA OF PRIMORDIAL PERTURBATIONS IN THE f(R) GRAVITY

The f(R) gravity is described by the following action [72]:

$S = \int d^4x\,\sqrt{-g}\left[\frac{f(R)}{2} + X - V(\phi)\right]$, (16)

where g is the determinant of the metric g_{μν}, f(R) is an arbitrary function of the Ricci scalar R, and $X \equiv -\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi$ is the canonical kinetic term. For the flat FRW metric, the Ricci scalar R is given by [72]

$R = 6\left(2H^2 + \dot{H}\right)$. (17)

For the f(R) gravity model with action (16), the Friedmann equations turn into [72]

$3FH^2 = \frac{1}{2}\dot{\phi}^2 + V(\phi) + \frac{FR - f}{2} - 3H\dot{F}$, (18)
$-2F\dot{H} = \dot{\phi}^2 + \ddot{F} - H\dot{F}$. (19)

Following [70,72], we introduce the slow-roll parameters as

$\varepsilon_1 \equiv -\frac{\dot{H}}{H^2}, \quad \varepsilon_2 \equiv \frac{\ddot{\phi}}{H\dot{\phi}}, \quad \varepsilon_3 \equiv \frac{\dot{F}}{2HF}, \quad \varepsilon_4 \equiv \frac{\dot{E}}{2HE}$, (20)

where

$E \equiv F + \frac{3\dot{F}^2}{2\dot{\phi}^2}$. (21)

In the slow-roll regime, the quantities |ε_i| (i = 1, 2, 3, 4) are much smaller than unity. Under the slow-roll limit, the Ricci scalar R in Eq. (17) reduces to

$R \simeq 12H^2$. (22)

The spectrum of the curvature perturbations generated during inflation, in the slow-roll limit, can be estimated as [72]

$\mathcal{P}_s = \frac{1}{Q_s}\left(\frac{H}{2\pi}\right)^2$, (23)

where

$Q_s \equiv \frac{E\,\dot{\phi}^2}{F H^2 \left(1 + \varepsilon_3\right)^2}$. (24)

It is worth mentioning that P_s is computed at the time of horizon exit, at which k = aH, where k is the comoving wavenumber. The observational value of the amplitude of scalar perturbations at the CMB pivot scale k* = 0.05 Mpc⁻¹ has been constrained by the Planck 2018 CMB observations to be P_s(k*) ≃ 2.1 × 10⁻⁹ [17].

The scalar spectral index n_s during the slow-roll regime, in the framework of the f(R) gravity, is given by [72]

$n_s = 1 - 4\varepsilon_1 - 2\varepsilon_2 + 2\varepsilon_3 - 2\varepsilon_4$. (25)

The observational constraint from the Planck data on the scalar spectral index at this pivot scale is n_s = 0.9649 ± 0.0042 (68% CL, Planck 2018 TTTEEE+lowℓ+lowE) [17].
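As a worked check of Eq. (25), consider the case f(R) = R, for which F = 1 and ε₃ = ε₄ = 0. Using the standard slow-roll relations ε₁ ≃ ε_V and ε₂ ≃ ε_V − η_V, where $\varepsilon_V \equiv \frac{1}{2}\left(V_{,\phi}/V\right)^2$ and $\eta_V \equiv V_{,\phi\phi}/V$ in M_P = 1 units, we obtain

$n_s - 1 = -4\varepsilon_1 - 2\varepsilon_2 \simeq -6\varepsilon_V + 2\eta_V$,

which is the familiar single-field result of standard inflation.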
The tensor power spectrum in the framework of the f(R) gravity is given by [72]

$\mathcal{P}_t = \frac{8}{F}\left(\frac{H}{2\pi}\right)^2$. (26)

Also, the tensor spectral index n_t in this setup is obtained as

$n_t = -2\left(\varepsilon_1 + \varepsilon_3\right)$. (27)

This parameter determines the scale dependence of the tensor power spectrum. Currently, there is no precise measurement of this quantity, and we hope that future observations can provide some constraints on this observable.

Using Eqs. (23) and (26), one can find the tensor-to-scalar ratio in the f(R) gravity setting, in the slow-roll regime, as [72]

$r \equiv \frac{\mathcal{P}_t}{\mathcal{P}_s} = \frac{8\,Q_s}{F}$. (28)

The Planck 2018 data set an upper bound on the tensor-to-scalar ratio of r < 0.0522 at the CMB pivot scale k* = 0.05 Mpc⁻¹ (68% CL, Planck 2018 TTTEEE+lowℓ+lowE) [17]. The most recent upper limit on this parameter is r_{0.01} < 0.028 at 95% CL, obtained at the CMB pivot scale k = 0.01 Mpc⁻¹ using 10 datasets from the BICEP/Keck Array 2015 and 2018, Planck releases 3 and 4, and the LIGO-Virgo-KAGRA Collaboration [92].

In the case of f(R) = R, using Eqs. (24), (26), (27), and (28), we easily find that these results reduce to the well-known relations of standard inflation, namely $\mathcal{P}_s = H^4/(4\pi^2\dot{\phi}^2)$, $\mathcal{P}_t = 8\left(H/2\pi\right)^2$, $n_t = -2\varepsilon_1$, and $r = 16\varepsilon_1$.

IV. THERMODYNAMIC BEHAVIOR OF THE FIELD EQUATIONS IN THE f(R) GRAVITY

Let us discuss the relation between the first law of thermodynamics and the field equations in the f(R) gravity. In [72,89], it has been shown that the first law of equilibrium thermodynamics, dE = T_h dS_h + W dV, does not hold at the apparent horizon of the FRW Universe in the f(R) gravity. Consequently, to derive the Friedmann equations, we should apply non-equilibrium thermodynamics and write the first law of thermodynamics as

$dE = T_h\left(dS_h + d\tilde{S}\right) + W\,dV$. (29)

In this equation, S_h is the entropy associated with the apparent horizon, and it is still given by Eq. (15). Furthermore, $\tilde{S}$ denotes the non-equilibrium entropy and involves the non-equilibrium thermodynamic effects of the f(R) gravity [72,89].

Starting from the non-equilibrium relation (29), we proceed as follows. Taking the differential of Eq. (1), and then using Eq. (4) and also the relation $dV = 4\pi \tilde{r}_A^2\,d\tilde{r}_A$, we obtain

$dE = 4\pi \tilde{r}_A^2\,\rho\,d\tilde{r}_A + \frac{4}{3}\pi \tilde{r}_A^3\,\dot{\rho}\,dt$. (30)

With the help of Eq. (5), we get

$dE = 4\pi \tilde{r}_A^2\,\rho\,d\tilde{r}_A - 4\pi \tilde{r}_A^3\,H(\rho + p)\,dt$. (31)

It is assumed that the entropy of the horizon S_h is in the form of Eq. (15). Differentiating the entropy (15), it follows that

$dS_h = \frac{1}{4G}\left(F\,dA + A\,dF\right) = \frac{1}{G}\left(2\pi \tilde{r}_A F\,d\tilde{r}_A + \pi \tilde{r}_A^2\,\dot{F}\,dt\right)$, (32)

where we have used $A = 4\pi \tilde{r}_A^2$ in deriving the above equation. Now, using Eqs. (2), (3), (4), (31), and (32) in the right side of Eq. (29), we find the expression for $d\tilde{S}$ given in Eq. (33). Supposing the matter-energy content of the Universe to be a scalar field φ with energy density ρ = ρ_φ and pressure p = p_φ, we use Eqs. (11) and (12) and can then rewrite Eq. (33) in the form of Eq. (34). Note that in deriving Eq. (34), we have used the relation $\tilde{r}_A = 1/H$. Finally, with the help of the second and third relations in Eq. (20), we arrive at Eq. (35). From Eq. (29), it is clear that in the absence of the term $d\tilde{S}/dS_h$, the first law of equilibrium thermodynamics dE = T_h dS_h + W dV holds on the apparent horizon. Therefore, Eq. (35) is an important relation in our examination. In the following, we show that in the framework based on the Tsallis entropy and under the slow-roll approximation, the term $d\tilde{S}/dS_h$ vanishes, and hence $d\tilde{S} = 0$. This point allows us to use the relations obtained in the f(R) gravity to examine the inflationary models in the Tsallis entropy-based setting.

V. INFLATIONARY DYNAMICS IN THE TSALLIS ENTROPY-BASED MODEL

In this section, we assume that the Tsallis entropy-based model and the f(R) gravity are equivalent thermodynamically in the slow-roll regime.
With the help of Eq. (22), one can easily express the horizon area A in Eq. (10) in terms of the Ricci scalar R as

$A = \frac{48\pi}{R}$. (36)

Since we have supposed that the Tsallis entropy-based model and the f(R) gravity are equivalent in the slow-roll limit, we set Eqs. (6) and (15) equal to each other. Next, by using Eq. (36), we reach the following differential equation:

$\frac{df(R)}{dR} = 4G\gamma\,(48\pi)^{\beta-1}\,R^{1-\beta}$. (37)

Since F(R) ≡ df(R)/dR, solving this differential equation analytically, the function f(R) is obtained in the following form:

$f(R) = \frac{4^{-1+2\beta}\,(3\pi)^{-1+\beta}\,\gamma\,G}{2-\beta}\,R^{\,2-\beta}$. (38)

As we see, f(R) is a power-law function of R, f(R) = μR^n, where μ ≡ (4^{−1+2β}(3π)^{−1+β}γG)/(2−β) and n ≡ 2 − β. For β = 1 and γ = 1/(4G), we find f(R) = R, and then Eq. (15) reduces to the Bekenstein-Hawking area law of entropy [35] in the Einstein gravity.

Here, it is worthwhile to point out that the R^n Lagrangian was originally regarded in the context of higher-derivative theories [93,94], and then applied to inflation [95-97], providing a simple and practical generalization of the Starobinsky R² inflation [1]. In particular, in [97], this Lagrangian with n ≈ 2 has been investigated to establish a way to measure a deviation from the R² inflation [1]. However, our methodology in the present paper differs from the approach of [97] in several aspects, which are as follows. The action in our model is completely different from the action of [97] because, in our model, the gravitational part of the action consists of only the R^n term, while the action of [97] contains the term R + R^n, which also includes the standard Einstein-Hilbert term. Furthermore, in our model, the contribution of a scalar field has been included in the action beside the R^n term, but such a contribution is absent in the action of [97]. In the analysis of [97], a conformal transformation from the Jordan frame to the Einstein frame has been performed, and the calculations of the inflationary observables are accomplished in the Einstein frame. In contrast, in our analysis, all calculations are performed in the Jordan frame. Finally, the form of the potentials that we consider in our work differs from the potentials regarded in [97] for the Einstein-frame scalar field.

Using Eq. (22), one can easily rewrite Eq. (38) in the following form:

$f = \mu\left(12H^2\right)^{2-\beta}$. (39)

From the slow-roll parameters (20), we find that the field equations (7) and (14) in the slow-roll limit lead to

$H^{2(2-\beta)} \simeq \frac{8\pi G_{\rm eff}}{3}\,V(\phi), \qquad 3H\dot{\phi} \simeq -V_{,\phi}$. (40)

It can be shown that in the slow-roll approximation, the same equation as Eq. (40) can also be derived by using the first Friedmann equation (18) in the f(R) gravity. This arises from the fact that in the slow-roll regime, the non-equilibrium entropy can be neglected in front of the equilibrium entropy, as we will show explicitly at the end of this section. Applying Eq. (40), we can rewrite the function f in Eq. (39) in terms of the scalar field φ as

$f = 12^{\,2-\beta}\,\mu\,\frac{8\pi G_{\rm eff}}{3}\,V(\phi)$. (41)

Taking the time derivative of Eq. (40) and then applying Eq. (41), we can rewrite ε₁ in Eq. (20) in terms of the scalar field φ (Eq. (43)). Since d/dt = φ̇ d/dφ and d/dR = (1/R_{,φ}) d/dφ, we can also rewrite the slow-roll parameters ε₂, ε₃, and ε₄ in Eq. (20) in terms of φ (Eqs. (44)-(46)). In deriving Eqs. (44), (45), and (46), we have also used Eqs. (21) and (41). Besides, applying Eqs. (21), (24), and (41), the scalar power spectrum P_s in Eq. (23) takes the form given in Eq. (47). Substituting Eqs. (43), (44), (45), and (46) into Eq. (25), the scalar spectral index n_s reads as given in Eq. (48).
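The reconstruction in Eq. (38) can be verified symbolically. The following minimal SymPy sketch (illustrative only, not part of the original analysis) matches the Tsallis entropy (6) to the f(R) horizon entropy (15) under the slow-roll relation (36) and integrates F = df/dR:

import sympy as sp

R, beta, gamma, G = sp.symbols('R beta gamma G', positive=True)

# Slow-roll relations from the text: R ~ 12 H^2 (Eq. (22)) and A = 4*pi/H^2
# (Eq. (10)) give A = 48*pi/R (Eq. (36)).
# Matching the Tsallis entropy (6), S_h = gamma*A**beta, to the f(R) horizon
# entropy (15), S_h = A*F/(4*G), fixes F = df/dR as in Eq. (37):
F = 4 * G * gamma * (48 * sp.pi)**(beta - 1) * R**(1 - beta)

# Integrating F over R recovers f(R); SymPy returns a Piecewise whose generic
# (beta != 2) branch matches mu*R**(2 - beta) with
# mu = 4**(2*beta - 1) * (3*pi)**(beta - 1) * gamma * G / (2 - beta).
f = sp.integrate(F, R)
print(sp.simplify(f))

# Einstein-gravity limit: beta = 1, gamma = 1/(4G) should give f(R) = R.
print(sp.simplify(f.subs([(beta, 1), (gamma, 1 / (4 * G))])))

The generic branch of the returned antiderivative reproduces μR^{2−β}, and the substitution β = 1, γ = 1/(4G) returns f(R) = R, as stated below Eq. (38).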
We can also rewrite the tensor-to-scalar ratio r in Eq. (28) in terms of the scalar field φ (Eq. (49)), where we have used Eqs. (21), (24), and (41), together with the relations d/dt = φ̇ d/dφ and d/dR = (1/R_{,φ}) d/dφ.

It is convenient to evaluate the inflationary observables in terms of the so-called e-fold number N, which measures the growth of the scale factor a during inflation. It is defined as

$N \equiv \ln\left(\frac{a_e}{a}\right) = \int_t^{t_e} H\,dt$, (50)

where the subscript "e" refers to the end of inflation. The definition (50) leads to

$\frac{d}{dN} = -\frac{1}{H}\frac{d}{dt} = -\frac{\dot{\phi}}{H}\frac{d}{d\phi}$. (51)

Note that the anisotropies observed in the CMB exit the Hubble horizon around N* ≈ 50-60 e-folds before the end of inflation [99,100]. The precise value of the horizon-exit e-fold number N* depends on the energy scale of inflation and also on the details of the reheating process after inflation [99,100]. In our model, like in most conventional inflationary models, the features of the reheating mechanism after inflation are unknown to us, and therefore it is not possible to determine the precise value of N*.

Using the last equality in Eq. (51) and also applying Eqs. (40) and (41), we reach the differential equation given in Eq. (52). One can solve Eq. (52) to obtain the scalar field φ as a function of the e-fold number N in the slow-roll approximation. In this way, we find the inflationary observables in terms of N.

Using Eqs. (40) and (41), we can rewrite Eq. (35) in the slow-roll limit as given in Eq. (53). Using Eq. (22), we obtain R_{,φ} = 24HH_{,φ}, and since F = df/dR = f_{,φ}/R_{,φ}, F can be expressed in terms of φ. Substituting this relation into Eq. (53), we get Eq. (54). Now, substituting Eqs. (43) and (45) into Eq. (54), and then using Eqs. (9), (40), and (42), we will have

$d\tilde{S} \simeq 0$, (55)

which means that in the slow-roll Tsallis entropy-based inflation, the first law of equilibrium thermodynamics dE = T_h dS_h + W dV holds on the apparent horizon. This point makes us sure that we can apply the results derived in the f(R) gravity to find the inflationary observables in the Tsallis entropy-based cosmology.

VI. OBSERVATIONAL CONSTRAINTS

In this section, we apply the results obtained in the previous section to investigate the observational consistency of two different inflationary potentials in the framework of the Tsallis entropy-based inflation. These are the quadratic and natural potentials, which are not in good agreement with the current CMB data in the setting of standard inflation.

A. Quadratic potential

Let us continue studying the inflationary scenario in the Tsallis entropy-based setting by considering the quadratic potential [7]

$V(\phi) = \frac{1}{2}m^2\phi^2$, (56)

where m is the inflaton mass. Applying the potential (56) and also using Eqs. (9), (22), (40), and (42), the slow-roll parameters in Eqs. (43), (44), (45), and (46) take the forms given in Eqs. (57)-(59). The parameter Θ in these equations is defined in Eq. (60). It is easy to show that for β = 1, we have Θ = γ. Therefore, for β = 1 and Θ = 1/(4G), we find the same relations as in the Einstein gravity.

With the help of Eq. (69), we can rewrite Eq. (64) in terms of the e-fold number N and the parameters β, Θ, and m (Eq. (70)). Applying Eq. (69) in Eq. (65), we obtain Eq. (71). Now, we can substitute Eq. (69) into Eqs. (57), (58), and (59) and obtain the slow-roll parameters in the forms of Eqs. (72)-(74). Using Eq. (69) in Eq. (61), we find the scalar power spectrum as given in Eq. (75). As we see from Eq. (75), the power spectrum of the curvature perturbations is a function of the e-fold number N and the three free parameters β, Θ, and m. One can use Eq. (75) and then impose the CMB normalization at the observable scale to find a constraint on the parameter m.

Applying Eq. (69), the scalar spectral index (62) and the tensor-to-scalar ratio (63) take the forms given in Eqs. (76) and (77).
In the limit of standard inflation where β = 1, Eqs. (76) and (77) reduce to n_s = 1 − 4/(2N + 1) and r = 16/(2N + 1), respectively. These are the same equations that we find in the setup of the standard inflation. Let us now consider the limits where β ≪ 1 and N ≫ 1. From Eqs. (76) and (77), the leading contributions to n_s and r become

$n_s \simeq 1 - \frac{2}{N}$, (78)
$r \simeq \frac{12}{N^2}$. (79)

This means that in the regime β ≪ 1 and N ≫ 1, our theory reduces to the R² inflationary model proposed by Starobinsky [1]. In this regime, the power spectrum (75) reduces to the form given in Eq. (80). Using Eq. (80) and then imposing the CMB normalization at the pivot scale k* = 0.05 Mpc⁻¹ [17], we find the constraints on m given in Eqs. (81) and (82); in particular,

$m \simeq 6.614 \times 10^{-7}\,\Theta$ for N* = 60. (82)

As we see, the value of the parameter m depends on what value the parameter Θ takes. With the help of Eqs. (76) and (77), we can plot the r − n_s diagram and compare the prediction of our model with the Planck 2018 CMB data [17]. Fig. 1 shows the prediction of our model in the r − n_s plane for two typical values of N* and varying β in the range 0 < β < 2. The dashed and solid black curves illustrate the results of the model for N* = 50 and N* = 60, respectively. Besides, the red solid line between the dashed and solid black curves shows the prediction of the potential in the standard inflation, which corresponds to β = 1 in our model, in the range 50 ≤ N* ≤ 60. Moreover, the prediction of the Starobinsky R² inflationary model [1] is shown by the orange solid line with 50 ≤ N* ≤ 60.

In Fig. 1, the parameter β has been taken as a varying parameter in the range 0 < β < 2. It should be noted that each value of β in this range is related to a special case of our power-law f(R) scenario, and in the case with β = 1 and γ = 1/(4G), our f(R) model reduces to Einstein general relativity (GR). This does not mean that a transition from f(R) to GR has occurred during inflation in our scenario. In other words, to investigate each case of our scenario, we should fix the value of β at the first step; it is not the case that this parameter varies during inflation and causes a transition from f(R) to GR. Since we cannot determine the parameter γ in our investigation, we cannot determine the time at which the f(R) gravity in Eq. (38) transits to GR.

Fig. 1 shows that the prediction of the potential m²φ²/2 in the standard setting is not in good consistency with the Planck 2018 observations [17], while in the framework of the Tsallis entropy-based inflationary scenario, it can be in very good agreement with these data, and its results can lie inside the 68% CL region of the Planck 2018 data [17]. From the figure, we see that for small β, the model shows better consistency with the observations, and the prediction of the model can enter the 68% CL region of these data. As we have proved, in the limit β ≪ 1, the inflationary observables n_s and r approach the same values as in the Starobinsky R² inflation [1].

With the help of Eqs. (76) and (77), and also the Planck observational constraints in the r − n_s plane, we can estimate the ranges of the parameter β for which the results of the model in the r − n_s plane are consistent with the 68% CL region of the Planck 2018 data [17]. In the case N* = 50, if 0 < β ≲ 0.045, the result of the model is in agreement with the 68% CL constraint of these data, and for N* = 60 the prediction of our model can enter the 68% CL region of the Planck 2018 data [17] provided that 0 < β ≲ 0.011.

Here, we are interested in applying the recent constraint of [92] on r_{0.01} to present some observational constraint on the model parameter β. For this purpose, we should determine the e-fold number at which the comoving wavenumber k = 0.01 Mpc⁻¹ exits the Hubble horizon during inflation. To do so, we examine the behavior of the comoving wavenumber k as a function of the e-fold number N at which the mode with comoving wavenumber k leaves the Hubble horizon, k = aH. With the help of this relation, we can easily find

$\frac{k}{k_*} = \frac{a(N)\,H(N)}{a(N_*)\,H(N_*)}$, (83)

where a(N*) and H(N*) are the scale factor and the Hubble parameter at the time of horizon exit of the mode k* = 0.05 Mpc⁻¹, respectively. Using Eq. (50), we can obtain the scale factor a in terms of the e-fold number N as

$a(N) = e^{\,N_* - N}$, (84)

where we have normalized the scale factor to its value at the epoch of horizon crossing of the mode k* = 0.05 Mpc⁻¹. Finally, applying Eqs. (71) and (84) in Eq. (83), the comoving wavenumber k can be found as a function of N (Eq. (85)).
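For orientation, the two limiting cases quoted above can be evaluated numerically. The short Python script below (illustrative only) uses the limit formulas as stated in the text and the Planck 2018 upper bound r < 0.0522 quoted in Sec. III:

def beta_eq_1(N):
    # Standard-inflation limit (beta = 1): n_s = 1 - 4/(2N+1), r = 16/(2N+1).
    return 1 - 4 / (2 * N + 1), 16 / (2 * N + 1)

def beta_small(N):
    # beta << 1, N >> 1 limit (Starobinsky-like): n_s ~ 1 - 2/N, r ~ 12/N^2.
    return 1 - 2 / N, 12 / N**2

R_BOUND = 0.0522  # Planck 2018 upper bound on r at k* = 0.05 Mpc^-1

for N in (50, 60):
    for name, formulas in (("beta = 1", beta_eq_1), ("beta << 1", beta_small)):
        ns, r = formulas(N)
        status = "allowed" if r < R_BOUND else "excluded"
        print(f"N* = {N}, {name}: n_s = {ns:.4f}, r = {r:.4f} ({status})")

For N* = 60, the β = 1 case gives r ≈ 0.13, well above the bound, while the β ≪ 1 case gives r ≈ 0.003, illustrating numerically why small β improves the agreement with the data.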
Using Eq. (85) and setting N* = 60, we plot in Fig. 2 the variation of the comoving wavenumber k against the e-fold number N. In the figure, we have also specified the comoving wavenumber k = 0.01 Mpc⁻¹ and its corresponding e-fold number, which is N_{0.01} ≃ 61.62.

In Fig. 3, with the help of Eq. (77), the variation of the tensor-to-scalar ratio r_{0.01} is plotted as a function of the parameter β by taking N_{0.01} ≃ 61.62. The gray shaded region is excluded by the constraint reported by Galloni et al. [92] at the CMB pivot scale k = 0.01 Mpc⁻¹; the model satisfies the bound r_{0.01} < 0.028 for 0 < β ≲ 0.202.

Here, it should be noted that throughout this paper we have worked in the framework of the f(R) gravity given by Eq. (38), and we have treated GR as a special case of our f(R) gravity model that is realized by taking β = 1 and γ = 1/(4G). To clarify this point, we note that for a given set of model parameters (β, γ), one can find Θ from Eq. (60), which enters the reconstructed f(R) function and causes the deviation from GR. Since γ is a free parameter in our model, we may take its value such that the regime of the f(R) gravity dominates over the GR regime throughout inflation, and consequently no transition would occur during inflation at all. This means that in the effective action, which may contain the f(R) term together with the Einstein-Hilbert term R, the contribution of the latter will be negligible compared to the former, and therefore the effective action will reduce to the action (16) used in the present work. Besides, since our investigation is not able to determine the precise value of γ, it is not possible in the present work to specify the time of the transition from f(R) to GR that may happen during inflation or in the post-inflationary Universe. However, if future studies provide some understanding of the nature of the inflaton field, we may determine its effective mass and accordingly estimate the value of γ. This would enable us to estimate the time of such a transition. In addition, we may provide some observational constraints on our model parameter β in the setup of the Tsallis inflation, like the analysis performed in [63] for the case of the Barrow cosmology.

B. Natural inflation

The next model that we consider in our investigation is the natural inflation model, in which the inflaton field is presumed to be an axion or pseudo-Nambu-Goldstone boson with a cosine-type periodic potential [68,101]

$V(\phi) = \Lambda^4\left[1 + \cos\left(\frac{\phi}{\phi_0}\right)\right]$, (86)

where Λ is some non-perturbatively generated scale and φ₀ is the scale that determines the curvature of the potential. Both of these constants have dimensions of mass. It seems that a super-Planckian value of the scale φ₀, i.e. φ₀ ≳ M_P, is impossible in the context of string theory, because all known controlled string theory constructions are restricted to φ₀ < M_P [69,102]. In the framework of standard slow-roll inflation, the prediction of the natural potential in the r − n_s plane is in tension with the latest observations [17], in the sense that its prediction can lie within the 95% CL region of these data only for N* = 60. This issue motivates us to study natural inflation in the Tsallis entropy-based scenario to see whether the novel framework can improve the consistency of the model with the Planck 2018 observations [17].

With the help of Eqs. (9) and (86), we can rewrite Eq. (42) in the form given in Eq. (87). Substituting Eqs. (9) and (86) in Eq. (40), the Hubble parameter H is found in the following form:

$H = \left[\frac{8\pi G_{\rm eff}}{3}\,\Lambda^4\left(1 + \cos\frac{\phi}{\phi_0}\right)\right]^{\frac{1}{2(2-\beta)}}$.

We can apply the above equation together with Eq. (84) in Eq. (83) to find the comoving wavenumber k as a function of N.
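For comparison with the standard-inflation curves shown in Fig. 6, the following Python sketch computes the textbook (β = 1) slow-roll predictions of the natural potential (86), assuming M_P = 1 units and an illustrative decay constant φ₀ = 10 M_P; it does not implement the Tsallis-modified expressions of Eqs. (87)-(96):

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

PHI0 = 10.0  # decay constant phi_0 in units of M_P (illustrative value only)

def eps(phi):
    # epsilon_V = (V'/V)^2 / 2 = tan^2(phi/(2*phi0)) / (2*phi0^2)
    return np.tan(phi / (2 * PHI0))**2 / (2 * PHI0**2)

def eta(phi):
    # eta_V = V''/V = -cos(phi/phi0) / (phi0^2 * (1 + cos(phi/phi0)))
    x = phi / PHI0
    return -np.cos(x) / (PHI0**2 * (1 + np.cos(x)))

# End of inflation: eps_V(phi_end) = 1.
phi_end = brentq(lambda p: eps(p) - 1.0, 1e-3, np.pi * PHI0 - 1e-3)

def efolds(phi):
    # N = integral from phi to phi_end of (-V/V') dphi, slow-roll approximation.
    integrand = lambda p: (1 + np.cos(p / PHI0)) * PHI0 / np.sin(p / PHI0)
    value, _ = quad(integrand, phi, phi_end)
    return value

for N_star in (50, 60):
    phi_star = brentq(lambda p: efolds(p) - N_star, 1e-3, phi_end)
    ns = 1 - 6 * eps(phi_star) + 2 * eta(phi_star)
    r = 16 * eps(phi_star)
    print(f"N* = {N_star}: n_s = {ns:.4f}, r = {r:.4f} (standard natural inflation)")

This baseline reproduces the well-known behavior of standard natural inflation against which the Tsallis-modified predictions below are contrasted.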
To find the scalar field χ in terms of the e-fold number N, we perform the same steps as in the previous subsection. In the first step, using Eq. (87) for a given set of the model parameters (β, η), we solve the equation ε₁(χ_e) = 1 numerically to find χ_e. After that, with the help of the obtained value of χ_e, we solve the differential equation (94) numerically, and consequently we find χ = χ(N). Substituting χ(N) into Eqs. (87), (88), (89), (90), (95), (96), and (99), the slow-roll parameters ε₁, ε₂, ε₃, ε₄, the inflationary observables n_s and r, and also the comoving wavenumber k are derived in terms of N, respectively.

In Fig. 6, we plot the resulting r − n_s diagram for β = 1/2, together with the corresponding standard-inflation curves. The figure makes clear that the result of the potential (86) in the standard inflation for N* = 50 lies completely outside the allowed regions of the Planck 2018 observations [17] and only for N* = 60 enters the 95% CL region of these data. In the Tsallis entropy-based inflationary setting, however, its prediction can lie well within the 68% CL region allowed by the Planck 2018 data [17]. Using the Planck observational constraints in the r − n_s plane and also Eqs. (95) and (96), we can also estimate the ranges of the parameter η for which the prediction of our model for different values of N* is compatible with the 68% CL constraint of the Planck 2018 data [17]. In the case N* = 50, the model is compatible with the 68% CL constraint of the Planck 2018 data for 0.172 ≲ η ≲ 0.433. We also find that for N* = 60, the results of our model can lie inside the 68% CL region of the Planck 2018 data if 0.281 ≲ η ≲ 0.613.

To present some constraint on the model parameter η by using the recent constraint of [92] on r_{0.01}, we determine the e-fold number at which the comoving wavenumber k = 0.01 Mpc⁻¹ exits the Hubble horizon during inflation, with the help of Eq. (99), by taking β = 1/2 and N* = 60. It is found that N_{0.01} ≃ 61.6. In Fig. 7, using Eq. (96), we plot the variation of the tensor-to-scalar ratio r_{0.01} against the parameter η by setting β = 1/2 and N_{0.01} ≃ 61.6. The gray-shaded region is excluded by the constraint on the upper bound on r_{0.01} reported by Galloni et al. [92]. The prediction of the model is in agreement with this constraint if η ≳ 0.593.

The evolution of the slow-roll parameters ε₁, ε₂, ε₃, and ε₄ versus the e-fold number N for the natural potential (86) is plotted in Fig. 8, using Eqs. (87)-(90). In this figure, we have considered β = 1/2 and η = 0.65. The figure shows that the slow-roll approximation holds during inflation. This point verifies the viability of our investigation, which is based on the slow-roll approximation. Moreover, from the figure, we see that the first slow-roll parameter ε₁ reaches unity at the end of inflation with N = 0.

In Fig. 9, with the help of Eq. (97), the evolution of the function f(R) as a function of the e-fold number N is plotted for β = 1/2 and η = 0.65, for some typical values of γ.

VII. CONCLUSIONS

Inflation has occurred in the regime of high-energy physics, at which the gravitational theory is expected to be modified. Therefore, the entropy-area relation may undergo some modifications in those energy scales. This motivated us to regard the entropy of the early Universe to be in the form of the Tsallis entropy, which is a generalization of the Bekenstein-Hawking entropy [35] and possesses the non-additivity and non-extensivity properties. This form of entropy has a relation with the horizon area as S_h = γA^β, in which A is the area of the horizon and β and γ are unknown constants. We have studied the inflationary era in the context of the Tsallis entropy-based cosmology. Since there is no definite action for this setup, it was not possible to derive the power spectra of the primordial scalar and tensor perturbations directly in our Tsallis inflation scenario. To resolve this issue, we reconstructed an f(R) model which is thermodynamically equivalent to our setting in the slow-roll approximation. This equivalence allows us to use the equations of the scalar and tensor power spectra obtained in the f(R) gravity for our Tsallis inflation model.

We have considered two different inflationary potentials in our scenario and checked their viability against the Planck 2018 observations [17]. First, we studied the observational consistency of the quadratic potential (56), which provides a chaotic inflation model. In the standard inflationary setting based on the Einstein gravity, this potential is not favored by the Planck 2018 observational data [17]. This motivated us to investigate whether this potential can be resurrected, in light of the Planck 2018 results, in the setting of the Tsallis entropy-based inflation. We have derived analytic formulas for n_s and r in terms of the parameter β and the e-fold number N, and then plotted the r − n_s diagram for N* = 50, 60, with varying β in the range 0 < β < 2. Our results imply that this potential can be in excellent consistency with the Planck 2018 data in the Tsallis entropy-based scenario, such that its results can lie inside the 68% CL region of the observational data [17].

Moreover, we have proved that in the limits β ≪ 1 and N ≫ 1, the behavior of the quadratic potential in the r − n_s plane coincides exactly with the prediction of the Starobinsky R² inflation [1]. Furthermore, we have estimated that for N* = 50, the prediction of the model is compatible with the 68% CL constraint of the Planck 2018 observations if 0 < β ≲ 0.045. In the case of N* = 60, the model is consistent with the 68% CL region of the observational data provided that 0 < β ≲ 0.011. The recent constraint r_{0.01} < 0.028 (95% CL) [92] on the tensor-to-scalar ratio at the scale k = 0.01 Mpc⁻¹ constrains this parameter to 0 < β ≲ 0.202.

We further examined the viability of the natural potential (86) in the inflationary setting based on the Tsallis entropy. In the framework of the standard inflation, the prediction of this potential in the r − n_s plane is not very preferred by the current CMB observations, given that its results can satisfy only the 95% CL constraints of the Planck 2018 data [17]. In this case, we have found the inflationary observables n_s and r in terms of the model parameters β and η numerically. We have focused on the case β = 1/2 and drawn the r − n_s diagram for N* = 50, 60, with varying η in the range η > 0. We have demonstrated that in the framework of the Tsallis entropy-based inflation, the natural potential provides a great fit to the Planck 2018 data [17]; the prediction of the model lies inside the 68% CL region of these data. Furthermore, we presented some observational constraints on the model parameters by using the Planck 2018 data [17]. Our results imply that for N* = 50, 60, the results of our model can lie inside the 68% CL region of the Planck 2018 data [17] provided that 0.172 ≲ η ≲ 0.433 and 0.281 ≲ η ≲ 0.613, respectively. The observational bound on r_{0.01} provided in [92] also gives rise to the condition η ≳ 0.593 for the model parameter.

FIG. 1. The r − n_s diagram for the quadratic potential (56) in the slow-roll inflation based on the Tsallis entropy for two different values of N* with varying β in the range 0 < β < 2. The results for N* = 50 and N* = 60 are shown by the dashed and solid black curves, respectively. The red solid line between the dashed and solid black curves shows the results of the potential in the standard inflation, in the range 50 ≤ N* ≤ 60. Moreover, the result of the Starobinsky R² inflation [1] is specified by the orange solid line with 50 ≤ N* ≤ 60. The marginalized joint 68% and 95% CL regions of the Planck 2018 TTTEEE+lowℓ+lowE data [17] are specified by dark and light blue, respectively.

FIG. 2. Evolution of the comoving wavenumber k versus the e-fold number N for the quadratic potential (56) in the Tsallis inflationary setting. The pink horizontal and vertical dashed lines specify k = 0.01 Mpc⁻¹ and its corresponding e-fold number N_{0.01} ≃ 61.62, respectively.

FIG. 3. Variation of the tensor-to-scalar ratio r_{0.01} versus the parameter β by setting N_{0.01} ≃ 61.62, for the quadratic potential (56) in the framework of the Tsallis inflation. The gray-shaded region is excluded by the constraint on the upper bound on r_{0.01} reported by Galloni et al. [92]. The pink vertical dashed line specifies β ≃ 0.202, which is the maximum value of the parameter β for which the model satisfies the constraint r_{0.01} < 0.028.

FIG. 5. Evolution of the function f(R) versus the e-fold number N for the quadratic potential (56) in the Tsallis inflationary scenario with β = 0.01 and some typical values of γ. Also, the red curve illustrates the Ricci scalar R predicted by general relativity (GR).

FIG. 6. The r − n_s diagram of the natural potential (86) in the Tsallis inflationary setting by taking β = 1/2 for two different values of N* with varying η in the range η > 0. The results of our model for N* = 50 and N* = 60 are shown by the dashed and solid black curves, respectively. The red dashed (N* = 50) and solid (N* = 60) curves show the prediction of the standard natural inflation. The marginalized joint 68% and 95% CL regions of the Planck 2018 TTTEEE+lowℓ+lowE data [17] are specified by dark and light blue, respectively.

FIG. 7. Variation of the tensor-to-scalar ratio r_{0.01} against the parameter η for the natural potential (86) in the Tsallis inflationary scenario with β = 1/2 and N_{0.01} ≃ 61.6. The gray-shaded region is excluded by the constraint on the upper bound on r_{0.01} reported by Galloni et al. [92]. The pink vertical dashed line specifies η ≃ 0.593, which is the minimum value of the parameter η for which the model agrees with the constraint r_{0.01} < 0.028.

FIG. 9. Evolution of the function f(R) against the e-fold number N for the natural potential (86) in the Tsallis inflation with β = 1/2 and η = 0.65, for some typical values of γ. The prediction of general relativity (GR) for the Ricci scalar R is also shown in red.
Development of Clonal Micropropagation Technology for Ludisia discolor (Ker Gawl.) A. Rich. under In Vitro Conditions

The features of clonal micropropagation in three variations of Ludisia discolor were studied. Brood buds, apical shoot meristems, immature seeds and mature seeds were used as explants. They were grown using a modified Murashige & Skoog medium. A special technique was developed for the sterilization and planting of the microscopic mature seeds. Mature seeds germinated within 60-70 days. Seeds from 25-day-old capsules showed the best results among immature seeds; they gave mass shoots already on the 25th-30th day after planting. The roller of the future first leaf of the protocorms was laid down in the dark 2 months after germination. Switching to the light regime induced rapid gemmorizogenesis, i.e., the formation of the first green leaves, stem and adventitious roots. Seeds from 15-day-old capsules did not germinate at all. Planting with immature seeds is the most effective method of clonal reproduction of L. discolor in vitro.

Introduction

It is known that the tissues of natural and cultivated plants of terrestrial orchids from the genera Anoectochilus and Ludisia contain a high content of various glycosides, polysaccharides and flavonoids, which have significant bioactivity (Poobathy et al., 2019; Yan-bin Wu et al., 2020). In China, almost all species of Anoectochilus and Ludisia are used in folk medicine. Among these species, A. roxburghii (Jinxianlian in Chinese) is considered the most famous and popular medicinal and edible species of Anoectochilus in China. Fresh or dried whole plants of A. roxburghii are mainly used in China for the treatment of diabetes, hepatitis, hypertension, tuberculous hemoptysis, fever, rheumatism and rheumatoid arthritis, and pleurodynia (Huang, 2006; Huang et al., 2007; Ye et al., 2017; Zeng et al., 2017). Species from the genus Ludisia are also valuable raw materials for the production of flavonoids, anthocyanins and antioxidants (Poobathy et al., 2019). Anoectochilus roxburghii is officially designated as the only source of Jinxianlian in the "Standards of Chinese Medicinal Materials of Fujian Province in China" (2006 edition) (Huang, 2006). Popularly, other Anoectochilus species are also called Jinxianlian in local herb markets due to their similar traditional medicinal efficacy, including antipyretic and detoxifying effects, expelling wind and eliminating dampness (Ye et al., 2017). In addition, some clinical applications of Ludisia species are similar to those of A. roxburghii in the treatment of tuberculosis (tuberculous hemoptysis), rheumatism, rheumatoid arthritis, snake bite, etc. (Ai, 2013). The demand for A. roxburghii raw materials in the pharmaceutical market is growing every year due to its unique medicinal and edible properties. Such extensive consumption has led to a sharp reduction in the natural reserves of A. roxburghii to a critical level. Therefore, active research is currently underway to find suitable substitutes for the officially registered species for the production of Jinxianlian based on Anoectochilus or other related genera (Yan-bin Wu et al., 2020). However, the introduction of new Anoectochilus and Ludisia species into circulation will not fundamentally solve the problem of the shortage of the resource base of Jinxianlian medicinal raw materials, since intensive collection of raw materials from natural populations will quickly deplete this new resource base and put these species on the verge of extinction.
Ludisia discolor is also intensively harvested in its home range as a valuable medicinal raw material for traditional medicine (Ranjetta et al., 2018). In addition, L. discolor is actively removed from natural habitats by plant collectors as one of the "precious orchids", which are very popular among orchid collectors because of the unique features of the coloring, texture and mosaic pattern of their leaves. Currently, L. discolor is the leader in the number of specimens in private collections among the "precious orchids".

The only effective solution for expanding the resource base and preserving natural reserves is the development and introduction into production of biotechnologies for growing these valuable orchids. However, this way of solving the problem requires a number of issues to be studied. The first of them is connected with the complexity of seed reproduction of these rare orchids. Specialized pollinators are needed for seed production, since the morphology of the flowers is unique. The seeds of these orchids are completely devoid of reserve nutrients and germinate only with the help of specialized mycorrhizal fungi. Moreover, the seedlings of these orchids live exclusively underground for several years and feed on mycorrhizal fungi. These features of reproductive biology must be carefully taken into account when developing biotechnological methods of growing medicinal plants outside of natural populations, i.e. where there are no natural consorts - specific insect pollinators and mycorrhizal fungi.

Cultivation of Anoectochilus and Ludisia species in interaction with specific mycorrhizal fungi is of fundamental importance for obtaining high-quality medicinal raw materials from these orchids. The few studies of the features of early development in some species clearly indicate that the level of accumulation of glycosides, polysaccharides and flavonoids differs greatly between plants grown under sterile conditions in vitro and those grown in the presence of mycorrhizal fungi. It is likely that the accumulation of biologically active compounds in the tissues of orchids is a kind of immune response to the penetration of mycorrhizal fungi into the tissues. Therefore, a two-stage biotechnology is of interest for the production of high-quality medicinal raw materials on an industrial scale. Methods of mass reproduction of this rare plant under in vitro conditions can act as one of the elements of such a technology.

Therefore, the purpose of this study was to investigate the features of the course of clonal micropropagation of Ludisia discolor. To achieve this goal, it was necessary to solve the following tasks: 1) to study the behavior of various explants on the nutrient medium; 2) to develop an effective method of sterilization and seed planting.

Materials and Methods

Three variations of Ludisia discolor from natural habitats of Southeast Asia were used as the object of research: var. nigrescens, var. dawsoniana, and var. dawsoniana f. variegata, which differed among themselves in the color, mosaic and texture of the leaves (Fig. 1). The collection material was obtained from the tropical orangery of the Botanical Garden of the V. L. Komarov Botanical Institute RAS, St. Petersburg. Subsequently, these plants were cultivated for two years in the tropical greenhouse of the V.I. Vernadsky Crimean Federal University, Simferopol. Brood buds, apical shoot meristems and seeds were used as explants.

A modified Murashige & Skoog medium was used for planting (Murashige & Skoog, 1962).
Unlike the standard version, it contained half the amount of mineral components and sucrose at a concentration of 15 g/L. The phytohormones 6-BAP and indoleacetic acid were used in the ratios 1:2, 1:1 and 2:1. To obtain seeds, we carried out geitonogamous pollination between flowers of the same inflorescence. Full ripening of the fruits in all studied variations of Ludisia discolor under greenhouse conditions occurred in 30-35 days. Mature seeds of L. discolor have microscopic dimensions; therefore, a specially developed technology of sterilization and planting was used for inoculating the medium in vitro. It was based on a method developed by E.V. Andronova et al. (2007) for the sterilization of mature microscopic orchid seeds with a chemical solution, using a syringe and a special fine metal mesh trap (personal communication). We did not use a metal mesh to trap the seeds. L. discolor seeds were placed in filter bags (paper bags made from a coffee filter). A portion of dry mature seeds was poured into such a bag and covered with a round paper circle bearing a number. The diameter of the circle was slightly smaller than the diameter of the piston of the syringe. The tight fit and the presence of a paper circle matched to the size of the piston ensured good mixing of the contents of the bag with the sterilizing liquid during the movement of the piston. The finished bag was tied and placed in a syringe. A mixture of distilled water and a 10% solution of sodium hypochlorite in a ratio of 1:1 was used as the sterilizer. Sterilization lasted 20 minutes. The procedure of washing the seeds free of the sterilizer was carried out in the same way in distilled water, twice for 10 minutes each. With sterile scissors, the tail part was cut off from the bag. The head part of the bag was completely unfolded and transferred to a Petri dish with a nutrient medium.

Planting Ludisia discolor with mature seeds from already opened capsules turned out to be a technically time-consuming operation. Therefore, we tested the technology of planting this orchid on a sterile nutrient medium using immature seeds from unopened capsules, according to the method of P. J. Kauth et al. (2008).

Results

In the first stage, capsules of different ages were taken from Ludisia discolor var. dawsoniana to determine the optimal stage of seed development for planting. In these studies, 15-, 20- and 25-day capsules were used, counted from the moment of initial pollination. All the capsules were planted on the medium from one flask. Petri dishes with planted seeds were kept for the first 60 days in a dark thermostat at 18-20 °C. Seeds from 25-day capsules showed the best germination (Fig. 2.1). Initially, the seed embryos swelled and acquired a milky white color. A month after planting, the protocorms ruptured the seed coat en masse, and the first root hairs formed in their basal part (Fig. 2.2). After 2 months, the protocorms formed an apical roller of the future first leaf. By this time, the basal part was already covered with several rows of root hairs (Fig. 2.3).

Fig. 2. Seed germination and gemmorizogenesis of protocorms in Ludisia discolor var. dawsoniana in the dark phase: 1 - immature 25-day seeds after planting on a nutrient medium; 2 - mass germination of seeds and the release of protocorms from the testa, 40 days; 3 - the basal part of the protocorms is covered with several rows of root hairs, and a roller of the future first leaf is laid down in the apical part, 60 days.
After 60 days from the moment of planting, the protocorms were transferred to a growth cabinet with an 8-hour lighting period and a temperature of 25 °C. After 5 days, there was mass greening of the protocorms (Fig. 3.1). After another 5 days, the leaf blade of the first leaf began to form from the apical ridge (Fig. 3.2). After a month of keeping the cultures in the light, the seedlings had developed 2-3 stem metamers with root hairs (Fig. 3.3). Seeds from 20-day capsules germinated much worse: in the dark thermostat, by the 30th day after planting, only about 20% of the sown seeds had swelled or formed protocorms. These protocorms were smaller than those from 25-day-old seeds, and the relative delay in seedling development persisted even in the growth cabinet. Seeds from 15-day-old capsules showed no signs of germination during the entire observation period, either in the dark thermostat or in the growth cabinet. In the other studied variations of Ludisia, seeds were taken for planting only from 25-day-old capsules. Seed germination in the dark phase and gemmorhizogenesis in the growth cabinet occurred in them similarly to the first variation. Our results confirm previously obtained data that the induction of brood bud development in L. discolor under in vitro conditions is not always successful and depends on the optimal concentration of phytohormones in the nutrient medium (Ying et al., 2021). In addition, the introduction of brood buds into in vitro culture is often accompanied by the development of fungal and bacterial microflora, despite harsh treatment of explants with various sterilizers (Poobathy et al., 2019). Application of the developed methods of sowing mature and immature seeds for clonal micropropagation allows the multiplication coefficient of L. discolor to be increased by thousands of times. It was previously shown that, to preserve rare species of polymorphic orchids, an integrated approach should be applied, based on population monitoring, research into intraspecific structure, and study of the biology of both seed and vegetative reproduction (Shirokov et al., 2020). Our studies of another rare polymorphic orchid, Ludisia discolor, have shown that when seed propagation is carried out in vitro, combined planting methods with various types of explants should be used: brood buds, immature seeds, and seeds from opened capsules. The choice of a specific type of explant depends on the overall task, whether maintaining the genetic diversity of the population or multiplying a rare variation, whereas the choice of seed-planting method must take into account the possibility of repeated visits to the population to collect material and the timing of its delivery to the laboratory.

Conclusion

The course of clonal micropropagation of Ludisia discolor using various types of explants (brood buds, mature and immature seeds) in vitro on a modified Murashige & Skoog medium has been studied. It has been established that the use of immature seeds from 25-day-old capsules is the most effective approach for mass reproduction of this rare orchid. The use of mature seeds from already opened capsules is limited by the difficulty of sterilizing them due to their microscopic size. We have developed an effective method for sterilizing small orchid seeds.
This method also facilitates the subsequent planting of such seeds on nutrient media. The success of planting immature seeds, however, depends on accurately identifying the developmental stage of the seeds, which can only be done in greenhouses. When taking material from natural populations, mass planting using mature seeds is more reliable, but sterilization and planting of such seeds is much more difficult and requires a special technique, such as the one developed here, which greatly simplifies both operations. Thus, for the first time in the Russian Federation, a fundamentally new technology of clonal micropropagation of Ludisia discolor in vitro has been developed. E. V. Andronova carried out research in the framework of the institutional research project (no. AAAA-A18-118051590112-8) of the Komarov Botanical Institute of the Russian Academy of Sciences, "The diversity of morphogenetic programs of plants reproductive structures development, natural and artificial models of their realization".
Effects of L-fucose supplementation on the viability of cancer cell lines

Background: Fucose is a deoxyhexose sugar. While the biological roles of L-fucose remain unclear, the sugar is known to accelerate the malignant potential of cancer cells. Therefore, this study aimed to evaluate the viability patterns of human cancer and normal cell lines treated with fucose. Methods: The human gingival fibroblast (HGF-1), colorectal adenocarcinoma (HT-29), and skin malignant melanoma (A375) cell lines were cultured and treated with fucose at three concentrations of 1, 5, and 10 mg/ml. Cell viability was then measured using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The data were analyzed using Statistical Package for the Social Sciences software. Results: The percentage of HGF-1 cell viability showed a rapid decline after day 1 of treatment. HT-29 and A375 were capable of surviving treatment with high fucose concentrations. The data were highly significant at p < 0.001. Conclusion: Whereas a high concentration of fucose is toxic to the HGF-1 cell line, the HT-29 and A375 cell lines could potentially adapt to this condition. Down- or upregulation of certain molecules that could induce or inhibit cell death may explain such adaptation. Further testing of up- and downregulated molecules should be conducted in future work.

Introduction

Cancer presents multiple risk factors. Hyperglycemia has been reported to raise the prevalence and mortality of certain cancers, such as breast, liver, bladder, pancreatic, colorectal, and endometrial cancers.1 Hyperglycemia could promote the proliferation, invasion, and migration of cells and induce apoptotic resistance.1 This condition is also known to be a factor in the development of cancer among patients with diabetes, especially type 2 diabetes.1,2 Fucose is a natural deoxyhexose sugar with a structure similar to that of glucose except for its lack of a hydroxyl group on carbon 6.3 Mammalian cells utilize fucose in the L-enantiomer form, although other deoxyhexoses are used in the D-enantiomer form. L-Fucose is incorporated onto glycoproteins during the synthesis of N- and O-linked glycans in mammalian cells.4 Fucosylated glycans perform several functions, such as inflammatory response regulation, signal transduction, cell growth, transcription, and adhesion. For example, cell-cell interactions can be partially modulated by the presence of L-fucose-specific lectin-like adhesion molecules on the cell surface. Fucosylation of cell membrane receptors and proteins, including epidermal growth factor receptor (EGFR), TGFβ, Notch, E-cadherin, integrins, and selectin ligands, has also been reported to influence ligand binding, dimerization, and signaling capacities.3 Activation of EGFR tyrosine kinases is a key factor in lung cancer progression.5 Increased L-fucose in serum has been detected in breast, oral, head, neck, liver, ovarian, and prostate cancers.3 The addition of glycans to glycoproteins is known as glycosylation. Glycosylation is the major posttranslational modification of proteins contributing to malignant transformation and metastasis.6,7 Fucose is a basic constituent of oligosaccharides and is associated with cancer and inflammation. Increases in fucose level may lead to altered or unique glycoconjugates.8 Fucose is also an essential component of the Lewis ligands of the selectin family, which play an important role in cell adhesion.
For instance, during inflammation, E- and P-selectins appear on activated endothelial cells to interact with leukocytes through sialyl-Lewis(x) and sialyl-Lewis(a) antigens.9 These selectins represent typical tumor-associated carbohydrate antigens that have been reported to be responsible for the adhesion of human cancer cells to the endothelium.10 Fucosylation produces fucosylated markers, and the upregulation of these markers could promote carcinogenesis. A previous study reported that key enzymes, such as the alpha-1,2-fucosyltransferases (e.g., FUT1 and FUT2), are highly expressed in malignant tissues, for example, breast cancer.11 Several checkpoints are available in the cell cycle, and these checkpoints are crucial in preventing abnormal or cancer cells from replicating. When a checkpoint detects an abnormality in a cell, signals for repair and apoptosis are initiated. Supplementation of the cell culture medium plays a pivotal role in modulating the available checkpoints. For instance, high L-fucose concentrations cause fucosylation, which subsequently produces molecules that could prevent apoptosis. Normal and cancer cells similarly require nutrients to replicate. Treatment with high concentrations of fucose is expected to cause either up- or downregulation of molecules that are important to cell replication. In addition, when treated with high concentrations of fucose, normal and cancer cell lines could show similar or different effects in terms of cell viability. Therefore, the cell viability patterns of normal and cancer cell lines treated with high concentrations of fucose should be investigated to explore possible treatments for cancer. Information related to these patterns could be used to improve cancer therapy in the near future. Herein, this study aimed to investigate the viability patterns of human cancer and normal cell lines treated with three different concentrations of fucose.

Methods

Selection and preparation of cancer and normal cell lines. Three cell lines were used in this study, namely, the human colorectal adenocarcinoma (HT-29, ATCC HTB-38, USA) cell line, the human malignant melanoma (A375, ATCC CRL-1619, USA) cell line, and the human gingival fibroblast (HGF-1, ATCC CRL-2014, USA) cell line. In this work, the colorectal adenocarcinoma and malignant melanoma cell lines were used to examine the effects of fucose treatment on the internal and external parts of the body, respectively. The human gingival fibroblast cell line was selected because cells of the mouth are the first site in which metabolism takes place. Selection of monosaccharides. The monosaccharide used in this study, L-fucose, was procured from Nacalai Tesque Inc. (Kyoto, Japan) in powder form. Optimization of concentration and treatment period. The methods used to optimize the concentration and treatment period for in vitro stability studies were adapted and modified from a previous study.12 Fucose was diluted to three different concentrations of 1, 5, and 10 mg/ml. The monosaccharide was dissolved completely in Dulbecco's Modified Eagle Medium (DMEM) to exclude any physical effect on the cell lines. The control for this study was composed of DMEM containing only 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin solution. Four incubation periods of 1, 3, 5, and 10 days were applied as a treatment parameter to study the effect of fucose on the viability of the tested cell lines.
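As a quick illustration of the working-solution arithmetic behind these doses, the Python sketch below computes the mass of L-fucose powder needed for a given medium volume. The 50 ml batch volume is a hypothetical example, not a value reported in the paper.

```python
# Mass of L-fucose powder required for each tested dose (1, 5 and
# 10 mg/ml in DMEM). The 50 ml batch volume is illustrative only.
CONCENTRATIONS_MG_PER_ML = [1, 5, 10]

def fucose_mass_mg(conc_mg_per_ml: float, volume_ml: float) -> float:
    """Mass (mg) of powder to dissolve in `volume_ml` of DMEM."""
    return conc_mg_per_ml * volume_ml

for c in CONCENTRATIONS_MG_PER_ML:
    print(f"{c} mg/ml x 50 ml -> {fucose_mass_mg(c, 50):.0f} mg L-fucose")
```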
Cell culture technique. The HT-29, A375, and HGF-1 cells were thawed and revived in a 25 ml culture flask containing basal medium and incubated overnight at the optimum temperature of 37 °C with 5% CO2 and 90% humidity. The medium was changed after 3 days of incubation to ensure sufficient nutrients for the cells to develop and grow. Cell proliferation was observed daily until approximately 80% confluence was achieved. The cell cultures were assessed subjectively by comparing occupied and unoccupied areas. Upon reaching confluence, the cells were dissociated using dissociation media and then seeded. Viability test by MTT assay. The MTT assay was conducted to assess the metabolic activity of the cells. Prior to the cell viability test, the cells were counted using a hemocytometer, adopting the method of a previous study.13 Approximately 1000 cells were aliquoted into 12-well plates containing DMEM supplemented with 10% FBS, 1% antibiotics, and fucose. Plates with fucose were incubated for 1, 3, 5, and 10 days. The treated cell lines were washed with phosphate-buffered saline. Then, 20 µl of 0.5 mg/ml MTT solution was added to each well, and the plates were incubated for at least 2 hours. The MTT solution was then removed, and 200 µl of dimethyl sulfoxide was added to each well to solubilize the formazan crystals. Absorbance was measured using a spectrophotometer at 450 nm. Because all cell viability procedures were considered light-sensitive, the plates were wrapped in aluminum foil. Analysis of data. Data were analyzed using Statistical Package for the Social Sciences software. All treatment groups were compared with the control group, and values are expressed as mean ± standard deviation. Significant differences between treatment groups and the control were determined using one-way ANOVA. A p ≤ 0.05 was considered to indicate statistical significance.
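A minimal sketch of the two computational steps just described, percent viability from MTT absorbance and a one-way ANOVA against the control, is given below in Python with SciPy. The absorbance values are hypothetical; the paper reads formazan absorbance at 450 nm and compares treatment groups with the control.

```python
# Minimal sketch of the viability computation and one-way ANOVA
# described above. Absorbance values are hypothetical placeholders.
import numpy as np
from scipy.stats import f_oneway

def percent_viability(a_treated: np.ndarray, a_control: np.ndarray,
                      a_blank: float = 0.0) -> np.ndarray:
    """Viability (%) relative to untreated control wells."""
    return (a_treated - a_blank) / (a_control.mean() - a_blank) * 100

# Hypothetical triplicate absorbances for one time point
control = np.array([0.82, 0.79, 0.85])
fucose_1 = np.array([0.75, 0.71, 0.78])   # 1 mg/ml
fucose_5 = np.array([0.52, 0.49, 0.55])   # 5 mg/ml
fucose_10 = np.array([0.31, 0.28, 0.33])  # 10 mg/ml

print(percent_viability(fucose_5, control))
stat, p = f_oneway(control, fucose_1, fucose_5, fucose_10)
print(f"one-way ANOVA: F = {stat:.2f}, p = {p:.4g}")  # significant if p <= 0.05
```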
Results

Table 1 reveals the cell viability of the normal HGF-1 cell line after treatment with fucose. Comparison of the cell viability patterns of the control and different treatment groups reveals remarkable differences. This result indicates that the cell viability pattern of the control group is likely similar to that of normal cell growth. All of the treatment groups showed a drastic decline in cell viability after day 3 of treatment, a slight increase on day 5, and another decrease on day 10. While the effect of 5 mg/ml fucose on cell viability on day 10 seems to be less extensive than that of 10 mg/ml fucose, the observed viability patterns are similar. Table 2 illustrates the viability of A375 cells after treatment with fucose. The control and 1 mg/ml fucose treatment groups showed similar patterns of cell viability, meaning that 1 mg/ml fucose does not affect cell viability. Both groups showed continuous increases in cell viability from day 1 to day 3, after which viability declined from day 5 to day 10. Treatment of A375 cells with 5 and 10 mg/ml fucose produced no increase in cell viability; instead, viability declined continuously from day 1 to day 10. Unlike HGF-1 cells, these cancer cells were able to withstand the effects of high fucose concentrations; thus, the viability of this line did not decline on days 5 and 10 to the extent observed for the HGF-1 line. Fucose treatment resulted in concentration-dependent decreases in cell viability. However, cell viability after treatment with 5 mg/ml fucose on day 10 appeared to be higher than that observed after treatment with 1 mg/ml fucose. Table 3 shows the cell viability patterns of HT-29 cells after treatment with fucose. The cell viability of the control group increased from day 1 to day 3 and then declined from day 5 to day 10. By contrast, the cell viability of all treatment groups declined from day 1 to day 10. In addition, on day 1, cell viability increased as the treatment concentration increased. An increase in cell viability was also observed on days 5 and 10 after treatment with 5 mg/ml fucose, whereas cell viability decreased after treatment with 10 mg/ml fucose. Table 4 shows that the results of the ANOVA are highly significant, with p < 0.001.

Discussion

Cellular growth may be categorized into four important phases, as explained in a previous study.14 The first phase is known as the lag phase. During this phase, cellular adaptation to the new environment takes place, new enzymes are synthesized, and a slight increase in cell mass and volume occurs, but the cell number does not increase. The second phase is the exponential phase, during which balanced growth occurs, that is, all cell components grow at a similar rate. In this phase, the cells have adjusted to their new environment and replicate exponentially. The growth rate is independent of the nutrient concentration because nutrients are in excess in this phase. The third phase, the stationary phase, begins when the net growth rate is equal to zero. Finally, the declining or death phase is characterized by a decrease in the living cell population over time because of a lack of nutrients and an increase in the amount of toxic metabolic by-products.
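These four phases can be visualized with a toy piecewise growth model, sketched below in Python. All rates, breakpoints, and the starting cell count are hypothetical illustration values, not fits to the paper's data.

```python
# Illustrative simulation of the four growth phases described above
# (lag, exponential, stationary, death). Parameters are hypothetical.
import numpy as np

def cell_count(t_days: np.ndarray) -> np.ndarray:
    n0, growth_rate, death_rate = 1_000, 1.2, 0.4   # assumed values
    lag_end, stat_start, death_start = 1.0, 5.0, 7.0
    n = np.full_like(t_days, float(n0))             # lag phase: flat
    exp_phase = (t_days > lag_end) & (t_days <= stat_start)
    n[exp_phase] = n0 * np.exp(growth_rate * (t_days[exp_phase] - lag_end))
    n_max = n0 * np.exp(growth_rate * (stat_start - lag_end))
    n[(t_days > stat_start) & (t_days <= death_start)] = n_max  # stationary
    dying = t_days > death_start                    # death phase: decay
    n[dying] = n_max * np.exp(-death_rate * (t_days[dying] - death_start))
    return n

t = np.linspace(0, 10, 11)
print(np.round(cell_count(t)))
```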
Excess fucose in the medium would be expected to affect the growth of both normal and cancer cell lines. According to the findings of this study, excess fucose presents a level of toxicity that causes normal human gingival fibroblasts to die. This toxic effect could be due to oxidative stress, as the oxidation of fucose yields toxic metabolic by-products. Yorek and Dunlap reported that L-fucose causes the generation of reactive oxygen species (ROS) and activation of NF-kB after cellular processing.15 Excess ROS has been reported to cause DNA damage, which subsequently leads to cell death.16 NF-kB plays an important role in causing apoptosis, also known as programmed cell death.17 Therefore, normal cells die under high concentrations of L-fucose. The cell viability patterns of the cancer cells reveal that these cells could adapt to a new environment. Moreover, the results indicate that cancer cells could survive treatment with excess monosaccharides, which may be due to disruption of the apoptotic pathway. For instance, a previous study reported that mutations or downregulation of molecules involved in the Fas receptor-Fas ligand (FasR-FasL) apoptotic pathway are well-known mechanisms exploited by cancer cells to escape apoptosis.7 Elevation of ROS in cancer cells is a hallmark of cancer cell progression. Recent studies have demonstrated that cancer cells are highly adaptive to elevated levels of ROS by activating antioxidant pathways. ROS play a vital role in cancer development, including initiation, promotion, and progression.16 Increased intracellular ROS levels may activate oncogenes and oncogenic signals, including constitutively active mutant Ras, Bcr-Abl, and c-Myc, all of which are involved in cell proliferation, inactivation of tumor suppressor genes, angiogenesis, and mitochondrial dysfunction.16 Glycosylation is an enzyme-directed, site-specific process that links saccharides to produce glycans, which, in turn, are attached to proteins, lipids, or other organic molecules.18 Glycosylation is one of the most important posttranslational modifications and is related to many different diseases.19 It is involved in numerous essential biological processes, such as cell proliferation, differentiation, migration, cell-cell integrity and recognition, and immune modulation.19 As mentioned earlier, fucosylated glycans have functions in inflammatory response regulation, signal transduction, cell growth, transcription, and adhesion. For example, cell-cell interactions can be partially modulated by the presence of L-fucose-specific lectin-like adhesion molecules on the cell surface.3 Breast cancer tissues overexpress fucosylated glycans, such as sialyl-Lewis X/A, and α-1,3/4-fucosyltransferases to promote disease progression and metastasis.20 Although the biological roles of L-fucose remain unclear, the monosaccharide is known to accelerate the malignant potential of cancer cells.21 Previous research on human breast cancer suggested that α-L-fucose is not a bystander molecule but a pathophysiological effector.22 Indeed, α-L-fucose has been proposed to be an important component of the malignant and metastatic phenotypes of human breast cancer.22 Other studies conducted using cell lines derived from several adenocarcinomas, certain melanomas, and some leukemias and lymphomas reveal drastic differences in the glycan expression of neoplastic cells in comparison with that of normal cells.22 In 2010, glycans were shown to be intrinsically important in the pathobiology of most common human malignancies.23 A review paper reported the role of fucosylated ligands in human breast cancer, particularly as expressed in CD44 variants.22 Cancer cells directly take up L-fucose and secrete fucoproteins.21 Fucoproteins are glycoproteins containing fucose sugar units as one of their carbohydrates.18 Changes in fucoproteins could help determine cancer diagnosis and prognosis.24 An increase in the enzymes involved in fucosylation would result in increased levels of fucosylated proteins.25 Physiologically, serum concentrations of L-fucose are generally low in normal persons but high in cancer patients.26,27 An earlier study reported that fucose levels are higher in the serum and tissue of cancer patients.28 Elevation of L-fucose levels in the serum and body fluids may be attributed to the release of glycoproteins from the tissue as a result of cell destruction or to the local synthesis and secretion of glycoproteins by tumor cells.21 However, some investigators believe that an increase in serum L-fucose levels reflects tissue proliferation rather than tissue destruction.3,29 Fucosylated haptoglobin, which appears in the serum of some cancer patients, may be a promising biomarker for the prognosis of colorectal cancer.30 Early detection of cancer is very important to achieve treatment efficacy.
Loud and Murphy reported that cancer treatment is most effective when the disease is detected at an early stage, prior to the onset of symptoms.31 A previous study reported that the 5-year survival rate for patients with breast cancer is nearly 99% if the cancer is detected at an early stage.20 However, if the tumor has metastasized, the survival rate drastically decreases to 25%.20,32 Knowledge of the effects of fucose and fucoproteins could be harnessed to provide effective cancer treatment. For instance, fucosylation inhibitors could be applied to treat cancer. An earlier study reported that the use of 2-fluorofucose (2-FF) inhibits the adhesion of a human primary breast cancer cell line to E-selectin under physiologic flow conditions and reduces the migration ability and proliferation rate of this line.20 Another study showed that the cell proliferation and integrin-mediated cell migration of a liver cancer cell line are significantly suppressed by treatment with 2-FF.33

Conclusion

According to the cell viability patterns observed in this study, treatment with high concentrations of fucose is toxic to the HGF-1 cell line. By contrast, the HT-29 and A375 cell lines appear to be able to adapt to high fucose concentrations. The ability of these cancer cells to survive could be due to the downregulation of molecules involved in the apoptotic pathway; for instance, mutation or downregulation of molecules involved in the FasR-FasL apoptotic pathway is a well-known mechanism exploited by cancer cells to escape apoptosis. Hyperglycemia induces chemoresistance, which allows cells to proliferate and undergo progression.34 Therefore, future studies should investigate the molecules potentially involved in allowing cancer cells to survive in the presence of high fucose. Future advances in technology may see fucose become a target for anti-cancer therapy via nanoparticles or micellar formulations.
Bacteria extracellular vesicle as nanopharmaceuticals for versatile biomedical potential

Bacteria extracellular vesicles (BEVs), characterized as lipid bilayer membrane-surrounded nanoparticles filled with molecular cargo from parent cells, play fundamental roles in bacterial growth and pathogenesis, as well as facilitating essential interactions between bacteria and host systems. Notably, benefiting from their unique biological functions, BEVs hold great promise as novel nanopharmaceuticals for diverse biomedical applications, attracting significant interest from both industry and academia. Typically, BEVs are evaluated as promising drug delivery platforms on account of their intrinsic cell-targeting capability, ease of versatile cargo engineering, and capacity to penetrate physiological barriers. Moreover, owing to their considerable intrinsic immunogenicity, BEVs are able to interact with the host immune system to boost immunotherapy as novel nanovaccines against a wide range of diseases. Toward these significant directions, in this review, we elucidate the nature of BEVs and their role in activating host immune responses for a better understanding of BEV-based nanopharmaceutical development. Additionally, we systematically summarize recent advances in BEVs for achieving targeted delivery of genetic material, therapeutic agents, and functional materials. Furthermore, vaccination strategies using BEVs are carefully covered, illustrating their flexible therapeutic potential in combating bacterial infections, viral infections, and cancer. Finally, the current hurdles and further outlook of these BEV-based nanopharmaceuticals are also provided.

Introduction

Vesiculation is a crucial and fundamental process across all kinds of species, producing the extracellular vesicles that serve as essential mediators of basic physiological events [1]. These nanovesicles are filled with molecular patterns originating from parent cells, including metabolites, nucleic acids, proteins, and signaling molecules, to maintain cell growth and homeostasis [2,3]. Biological regulation by extracellular vesicles is widespread across both prokaryotes and eukaryotes [4]. Notably, bacteria, as among the major inhabitants of the human body, establish intricate relationships with host health and disease, and bacterial extracellular vesicles (BEVs) are indispensably involved in these processes [5,6]. With an increasing and deeper understanding of their biological function, BEVs have been found to influence various cellular behaviors, including the transport of genetic information, phage infection, mediation of metabolism, and interactions between bacteria and between bacteria and host [7,8].
In particular, BEVs are characterized as nanosized particles, surrounded by lipid-bilayer membranes and ranging from 20 to 400 nm in diameter [7]. On account of the diversity of bacterial types and biogenesis mechanisms, BEVs can carry versatile cargos inherited from mother cells, such as lipopolysaccharides (LPS), endotoxins, genetic information, and cytosolic and membrane proteins [9]. Thanks to their unique structure and intrinsic properties, these naturally occurring nanovesicles have attracted research interest as novel nanopharmaceuticals, prompting further exploration of their biomedical applications [10,11]. Generally, BEVs are widely evaluated as biotherapeutics in different forms. Firstly, these nanovesicles with hollow structures could serve as novel drug delivery platforms, facilitating the transport of diverse bioactive molecules and therapeutic cargo to recipient cells at the lesion site [12]. Thanks to the stability of the naturally occurring membrane structure, BEV-based drug delivery platforms can carry therapeutic genetic tools (e.g., siRNA, CRISPR-Cas9, etc.), protecting them from enzymatic degradation or hydrolysis in the complex physiological environment [13,14]. Besides, benefiting from their ease of modification, BEV-based drug delivery platforms can efficiently load small-molecule therapeutics [14], or the desired synthetic cargo (e.g., antigens, enzymes, therapeutic proteins, etc.) can be produced directly on the BEVs by editing the relevant gene in the parent bacteria. BEV-based drug carriers are also notable for their capability for targeted delivery to the disease area, enhancing drug accumulation and availability [15]. Moreover, BEV drug delivery platforms can be integrated with functional materials to facilitate combination therapy (e.g., photodynamic therapy, photothermal therapy, etc.) and maximize synergistic therapeutic efficiency [16]. On the other hand, because their formulation is similar to the parent bacterial membrane, BEVs display abundant pathogen-associated molecular patterns and bacterial membrane antigens, endowing them with unique immunogenicity as self-adjuvants [17,18]. Notably, thanks to this immunostimulatory capability, these nanovesicles are recognized as powerful and novel components for vaccine development [19,20]. Typically, nanosized BEVs can be internalized into immune cells, subsequently inducing a series of therapeutic immune responses. The interaction of BEVs with immune systems can evoke both innate and adaptive immune responses, suggesting the possibility of using BEVs to combat infections elicited by bacteria or viruses [21,22]. Furthermore, BEV-based vaccines have also emerged as attractive platforms for antitumor immunotherapy through the activation of immune cells in tumor regions [21]. Importantly, compared with traditional vaccination approaches, BEVs provide more flexible and universal platforms as nanovaccines for a broad range of biomedical applications, suggesting their great potential to be developed as a new generation of nanopharmaceuticals [23]. In accordance with these promising aspects of BEVs, this review provides a systematic summary of BEV-based nanopharmaceutical development for biomedical applications in recent years (Fig. 1).
We start with a concise overview of BEV structure and composition, particularly focusing on elucidating the diverse biogenesis mechanisms by which they originate from parent bacteria. Subsequently, we comprehensively demonstrate the interaction of BEVs with host cells to help understand the principles of BEV-based nanopharmaceutical design. Furthermore, we provide an account of recent advances in BEV-based therapeutics and their biomedical applications, specifically elaborating their utility as drug delivery platforms capable of carrying a range of cargo, such as genetic tools, molecular therapeutics, and functional materials. Meanwhile, we also summarize noteworthy BEV-based vaccination approaches as powerful platforms to combat various diseases (e.g., bacterial infection, viral infection, and cancer). Finally, we discuss the current challenges in the development of BEV-based nanopharmaceuticals, aiming to provide meaningful insight for the improvement of novel approaches toward the potential clinical practice of BEVs.

Biogenesis mechanism of diverse bacteria-derived extracellular vesicles

Bacterial extracellular vesicles (BEVs) are crucial biological components mediating bacterial cellular events, including nutrient acquisition, genetic information transfer, and mediation of interactions with host cells [7]. All these functions suggest that BEVs exhibit great potential as novel nanopharmaceuticals to combat diverse diseases and regulate healthcare. As a fundamental biological process of living matter, BEV production occurs spontaneously in parent bacteria without additional energy consumption [24]. Thus, a detailed understanding of the structure, composition, biogenesis, and functions of BEVs supports the development of these naturally produced membrane entities for biomedical applications, especially as drug delivery platforms and vaccination strategies. Because parent bacteria include both Gram-negative and Gram-positive strains, diverse biogenesis mechanisms lead to the unique membrane structures and loaded contents of distinct BEV types. Typically, Gram-negative bacteria have a double-layered envelope, comprising the outer membrane, the periplasmic space, and the cytoplasmic membrane, while Gram-positive bacteria have only one cytoplasmic membrane covered by a thick peptidoglycan cell wall [25]. The various formation mechanisms and characteristics of BEVs are attributed to their original bacteria, as shown in Fig. 2.
Extracellular vesicles derived from Gram-negative bacteria arise via two main biogenesis routes: blebbing of the outer membrane and explosive cell lysis. When the cell envelope suffers abnormal disturbances (e.g., interaction with hydrophobic compounds, instability of peptidoglycan biosynthesis, denaturation of membrane proteins, etc.), the outer membrane undergoes blebbing to produce outer membrane vesicles (OMVs) [7,9]. During this process, the inner membrane remains intact, preventing cytoplasmic cargo from being loaded into the OMVs. OMVs are around 20-250 nm in size with spherical morphology, containing abundant lipopolysaccharides (LPS), lipids, and membrane proteins [24,26]. On the other hand, when the peptidoglycan layer of Gram-negative bacteria is weakened by autolysin, the inner membrane subsequently protrudes into the periplasm to produce outer-inner membrane vesicles (OIMVs). Explosive outer membrane vesicles (EOMVs), by contrast, are produced via the explosive cell death model [7,27,28]: phage-derived endolysin destroys the peptidoglycan layer around the bacterium, inducing cell explosion, and the shattered membrane fragments fuse to generate EOMVs. OIMVs comprise two lipid bilayers, the outer and inner membranes of the parent bacterium, together with membrane proteins, while EOMVs have only one lipid bilayer derived from the original outer membrane. Both EOMVs and OIMVs contain cytosolic cargo (e.g., small-molecule metabolites, genomic DNA, RNA, endolysin, virulence components, etc.), which differentiates them from OMVs [29].

Fig. 1 Schematic illustration of developing bacterial extracellular vesicles as new-generation nanopharmaceuticals for biomedical applications, highlighting their unique advantages and addressing the potential challenges in BEV-based nanopharmaceutical design

Besides, in certain Gram-positive bacteria, endolysin initiates bubbling cell death by hydrolysis of the thick peptidoglycan cell wall, potentiating the formation of cytoplasmic membrane vesicles (CMVs) [7]. CMV formation is attributed to stress-mediated bacterial lysis, peptidoglycan degradation by exogenous endolysins, and drug-induced suppression of cell wall biosynthesis. CMVs also contain cytosolic cargo from Gram-positive bacteria, similar to EOMVs and OIMVs [30]. Notably, the different compositions and structures of BEVs result in distinct biological functions in the physiological environment, especially regarding their interaction with the host immune system [31]. Hence, BEVs have attracted great interest from academia and industry for use as nanopharmaceuticals in biomedical applications, especially for drug delivery and vaccination to combat various diseases.
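For quick reference, the Python sketch below encodes the four vesicle classes described in this section as plain records. The fields simply mirror statements in the text; this is an illustrative data structure, not a formal classification scheme.

```python
# Sketch: the four vesicle classes described above, encoded as plain
# records for quick reference. Fields summarize statements in the text.
from dataclasses import dataclass

@dataclass
class BEVType:
    name: str
    parent: str            # Gram-negative or Gram-positive
    bilayers: int          # number of lipid bilayers
    cytosolic_cargo: bool  # whether cytoplasmic contents are enclosed
    route: str             # biogenesis route

BEV_TYPES = [
    BEVType("OMV",  "Gram-negative", 1, False, "outer-membrane blebbing"),
    BEVType("OIMV", "Gram-negative", 2, True,
            "inner-membrane protrusion after peptidoglycan weakening"),
    BEVType("EOMV", "Gram-negative", 1, True, "explosive cell lysis"),
    BEVType("CMV",  "Gram-positive", 1, True,
            "bubbling cell death via cell-wall hydrolysis"),
]

for v in BEV_TYPES:
    print(f"{v.name}: {v.parent}, {v.bilayers} bilayer(s), "
          f"cytosolic cargo: {v.cytosolic_cargo}")
```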
Internalisation of BEVs into host cells

As key messengers for microbiota-host communication, BEVs carry a wide range of cargoes such as proteins, DNA, and RNA. The majority of these "messages" are compartmentalized inside BEVs and must be released into host cells to facilitate cellular events [32,33]. For this to happen, BEVs need to enter host cells and ultimately release these cargoes intact so that they can fulfill their biological roles. Depending on their origin, the exterior of BEVs is decorated with ligands such as LPS, lipoproteins, and other virulence factors [34]. These ligands play an important role in the internalization of BEVs, as their interaction with different receptors on the host cell triggers different internalization pathways [35]. The internalization pathway taken by BEVs is also influenced by the size of the BEVs [36] and the type of host cell, along with many other unknown factors [37]. The internalization of BEVs by host cells remains underexplored, but a few major pathways have been proposed and studied (Fig. 3). The most common internalization pathway utilized by extracellular entities such as BEVs is phagocytosis, particularly for entry into phagocytic immune cells such as neutrophils, dendritic cells, and macrophages [38]. Notable examples of BEVs internalized via phagocytosis are those of Streptococcus pneumoniae [38] and Mycobacterium tuberculosis [39,40]. Phagocytosis is initiated by phagocytic receptors, which trigger a signaling cascade leading to the rearrangement of lipid membranes and the actin cytoskeleton [40,41], ultimately resulting in engulfment of the BEVs and the eventual formation of phagolysosomes, in which the BEVs may be degraded to release their internal cargo [42]. BEVs are also found to enter non-phagocytic cells such as epithelial cells, suggesting that many other non-phagocytic pathways are possible [35]. An example of such a pathway is clathrin-mediated endocytosis (CME) [43,44]. The initiating ligands and relevant receptors are poorly understood, but the formation of clathrin-coated pits has been well studied. The first protein to assemble at the site of BEV docking is clathrin, which is followed by an assortment of structural proteins to form a clathrin-coated pit. Dynamin2 is then recruited at the neck of the developing invagination and undergoes GTP hydrolysis-dependent conformational changes to pinch off the nascent intracellular vesicle [45]. The BEV-carrying intracellular vesicles fuse with endosomes and eventually release their cargoes upon disintegration of the BEV lipid bilayer. This pathway is exclusively taken by BEVs of Lactiplantibacillus plantarum [46], whose uptake was blocked upon treatment with the CME inhibitor chlorpromazine. However, the internalization of BEVs may not rely solely on one pathway; instead, multiple pathways can be utilized simultaneously. In the case of Bordetella bronchiseptica [43], internalization into RAW264.7 cells was decreased upon inhibition of either macropinocytosis (with cytochalasin D) or CME, suggesting that these BEVs rely on both macropinocytosis and CME for internalization.
BEVs, along with pathogens such as viruses and bacteria, can invade host cells through receptor-triggered internalization pathways such as caveolin-mediated endocytosis [47]. This is a preferred pathway, as the resulting intracellular caveolae are believed not to fuse with lysosomes [48,49], hence ensuring the survival of the invading pathogens. Caveolin-mediated endocytosis is initiated by the binding of ligands and virulence factors like folic acid [50], alkaline phosphatase [51], cholera toxin [51], and viruses like HIV-1 [52]. This is followed by oligomerization of caveolin proteins on lipid raft domains to form the flask-shaped invaginated caveolae [53]. The caveolae pinch off in a GTP-dependent manner, similar to clathrin-coated pits, to form caveosomes, whose intracellular fate depends on their content [49]. Caveosomes containing the SV40 virus were found to have neutral pH and do not fuse with lysosomes [54], while albumin-rich caveosomes were trafficked along the endosomal degradation pathway [55]. It can be inferred that interactions between cargo and caveolar components play a role in the destination of caveosomes [49]. In the work of Franz G. Zingl, the outer membrane proteins (Omp) OmpU and OmpT were found to be essential for the predominantly caveola-mediated internalization of V. cholerae BEVs [56]. These BEVs protected the virulence factor cholera toxin (CT) from extracellular trypsin and successfully released it in HT29 cells. This reiterates the importance of surface ligands in the initiation of caveolin-mediated endocytosis, which allows successful delivery of cargoes into host cells. The endocytosis pathways mentioned earlier involve the coating of the entire BEV, including its lipid bilayer, with transmembrane proteins like caveolin to form intracellular vesicles. However, there are also pathways in which only the internal cargo is internalized, without the lipid bilayer of the BEV. One such pathway is membrane fusion, a process triggered by the binding of N-ethylmaleimide-sensitive factor (NSF)-attachment protein receptors (SNAREs) [57]. This pathway requires the presence of Ca2+ ions, and it is observed that during fusion the phospholipid bilayers of BEVs adhere to and assimilate with the host cell membrane [57]. This is well demonstrated by Bomberger et al., where the fluorescence of rhodamine-R18 membrane-labelled P. aeruginosa BEVs increased when they were applied to mammalian epithelial cells, showing a mixing of lipids between the two bilayers and eventual dilution of the previously quenched dye [58]. Another internalization pathway taken by BEVs is through Toll-like receptors (TLRs), a family of cellular receptors that recognize microbial molecules [59]. TLRs constitute the primary strategy for the detection of xenogeneic substances, such as the detection of LPS of Gram-negative bacteria by TLR4 or of lipopeptides of Gram-positive bacterial cells via TLR2 [60]. These TLR-detectable ligands are an integral part of BEVs. Upon binding of BEVs to TLRs, a cascade of signaling events occurs, leading to the internalization of the BEV-TLR complex into the cell as endosomes [61], which may further develop into autophagosomes for BEV disintegration and release of cargoes. For example, BEVs secreted by Staphylococcus aureus containing immunostimulatory cargoes are internalized by lung epithelial A549 cells via TLR2 and induce autophagy [62], while BEVs of the Gram-negative bacterium Bacteroides thetaiotaomicron were predicted in silico and experimentally confirmed to be internalized via the TLR4 pathway [63].
Based on their origin and surface ligands, BEVs can interact with different receptors on the membrane of host cells and trigger a variety of internalization pathways for their entry. The phospholipid bilayers of internalized BEVs can be disintegrated in late endosomes or lysosomes to release their internal cargo for pathogenic or therapeutic effects. In some cases, the mere interaction of BEVs with host cell surface receptors is sufficient to trigger other cellular events, such as immune responses, without the BEVs having to enter the host cell [64].

Interaction of BEVs with the host immune system

As BEVs are isolated from bacteria that may be originally pathogenic or probiotic, they retain the pathogen-associated molecular patterns (PAMPs) of their parent bacteria, such as lipopolysaccharides (LPS), peptidoglycan, or DNA [65]. These immunostimulatory biomolecules are recognized by pattern recognition receptors (PRRs), which can be found on epithelial [66] and immune cells [67], triggering innate and adaptive immune responses. The interactions between BEVs and different components of the immune system are discussed in this section (Fig. 4). Upon an invasion by bacteria and their BEVs, epithelial cells serve as the frontline defenders, as they are usually the first obstacle met by invading pathogens [68]. Although epithelial cells are not strictly immune cells, they are armed with PRRs such as TLRs [69] and nucleotide-binding oligomerization domain-like receptors (NODs) [70], and they are capable of releasing cytokines to stimulate innate and adaptive immune responses upon detection of BEVs. Indeed, the BEVs of Porphyromonas gingivalis [71] inherit several virulence factors, such as LPS and gingipains (cysteine proteinases), inducing the production of the pro-inflammatory cytokines interleukin (IL)-6 and IL-8 in human gingival epithelial cells. Furthermore, these BEVs can travel in the bloodstream from their site of infection in the mouth to distal organs such as the lungs, showing that BEVs play a role in systemic Porphyromonas gingivalis infection. Similarly, the BEVs of Fusobacterium nucleatum stimulate the secretion of interleukin-8 (IL-8) and tumor necrosis factor (TNF) in colonic epithelial cells [72]. These secretions were perturbed by treatment with TLR4 inhibitors, suggesting that transmembrane TLR4 activation is required for this proinflammatory signaling. As most BEVs are internalized into host cells, as mentioned earlier, they can also activate intracellular receptors in epithelial cells, such as NOD1. The Gram-negative mucosal bacteria Helicobacter pylori, Pseudomonas aeruginosa, and Neisseria gonorrhoeae secrete BEVs containing peptidoglycans that can trigger intracellular NOD1 signaling, with subsequent release of IL-8 and the antimicrobial peptides (AMPs) human β-defensin 2 (hBD2) and hBD3 [73]. This shows that BEVs can be detected by intracellular receptors and that epithelial cells can participate directly in the immune response against BEVs, beyond mere recruitment of immune cells via the release of cytokines. The first true members of the immune system to respond to bacterial infections and the release of BEVs are the neutrophils. They can be recruited to infection sites by cytokines such as CXCL1/IL-8, released by endothelial cells upon TLR4 interaction with E. coli BEVs [74].
In another instance, BEVs of Haemophilus influenzae induced airway epithelial cells to secrete IL-1β, which further induced Th17 cells to release IL-17 to recruit neutrophils [75]. As part of the innate immune system, neutrophils are phagocytic cells that internalize invading pathogens like bacteria [76]. Although it is unclear whether neutrophils can internalize pure BEVs, current studies show that E. coli BEVs hybridized with nanoparticles can be internalized and hitchhike on neutrophils as chemotaxis-guided carriers [77,78]. Neutrophils, with a whole arsenal of antimicrobial agents stored in cytosolic granules [79], eradicate invading pathogens effectively, but this pathway can be countered by BEVs of Porphyromonas gingivalis [80], which can degranulate neutrophils and secrete gingipains, proteases that cleave antimicrobial agents such as myeloperoxidase (MPO) and LL-37, sustaining immuno-evasion. Other than phagocytosis of pathogens, neutrophils can also secrete cytokines and chemokines to recruit further immune cells to fight infections [81]. Upon triggering by the BEVs of N. meningitidis, neutrophils released tumor necrosis factor (TNF)-α and IL-1β, facilitating inflammation [82]. Furthermore, macrophage inflammatory protein-α and macrophage inflammatory protein-β are also secreted, recruiting macrophages for immune reinforcement [83]. Macrophages are integral to the innate immune response against pathogens, actively phagocytosing them while simultaneously secreting cytokines and antimicrobial agents to enhance antimicrobial effects [84]. The BEVs of E. coli and S. aureus were quickly taken up by macrophage cells in vitro [85,86]. This increased the expression of cytokines (such as IL-1β, IL-6, and IL-10) and costimulatory molecules (CD86) in macrophages, effectively polarizing them from the M0 to the proinflammatory M1 phenotype.

Fig. 4 The interactions of BEVs with various cellular components of the adaptive and innate immune response

BEVs do not merely evoke a short-term immune response; studies have also shown that they are capable of stimulating the adaptive immune response, leading to long-lasting resistance against such BEVs (Fig. 4). This is achieved via the maturation of dendritic cells, which present pathogenic antigens to T and B cells for their activation [87,88]. The BEVs of Salmonella typhimurium stimulated the maturation of DCs with increased expression of MHC-II and CD86, which subsequently led to the activation of T and B cells for immunity against live bacterial challenges [89]. When bone marrow-derived dendritic cells (BMDCs) were treated with the BEVs of the periodontal pathogens Porphyromonas gingivalis and Tannerella forsythia, the expression of the cytokines IL-1β, IL-6, IL-23, and IL-12p70 was triggered; this was not observed for Treponema denticola [90], which was elucidated to be due to the proteolytic capabilities of its BEVs, which degraded the secreted cytokines. By coculturing naïve CD4+ T cells with such BEV-primed BMDCs, the T cells were differentiated differently: P. gingivalis and T. denticola led to the induction of IL-17A+ T cells, whereas T. forsythia largely induced interferon (IFN)-γ+ T cells, with IL-17A+ T cells as the minority. Treatment with BEVs of E. coli in MC38-OVA and B16F10-OVA tumor mouse models also recruited cancer antigen-specific CD8+ T cells and increased their expression of cytotoxic molecules such as granzyme B, TNF-α, perforin, and IFN-γ [91].
B cells are also key players in developing adaptive immunity against pathogens, producing bactericidal antibodies and forming memory cells for prolonged and acute responses against future infections [92]. BEVs of Neisseria meningitidis were reported to stimulate B cells to produce cross-reactive antibodies that were bactericidal toward both Neisseria meningitidis and Neisseria gonorrhoeae [93]. This was possible because the two pathogens display similar antigens (PorA/B and lipooligosaccharide) on the whole cell and on their BEVs. Meanwhile, the BEVs of Neisseria lactamica (OMVs) could induce tonsillar B cells to produce polyclonal IgM and increase the proliferation of a subset of B cells, through a mechanism that possibly involves a mitogenic ligand and B cell receptor interaction [94]. Mice injected with BEVs of Salmonella typhimurium displayed high amounts of antigen-specific IgG produced by B cells and were protected against a live Salmonella typhimurium challenge 14 days after injection, suggesting established adaptive immunity [89]. Studies on Neisseria meningitidis BEV-based vaccines show that memory B cells were activated and isolated upon vaccination [95,96]. The interactions between BEVs and various components of the immune system to elicit innate and adaptive responses underline the potential of using BEVs as vaccines to induce immunity against future infections. It is potentially safer to utilize BEVs as vaccines compared with attenuated or whole cells, as BEVs themselves are not propagative and thus will not lead to serious infections or adverse reactions, as opposed to live bacteria [97].

BEV delivery of gene therapy

In recent years, gene therapy has emerged as a new form of pharmacological approach to treating disease. Gene therapy involves the introduction of genetic materials such as DNA or RNAs and enzymes like nucleases or genome-editing enzymes [98]. These materials are introduced into host cells, often for gene silencing (using miRNA, siRNA, and shRNA), gene introduction via plasmids or naked genetic material, and gene editing using nucleases or clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein (Cas) nucleases [99]. Gene therapy has become a potent treatment modality, with several genetic-based treatments demonstrating clinical success [100]. Unfortunately, genetic materials and their associated enzymes are highly susceptible to degradation both ex vivo and in vivo, rendering them unsuitable for direct introduction into hosts [101]. Furthermore, these materials lack targeting capabilities and can be internalized by non-target cells, leading to adverse physiological effects [102]. Thus, gene therapy is often actualized by loading these genetic materials into nanocarriers such as viral capsids, liposomes [13], exosomes [103], and synthetic nanoparticles [104,105]. These nanocarriers protect their genetic cargo from physiological conditions and enable directed delivery to target sites [106,107]. BEVs can also be potential carriers for gene therapy, as BEVs from E. coli and H. pylori, respectively, have been found to contain genetic material for the development of antibiotic resistance [108] and the regulation of host immune responses [109].
In comparison with the other nanocarriers, BEVs boast greater penetrative capabilities, ease of modification, and, most importantly, the potential to be industrially mass-produced [16]. Indeed, several advances utilizing BEVs as a delivery platform for the administration of gene therapy have shown preliminary success. The exemplary targeting capabilities of BEVs are particularly highlighted in the study conducted by Han Liu et al., in which BEVs derived from probiotic E. coli Nissle 1917 and loaded with siRNA were applied to ameliorate osteoporosis (Fig. 5A) [110]. The treatment of osteoporosis is traditionally achieved with hormonal drugs, which are often plagued with side effects [111]. Gene therapy using siRNA to silence SOST, thereby relieving inhibition of the WNT pathway, has previously been demonstrated to promote bone formation and ameliorate osteoporosis, but the short half-life and poor penetration of these siRNAs have limited their success [111]. To overcome these limitations, engineered E. coli Nissle 1917 BEVs displaying fused C-X-C motif chemokine receptor 4 (CXCR4) on the outer surface and internally loaded with SOST siRNA were developed. The fusion of hCXCR4 with the ClyA membrane protein facilitated the surface expression of hCXCR4, endowing the BEVs with bone-targeting capabilities. This modification successfully escorted the SOST siRNA to the femurs of mice. Mice treated with these BEVs exhibited higher bone mass and improved microarchitecture. This study on bioengineered-BEV-based gene therapy demonstrated that BEVs can be readily modified to incorporate non-native targeting capabilities, thereby serving as an effective delivery platform for siRNA therapies.

Fig. 5 BEVs as gene delivery platforms. A. Schematic of bone-targeting BEVs-hCXCR4-SOST siRNA (BEVs-CSs) for osteoporosis treatment. Reproduced with permission [110]. Copyright 2019, American Chemical Society. B. Schematic of siRNA@M-/PTX-CA-OMVs for modulation of macrophage metabolism and tumor metastasis suppression. Reproduced with permission [113]. Copyright 2021, American Chemical Society. C. Schematic of OMV tRNA-pre-miR-126 against breast cancer. Reproduced with permission [117]. Copyright 2022, Elsevier. D. Schematic of BEV-delivered DNA plasmids as vaccines. Reproduced with permission [119]. Copyright 2023, American Society for Microbiology. E. BEV-delivered CRISPR-Cas9 for dendritic cell-targeted gene editing. Reproduced with permission [120]. Copyright 2023, American Chemical Society

As previously mentioned, BEVs possess immunogenic properties due to the pathogen-associated molecular patterns (PAMPs) expressed on their surfaces. This characteristic enables them to be readily recognized and taken up by macrophages compared with other exosomes [112]. This property is exploited in the work of Qin Guo et al. for the targeting of tumor-associated macrophages (TAMs) and tumor metastasis suppression [113]. In metastatic tumors, upregulation of Redd1, initiated by tumor cell signaling, inhibits macrophage glycolysis. Combining tumor cell killing with Redd1 shutdown could effectively inhibit solid tumor metastasis. Therefore, a nanocarrier system utilizing E. coli BL21, capable of pH-responsive release of paclitaxel and delivery of Redd1 siRNA, was developed (Fig. 5B).
Upon treatment of tumors with the nanocarrier system, surface-anchored paclitaxel is released first upon reaching the tumor site, exerting cytotoxic effects on tumor cells and stopping tumor cell-initiated Redd1 upregulation. The Redd1 siRNA-carrying nanocarrier is then taken up by TAMs, restoring their glycolysis levels and polarizing them into a tumor progression-inhibiting phenotype. This study demonstrates how the affinity of BEVs for immune cells can be utilized for combinational gene therapy with small organic molecules, and it inspires future work using BEVs for immune cell targeting. MicroRNA (miRNA)-mediated gene therapy has emerged as a promising tool to combat cancers through the regulation of target genes in tumor cells [114]. Recent developments have instead utilized pre-miRNAs, which are shorter precursors of miRNA that can be processed into mature miRNA for RNA interference [115]. However, these approaches are still plagued by synthetic difficulties, degradation by intracellular nucleases, and poor loading into nanocarriers, which are often toxic [116]. Inspired by these challenges, Cui et al. developed msbB-mutant E. coli BL21-derived BEVs carrying tRNA(Lys)-pre-miR-126 (Fig. 5C) [117]. The unique cargo is a pre-miRNA disguised with a "tRNA scaffold", which is more stable and can be amplified into mature miRNAs in host cells [118]. The highly biocompatible BEVs specifically targeted tumor cells in mice via the AS1411 aptamer on their surfaces and lowered the expression of CXCR4, effectively lowering tumor proliferation. This study shows that BEVs can be bioengineered into superior and safe nanocarriers that can accommodate non-conventional genetic cargoes. In addition to delivering various forms of RNA for gene therapy, BEVs can also serve as carriers for DNA, acting as adjuvants for vaccines. In the work of Qiong Liu et al., DNA was delivered in the form of eukaryotic expression plasmids coding for the cytokines IL-17A or IFN-γ in H. pylori BEVs (Fig. 5D) [119]. The recombinant BEVs were used in conjunction with UreB and whole inactivated cells as the vaccine antigen. Mice injected with the BEV adjuvants had higher levels of anti-H. pylori IgG antibodies and a more lasting expression of IL-17A and IFN-γ than control groups. When challenged with H. pylori infection, mice immunized with the recombinant BEV adjuvants showed lower H. pylori colonization than those given wild-type BEVs or cholera toxin as adjuvants. Overall, the recombinant BEVs were able to induce stronger humoral and mucosal immune responses and protected hosts from H. pylori infection. This study serves as an example demonstrating that BEVs can effectively deliver DNA cargo, acting as adjuvants for vaccine development. In addition to gene silencing or gene introduction, gene editing through clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 has proven to be a powerful method for permanently knocking out genes [121]. However, the guide RNAs and Cas9 enzymes used in CRISPR-Cas9 gene therapy are susceptible to degradation by nucleases, and their poor penetration into and selectivity for target cells have limited their application as a viable treatment modality [122,123]. To fully draw upon the prowess of CRISPR-Cas9 gene editing, Min Li et al. developed novel bacterial nanomedicines (BNMs) based on polymer-lipid hybrid nanoparticles and BEVs from attenuated Salmonella to deliver CRISPR-Cas9 to dendritic cells [120] (Fig. 5E).
5E). The BEV-derived LPS conferred dendritic cell (DC)-targeting capabilities on the BNMs through TLR4-PAMP interactions. BNMs with higher BEV-to-nanoparticle ratios exhibited increased uptake by DCs, consequently leading to elevated expression of the costimulatory molecules CD80, CD86, and CD40 in these DCs. The CRISPR-Cas9 system delivered by these BNMs successfully knocked out the YTHDF1 gene in DCs, leading to CD8+ T cell-mediated tumor inhibition in MC38 tumor-bearing mouse models. The targeted delivery of sensitive RNAs and CRISPR-Cas9 specifically to DCs for DC activation and tumor inhibition underscores the significant potential of BEVs as a versatile delivery platform for effective gene therapies.

Therapeutic molecular cargo

The use of small molecules and biologics for the treatment of diseases has been the mainstream modality since the dawn of modern medicine [124]. Despite their extensive use as therapeutics, they suffer limitations such as non-selectivity, leading to adverse side effects, and susceptibility to metabolism in vivo [125,126]. As mediators of bacterial interspecies communication and bacteria-host interkingdom interactions, BEVs have intrinsic targeting capabilities that bring chemical and biological messengers to their intended destinations [127]. These properties can be hijacked by loading small-molecule or biological drugs into specially engineered BEVs, which deliver them specifically to the site of infection or disease. This results in the accumulation of therapeutic molecules at the target site, enhancing the therapeutic effect and reducing adverse effects due to off-target interactions [124]. Moreover, loading drugs into BEVs protects them from metabolism or degradation during circulation to the target site, which enhances the effective drug concentration at the target site and mitigates toxicity issues associated with drug metabolites [128]. Thus, there have been many advances utilizing BEV-based drug delivery platforms to overcome the limitations of small-molecule and biological drugs [124,129].

The small size of BEVs and their interactive nature with host cells [130] enable them to cross multiple membranes and physical barriers in the human body. An example of such a barrier is the blood-brain barrier (BBB), which divides the brain from the peripheral circulation, maintaining homeostasis and the brain microenvironment [131]. Its highly lipophilic nature prevents 98% of drugs from reaching the brain and exerting their intended therapeutic effects [132]. Neutrophils have been reported to pass through the BBB, and their expression of TLRs facilitates the interaction with and uptake of BEVs [133]. Based on these desirable properties of neutrophils, Pan et al. loaded pioglitazone (PGZ) into E. coli BEVs (OMV@PGZ), which hitchhike on neutrophils to cross the blood-brain barrier and enhance ischemic stroke therapy (Fig.
6A) [134]. Neutrophils engulf OMV@PGZ within 90 min, effectively cross the BBB, and release the intracellular OMV@PGZ after arriving at the ischemic area, where excess reactive oxygen species (ROS) induce the disintegration of neutrophils into neutrophil extracellular traps (NETs). The release of PGZ at ischemic sites activates peroxisome proliferator-activated receptor γ (PPARγ), which ultimately exerts neuroprotective effects in transient middle cerebral artery occlusion (tMCAO) mouse models. In this study, the unique immunogenic interaction between BEVs and neutrophils was exploited to deliver small-molecule drugs across the BBB.

BEVs have unique targeting abilities that can be utilized to load and deliver therapeutic drugs to disease sites. Chemotherapy drugs such as doxorubicin (DOX) are used as frontline drugs in the treatment of various cancers [135]. However, the non-selective nature of DOX results in severe adverse effects such as myelosuppression and immunosuppression, greatly limiting its application in the clinical setting [136,137]. Previous studies have attempted to address these selectivity issues by loading DOX into liposomes; however, the liposomal formulations themselves are plagued with toxicity issues as well [138]. BEVs are attractive alternatives to liposomes as they are highly biocompatible, and their outstanding immunogenicity may induce a local immune response that augments the antitumor effects of DOX [139]. This is exemplified in the work of Kuerban et al., who loaded DOX into attenuated Klebsiella pneumoniae BEVs (DOX-OMV) as a treatment for non-small-cell lung cancer (NSCLC) (Fig. 6B) [140]. Compared to free DOX, DOX-OMVs displayed superior pharmacokinetic properties, higher uptake, and a more potent cytotoxic effect in A549 cells. Interestingly, the empty BEV carrier exhibited a more potent antitumor effect than free DOX and DOX-liposomes, which was suggested to be contributed by the BEV-induced accumulation of macrophages at the tumor site. In another example, Zhuang et al. constructed E. coli-derived OMVs encapsulating UNC2025, an inhibitor of myeloid-epithelial-reproductive tyrosine kinase (MerTK), to block the efferocytosis of apoptotic tumor cells [141]. The released tumor-associated antigens are then covalently linked to the OMVs and delivered to the lymph nodes, evoking the maturation of dendritic cells and boosting immunotherapy against xenografted, metastatic, and recurrent tumor models in mice. This study demonstrates that, beyond merely serving as a drug delivery platform, BEVs can also act as an immune-response stimulant for synergistic anticancer therapy.

With their bacterial homing capabilities and established use as a therapeutic chassis, BEVs could potentially revolutionize the approach to overcoming bacterial antibiotic resistance. By delivering antibiotics directly to infection sites, BEVs enable high doses to be administered without causing systemic toxicities [142]. Weiwei Huang et al. discovered that stress-growing A. baumannii in medium spiked with sub-minimal inhibitory concentrations (sub-MIC) of levofloxacin elevated the production of BEVs containing high levels of levofloxacin (Fig.
6C) [143]. This was deduced to be a form of drug efflux mechanism for antibiotic resistance. The phenomenon was exploited to generate levofloxacin-enriched BEVs, safeguarding the cargo from harsh environmental conditions. More importantly, levofloxacin-enriched BEVs exhibited superior bactericidal effects compared to equivalent amounts of free levofloxacin in a mouse intestinal enterotoxigenic Escherichia coli (ETEC) infection model, and treatment with them resulted in reduced adverse effects. This example illustrates how BEVs can contribute to combating antibiotic-resistant bacteria by delivering high doses of antibiotics while minimizing unwanted side effects. In addition to delivering high doses of antibiotics, repurposing currently available antimicrobials is also a viable strategy to address antibiotic resistance [144]. One such example is rifampicin (Rif), an antibiotic traditionally employed to treat Gram-positive S. aureus infections [144]. Rif is ineffective against Gram-negative bacteria due to its inability to permeate their double-membrane structure [145]. This deadlock was broken in the research conducted by Shuang Wu et al., in which Rif was loaded into mesoporous silica nanoparticles that were then coated with BEVs derived from Gram-negative E. coli (Fig. 6D) [146]. The BEV coating conferred Gram-negative bacteria-homing capabilities on the biomimetic nanosystem, resulting in preferential uptake by E. coli even in the presence of S. aureus and achieving unprecedented permeation of Rif into E. coli cells. Furthermore, the antibacterial effects of the nanosystem were superior both in vitro and in an intraperitoneal infection mouse model. In short, the same-type homing capabilities of BEVs were utilized to deliver otherwise impermeable antibiotics to Gram-negative bacteria.

Other than small molecules, BEVs can also carry biologics as therapeutic cargoes, protecting them from degradation by proteases or physiological conditions during circulation to the target site [147]. This is embodied in the work of Wenjing Zai et al., in which the enzyme catalase was loaded into E. coli BEVs for hypoxia relief and enhanced radiotherapy of tumors (Fig. 6E) [148]. The effectiveness of radiotherapy is greatly limited by hypoxic conditions in tumors, which can potentially be reversed by the delivery of oxygen-evolving catalase [149,150]. Catalase-containing BEVs were isolated from hydrogen peroxide-stressed E. coli cells, and their catalytic activities were retained even after treatment with proteases. The BEVs not only effectively increased oxygen levels in tumor cells and enhanced the effects of radiotherapy but also induced an immune response in CT26 tumor-bearing mice. Overall, the BEVs provided hypoxia relief in tumor cells, leading to synergistic radio- and immunotherapy. This study illustrates the capacity of BEVs to accommodate a diverse array of cargoes, including biologics, protecting them from protease degradation while also eliciting an immune response for synergistic treatments.
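As an illustration of the oxygen-evolving chemistry exploited in this strategy, the short sketch below integrates Michaelis-Menten kinetics for catalase-mediated H2O2 decomposition (2 H2O2 → 2 H2O + O2). All parameters (turnover number, Michaelis constant, enzyme concentration) are assumed, order-of-magnitude placeholders rather than values reported in the cited study [148].

```python
# Illustrative sketch: oxygen evolution by catalase-loaded BEVs.
# All kinetic parameters are assumed, order-of-magnitude placeholders,
# not values from the cited study.
import numpy as np
from scipy.integrate import solve_ivp

KCAT = 2.0e5      # assumed turnover number, 1/s (catalase is very fast)
KM = 80e-3        # assumed Michaelis constant, mol/L
E_TOTAL = 1.0e-9  # assumed concentration of encapsulated catalase, mol/L

def rates(t, y):
    h2o2, o2 = y
    v = KCAT * E_TOTAL * h2o2 / (KM + h2o2)  # H2O2 consumption rate
    # Stoichiometry: 2 H2O2 -> 2 H2O + O2, so O2 forms at half the rate
    return [-v, 0.5 * v]

sol = solve_ivp(rates, (0, 600), [100e-6, 0.0], dense_output=True)  # 100 uM H2O2
for t in (0, 60, 300, 600):
    h2o2, o2 = sol.sol(t)
    print(f"t={t:4d} s  [H2O2]={h2o2*1e6:7.2f} uM  [O2]={o2*1e6:6.2f} uM")
```

Even at nanomolar enzyme loadings, the high turnover of catalase converts micromolar H2O2 pools into O2 on a timescale of minutes, which is the rationale for using it as an in situ oxygen generator in hypoxic tumors.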
Functional agents for combinational therapy

By taking advantage of bacteria-derived extracellular vesicles, delivery platforms can not only prolong the blood circulation time of drugs but also accumulate therapeutic cargo in the targeted lesion region [129,151]. In recent decades, with the remarkable development of nanotechnology and functional materials [106], the application of BEV-based drug delivery has been further expanded to integrate multimodal therapy against diverse diseases (e.g., bacterial infection, cancer) by loading nanomaterials into the naturally produced membrane vesicles. Emerging advances in functional materials have opened new biomedical applications, as these materials can respond to external stimulation to produce cytotoxic substances (e.g., reactive oxygen species, heat) and thereby achieve the therapeutic goal [152-154]. The thriving development of nanosized functional materials has also boosted the establishment of novel strategies, such as photodynamic therapy (PDT) [155,156] and photothermal therapy [157,158], for the treatment of various health conditions, including tumor eradication and the elimination of bacterial infections. However, these biomedical nanomaterials (e.g., gold nanoparticles, mesoporous silica nanoparticles, metal-organic frameworks, polymers) [159,160], as exogenous components, often suffer from off-target accumulation in healthy tissue, poor solubility in the physiological environment, and questionable biocompatibility. Thus, OMVs have become an ideal cargo delivery platform in which functional nanoagents are encapsulated to formulate "biomimetic nanoparticles" [161-164]. Inheriting the virtues of OMVs, such a delivery platform can activate immunotherapy within the lesion site through the naturally presented adjuvants on the OMVs, while the complementary therapeutics loaded in the platform provide synergistic and enhanced therapeutic efficacy compared with the parent components alone [165].

In particular, photothermal therapy has been established as a powerful strategy to supplement immunotherapy within the lesion site using OMV-based nanopharmaceuticals integrated with functional agents. Chen et al. developed a hybrid eukaryotic-prokaryotic nanoplatform (PI@EPV) encapsulating poly(lactic-co-glycolic acid)-indocyanine green (PLGA-ICG) to boost the synergistic antitumor effect (Fig. 7A) [166]. By fusing melanoma cytomembrane vesicles (CMVs) and Salmonella-derived outer membrane vesicles (OMVs), the PI@EPV nanoplatform could efficiently localize inside the tumor site, stimulating the antitumor immune response, including both dendritic cell maturation and the activation of cytotoxic T lymphocytes. Moreover, the localized photothermal agent ICG efficiently transferred near-infrared stimulation into hyperthermia to initiate immunogenic cell death, subsequently producing tumor-associated antigens to augment the immunotherapeutic efficacy of PI@EPV. Similarly, different photothermal agents have been employed with OMVs to construct integrated nanopharmaceuticals with improved cytotoxic immune responses. Chen et al. wrapped outer membrane vesicles from Salmonella VNP20009 onto mesoporous polydopamine nanoparticles, the core component mediating the photothermal response (Fig.
7B, C) [167]. The resulting integrated nanocomposite (MPD@DMV) showed passive localization in the tumor site and synergistic tumor regression in the B16F10 melanoma mouse model, attributable to T cell infiltration and significant release of antitumoral cytokines. Notably, intravenous injection of MPD@DMV activated a better long-term immune response than intratumoral administration, potentiating further clinical translation for tumor vaccination.

Moreover, OMV-based drug delivery platforms can also encapsulate functional materials for combinational therapy against bacterial infection, benefiting from the bacteria-specific targeting ability of OMVs. Recently, Wei et al. developed a targeted biomimetic delivery platform to transport metalloantibiotics for eradicating multidrug-resistant bacteria (e.g., A. baumannii) [168]. The OMVs, derived from E. coli, were genetically modified to anchor a targeting antibody fragment, realizing specific recognition of A. baumannii. After systematic screening, the metal complex Zn(Bq)2 was selected as the killing component and loaded into a metal-organic framework (zeolitic imidazolate framework-8) to maintain stability and enhance loading efficiency in the OMV-based delivery platform. Importantly, the photosensitizer chlorin e6 was co-loaded into the platform to construct the final nanomedicine, named ZnBq/Ce6@ZIF-8@OMV, enabling synergistic bacterial eradication through intense ROS production during photodynamic treatment (Fig. 7D). This OMV-based delivery platform shows great potential for disrupting A. baumannii-infected biofilms and accelerating recovery from meningitis in mice (Fig. 7E).

Notably, the integration of functional materials with OMV-based cargo delivery platforms not only enhances therapeutic performance through combinational modalities but also improves the stability and biosafety of nanopharmaceuticals in complicated physiological environments. During circulation in the blood, OMVs build up complicated interactions with neutrophils, endothelial cells, and other immune-related cells to stimulate a systemic inflammatory response. However, an excessive immune response is also a potential concern for the appropriate application of OMVs in clinical trials. To maintain sufficient immunogenicity while keeping the OMVs intact, biomineralization of the OMV surface with abiotic materials can be a proper strategy to enhance the bioavailability and therapeutic efficiency of BEV-based drug delivery systems [170]. In particular, Chen et al. genetically modified E. coli to harvest melanin-rich OMVs for targeted photothermal immunotherapy in the tumor region [171]. More importantly, to improve the systemic biosafety of the OMVs, the nanopharmaceuticals were further functionalized with calcium phosphate as an outer layer, alleviating the excessive inflammatory response and liver damage associated with intravenous administration. The biomineralized OMVs exhibited an outstanding antitumor immune response combined with photothermal efficacy, potentiating effective suppression of tumor progression and recurrence. Similarly, Ban et al. developed engineered E. coli-derived outer membrane vesicles encapsulating oncolytic adenoviruses to mediate tumor cell autophagy for systemic immunotherapy (Fig.
7F) [169]. The OMVs were genetically engineered to express pyranose oxidase on the surface and further modified by biomineralization with calcium phosphate to prevent elimination by the innate immune system and improve the biosafety of the nanocomposites. After internalization into tumor cells, the pyranose oxidase boosts the production of H2O2 to initiate tumor autophagy and autophagosome formation, subsequently promoting viral replication and finally leading to cell death. This OMV-based nanocomposite illustrated cascade-amplified immunotherapy and successful attenuation of TC-1-hCD46 xenograft tumor growth.

Fig. 7 BEV-based delivery platforms loading functional materials for combinational therapy. A. Schematic illustration of the hybrid eukaryotic-prokaryotic nanoplatform (PI@EPV) encapsulating PLGA-indocyanine green to boost the synergistic antitumor effect. Reproduced with permission [166]. Copyright 2020, John Wiley and Sons. B. Scheme of the construction route for the OMV-coated polydopamine nanoparticle (MPD@DMV) to augment antitumoral photothermal immunotherapy; C. TEM images of MPD@DMV morphology and characterization of its photothermal performance. Reproduced with permission [167]. Copyright 2023, American Chemical Society. D. Illustration of the targeted biomimetic delivery platform (ZnBq/Ce6@ZIF-8@OMV) to transport metalloantibiotics for eradicating multidrug-resistant bacteria; E. In vivo antibacterial effect of ZnBq/Ce6@ZIF-8@OMV, eliminating bacteria in brain tissue and promoting the survival of infected mice. Reproduced with permission [168]. Copyright 2024, The American Association for the Advancement of Science. F. Schematic presentation of biomineralized E. coli-derived outer membrane vesicles encapsulating oncolytic adenoviruses to mediate tumor cell autophagy for systemic immunotherapy. Reproduced with permission [169]. Copyright 2023, Springer Nature.
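The photothermal arm of these combination platforms can be rationalized with a lumped-parameter energy balance of the kind popularized by Roper and co-workers: absorbed laser power heats the suspension while Newtonian cooling opposes it. The sketch below uses this textbook model with invented parameters; none of the numbers are taken from the MPD@DMV or PI@EPV studies.

```python
# Minimal lumped-parameter sketch of NIR photothermal heating (Roper-style
# energy balance). Every parameter below is an assumed placeholder.
import numpy as np

ETA = 0.35        # assumed photothermal conversion efficiency
P_LASER = 1.0     # assumed incident laser power, W
A808 = 0.8        # assumed absorbance of the suspension at 808 nm
H_S = 0.010       # assumed lumped heat-transfer coefficient x area, W/K
M_C = 1.0 * 4.18  # 1 g of water-like suspension x specific heat, J/K
T_AMB = 25.0      # ambient temperature, deg C

# Absorbed and converted optical power (Beer-Lambert attenuation)
q_in = ETA * P_LASER * (1.0 - 10 ** (-A808))

# Analytic solution of m*c*dT/dt = q_in - h*S*(T - T_amb)
t = np.linspace(0, 600, 7)  # 10 min of irradiation
temp = T_AMB + (q_in / H_S) * (1.0 - np.exp(-H_S * t / M_C))
for ti, Ti in zip(t, temp):
    print(f"t = {ti:5.0f} s  T = {Ti:5.1f} deg C")
```

With these placeholder values the suspension plateaus near the 45-55 °C window typically targeted for photothermal ablation; in practice the conversion efficiency, absorbance, and heat-loss term would be fitted from measured heating and cooling curves.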
BEVs as nanovaccines

Vaccines are therapeutic formulations designed to elicit a host immune response, enabling the host organism to develop long-term immunity against the actual pathogen in the future [172]. Vaccines should engage with immune cells of the innate response, typically characterized by the secretion of pro-inflammatory cytokines and the recruitment of neutrophils or macrophages [173,174]. To ensure long-term immunity and protection against the intended pathogen, vaccines must also trigger the adaptive immune response, usually hallmarked by the activation or maturation of APCs, T cells, and B cells, and the secretion of antigen-specific antibodies [175]. Some current vaccines are based on attenuated or inactivated pathogens such as bacteria or viruses [172], which carry PAMPs and antigens that activate the immune system without being virulent. However, their safety remains a topic of debate [176]. As derivatives of bacteria, BEVs also contain PAMPs, which allow them to interact with immune cells just like whole bacterial cells, suggesting their potential role as vaccines themselves or as adjuvants to boost immune responses. Their non-propagative nature is considered safer than live bacterial vaccines [176], and with recombinant technology, BEVs can be engineered to modulate their immunogenicity or even express foreign antigens for cross-species immunity. The feasibility of BEVs as vaccination platforms is demonstrated by the success, albeit limited, of a pioneering Neisseria meningitidis BEV-based vaccine [177], which has inspired many recent advances in BEV-based nanovaccines against a wide range of pathogens and diseases.

Bacterial infections

Benefiting from their ease of engineering, BEVs can express non-native proteins or protein conjugates in high numbers via recombinant technologies [21]. This concept was effectively implemented in the research conducted by Li et al., where engineered BEVs of Salmonella enterica expressing the surface lipoprotein SaoA were developed as a vaccine against Streptococcus suis infection (Fig. 8A) [178]. SaoA is a surface-anchored protein that is highly conserved across S. suis species and has great potential as a vaccine antigen. SaoA was conjugated to the C terminus of the surface membrane lipoprotein (Lpp) and expressed on the surface of the Salmonella BEVs for optimal immunogenicity. Mice injected with BEVs bearing SaoA-Lpp conjugates had higher levels of cytokines and anti-SaoA antibodies in serum, and lower bacterial counts in their blood and brain tissues, compared to control groups. When subjected to a 50% lethal dose (LD50) of S. suis serotype 2, 100% of mice receiving BEVs with SaoA-Lpp conjugates survived, whereas PBS-treated mice died within 3 days. This study demonstrates that BEVs can be engineered such that otherwise inaccessible proteins are recombinantly expressed on the BEV surface to maximize their interaction with host immune cells and improve antigen-specific antibody responses [179].

The development of BEV-based vaccines often involves genetic engineering and altering the expression of PAMPs found on the outer surface to modulate immunogenicity or to incorporate foreign antigens for cross-immunity, an often expensive and tedious process [180]. The alteration of PAMPs on bacteria and BEVs can also be achieved by cultivating them in modified media. For example, Baker et al.
demonstrated that Burkholderia pseudomallei cultivated in a medium deprived of iron and zinc expresses the virulence factors type three secretion system 3 (T3SS-3) and type six secretion system 1 (T6SS-1) [181]. These virulence proteins allow B. pseudomallei to reside in macrophages and survive in hosts [182]. Based on the hypothesis that BEV vaccines expressing these virulence factors would induce a more effective immune response, the authors developed a B. pseudomallei BEV-based vaccine enriched with T3SS-3 and T6SS-1 (M9-OMV) (Fig. 8B). Although M9-OMVs were enriched with virulence factors, they exhibited no toxicity to living cells and, alongside the live attenuated B. pseudomallei vaccine (Bp-82), protected mice against virulent B. pseudomallei. M9-OMVs outperformed Bp-82 in eliciting immune responses, inducing higher levels of IgG in serum and actively engaging T cells and dendritic cells. This study promises an effective BEV-based vaccine against B. pseudomallei that outperforms live attenuated bacterial vaccines without the inherent safety risks of replicating live vaccines. Furthermore, such a vaccine is engineered simply by modifying the cultivation conditions of the parent bacteria, without genetic engineering.

Fig. 8 C. ΔompAΔompCΔompD BEV nanovaccines protected mice against all Salmonella strains and APEC O78 challenges. Reproduced with permission [185]. Copyright 2020, Frontiers. D. Schematic of hybridization of BEVs with Au nanoparticles, resulting in higher size homogeneity and an increased immune response. Reproduced with permission [190]. Copyright 2022, Wiley-VCH. E. Schematic of hybridization of BEVs with macrophage membranes for increased biocompatibility and a modulated immune response. Reproduced with permission [192]. Copyright 2024, Elsevier Inc.

The development of successful antibacterial vaccines is often difficult due to the multiple serotypes of pathogenic bacteria [183], and a broad-spectrum vaccine offering protection against most pathogenic serotypes is in high demand [184]. By truncating the outer membrane proteins (OMPs) of Salmonella Typhimurium χ3761 and their BEVs, Yuxuan Chen et al. developed a broad-spectrum vaccine that provided cross-protection against various Salmonella strains and avian pathogenic Escherichia coli O78 (APEC O78) in mice and chickens [185] (Fig. 8C). The study was based on the theory that deletion of major OMPs in BEVs may affect the expression of conserved OMPs and therefore the cross-protection of such BEV vaccines. In mice, ΔompCΔompD Salmonella Typhimurium UK-1 BEVs induced higher IgG and IgA levels, but immunization with ΔompAΔompCΔompD BEVs protected mice against all Salmonella strains, Shigella, and APEC O78 challenges. Similarly, in chicken models, ΔompAΔompCΔompD-immunized chickens were found to survive S.
Enteritidis and APEC O78 challenges. These results suggest that ΔompAΔompCΔompD BEVs could be a feasible broad-spectrum vaccine against the colibacillosis-causing bacteria tested in the study. In another example, a well-established Yersinia pseudotuberculosis-based nanovaccine platform was genetically engineered to highly express PspA and developed into a broad-spectrum vaccine (OMV-PspA) against influenza-mediated secondary Streptococcus pneumoniae (Spn) infections [186]. OMV-PspA induced higher levels of anti-PspA IgG2a/IgG1 and IgG2b/IgG1 than control groups and elicited T cell responses. OMV-PspA also protected mice against Spn D39 and Spn A66.1 challenges following an initial CA04 (H1N1) challenge, even 205 days post-vaccination. Through the engineering of BEVs, broad-spectrum vaccines that provide cross-species immunity can thus be developed, owing to the similarities in PAMP expression across different bacterial species and their BEVs.

Previous attempts to develop BEV-based vaccines have faced challenges due to the heterogeneity in size, composition, and internal cargoes of extracted BEVs [187]. Nanoparticles such as citrate-stabilized gold nanoparticles (AuNPs), on the other hand, have very narrow and consistent size distributions [188]. Furthermore, AuNPs have been reported to have a superior affinity for immune cells, making them suitable carrier candidates for vaccine development [189]. In the work of Elisabet Bjanes et al., AuNPs were coated with the BEVs of A. baumannii to develop a hybrid nanovaccine against A. baumannii pneumonia and sepsis [190] (Fig. 8D). The hybridized nanovaccine (Ab-NP) exhibited a high degree of size uniformity, unlike the crude BEVs extracted from A. baumannii, and induced higher levels of immunoglobulin G (IgG), an increased percentage of B cells, and elevated expression of activation markers in dendritic cells. This indicated that the homogeneity of Ab-NPs compared to Ab-OMVs contributed to enhanced antigen-presenting cell activation. Ab-NPs and post-vaccination sera protected rabbits for up to 6 months in sepsis and intratracheal pneumonia models. This study demonstrated the compatibility of BEVs with nanoparticles and the importance of particle size homogeneity in the performance of BEV-based vaccines.

It has been well reported that BEVs participate in the pathogenicity of bacterial invasion by transmitting virulence factors such as LPS, which can result in inflammatory responses [191]. Thus, BEV-based vaccine platforms can potentially cause over-stimulation of the immune system, hyperinflammation, or damage to host tissues. To mitigate these shortcomings, BEVs of Klebsiella pneumoniae were hybridized with alveolar macrophage membranes to form an intratracheal vaccine (HMV) [192] (Fig. 8E). When administered to mice, pure BEVs damaged lung epithelial cells, whereas HMVs did not affect the growth of epithelial cells and induced lower levels of cytokines. Mice immunized with HMV had high levels of IgM and IgA in serum and survived subsequent K. pneumoniae challenges. This study demonstrates the versatility of BEVs, which can be hybridized with mammalian membranes to increase their biocompatibility and modulate their immunogenicity.
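Since the Ab-NP study attributes part of its improved performance to size homogeneity, it is worth recalling how that property is usually quantified. The sketch below computes a DLS-style polydispersity index, PDI = (σ/μ)², for two mock size populations; the numbers are invented solely to contrast a broad crude-BEV distribution with a narrow templated one.

```python
# Illustrative comparison of size homogeneity between crude BEVs and
# AuNP-templated hybrids, using PDI = (sigma / mean)^2.
# The two size populations are mock data, not measurements from [190].
import numpy as np

rng = np.random.default_rng(0)
# Clip at a 20 nm floor so no unphysical sub-vesicle sizes appear
crude_bevs = np.clip(rng.normal(120, 45, 500), 20, None)  # broad distribution
aunp_hybrid = rng.normal(95, 8, 500)                      # narrow, templated

def pdi(diameters_nm):
    d = np.asarray(diameters_nm)
    return (d.std() / d.mean()) ** 2

for name, d in [("crude BEVs", crude_bevs), ("BEV-AuNP hybrid", aunp_hybrid)]:
    print(f"{name:16s} mean = {d.mean():6.1f} nm  PDI = {pdi(d):.3f}")
```

A PDI below roughly 0.1 is conventionally read as near-monodisperse, which is the regime that nanoparticle-templated hybrids aim for.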
Viral infection

The COVID-19 outbreak prompted urgent efforts to develop a vaccine against SARS-CoV-2 [193]. While mRNA vaccines like Moderna's have seen widespread administration [194], they are hampered by challenges related to instability and transfection efficiency [195,196]. BEVs can express and stabilize recombinant antigens, and their innate immunogenic properties allow them to act as adjuvants that enhance immune responses [197]. With the intention of developing a BEV-based vaccine against SARS-CoV-2, Liu et al. hybridized cell membrane vesicles displaying the SARS-CoV-2 spike protein with the BEVs of Salmonella typhimurium to form virus-mimetic hybrid membrane-derived vesicles (HMVs) (Fig. 9A) [198]. The spike proteins are characteristic of SARS-CoV-2 and are involved in the pathogenicity of the virus [199], qualifying them as ideal antigens for vaccine development. The spike proteins were recombinantly expressed on mammalian cells to preserve their native three-dimensional structure [200], while the BEV components served as adjuvants to boost the immune response. The HMVs were quickly internalized by DCs and increased MHC expression. Mice immunized with HMVs exhibited a biased Th2-mediated humoral response and an active T cell response with increased expression of IFN-γ and IL-6, suggesting that HMVs could serve as candidates for a BEV-based vaccine against SARS-CoV-2. Most vaccines are intended for intramuscular administration and induce systemic immunity; intranasal vaccines, however, can induce local mucosal immunity and prevent further transmission of disease [201]. Consequently, an intranasal vaccination strategy based on Neisseria meningitidis BEVs carrying the D614G spike protein (mC-Spike) of SARS-CoV-2 was developed [201]. The nanovaccine (OMV-mC-Spike) was composed of an LPS-bound HexaPro Spike-mCRAMP conjugate, in which mCRAMP mediated binding to LPS on the BEV surface while HexaPro Spike [202] served as a previously optimized SARS-CoV-2 antigen. Intranasal administration of OMV-mC-Spike to mice induced higher IgG and IgA titers in serum, nasal washes, and lungs. Hamster models subjected to viral challenge after immunization with OMV-mC-Spike displayed the lowest viral loads and reduced lower respiratory tract disease. The versatility of BEVs in accommodating different cargoes and antigens allows the rapid development of BEV-based vaccines against novel diseases.

The influenza virus is an RNA virus whose frequent mutations lead to changes in antigenicity, complicating the development of a cross-subtype vaccine [202,203]. Doo-Jin Kim's group previously developed BEV-based vaccines using E. coli BEVs in which the LPS is modified with a lipid A 4′-phosphatase (fmOMV) [204]. The adaptive immune response triggered by fmOMV was investigated recently (Fig. 9B) [205]. Mice injected with fmOMVs exhibited high levels of HA-specific antibodies, as well as HA- and NP-specific IgA and IgG antibodies, suggesting cross-immunity against influenza mutants. When challenged with several strains of influenza, fmOMV-immunized mice displayed pronounced IFN-γ-producing T cells and were protected against all tested strains 18 weeks post-vaccination. This study demonstrates that the immunogenicity of modified BEVs alone, without a viral antigen, can be utilized as a vaccine against influenza.
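Most of the vaccine studies in this section report serum IgG/IgA responses as titers. As a reminder of how such numbers are typically derived, the sketch below fits a four-parameter logistic (4PL) curve to mock ELISA optical densities and extracts an endpoint titer against a blank-based cutoff; all readings, parameters, and the cutoff are invented for illustration.

```python
# Sketch: fitting a four-parameter logistic (4PL) curve to mock ELISA
# data, as is routine when reporting antibody titers. All values invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

dilutions = np.array([100, 400, 1600, 6400, 25600, 102400], float)
od450 = np.array([2.10, 1.85, 1.20, 0.55, 0.22, 0.11])  # mock readings

popt, _ = curve_fit(four_pl, dilutions, od450,
                    p0=[0.05, 2.2, 2000, -1.0], maxfev=10000)
bottom, top, ec50, hill = popt
print(f"fitted EC50 (midpoint dilution): 1:{ec50:,.0f}")

# Endpoint titer: highest dilution whose predicted OD still exceeds a
# cutoff (here a mock blank of 0.05 plus 3 x an assumed SD of 0.01).
cutoff = 0.05 + 3 * 0.01
fine = np.logspace(2, 6, 2000)
above = fine[four_pl(fine, *popt) > cutoff]
print(f"endpoint titer ~ 1:{above.max():,.0f}" if above.size else "below cutoff")
```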
Human immunodeficiency virus (HIV), human papillomavirus (HPV), and hepatitis C virus (HCV) are sexually transmitted viruses that cause diseases such as AIDS, cervical cancer, and liver cancer [206]. These diseases are chronic or lifelong, posing a significant global health challenge that requires urgent attention and solutions [207]. Furthermore, currently available medications only manage the symptoms of these diseases, with no cure available [208]. Vaccination is the only effective method to prevent their spread, but so far only vaccines against HPV have been approved [208,209]. As highly immunogenic carriers and versatile platforms for the expression of foreign proteins, BEVs may be used for the development of antiviral vaccines, acting as adjuvants or expressing viral antigens via recombinant technology; earlier attempts at developing BEV-based vaccines against these viruses have indeed been made [210,211]. The quest for an effective HIV vaccine has been an arduous journey spanning over 40 years [212], with prototypes developed thus far often suffering from adverse effects and weak targeting capabilities [213]. A recent endeavor to develop a BEV-based HIV vaccine involved incorporating the HIV-1 envelope membrane-proximal external region (MPER) into the outer membrane protein OmpF as a construct in E. coli Nissle 1917 and its BEVs (Fig. 9C) [214]. The MPER-OmpF-decorated BEVs were recognized by the MPER-binding HIV-1 gp41 monoclonal antibody 2F5, suggesting their potential as a viable vaccine candidate awaiting further studies.

Human papillomaviruses (HPVs) are non-enveloped viruses comprising more than 200 subtypes, of which 15 high-risk subtypes are known to cause cervical cancer [215]. Current clinically applied HPV vaccines are based on virus-like particles composed of the L1 protein; however, they are limited by their type restriction, instability, and high production costs [216]. Prior studies using HPV L2 proteins provided broad strain protection, as the L2 protein sequence contains a major cross-neutralization epitope and induces broadly neutralizing anti-HPV antibodies [216,217]. A recent advance in HPV vaccine development involved the construction of a BEV-based vaccine displaying an L2 polytope made up of amino acid sequences from eight HPV serotypes (Fig. 9D) [216]. This E. coli BL21(DE3)Δ60 BEV-based vaccine induced L2-specific IgG titers in immunized mice and neutralizing titers in an in vitro pseudovirus neutralization assay. A laboratory-scale production process was also conducted to demonstrate the scalability of this BEV-based vaccine. A limitation of current HPV vaccines is that they only protect HPV-naïve individuals and provide no therapeutic value to HPV-positive patients or those with cervical cancer. A tentative therapeutic vaccine was prototyped by Chen et al., in which the HPV-associated peptide E7 was recombinantly expressed in E. coli BEVs to form a nanovaccine (Fig. 9E) [216]. E7p-OMV protected the internally loaded E7 peptides from proteases and, when injected into mice, displayed tumor-suppressive effects. These were found to be mediated by increased antigen expression in DCs, specific Th1 (CD4+ IFN-γ+) responses, and CTL (CD8+ IFN-γ+) responses. This study has the potential to pave the way for future developments in therapies for HPV-associated cancers, which are currently lacking and in high demand.
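Conceptually, the L2 "polytope" antigen above is a concatemer of short cross-neutralizing epitopes joined by flexible linkers. The toy builder below illustrates the idea; the serotype labels are real, but the peptide sequences and the linker are placeholders and do not reproduce the actual construct of the cited vaccine [216].

```python
# Hypothetical sketch of assembling an HPV L2 "polytope" antigen by
# concatenating epitope peptides from several serotypes with flexible
# linkers. The epitope sequences below are placeholders, NOT the real
# constructs used in the cited vaccine.
L2_EPITOPES = {
    "HPV16": "QLYKTCKQAGTCPPDIIPKV",   # placeholder sequence
    "HPV18": "QLYQTCKAAGTCPSDVIPKI",   # placeholder sequence
    "HPV31": "QLYKTCKQSGTCPPDVIPKV",   # placeholder sequence
    "HPV45": "QLYQTCKASGTCPPDVIPKV",   # placeholder sequence
}
LINKER = "GGGGS"  # common flexible glycine-serine linker

def build_polytope(epitopes, linker):
    """Join epitopes N- to C-terminally, separated by flexible linkers."""
    return linker.join(epitopes[s] for s in sorted(epitopes))

polytope = build_polytope(L2_EPITOPES, LINKER)
print(f"{len(L2_EPITOPES)} serotypes -> {len(polytope)} aa polytope")
print(polytope)
```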
Hepatitis C virus causes cirrhosis and hepatocellular carcinoma, and there are currently no clinically licensed vaccines available for the prevention of HCV infection [218]. Prior attempts to develop an HCV vaccine involved the truncated protein NS3 but elicited only weak immune responses [219], so an adjuvant may be required to boost such vaccines. A recent study developed a Neisseria meningitidis serogroup B BEV-based vaccine platform carrying a recombinant fusion protein (rC/N) of truncated core (aa 1-118, rCore) and NS3 (aa 1095-1384, rNS3) against HCV (Fig. 9F) [220]. The rC/N-BEV vaccine induced higher IgG1 and IgG2a levels, indicating a Th1/cellular response, and outperformed combinations of rC/N with other adjuvants. This study demonstrates the superior adjuvant properties of BEVs and their potential as a platform for the development of future HCV vaccines.

Cancer

In recent decades, the development of vaccines has been recognized as a new weapon to combat tumors and expand the oncologic armamentarium [221]. Among the emerging nanovaccines and pharmaceuticals, bacteria and their derived extracellular vesicles have attracted great interest as a new generation of cancer vaccines for efficient immunotherapy [26,222,223]. In particular, bacteria localized in the tumor microenvironment can establish precise interactions with cancer cells, tumor-infiltrating immune cells, and other overexpressed biomarkers (e.g., cytokines, chemokines), remodeling the immunosuppressed microenvironment in the tumor site. The abundance of pathogen-associated molecular patterns in bacteria provides the desirable immunogenicity to activate a systemic immune response [224]. However, injecting intact bacteria containing intracellular contents (e.g., endotoxins, genetic material) into patients raises severe safety concerns and potential side effects, so improvements to bacteria-inspired vaccines are in high demand [225,226]. Bacterial extracellular vesicles, especially outer membrane vesicles (OMVs), show great promise in maintaining immunogenicity for in situ immune activation with considerable biosafety compared with attenuated bacteria. Moreover, the small size of OMVs (20-250 nm) also benefits lymphatic drainage and long-term accumulation of antigens via the enhanced permeability and retention (EPR) effect, thus improving immunotherapy specificity [227,228]. Notably, Kim et al. first reported the use of E. coli OMVs as cancer immunotherapeutics; owing to their nano-size, the OMVs efficiently localized in tumors and induced the production of interferon-γ and CXCL10 for tumor regression [227]. Furthermore, thanks to the hollow structure of OMVs, immunoadjuvant or antigen payloads can be encapsulated within the BEVs, allowing these bacteria-derived nanopharmaceuticals to further reshape the suppressed immune environment in situ as antigen sources and boost therapeutic efficiency, underscoring their great promise as a novel nanoplatform for cancer vaccine development.
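The 20-250 nm size window quoted above also sets how mobile OMVs are in tissue. As a back-of-envelope check, the Stokes-Einstein relation D = kBT/(6πηr) gives their Brownian diffusivity in a water-like medium at 37 °C; the viscosity is the only assumed parameter.

```python
# Back-of-envelope check on how the 20-250 nm OMV size range translates
# into Brownian mobility, via the Stokes-Einstein relation.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.15          # 37 deg C in kelvin
ETA = 0.69e-3       # viscosity of water at 37 deg C, Pa*s (assumed medium)

for d_nm in (20, 100, 250):
    r = d_nm * 1e-9 / 2
    D = K_B * T / (6 * math.pi * ETA * r)      # diffusion coefficient, m^2/s
    # Root-mean-square displacement in 1 s for 3-D diffusion: sqrt(6*D*t)
    x_rms_um = math.sqrt(6 * D * 1.0) * 1e6
    print(f"d = {d_nm:3d} nm  D = {D:.2e} m^2/s  rms step (1 s) = {x_rms_um:.1f} um")
```

Smaller vesicles diffuse several-fold faster than the largest ones, consistent with the qualitative claim that the sub-250 nm size range favors lymphatic drainage and EPR-driven accumulation.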
Benefiting from advanced gene-engineering technologies and molecular biology methods (e.g., DNA transfection, CRISPR-Cas9, gene sequencing) [229], diverse engineered bacteria have been developed to produce antigens or cytotoxic protein compounds at the source, to flexibly modify the payload of the extracted BEVs, or to functionalize the nanopharmaceuticals for versatile tumor vaccination. For instance, to directly deliver therapeutic cargo to the tumor site, Chiang et al. genetically tailored the probiotic E. coli Nissle 1917 to secrete therapeutic OMVs loaded with a small cytotoxic protein (HlyE) [230]. Oral administration of the engineered bacteria achieved effective tumor colonization, and arabinose-induced, HlyE-loaded OMVs carrying personalized antigens boosted dendritic cell uptake, resulting in significant tumor regression in a xenograft colorectal tumor model.

Moreover, genetic modification of bacteria can be leveraged to fuse functional proteins onto cytolysin A (ClyA), which naturally resides in the membrane of BEVs, thereby directly displaying antigens on the vesicle surface. Fusion proteins (e.g., enzymes, antibodies, RNA-binding proteins, fluorescent proteins) on the membrane can also endow diverse functions that enhance the therapeutic performance of BEV-based cancer vaccines. In particular, Cheng et al. developed a versatile OMV-based vaccine platform by fusing diverse protein catchers onto ClyA, allowing the OMV vaccine to present multiple, distinct tumor antigens on its surface (Fig. 10A) [231]. Using genetic engineering, the flexible OMV vaccine fuses target antigens onto ClyA and activates T cells for an in situ antitumoral immune response. Importantly, the OMV vaccine applied a Plug-and-Display system comprising diverse tag/catcher protein pairs fused to ClyA, including the SpyTag (SpT)/SpyCatcher (SpC) pair and the SnoopTag (SnT)/SnoopCatcher (SnC) pair. With these Plug-and-Display fusion proteins, different antigens could rapidly and simultaneously bind to the OMVs, realizing synergistic antitumor immunity; the nanovaccine efficiently abrogated lung melanoma metastasis and suppressed colorectal tumor growth. Similarly, Li et al. extended the Plug-and-Display strategy to deliver mRNA antigens as a vaccine for antitumor immunotherapy (Fig. 10B) [232]. The RNA-binding protein L7Ae was fused onto ClyA, and its binding sequence, box C/D, was inserted into the mRNA cargo for efficient loading of the antigen onto the OMV surface. In addition, the lysosomal escape protein listeriolysin O was fused with ClyA to improve mRNA delivery efficiency in dendritic cells (DCs), enabling the nanovaccine to escape from the lysosome and accumulate its cargo in the cytoplasm. The OMV-based mRNA vaccine activated innate immunity and subsequent cross-presentation in DCs, leading to efficient regression in melanoma and MC38 colon cancer models (Fig. 10C). Moreover, genetic fusion to ClyA can also present targeting ligands to improve the precise tumor targeting of OMV-based vaccines. Adriani et al. genetically engineered a high-affinity anti-EGFR ligand, a single-chain variable fragment derived from the panitumumab antibody, onto the ClyA protein to construct bioengineered OMVs for specific tumor accumulation, potentiating the application of these immunotherapy agents in different types of tumors [233].
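The Plug-and-Display chemistry rests on the essentially irreversible isopeptide bond formed between SpyTag and SpyCatcher. A minimal way to reason about antigen-loading times is to treat the coupling as a second-order irreversible reaction, as sketched below; the rate constant and the concentrations of catcher sites and tagged antigen are assumed placeholders, not values from the cited platform [231].

```python
# Sketch of Plug-and-Display loading kinetics: SpyTag-antigen reacting
# irreversibly with ClyA-SpyCatcher sites on the OMV surface, modeled
# as a second-order reaction A + B -> AB. All values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

K2 = 1.0e3         # assumed second-order rate constant, 1/(M*s)
SITES0 = 0.5e-6    # assumed ClyA-SpyCatcher site concentration, mol/L
ANTIGEN0 = 2.0e-6  # assumed SpyTag-antigen concentration (4x excess), mol/L

def rhs(t, y):
    sites, antigen = y
    v = K2 * sites * antigen  # coupling rate; one site and one antigen consumed
    return [-v, -v]

sol = solve_ivp(rhs, (0, 3600), [SITES0, ANTIGEN0], dense_output=True)
for t_min in (1, 10, 30, 60):
    sites, _ = sol.sol(t_min * 60)
    occupied = 100 * (1 - sites / SITES0)
    print(f"t = {t_min:3d} min  sites occupied = {occupied:5.1f} %")
```

With a modest antigen excess, such a scheme saturates the display sites within roughly an hour, which matches the "rapid and simultaneous" decoration the platform advertises, though the real coupling rate would need to be measured.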
To further improve the antitumor specificity of BEV-based vaccines, antigen-expressing cancer cell membrane particles and extracellular vesicles have been fused with bacteria-derived vaccines to form hybridized nanopharmaceuticals [234]. Typically, the pure cytoplasmic membrane extracted from bacteria retains abundant PAMPs to mobilize the immune system. L. Chen et al. hybridized the E. coli cytoplasmic membrane and autologous tumor membrane to construct a fused membrane platform (HM-NPs), realizing the colocalization of antigens and adjuvants in dendritic cells [235]. The bacterial cytoplasmic membrane was pretreated with lysozyme to remove LPS and other cell wall components to prevent potential cytotoxicity. This hybridized nanovaccine showed significant antitumor immune activation and tumor ablation in colon, breast, and melanoma cancer mouse models. Benefiting from this fusion approach of BEVs and tumor EVs, Tong et al. recently employed outer membrane vesicles from Akkermansia muciniphila (Akk-OMV) as a self-immunoadjuvant hybridized with antigen-rich tumor-derived exosomes for cancer vaccine construction (Fig. 10D) [236]. In addition, the Lipo@HEV formulation was supplemented with a PD-L1 trap plasmid cargo to facilitate gene therapy for immune checkpoint inhibition, synergistically combating tumor growth alongside the hybridized OMV-based vaccine.

Different from conventional tumor vaccines that induce an adaptive immune response, recent advances in OMV-based vaccines attempt to improve immunogenicity by interfering with immune-related signaling pathways. One example is long-term innate immune memory, known as trained immunity, which refers to the process of modifying innate immune cells and their hematopoietic progenitors to enhance nonspecific innate immunity [237,238], particularly in reaction to subsequent infection or vaccination. An OMV-based vaccine can engage trained immunity-related signaling through its natively displayed PAMPs (Fig. 10E) [239]. Following calcium phosphate mineralization and loading of granulocyte-macrophage colony-stimulating factor (GM-CSF), the final OMV vaccine (OMV-SIRPα@CaP/GM-CSF) targets tumor-associated macrophages (TAMs) and, in the bone marrow, trains progenitor cells and monocytes for long-term immunity. OMV-SIRPα@CaP/GM-CSF achieved efficient tumor regression through trained immunity in both MC38 and B16-F10 tumor models, with distinct T-cell-mediated immune responses, suggesting a therapeutic mechanism distinct from conventional vaccination formulations. Recent developments in BEV-based vaccination have also sought to facilitate the cGAS-STING pathway in dendritic cells to heighten DC maturation. Zhang et al. constructed an interfacial nanocloak on B. fragilis-derived extracellular vesicles by coating them with biocompatible manganese oxide [240]. Once the nanocloaked BEVs are internalized into dendritic cells, the nanocloak layer dissolves in the lysosome and releases Mn2+ to mobilize the cGAS-STING pathway for boosted DC maturation, illustrating amplified immunotherapeutic ability in a breast cancer model.
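The nanocloak concept relies on release kinetics that differ sharply between the neutral extracellular space and the acidic lysosome. The toy first-order model below captures that contrast; both rate constants are assumed for illustration and are not drawn from the cited study [240].

```python
# Toy model of pH-triggered "nanocloak" dissolution: first-order release
# of Mn2+ from a manganese oxide shell, assumed much faster at lysosomal
# pH ~4.5 than at extracellular pH 7.4. Both rate constants are invented.
import numpy as np

K_REL = {"pH 7.4 (extracellular)": 1e-4,   # assumed, 1/s
         "pH 4.5 (lysosome)":      5e-3}   # assumed, 1/s
t = np.array([0, 60, 300, 900, 1800])      # seconds

for cond, k in K_REL.items():
    released = 100 * (1 - np.exp(-k * t))  # cumulative % Mn2+ released
    profile = "  ".join(f"{r:5.1f}%" for r in released)
    print(f"{cond:24s} {profile}")
```

Under these placeholder rates, release is nearly complete within 15 min in the lysosome but stays below ~10% outside the cell over the same period, which is the selectivity such coatings are designed to exploit.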
Advantages of BEVs

Extracellular vesicles are key mediators of intercellular communication, shuttling biological and chemical messengers throughout the host. They are produced by bacteria (BEVs) as well as by mammalian cells, in which case they are known as mammalian extracellular vesicles (MEVs). BEVs and MEVs share many similarities, such as their hollow nanostructure comprising a lipid bilayer membrane, their stability in physiological environments, and their intrinsic targeting abilities attributed to their membrane proteome. These advantages have been exploited by researchers in the development of extracellular vesicle-based delivery platforms for biomedical applications. Notably, BEVs have several distinguishing characteristics compared to MEVs that suggest their potential as promising nanopharmaceuticals, including stability in living systems, penetrative capabilities, and ease of engineering and industrial production. One advantage of BEVs is their high-yield and cost-effective production. As derivatives of bacteria, BEV production benefits from industrial-scale bacterial cultivation: the short doubling time of bacteria allows high-cell-density liquid cultures that rely on cheap and readily available liquid media, ensuring cost-effectiveness [241]. Furthermore, hyper-vesiculating mutants of H. pylori and E. coli have been developed to further increase BEV yields [242,243]. By contrast, the production of MEVs often suffers from long culturing periods and low exosome yields [244]. BEVs can therefore serve as economical drug delivery agents with wide applicability across various therapeutic applications.

The lipid membranes of BEVs are also highly stable and resistant to a wide variety of enzymatic activities, such as proteases and nucleases [245,246]. This characteristic makes BEVs suitable vehicles for biomolecular cargoes that are sensitive to temperature and enzymatic activity, such as proteins, enzymes, and genetic materials [245,247-249]. By storing such sensitive biomolecules in BEVs, they can be delivered intact to their target site without degradation or loss of function, addressing a common issue that limits clinical implementation [250]. Additionally, BEVs can function as stabilizing "packages", allowing long-term storage of biomolecules at elevated temperatures. For example, phosphotriesterase (PTE) packaged into E. coli BEVs survived storage at 37 °C for 14 days and multiple freeze-thaw cycles, preserving higher enzymatic activity than free PTE [245]. This suggests that heat-sensitive cargoes can be transported in BEVs for extended periods without the need for expensive logistical processes such as cryostorage [251], unlike SARS-CoV-2 mRNA vaccines encapsulated in liposomes [252]. The effect of storage temperature on the quality of MEVs and their interior cargo remains inconclusive; in general, MEVs may be less robust and less stable than BEVs, as their storage requires specialized buffers and low storage temperatures (on the order of 4 to -20 °C) for optimal activity [253,254]. The remarkable ability of BEVs to stabilize and protect cargoes from physiological and physical environments makes them ideal carriers for drug delivery.
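The stabilization argument can be made semi-quantitative by assuming first-order thermal inactivation and comparing half-lives, echoing the PTE storage experiment cited above [245]. Both half-lives in the sketch below are invented placeholders, not measured values.

```python
# Rough sketch of why encapsulation matters for enzyme cargo: residual
# activity of a free vs BEV-packaged enzyme under first-order thermal
# inactivation at 37 deg C over 14 days. Half-lives are assumed.
import math

T_HALF_FREE_DAYS = 2.0       # assumed half-life of the free enzyme
T_HALF_PACKAGED_DAYS = 20.0  # assumed half-life once packaged in BEVs

def residual_activity(t_days, t_half_days):
    k = math.log(2) / t_half_days        # first-order inactivation constant
    return 100.0 * math.exp(-k * t_days)  # remaining activity, %

for day in (0, 3, 7, 14):
    free = residual_activity(day, T_HALF_FREE_DAYS)
    packaged = residual_activity(day, T_HALF_PACKAGED_DAYS)
    print(f"day {day:2d}: free = {free:5.1f} %   BEV-packaged = {packaged:5.1f} %")
```

Even a ten-fold extension of the inactivation half-life turns a near-total loss of activity at day 14 into a majority-retained one, which is the qualitative pattern the PTE experiment reported.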
BEVs are excellent nanocarriers: they have been reported to penetrate biological tissue barriers such as the epithelial cell layer [252], to target skeletal structures [110], and even to bypass the blood-brain barrier (BBB) [255,256]. This outstanding BBB-penetrating capability addresses a major limitation in the progression of many promising neurological drug candidates targeting the central nervous system [257]. Notably, BEVs also exhibit outstanding targeting capabilities attributed to their membrane composition and intrinsic protein expression. Similarities in membrane composition give BEVs an affinity for whole cells of their parental bacteria, while certain antigens allow them to target bacteria of other species or strains [8,258,259]. Synergizing with their penetrative ability across tissue and cellular barriers, BEVs can thus realize precise biomedical strategies against bacterial infections. Furthermore, BEVs play a key role in the interaction between the host system and bacteria by internalizing into mammalian cells through various pathways, such as endocytosis, membrane fusion, and receptor-mediated signaling [35], potentiating immune system activation for therapy. By contrast, due to differences in origin and membrane composition, MEVs lack bacterial cell-targeting capabilities and are limited to host or mammalian cells, restricting their strategic application in bacteria-induced diseases. Therefore, benefiting from their unobstructed passage through various barriers, their affinity for homotypic bacterial cells, and their mediation of interactions between host cells and bacteria, BEVs can be employed as carriers to potentially revive previously unsuccessful therapeutics plagued by distribution, permeation, and stability issues in vivo [260].

Notably, owing to the mature genetic manipulation of bacteria and the expression of surface proteins and functional sites on their membranes, the exterior of BEVs can easily be engineered for cargo loading or to regulate their interactions with target cells [261,262]. BEVs can be modified by cultivation in modified media or even by hybridization with other forms of nanoparticles and nanovesicles, as exemplified in this review, allowing the development of facile BEV-based nanopharmaceuticals with diverse biomedical applications. Furthermore, isolated BEVs can be further decorated with functional ligands via protein-ligand interactions [263], protein-protein interactions [264], or, potentially, bioconjugation reactions. Overall, these engineering strategies yield BEVs that are highly precise delivery platforms with improved therapeutic efficiency against extensive biomedical challenges. As MEVs also originate from cells and have a membrane structure similar to that of BEVs, they can be modified in similar ways, but the genetic modification of mammalian cells is generally more difficult than that of bacteria, greatly limiting their scope of application.
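The combined benefit of protection from degradation and active targeting can be captured in a minimal two-compartment model: drug in plasma either degrades or transfers into the lesion, and the carrier is assumed to lower the degradation rate while raising the uptake rate. All rate constants below are invented for illustration and do not describe any specific BEV formulation.

```python
# Minimal two-compartment sketch contrasting a free drug with a
# BEV-carried drug. The carrier is assumed to slow plasma degradation
# and add a targeted uptake route into the lesion; all rates invented.
import numpy as np
from scipy.integrate import solve_ivp

def pk(t, y, k_deg, k_uptake, k_out=1e-3):
    plasma, lesion = y
    return [-(k_deg + k_uptake) * plasma,          # loss from plasma
            k_uptake * plasma - k_out * lesion]    # lesion gain and clearance

t_eval = np.linspace(0, 24 * 3600, 500)  # 24 h, dense enough to catch the peak
cases = {
    "free drug":  dict(k_deg=5e-4, k_uptake=1e-5),  # fast degradation, passive only
    "BEV-loaded": dict(k_deg=5e-5, k_uptake=1e-4),  # protected + targeted
}
for name, k in cases.items():
    sol = solve_ivp(pk, (0, 24 * 3600), [1.0, 0.0],
                    args=(k["k_deg"], k["k_uptake"]), t_eval=t_eval)
    print(f"{name:10s} peak lesion fraction = {sol.y[1].max():.3f}")
```

Even this crude model yields an order-of-magnitude gain in peak lesion exposure for the protected, targeted case, which is the qualitative claim made for BEV carriers throughout this section.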
Another important advantage of BEVs is their immunogenic properties and ability to engage the immune system. Beyond simply serving as nanocarriers for drug delivery, BEVs can act as functional nanoagents that stimulate immune responses, augmenting the therapeutic effects of their cargo [140]. Furthermore, the immunogenicity of BEVs allows them to be utilized as vaccines to stimulate long-term immune responses or to act as adjuvants in combinational vaccines against cancers and bacterial and viral infections [129,265]. MEVs, derived from mammalian cells, have limited immunogenicity and lack exogenous antigen expression, impeding the immune responses needed for such therapies. MEVs have also been harnessed to develop vaccine platforms, but with a narrower scope than BEVs, as they are mainly focused on antitumoral applications [266]. The unique immunogenicity of BEVs qualifies them as a competitive platform for the development of functional nanopharmaceuticals.

Conclusion and further perspective

In this review, we comprehensively summarize the development of BEV-based nanopharmaceuticals for disparate biomedical applications in recent years. Through the unique advantages and functions of BEVs, we demonstrate the tremendous potential of applying these naturally occurring nanovesicles to establish a myriad of innovative therapeutic strategies for the evolution of new-generation pharmaceuticals, especially in versatile bioactive cargo delivery and powerful vaccination approaches.

Owing to the unique merits of these nanovesicles, their structural stability, ease of cargo loading, promising penetration across physiological barriers, and specific targeting capabilities make BEVs beneficial for delivering a wide range of therapeutics. In particular, the naturally occurring membrane structure of BEVs enables targeted delivery of therapeutic genetic tools (e.g., siRNA, DNA, CRISPR-Cas9) in the physiological environment, preserving their stability and bioactivity and thus enhancing gene therapy. Leveraging the ease of genetic modification of the parent bacteria, protein cargoes such as enzymes and antigens can also be expressed directly within BEVs, optimizing loading efficiency compared to conventional delivery platforms. Furthermore, advances in nanotechnology and materials science have enabled the integration of functional materials with BEV delivery platforms to achieve combinational therapy, further improving synergistic therapeutic efficacy.

However, several aspects of BEV-based delivery platforms still need to be optimized to propel BEV nanopharmaceuticals into clinical translation. The current isolation of BEVs relies heavily on ultracentrifugation, which is time-consuming and energy-intensive [21]. This established technique cannot fully separate BEVs from the lysis debris of parent bacteria, resulting in low vesicle purity and making BEV identification challenging [267]. Several affinity-based purification techniques (e.g., magnetic bead-mediated adsorption, antibody-based selection) [268,269] that yield mammalian extracellular vesicles in high yield and purity have been proposed and could potentially be translated to BEV isolation to increase yield and achieve higher purity. In addition, larger-scale production of BEVs from living bacteria may lead to batch-to-batch inconsistencies in size and overall composition.
Enhancements in standard harvesting procedures for BEVs and in the culturing of parent bacteria are therefore imperative.

Notably, thanks to their abundant display of native PAMPs and immunostimulatory antigens, BEVs exhibit a unique immunogenicity that makes them promising vaccine platforms, with self-adjuvanting properties that stimulate host immune responses. BEV-based nanovaccines have demonstrated efficacy in preventing bacterial and viral infections, as well as in combating tumors, highlighting their potential in addressing a wide range of diseases. Despite numerous promising achievements, the clinical practice of BEV-based vaccination remains controversial. It is crucial to recognize that PAMPs play a significant and intrinsic role in the pathogenicity of bacteria; thus, the balance between the biosafety and the immunostimulatory capacity of BEV-based vaccines must be carefully considered. The inadvertent introduction of virulence factors (e.g., LPS and virulence proteins) into extracted BEVs can lead to excessive immune stimulation, inflammatory responses, reactogenicity, or other adverse effects [259,270,271]. Moving forward, it is imperative to elucidate the pathogenicity of harmful components inherited by BEVs. Further investigation is needed to develop improved methods for isolating and purifying BEVs, ensuring the removal of detrimental bacterial components that may threaten the host before implementation, thereby minimizing adverse effects [272,273]. This is especially critical when administering BEVs to immunocompromised individuals [274].

In conclusion, we believe that bacterial extracellular vesicles have ushered in a new era of innovative nanopharmaceuticals owing to their outstanding advantages and attractive functions. Continued studies will undoubtedly explore the vast potential of BEVs as an indispensable and influential tool for boosting biomedical applications, paving the way for their clinical translation and a revolution in nanomedicine.

Fig. 2 Biogenesis mechanism, composition, and classification of bacterial extracellular vesicles derived from Gram-positive or Gram-negative bacteria

Fig. 3 Major internalization pathways taken by BEVs into host cells. Depending on their nano size, the type of recipient host cell, and the ligand-receptor interactions triggered, BEVs are internalized via caveolin- and clathrin-mediated endocytosis, phagocytosis, toll-like receptors, and membrane fusion

Fig. 6 BEVs used as delivery platforms for therapeutic molecular cargo. A. Schematic of OMV@PGZ hitchhiking on neutrophils for enhanced ischemic stroke therapy. Reproduced with permission [134]. Copyright 2023, John Wiley and Sons. B. Schematic of BEV delivery of doxorubicin for chemo-immunotherapy. Reproduced with permission [140]. Copyright 2020, Elsevier Inc. C. Schematic of levofloxacin-loaded BEVs against antibiotic-resistant bacteria. Reproduced with permission [143]. Copyright 2020, Elsevier Inc. D. Mesoporous silica-BEV hybrid nanosystem delivering rifampicin to overcome antibiotic resistance. Reproduced with permission [146]. Copyright 2021, Wiley-VCH. E. Schematic of BEV-delivered catalase relieving tumor hypoxia and enhancing radiotherapy. Reprinted with permission [148]. Copyright 2021, American Chemical Society

Fig. 10 A.
Schematic illustration of an OMV-based vaccine platform created by fusing diverse protein catchers (SpyCatcher and SnoopCatcher) to ClyA for versatile antigen display, suggesting promising antitumor immunity in the MC38 tumor model. Reproduced with permission [231]. Copyright 2021, Springer Nature. B. Scheme of an OMV-based mRNA delivery system created by fusing the RNA-binding protein L7Ae and the endosomal escape-promoting protein LLO on the OMV surface. C. In vivo antitumor efficacy and long-term immune memory for metastasis inhibition by the OMV-based mRNA delivery system. Reproduced with permission [232]. Copyright 2022, John Wiley and Sons. D. Schematic presentation of Lipo@HEV fabricated by membrane fusion for synergistic cancer immunotherapy, supplemented by targeted delivery of a PD-L1 trap plasmid. Reproduced with permission [236]. Copyright 2023, Elsevier. E. Schematic overview of OMV vaccine (OMV-SIRPα@CaP/GM-CSF) formation and biomineralization to enhance safe circulation in the blood, stimulating trained immunity for distinct tumor regression. Reproduced with permission [239]. Copyright 2023, John Wiley and Sons.
Comprehensive analysis of AGPase genes uncovers their potential roles in starch biosynthesis in lotus seed

Starch in the lotus seed contains a high proportion of amylose, which gives lotus seed promising properties for the development of hypoglycemic and low-glycemic-index functional foods. Currently, improving starch content is one of the major goals of seed-lotus breeding. ADP-glucose pyrophosphorylase (AGPase) plays an essential role in regulating starch biosynthesis in plants, but little is known about its characterization in lotus. We describe the nutritional composition of lotus seed among 30 varieties, with starch as a major component. Comparative transcriptome analysis showed that AGPase genes were differentially expressed in two varieties (CA and JX) with significantly different starch content. Seven putative AGPase genes were identified in the lotus genome (Nelumbo nucifera Gaertn.), which could be grouped into two subfamilies. Selective pressure analysis indicated that purifying selection acted as a vital force in the evolution of AGPase genes. Expression analysis revealed that lotus AGPase genes have varying expression patterns, with NnAGPL2a and NnAGPS1a the most predominantly expressed, especially in seed and rhizome. NnAGPL2a and NnAGPS1a were co-expressed with a number of genes related to the starch and sucrose metabolism pathway, and their expression was accompanied by increased AGPase activity and starch content in lotus seed. In summary, seven AGPase genes were characterized in lotus, with NnAGPL2a and NnAGPS1a as the key genes involved in starch biosynthesis in lotus seed. These results considerably extend our understanding of lotus AGPase genes and provide a theoretical basis for breeding new lotus varieties with high starch content.

Lotus seed starch can be employed in the production of low-glycemic-index food. In addition, lotus seed is rich in protein, vitamins, essential amino acids, and a variety of bioactive components with important nutritional and medicinal value. As a non-structural carbohydrate, starch represents the most significant form of carbohydrate storage in plants. It is composed of two polymers of glucose, amylose and amylopectin, which have different molecular structures: the former consists of unbranched chains of glucose monomers, while the latter is a branched polysaccharide [4]. Starch can accumulate in photosynthetic and non-photosynthetic tissues through a highly complex process [5]. The biosynthesis of starch begins in leaves, and carbon is then transported to storage organs in the form of disaccharides for starch synthesis. Sucrose transported from leaves to storage organs is successively catalyzed by sucrose synthase (SuSy), UDP-glucose pyrophosphorylase (UGPase) and phosphoglucomutase (PGM) to produce glucose-6-phosphate, which is then transported into the amyloplast for starch synthesis catalyzed by PGM, ADP-glucose pyrophosphorylase (AGPase), starch synthase (SS) and starch branching enzyme (SBE) [6]. Important enzymes such as AGPase [7,8], SS [9-11] and SBE [12,13] have been identified and successfully applied in molecular breeding. Currently, the mechanism of starch biosynthesis has been clarified in several plant species, such as Arabidopsis, potato, rice and corn [5,14-16]. However, little is known about the mechanism of starch biosynthesis in lotus seed. Previous studies have shown that the starch biosynthesis process plays an important role in the development of lotus seed [2,17,18].
Starch accumulates rapidly in cotyledons from 9 to 20 days after pollination (DAP) in lotus [2,17]. Proteomic analysis showed that AGPase was highly abundant at 25 DAP, and that starch phosphorylase (StarchP) accumulated significantly in mature lotus seed [17]. A comparative transcriptome between JX ('Jianxuan 17', a seed-lotus variety) with high starch content and CA ('China Antique') with low starch content showed that some enzyme-encoding genes involved in starch biosynthesis, such as AGPase, soluble starch synthase (SSS) and SBE, were up-regulated during lotus seed development [2]. AGPase plays an essential role in regulating glycogen and starch biosynthesis in bacteria and plants, respectively [19]. AGPase (EC 2.7.7.27) catalyzes the initial and major limiting step in the starch biosynthetic pathway, converting glucose-1-phosphate (Glc1P) and ATP to ADP-glucose and pyrophosphate (PPi) [20-22]. AGPase is a heterotetramer composed of two small/catalytic subunits and two large/modulatory subunits [8,23]. There are two isoforms of AGPase, known as the cytosolic and plastidial isoforms, based on their cellular localization. In the grain endosperm of the grass family, ADP-glucose is formed in the cytosol by the cytosolic isoform and imported into plastids for starch biosynthesis by an ADP-glucose transporter located in the plastid envelope. In contrast, ADP-glucose in most dicotyledonous plants is formed exclusively in the plastids by the plastidial AGPase isoform [24]. AGPase genes have been identified in many species, such as rice, maize, Arabidopsis and barley [14,25,26]. Previous studies have shown that AGPase mutants exhibit a reduction in starch content, for example in the near-starchless Arabidopsis TL25 mutant of adg1, the structural gene encoding the AGPase small subunit [22,27]. This deficiency in starch biosynthesis was also observed in a barley mutant, Risø 16, which has a large deletion within the coding region of the small subunit of the cytosolic AGPase [27]. To date, little information is available on the role of AGPase genes in lotus starch biosynthesis. With the development and growing potential for commercial application of lotus seed starch, breeding new lotus varieties with high starch content is crucial. In this study, we measured the nutritional composition of lotus seed among 30 varieties, quantifying starch, soluble sugar, protein and polyphenol content. Because AGPase is a key rate-limiting enzyme in starch biosynthesis, AGPase-encoding genes were systematically identified and characterized by phylogenetic, expression pattern and co-expression network analyses based on the completed lotus genome (Nelumbo nucifera Gaertn.) sequencing data. The dynamic changes of starch content and AGPase activity in lotus seed were also measured. Furthermore, two promising genes for starch biosynthesis, NnAGPL2a and NnAGPS1a, were identified in lotus seed. This study establishes a foundation for understanding the starch biosynthesis pathway in lotus and offers a theoretical basis for molecular breeding of new lotus varieties with high starch content.

Results

The nutritional compositions of lotus seed

Lotus seed reaches mass maturity in about 30 days after pollination across four developmental stages, with organ formation at 1-3 DAP, cell expansion at 4-9 DAP, material accumulation at 10-25 DAP, and dormancy at 26-30 DAP. Every stage is accompanied by morphological changes, such as changes in seed size and color (Fig. 1a).
Seeds of 30 seed-lotus varieties were collected at 15 DAP and 30 DAP to determine the nutritional components, including total starch, amylose, amylopectin, protein, soluble sugar and polyphenol (Additional file 1: Table S1). For seeds collected at 15 DAP, total starch ranged from 12.79-43.60% with an average of 28.20%, amylose ranged from 5.67-22.58% with an average of 14.41%, amylopectin ranged from 5.68-21.33% with an average of 13.80%, protein ranged from 2.43-8.52% with an average of 5.55%, soluble sugar ranged from 8.66-23.43% with an average of 14.39%, and polyphenols ranged from 0.54-2.31% with an average of 1.15% (Fig. 1b-g). For seeds collected at 30 DAP, total starch ranged from 36.67-55.28% with an average of 47.12%, amylose ranged from 19.57-35.80% with an average of 26.58%, amylopectin ranged from 12.96-26.96% with an average of 20.54%, protein ranged from 9.7-17.53% with an average of 13.32%, soluble sugar ranged from 3.39-16.11% with an average of 7.56%, and polyphenols ranged from 0.77-2.01% with an average of 1.32% (Fig. 1b-g). The contents of total starch, amylose, amylopectin and protein in 30 DAP lotus seeds were significantly higher than those in 15 DAP seeds (ANOVA, P ≤ 0.01), while the soluble sugar content in 15 DAP lotus seeds was significantly higher than that in 30 DAP seeds (P ≤ 0.01); in contrast, no obvious change in polyphenol content was observed. In addition, there was no significant difference between amylose and amylopectin content in 15 DAP lotus seeds, while amylose content was significantly higher than amylopectin content in 30 DAP lotus seeds (P ≤ 0.01). We found great differences in nutritional composition among the 30 lotus varieties. For example, the coefficient of variation (CV%) for starch of seeds at 15 DAP, soluble sugar of seeds at 30 DAP and protein of seeds at 15 DAP was 29.10, 52.45 and 28.90, respectively (Additional file 1: Table S1).

The AGPase genes are differentially expressed during lotus seed development

Comparative transcriptome analysis was performed to explore the differences in the molecular mechanisms of seed development between CA and JX [2]. A total of 4416 and 6916 differentially expressed genes (DEGs) were identified in CA and JX, respectively, from 9 DAP to 15 DAP, with 2895 common DEGs (Additional file 2: Table S2). KEGG analysis showed that these common DEGs were mainly involved in 17 pathways (corrected P value ≤ 0.05), including metabolic pathways, starch and sucrose metabolism, and flavonoid biosynthesis (Fig. 2a). We found 44 genes involved in the starch and sucrose metabolism pathway, and their expression patterns showed that 23 genes, including AGPase (NNU_05331, NNU_20629, NNU_06174), granule-bound starch synthase (NNU_04661) and 1,4-alpha-glucan-branching enzyme (NNU_25320, NNU_23975), were simultaneously upregulated in CA and JX (Fig. 2b). Gene annotation showed that NNU_05331 and NNU_06174 encode the large subunit of AGPase, while NNU_20629 encodes the small subunit. Interestingly, NNU_05331 and NNU_20629 were differentially expressed between CA and JX. The expression of NNU_05331 in JX was 16.90- and 27.13-fold higher than in CA at 12 DAP and 15 DAP, respectively, while the expression of NNU_20629 in JX was 4.49- and 4.45-fold higher than in CA at 12 DAP and 15 DAP, respectively. A previous study showed that JX biosynthesizes more starch than CA at 12 DAP and 15 DAP [2].
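As a side note, fold-change comparisons of this kind are straightforward to reproduce. The sketch below is a minimal illustration over a small, invented FPKM table; the study's actual DEG calling followed its published pipeline, which also assesses statistical significance.

```python
# Minimal fold-change screen over a hypothetical FPKM table.
# All values are invented for illustration only.
import pandas as pd

fpkm = pd.DataFrame(
    {"CA_12DAP": [3.1, 12.0, 40.2], "JX_12DAP": [52.4, 53.9, 44.0]},
    index=["NNU_05331", "NNU_20629", "NNU_04661"],
)

pseudocount = 1.0  # guards against division by zero for silent genes
fc = (fpkm["JX_12DAP"] + pseudocount) / (fpkm["CA_12DAP"] + pseudocount)
up_in_jx = fc[fc >= 2.0]  # at least two-fold higher in JX than in CA
print(up_in_jx.round(2))
```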
These observations make it tempting to speculate that the differences in AGPase expression help explain the differences in starch accumulation between these two varieties.

Identification of AGPase subunit genes in the lotus genome

After a BLAST search of the sacred lotus genome (Nelumbo nucifera Gaertn.), a total of seven AGPase genes were identified: three genes, NnAGPL1, NnAGPL2a and NnAGPL2b, encoding the large subunit, and four genes, NnAGPS1a, NnAGPS1b, NnAGPS2a and NnAGPS2b, encoding the small subunit (Table 1; Additional file 3: Table S3). Four genes, NnAGPL1, NnAGPL2b, NnAGPS1a and NnAGPS2a, were located on Megascaffold_1, while NnAGPL2a, NnAGPS2b and NnAGPS1b were located on Megascaffold_6, Megascaffold_11 and Megascaffold_96, respectively. The full lengths of the AGPase subunit proteins ranged from 311 to 614 amino acids, while the molecular weights (Mw) of the large and small subunits ranged from 58.6 to 67.75 kDa and from 35.21 to 65.2 kDa, respectively. The isoelectric points (pI) of the AGPase subunit proteins ranged from 6.18 to 9.01. Five conserved motifs were identified in six lotus AGPase genes; the exception was NnAGPS2b, which has only two conserved motifs (Fig. 3a). Gene structure analysis showed that the large subunit genes contain 14 to 17 exons, while the small subunit genes contain 5 to 9 exons (Fig. 3b). In addition, sequence alignment of the AGPase proteins against PSS (the potato small subunit) revealed that the critical amino acids for catalysis (arginine, R, and lysine, K) are present in both AGPase subunit proteins. Five proteins, NnAGPL1, NnAGPL2a, NnAGPL2b, NnAGPS1a and NnAGPS1b, shared the same conserved binding site (lysine, K) for Glc-1-P (Fig. 3c, d).

Evolutionary analysis of AGPase subunit genes

Protein sequence analysis showed that the amino acid homology among the large subunit genes was 50.46-75.36%, while that among the small subunit genes was 17.69-78.75% (Fig. 4a), with the NnAGPL2a-NnAGPL2b and NnAGPS1a-NnAGPS1b pairs showing the highest amino acid sequence homology for the large and small subunits, respectively. In contrast, the lowest homology was observed between NnAGPS2b and any of the other six identified AGPase gene members. The non-synonymous (dN) and synonymous (dS) substitution rates of homologous gene pairs were calculated to explore evolutionary dynamics and selection pressures between lotus and other plants, including Arabidopsis, rice and maize. All dN/dS ratios were < 1, indicating that purifying selection is acting on AGPase genes (Fig. 4b). A phylogenetic tree was constructed to gain insight into the evolutionary relationships of AGPase genes. All AGPase genes could be grouped into two subfamilies, with obvious differentiation between the large subunit and small subunit genes (Fig. 4c). Compared with rice and maize, a closer evolutionary relationship of AGPase genes was detected between lotus and Arabidopsis.

NnAGPL2a and NnAGPS1a are predominantly expressed in lotus

The expression patterns of lotus AGPase genes were investigated by real-time PCR in different tissues of the cultivar JX, including root, leaf, petiole, flower, stalk, the stolon-stage rhizome (rhizome 1) and the swelling-stage rhizome (rhizome 2). Expression of five of the genes was detected in at least one tissue, whereas NnAGPL2b and NnAGPS2b were not detected (Fig. 5a). NnAGPL1 was highly expressed in leaf and was also found in petiole and rhizome 1, but was not expressed in flower or rhizome 2.
NnAGPL2a, NnAGPS1a, NnAGPS1b and NnAGPS2a were expressed in all tissues, with NnAGPL2a and NnAGPS1a up-regulated in rhizome 2 relative to rhizome 1. In addition, NnAGPL2a and NnAGPS1a showed higher overall expression abundances than the other genes. The expression patterns of lotus AGPase genes were also investigated during lotus seed development. Among the large subunit genes, NnAGPL2a was continuously up-regulated from 9 DAP to 21 DAP, and NnAGPL2b showed an obvious expression change at 24 DAP (Fig. 5b). However, expression of NnAGPL1 was hardly detected in developing seed. Among the small subunit genes, three showed detectable expression, the exception being NnAGPS2b (Fig. 5b). NnAGPS1a and NnAGPS2a were induced during seed development, whereas NnAGPS1b showed no obvious trend. In addition, NnAGPS1a and NnAGPS2a showed expression patterns similar to those of NnAGPL2a and NnAGPL2b, respectively. Based on the above results, we speculate that NnAGPL2a and NnAGPS1a are the predominantly expressed genes of the AGPase large subunit and small subunit, respectively, especially in response to the developmental processes of lotus seed and rhizome.

Fig. 4 (b) The ratio of non-synonymous to synonymous substitutions (dN/dS) between lotus AGPase genes and their homologs in Arabidopsis, rice and maize. (c) Phylogenetic tree of AGPase proteins.

Co-expression network analysis of NnAGPL2a and NnAGPS1a

Co-expression network analysis is a powerful method for predicting gene function [28]. The genes predominantly expressed during lotus development, NnAGPL2a and NnAGPS1a, were selected for this analysis. In total, 408 and 444 genes were co-expressed with NnAGPL2a and NnAGPS1a, respectively (|PCC| ≥ 0.9) (Fig. 6a; Additional file 4: Table S4), with 359 commonly co-expressed genes identified. KEGG analysis showed that eight pathways were enriched among these co-expressed genes (corrected P value ≤ 0.05), including biosynthesis of secondary metabolites, starch and sucrose metabolism, and carbon metabolism (Fig. 6b). Fourteen genes were involved in starch and sucrose metabolism, including granule-bound starch synthase (NNU_04661), alpha-1,4 glucan phosphorylase (NNU_04529) and 1,4-alpha-glucan-branching enzyme (NNU_23975, NNU_25320) (Additional file 5: Figure S1). Five genes were selected for verification of their expression patterns by qRT-PCR; all were significantly up-regulated from 9 DAP to 18 DAP, with expression patterns similar to those of NnAGPL2a and NnAGPS1a (Figs. 5b; 6c). Thus, it is likely that NnAGPL2a and NnAGPS1a act together with these co-expressed genes to contribute to starch biosynthesis in lotus seed.

Sequence variation of NnAGPL2a and NnAGPS1a between CA and JX

Significant differences in seed size and starch content have been detected between the CA and JX varieties [2]. To investigate whether NnAGPL2a and NnAGPS1a show sequence variation in different lotus varieties, we cloned the coding sequences (CDS) and promoter regions of these two genes from CA and JX (Additional file 6: Figure S2; Additional file 7: Table S5). For NnAGPL2a, three nonsynonymous and four synonymous mutations were identified in the CDS between CA and JX; the nucleotide polymorphisms C117/G117, T269/G269 and G903/A903 resulted in Asn39 (Fig. 7b). Greater sequence conservation was detected in NnAGPS1a than in NnAGPL2a between CA and JX: a single SNP in the CDS and two in the promoter region were identified (Fig. 7; Additional file 6: Figure S2).
The nucleotide polymorphism T37/C37 in the CDS resulted in Cys13/Arg13, while the SNP at position −409 bp (A/T) in CA caused a change in an ARE cis-element (AAACCA→AAACCT) (Fig. 7b).

AGPase activity increases during starch biosynthesis in lotus seed

Given the potential role of AGPase in regulating starch synthesis in plants, the dynamic changes in starch content and AGPase activity were measured in lotus seed of JX. Starch content and enzyme activity consistently increased from 9 DAP to 21 DAP (Fig. 7c, d). Starch synthesis peaked between 12 DAP and 15 DAP, after which accumulation slowed until 21 DAP. JX had a higher starch content than CA, consistent with previous studies (Fig. 7c). Compared with starch accumulation, the enzyme activity showed a steady increasing trend until 21 DAP and then decreased rapidly at 24 DAP (Fig. 7d). In addition, JX had higher enzyme activity than CA in lotus seed at 18 DAP and 21 DAP (Fig. 7d).

Discussion

Lotus seed is a product of sexual reproduction and is widely consumed across Asia because of its rich nutritional constituents. Previous studies on lotus seed have mainly focused on processing technology, the structure and physicochemical properties of its starch, and functional analysis of nutritional components, while progress on the molecular mechanisms of lotus seed development and quality traits has been slow [29-32]. In this study, we determined the content of key components in 30 seed-lotus varieties; these data provide important information for the scientific evaluation of lotus seed quality and for screening germplasm resources for seed-lotus breeding. Starch can be divided into three categories based on amylose content: low (< 20% amylose), medium (21-25%) and high (> 26%) [33]. Unlike that of most cereal crops, the starch in lotus seed is naturally high in amylose, and some varieties contain more than 32% amylose, such as 'Jianxuan 35' (35.80%), 'WBG_S1' (32.78%) and 'Honghua Jian Lian' (32.44%). Lotus seed starch is easily retrograded to produce resistant starch, which has potential for developing hypoglycemic and low-glycemic-index functional foods [34-36]. Our results indicate that the starch content in lotus seed still needs to be improved, with some main cultivars having starch content below the average level of 47.12%, such as 'Taikong 3' (40.00%) and 'Jingguang 1' (42.66%).

Fig. 6 Co-expression network analysis of NnAGPL2a and NnAGPS1a in lotus seed. (a) Visualization of the co-expression network of NnAGPL2a and NnAGPS1a; magenta solid circles represent genes involved in starch and sucrose metabolism. (b) Functional enrichment analysis of the genes commonly co-expressed with NnAGPL2a and NnAGPS1a. (c) Expression analysis of five co-expressed genes by qRT-PCR. Bars represent means ± standard error (n = 3).

Previous studies have shown that understanding the mechanism of starch biosynthesis can provide the molecular basis for breeding new high-starch varieties in crops such as rice, corn and wheat [37-40]. Although the mechanism of starch biosynthesis in lotus seed has attracted the attention of plant breeders, little is known on the topic [2,17,18].
Previous studies have shown that starch accumulation is accompanied by high mRNA and protein abundance of AGPase in lotus seed, indicating that AGPase plays an important role in this process [2,17] and warranting our analysis of the mechanism of starch biosynthesis in lotus seed. Here, we exploited the available genome data to systematically identify seven AGPase subunit genes in lotus. The number of AGPase genes in lotus is similar to that in other plants, for example seven in rice, eight in maize and six in Arabidopsis [14]. Higher variation was observed in the small AGPase subunit genes than in the large subunit genes, which is inconsistent with previous reports of greater amino acid sequence conservation among small subunit genes than among large subunit genes [41]. The AGPase genes could be clearly divided into two subfamilies in plants, indicating that the large and small AGPase subunit genes have different evolutionary patterns (Fig. 4c). The lotus AGPase genes showed a closer evolutionary relationship with genes in Arabidopsis than with those in rice and maize, supporting the conclusion that sacred lotus is a basal eudicot [1]. Selective pressure analysis showed that purifying selection has acted as the primary force in the evolution of AGPase genes, suggesting that the functions of homologous AGPase genes in different plant species are conserved. Spatial gene expression patterns can partly be used to predict gene function. In lotus, the AGPase genes showed varied expression patterns in different tissues, except for NnAGPS2b, suggesting that AGPase plays an important role in regulating the growth of lotus, especially in starch-synthesizing tissues such as seed and rhizome (Fig. 5). Two homologous gene pairs, NnAGPL2a-NnAGPL2b and NnAGPS1a-NnAGPS1b, showed inconsistent expression patterns, which could suggest that functional differentiation of homologous AGPase genes has occurred in lotus. For AGPase, a key enzyme in the starch biosynthesis pathway, regulatory properties arise from synergistic interaction between the two subunits [7]. It is noteworthy that NnAGPL2a and NnAGPS1a are the predominantly expressed genes in lotus seed, suggesting that these two are key genes that might be involved in the regulation of starch biosynthesis and seed development. Differential expression of these two genes has been linked with the difference in starch accumulation between CA and JX [2]. Phylogenetic analysis revealed that NnAGPS1a and NnAGPL2a are homologous to AtAPS1 and AtAPL2, respectively (Fig. 4c). AtAPS1 is the main small subunit in Arabidopsis, predominantly regulating starch biosynthesis, with its mutant showing a low-starch phenotype, while AtAPL2 is a minor regulatory subunit in leaf with catalytic activity that could contribute to ADP-glucose synthesis in planta [7,22]. We speculate that NnAGPL2a and NnAGPS1a are likely involved in the regulation of starch biosynthesis in lotus seed; however, further studies are needed to define the biological functions and molecular mechanisms of these two genes. Domestication and breeding lead to significant genetic changes in most crops, and identification of favored haplotypes can be used in molecular breeding [42]. In wheat, haplotypes of two AGPase genes, TaAGP-S1-7A and TaAGP-L-1B, are associated with thousand kernel weight (TKW), and the favored haplotypes underwent strong positive selection [37]. Here, we identified sequence variations in NnAGPL2a and NnAGPS1a between CA and JX.
No variation was detected in the catalytic or Glc-1-P binding sites, so it will be worth determining whether the variations in the CDS affect the function of AGPase (Fig. 7a). The observed differential expression of NnAGPL2a and NnAGPS1a between CA and JX during lotus seed development could be associated with the variation in the promoter regions of these two genes (Fig. 7b). For example, the G-box mutation in the NnAGPL2a promoter may affect the binding and regulatory activity of some environmental stress-responsive transcription factors in JX, such as MYC and bZIP genes [43]. In addition, the G-box is closely associated with the regulation of starch biosynthesis; for example, a previous study showed that the G-box plays a role in the regulation of SBE and AGPase gene expression [44,45]. Overall, these results partly reveal the genetic differentiation of NnAGPL2a and NnAGPS1a, and provide an important reference for identifying favored haplotypes at the population level, which could be used in molecular breeding. The rhizome is an important lotus storage organ, and starch is one of its most abundant components, accounting for 10-20% of fresh weight [46]. Previous studies have shown that AGPase genes are involved in starch biosynthesis in the rhizome [3,46]. Our data showed higher AGPase enzyme activity at the initial swelling stage than at the stolon or late swelling stages in the rhizome of ZO ('Zhou Ou'), a rhizome-lotus cultivar. Since NnAGPL2a and NnAGPS1a were the predominantly expressed and upregulated genes during the rhizome swelling stage in the two cultivars JX and ZO (Fig. 5a; Additional file 8: Figure S3), we speculate that these two AGPase genes also regulate starch biosynthesis in the rhizome. Our results showed that the expression patterns of NnAGPL2a and NnAGPS1a are accompanied by changes in AGPase activity in lotus seed and rhizome (Figs. 5b; 7d; Additional file 8: Figure S3b). Therefore, enhancing the activity of AGPase by genetic engineering of these two genes is likely to be a feasible way to increase the starch content in lotus seed and rhizome. Similar strategies have been successfully applied in crops such as rice, corn and wheat [47-50]. Transgenic rice plants expressing the potato AGPase large subunit UpReg1 gene exhibited elevated photosynthetic capacity and starch levels in leaves, and increased seed biomass [47]. Overexpression of the AGPase large subunit TaAGPL1 significantly enhanced AGPase activity and the rate of starch accumulation in wheat grains [48]. Here, we provide important gene resources for future genetic improvement of starch accumulation in lotus varieties.

Conclusions

Improving starch content is currently one of the major goals of seed-lotus breeding. In this study, we revealed the nutritional composition, including starch, soluble sugar, protein and polyphenols, of the seed of 30 lotus varieties. These data provide important information for the scientific evaluation of lotus seed quality and for screening germplasm resources for seed-lotus breeding. Comparative transcriptome analysis showed that AGPase genes were differentially expressed in two varieties with significantly different starch content. Seven AGPase genes were characterized in lotus (Nelumbo nucifera Gaertn.), and their sequence conservation, evolution, expression patterns and co-expression networks were analyzed.
The expression patterns of NnAGPL2a and NnAGPS1a were accompanied by increases in starch content and enhanced AGPase activity in lotus seed; these are therefore likely to be the key genes regulating starch biosynthesis in lotus seed (Fig. 8). This study presents a starting point for functional evaluation of AGPase genes in starch biosynthesis in lotus, and provides a theoretical basis for breeding new lotus varieties with high starch content.

Transcriptome analysis of lotus seed

To investigate the regulatory mechanism of starch accumulation during lotus seed development, the public transcriptome data corresponding to expression abundances in the two lotus varieties CA and JX at various stages (9 DAP, 12 DAP, 15 DAP) were obtained from NCBI (https://www.ncbi.nlm.nih.gov/sra?term=SRP127765). The identification of differentially expressed genes (DEGs) was performed as previously described [2]. Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis was implemented in KOBAS 3.0 [51]. Heatmaps of gene expression were visualized using TBtools [52]. For gene co-expression analysis, the Pearson correlation coefficient (PCC) was calculated from the public transcriptome data to measure the co-expression relationships between the two AGPase genes (NnAGPL2a and NnAGPS1a) and other genes [28]. Genes with |PCC| ≥ 0.9 were used to construct the co-expression network, which was visualized with Cytoscape software (3.4.0).

Identification of AGPase genes from the sacred lotus genome

To identify AGPase subunit genes in the lotus genome, we obtained all predicted protein sequences from the lotus genome (Nelumbo nucifera Gaertn.) using the gene annotation gff3 file [1]. The fully characterized AGPase subunit protein sequences of Arabidopsis, rice and maize were downloaded from TAIR (https://www.arabidopsis.org/index.jsp) and the NCBI website (https://www.ncbi.nlm.nih.gov/). A protein BLAST search was performed to identify the lotus AGPase subunit genes. After removing redundant sequences, the NTP_transferase domains of each lotus AGPase subunit protein sequence were analyzed using the SMART online program (http://smart.embl-heidelberg.de/smart/set_mode.cgi?NORMAL=1). The predicted molecular weights (Mw) and isoelectric points (pI) of the AGPase subunit proteins were calculated using the ExPASy portal (http://web.expasy.org/protparam/). The complete amino acid sequences were analyzed with MEME software (http://meme-suite.org/index.html) to discover the conserved motifs of the AGPase genes. Multiple sequence alignments were carried out using ClustalX (ver. 1.83) software with default settings.

Phylogenetic analysis of AGPase genes in lotus

AGPase subunit protein sequences from four plant species, lotus, Arabidopsis, rice and maize, were used for the phylogenetic analysis. The tree was constructed using MEGA7 software with the Neighbor-Joining method, as previously described [53,54]. Homologous pairs of AGPase subunit genes between lotus and the three other species were identified using the BLASTP program. The homologous gene pairs were then used to calculate non-synonymous (dN) and synonymous (dS) substitution rates to explore the evolutionary dynamics of AGPase genes. The dN/dS ratio was calculated with the Maximum Likelihood (PAML) yn00 program using the GMYN method [55].
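Returning to the co-expression analysis above, the |PCC| ≥ 0.9 screen can be made concrete with a short, self-contained sketch. The genes-by-samples expression matrix and the bait profile below are invented for illustration; the actual analysis used the public transcriptome data cited above.

```python
# Sketch of the co-expression screen: correlate every gene's expression
# profile with the bait gene across samples and keep |PCC| >= 0.9.
# The matrix is randomly generated purely for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
samples = ["CA_9", "CA_12", "CA_15", "JX_9", "JX_12", "JX_15"]
expr = pd.DataFrame(
    rng.random((200, len(samples))),
    index=[f"NNU_{i:05d}" for i in range(200)],
    columns=samples,
)
expr.loc["NnAGPL2a"] = np.linspace(1.0, 6.0, len(samples))  # bait profile

bait = expr.loc["NnAGPL2a"]
pcc = expr.drop(index="NnAGPL2a").T.corrwith(bait)  # one PCC per gene
partners = pcc[pcc.abs() >= 0.9]
print(f"{len(partners)} co-expressed genes at |PCC| >= 0.9")
```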
RNA extraction and qRT-PCR analysis

The lotus cultivars were grown in the experimental field at Wuhan Botanical Garden (N30°30′, E114°31′), Wuhan, China. Tissues and seeds were collected at different developmental stages, immediately frozen in liquid nitrogen, and stored at −80°C for later use. Total RNA was extracted using the Plant Total RNA Isolation Kit (Beijing Zoman Biotechnology Co., Ltd., Beijing, China). High-quality RNA was reverse transcribed to cDNA using TransScript One-Step gDNA Removal and cDNA Synthesis SuperMix (Lot#M31212, Beijing TransGen Biotech Co., Ltd., Beijing, China). The qRT-PCR experiments were performed on a StepOnePlus Real-Time PCR System (Applied Biosystems, USA), and relative gene expression levels were calculated and normalized using NnACTIN (Gene ID NNU_24864) as the internal standard. Gene-specific primers for qRT-PCR were designed from the gene coding sequences using Primer Premier 5.0 software and synthesized commercially (TIANYI HUIYUAN, Wuhan, China). The primers used for qRT-PCR are listed in Additional file 9: Table S6.

Cloning of NnAGPL2a and NnAGPS1a and their promoter regions

Gene- and promoter-specific primers (Additional file 9: Table S6) for NnAGPL2a and NnAGPS1a were designed based on the published lotus genome (Nelumbo nucifera Gaertn.) sequence database [1]. The coding sequences (CDS) of NnAGPL2a and NnAGPS1a were amplified using cDNA from 18 DAP seed of CA and JX as the template, and the promoter regions of these two genes were amplified using genomic DNA from CA and JX. The corresponding PCR fragments were cloned into pDONR ZEO for sequencing (TIANYI HUIYUAN, Wuhan, China). The CDS and promoter sequences from the JX and CA clones were compared using ClustalX (ver. 1.83) software with default settings. Cis-elements were analyzed using the PlantCARE program [56].

Assay of AGPase activity

ADP-glucose pyrophosphorylase activity was detected using an ADPG Pyrophosphorylase (AGP) Assay Kit according to the manufacturer's instructions (Cat#BC0430, Beijing Solarbio Science & Technology Co., Ltd., Beijing, China). Samples from seed tissue at different stages (12, 15, 18, 21 and 24 DAP) and from rhizome were collected, immediately frozen in liquid nitrogen, and stored at −80°C until use. All AGPase activity assays were performed in triplicate, with each replicate assayed using 0.1 g of fresh tissue at every stage. A specific enzyme unit was defined as the amount of enzyme that catalyzes the conversion of one nmol of NADPH per minute per gram of tissue under the specified assay conditions. The absorption wavelength was set to 340 nm and detection was performed with an Infinite M200 luminometer (Tecan, Männedorf, Switzerland).

Determination of nutritional components in lotus seed

Thirty seed-lotus cultivars were maintained and grown in the experimental field at Wuhan Botanical Garden (Wuhan, China) to analyze the nutritional composition of the seeds. Seeds were collected at 15 DAP and 30 DAP, and the collection of materials complied with local and national guidelines. The germ was removed and the seeds were dried in an oven (DHG-9146A, Shanghai) at 100°C for 1 h, and then at 65°C to a constant weight. The dried seeds were ground into a powder and passed through a 100-mesh sieve. The determination of nutritional components was performed at Wuhan ProNets Biotechnology Co., Ltd. (Wuhan, China). Three replicates were used for this assay. Soluble sugar and starch content were measured using the anthrone-sulfuric acid colorimetry method.
Approximately 0.1 g of seed powder was mixed thoroughly with 10 ml of 80% ethanol in a centrifuge tube, heated in a water bath at 80°C for 30 min and then centrifuged. The supernatant and pellet were used to determine the soluble sugar and starch content, respectively. A 2 ml volume of the supernatant was transferred to a new centrifuge tube, mixed with 0.5 ml anthrone-ethyl acetate (1 g anthrone dissolved in 50 ml of ethyl acetate) and 5 ml of concentrated sulfuric acid, boiled in a water bath at 100°C for 1 min, and cooled at room temperature. The absorbance was measured at 630 nm with a spectrophotometer (TU-1810D, Beijing, China). For starch content, the pellet was transferred into a 50 ml volumetric flask, mixed with 20 ml distilled water, and boiled in a water bath at 100°C for 30 min. After boiling, 2 ml of 9.2 mol/L perchloric acid was added and extraction proceeded for 15 min, followed by cooling at room temperature. Finally, the tube was centrifuged and the supernatant was used to determine the starch content by the anthrone colorimetric assay. Amylose content was measured using iodine colorimetry [57]. Approximately 0.1 g of seed powder was mixed with 9 ml of 1 mol/L NaOH solution in a centrifuge tube and boiled in a water bath at 100°C for 10 min; after cooling, the volume was brought to 100 ml with distilled water. A 5 ml aliquot was then transferred to a new centrifuge tube, and 50 ml distilled water, 1 ml of 1 mol/L acetic acid and 1 ml iodine reagent (0.2 g iodine and 20 g potassium iodide dissolved in 100 ml distilled water) were added in turn. The absorbance was measured at 630 nm with a spectrophotometer. Protein content was measured using the Coomassie brilliant blue method [58]. About 0.02-0.05 g of seed powder was mixed with distilled water and ground to a homogenate, then centrifuged at 4000 rpm for 10 min; 1 ml of the supernatant was transferred to a new centrifuge tube and mixed with 5 ml of Coomassie brilliant blue reagent (0.1 g Coomassie brilliant blue dissolved in 50 ml 90% ethanol, with 100 ml 85% (w/v) phosphoric acid added and the volume topped up to 1 L with distilled water). The absorbance was measured at 595 nm with a spectrophotometer. Polyphenol content was measured using Folin-Ciocalteu colorimetry as previously described [59]. Approximately 0.02-0.05 g of seed powder was mixed with 10 ml of 60% ethanol and hydrochloric acid (final concentration 0.024%) in a centrifuge tube and heated in a water bath at 75°C for 50 min. The absorbance was measured at 765 nm with a spectrophotometer.
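Colorimetric determinations like those above are read against a standard curve. The paper does not reproduce its curves, so the following is a hypothetical worked example, with invented glucose standards and readings, of how an A630 measurement is converted to a concentration.

```python
# Hypothetical standard curve for an anthrone-type assay: fit a line to
# known glucose standards, then invert it for an unknown sample.
import numpy as np

std_conc = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])  # ug/mL, invented
std_a630 = np.array([0.00, 0.11, 0.23, 0.33, 0.46, 0.57])  # invented readings

slope, intercept = np.polyfit(std_conc, std_a630, deg=1)  # A630 = m*c + b
sample_a630 = 0.28
sample_conc = (sample_a630 - intercept) / slope  # ug/mL in the extract
print(f"estimated ~{sample_conc:.1f} ug/mL in the assayed extract")
```

Scaling such a reading by the dilution factor and the mass of powder assayed then expresses the result as a percentage of dry weight, as reported in the Results.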
An Eight Year Experience of Autologous Oocyte Vitrification for Infertile Patients Owing to Unavailability of Sperm on Oocyte Retrieval Day

Objective: The objective of this study was to provide a descriptive analysis of the clinical outcomes achieved with oocyte vitrification in cases where sperm was unavailable on oocyte retrieval day, and to identify predictors of oocyte survival. Methods: This retrospective cohort study used data from a university-affiliated reproductive medical center. There were 321 cycles in which some or all oocytes were vitrified owing to the unavailability of sperm between March 2009 and October 2017. A descriptive analysis of the clinical outcomes, including both fresh embryo transfers and cryopreserved embryo transfers, is provided. The ability of individual parameters to forecast oocyte survival per thawing cycle was assessed by binary logistic regression analysis. The cumulative probability of live birth (CPLB) was estimated using the Kaplan-Meier method according to the total number of oocytes thawed in consecutive procedures. Results: The average survival rate was 83.13%. The high-quality embryo rate and blastocyst rate decreased significantly in the vitrified oocyte group compared with fresh control oocytes. The comparison of sibling oocytes in part-oocyte-vitrified cycles showed that fewer high-quality embryos developed in the vitrified group. The live birth rate per warmed oocyte was 4.3%. The reason for the lack of sperm availability on oocyte retrieval day and serum cholesterol levels were found to be associated with the oocyte survival rate. Kaplan-Meier analysis showed no significant difference in CPLB between patients ≤35 vs. >35 years. Conclusions: Oocyte vitrification is an indispensable and effective alternative when sperm are not available on oocyte retrieval day. The present study provided evidence that oocytes from infertile couples were more likely to suffer oocyte/embryo vitrification injury. Clinicians need to take this into account when advising patients in similar situations. Further studies will be necessary to clarify the correlation between serum metabolic parameters and human oocyte survival after vitrification.

INTRODUCTION

Oocyte freezing is no longer considered an experimental method by the American Society for Reproductive Medicine (1). Oocyte vitrification is gradually becoming a useful adjunct to routine in vitro fertilization (IVF) in various clinical scenarios, such as the unavailability of sperm at the time of egg retrieval (2-4), and for couples who do not wish to cryopreserve supernumerary embryos in cases where plenty of oocytes are retrieved (5). Another indication for oocyte vitrification that has now become a reality is the establishment of donor oocyte banks (6-8). Oocyte cryopreservation for deferring child-bearing and for fertility preservation in cancer patients has also entered clinical practice (9-12). Reports of donor oocyte vitrification have so far been encouraging. In a sibling cohort study of recipient cycles, similar embryo development was shown from fresh vs. vitrified oocytes (13). Several well-controlled studies involving donor oocytes have shown that clinical outcomes with vitrified oocytes are comparable to those with fresh oocytes (7,14-16). A large study of a donation program reported by Cobo et al. has demonstrated comparable obstetric and perinatal outcomes from vitrified vs. fresh oocytes (17).
These results have supported the wider application of oocyte vitrification in assisted reproduction treatment for medical indications. Although oocyte vitrification has been demonstrated to be a successful and stable technique in donor programs, these results might provide overly optimistic evidence for oocyte vitrification for medical indications in infertile patients. Different oocyte sources may have varying inherent qualities that affect vitrification outcomes (9,18). However, reports related to autologous oocyte vitrification in infertile patients are few and inconsistent (12,19). A study of sibling oocytes from 44 patients undergoing IVF showed reduced rates of fertilization and embryo development after oocyte vitrification (19). Another study included 128 autologous vitrified/warmed oocyte cycles from IVF cycles and demonstrated significantly higher implantation rates (43 vs. 35%) and clinical pregnancy rates (57 vs. 44%) with vitrified-warmed compared to fresh oocytes (12). This study aims to describe the outcomes achieved in our 8-year experience of oocyte vitrification owing to unavailable sperm on oocyte retrieval day. Analyses were performed to find factors relevant to oocyte survivability. This relatively large data set adds to the limited information currently available regarding the clinical application of vitrified autologous oocytes for medical indications.

MATERIALS AND METHODS

The ethics committee at the Center for Reproductive Medicine, Shandong University approved this clinical application. Couples chose oocyte cryopreservation because of the unavailability of sperm at the time of oocyte retrieval, as an alternative to using donor semen. The control group consisted of age- and body mass index (BMI)-matched patients who were undergoing intracytoplasmic sperm injection (ICSI) treatment for male factor infertility (Figure 1). Oocyte vitrification and warming were performed as previously described (20). Oocyte vitrification was performed at room temperature (RT). The oocytes were equilibrated in equilibration solution (ES) for 5-10 min until they recovered their shape, and then placed into vitrification solution (VS) for 1 min. Finally, the vitrified oocytes were placed on a CryoLoop (Hampton Research, Laguna Niguel, CA, USA) and immediately immersed in liquid nitrogen. No more than four oocytes were loaded onto each CryoLoop. Oocyte warming was performed at RT, except for the first step. The CryoLoop with the vitrified oocytes was taken out of the liquid nitrogen and immediately placed in 1.0 mol/L sucrose in M-199 + 20% SPS solution at 37°C for 1.5-2.0 min. Next, the oocytes were placed in 0.5 mol/L sucrose in M-199 + 20% SPS solution for 3 min at RT, after which they were transferred into another M-199 solution with 0.25 mol/L sucrose for 3 min. Finally, they were washed in M-199 + 20% SPS for 5-10 min while the stage was warmed slowly. After warming, the surviving oocytes were cultured for 2 h in G-IVF (Vitrolife, Göteborg, Sweden) in an incubator at 37°C and 6% CO2 before being inseminated using ICSI (20). Embryo transfer was performed on Day 2 or 3, depending on embryo quality and quantity. No more than three embryos were included in each transfer. Supernumerary embryos were cultured to blastocysts, and high-quality blastocysts were vitrified.

Endometrial Preparation and Pregnancy Assessment

All patients used hormone replacement therapy as the endometrial preparation protocol, as described in a previous study (21).
In short, 4-8 mg of oral estradiol valerate (Progynova, Bayer, Germany) was administered daily for at least 10 days starting on Day 2-5 of the menstrual cycle. When the endometrial thickness reached ≥8 mm, oral progesterone (dydrogesterone, Solvay, the Netherlands) 20 mg twice daily plus vaginal micronized progesterone (Utrogestan, Besins Manufacturing Belgium) 200 mg once daily was initiated on the day of oocyte warming. Clinical pregnancy was defined as the presence of a gestational sac identified by vaginal or abdominal ultrasound 4-5 weeks after embryo transfer (ET). Gestational age, birth weight, and congenital malformation outcomes were followed up.

Statistical Analysis

The main outcome measurements were the survival rate and the cumulative live birth rate (including live births from fresh ETs and subsequent cryo-ETs) per warming cycle. The secondary outcome measures included laboratory outcomes of vitrified-warmed oocytes, implantation and clinical pregnancy rates, and the delivery rate per fresh embryo transfer and vitrified embryo transfer, as well as gestational age, birth weight, and congenital malformation outcomes. Differences in means and prevalence among the groups were analyzed by Student's t-test for continuous data and the chi-square test for categorical data. A P-value < 0.05 was considered statistically significant. A binary logistic regression model was used to identify parameters predictive of oocyte survival per thawing cycle. The oocyte-to-baby rate was calculated by dividing the number of live births by the total number of oocytes consumed × 100. The cumulative probability of live birth (CPLB) was estimated using the Kaplan-Meier method according to the total number of oocytes thawed in consecutive procedures.

RESULTS

Data were also obtained from age- and BMI-matched controls undergoing fresh ICSI cycles for severe male factor infertility with autologous oocytes. Similar fertilization rates were observed between the vitrified and fresh groups, while the high-quality embryo rate and blastocyst rate decreased significantly in the vitrified group (Table 2). Table 3 shows information from the two groups divided by the median survival rate (91.67%). Serum total cholesterol was higher in the ≥91.67% survival group, as was the blood glucose level. More cycles owing to absolute male factors were included in the ≥91.67% survival group. Finally, the preservation time was longer in the <91.67% survival group. Table 4 gives a comparison between the two oocyte vitrification groups divided by the reason for the lack of sperm availability on oocyte retrieval day. The relative male factor group (inability to provide an ejaculated sample through masturbation, or unexpected absence of the partner) presented a higher serum triglyceride level. The oocyte survival rate was higher in the absolute male factor group (unavailable or insufficient sperm from an ejaculated sample or surgical collection). Table 5 shows the comparison of sibling oocytes between the fresh and vitrified groups in part-oocyte-vitrified cycles. A total of 67 cycles had a portion of oocytes vitrified because of male factors. Forty-one cases were inseminated with the husband's sperm for both fresh and vitrified oocytes. No significant difference was found between the vitrified and fresh groups of sibling oocytes in fertilization rate (66.93 vs. 59.77%), but fewer high-quality embryos developed in the vitrified group (27.68 vs. 53.46%). Table 6 shows the clinical outcomes according to the sperm source used after oocyte warming in all-oocyte-vitrified cycles.
Among the 254 cycles, the warmed oocytes were fertilized with the husband's ejaculated sperm, with sperm obtained by percutaneous epididymal or testicular sperm aspiration (PESA/TESA), or with frozen donor sperm in 150, 46, and 58 cycles, respectively. The frozen donor sperm group showed a better high-quality embryo rate than the husband's sperm and PESA/TESA sperm groups (40.94 vs. 32.76%; 40.94 vs. 31.95%). There were 81 vitrified embryo transfer cycles, including 53 double-frozen transfer cycles (i.e., vitrified oocyte and vitrified embryo) and 28 triple-frozen transfer cycles (i.e., vitrified oocyte, frozen sperm, and vitrified embryo), which yielded 22 and 11 neonates, respectively. The delivery rate per transfer, gestational age, birth weight, and congenital malformation outcomes were similar among the groups. One hundred and forty-two babies were born as a result of 262 fresh ETs and 81 subsequent cryo-ETs. The cumulative live birth rate per warming cycle was 41.40%. The oocyte-to-baby rate was 4.3%. At the end of the present study, 110 blastocysts remained cryopreserved from the oocyte warming cycles included in this work. Assuming that the delivery rates are maintained with this cohort, a rough estimation after their use could yield 36 additional babies, which would raise the oocyte-to-baby rate to 5.4%. The Kaplan-Meier analysis showed no significant difference in CPLB between patients ≤35 vs. >35 years (log-rank (Mantel-Cox), P = 0.231; Breslow (generalized Wilcoxon), P = 0.458; Tarone-Ware, P = 0.388). The CPLB improved as more oocytes were warmed, and the curve for older patients reached a plateau earlier (at 15 oocytes) than that for younger women (at 23 oocytes) (Figure 2). A binary logistic regression model was used to find parameters predictive of oocyte survival per thawing cycle. Several parameters were introduced into the initial model as predictors, including age, BMI, metabolic indicators (triglyceride, total cholesterol, high-density lipoprotein, etc.), basal hormones, years of infertility, polycystic ovary syndrome (PCOS)/non-PCOS, endometriosis/non-endometriosis, ovarian stimulation protocols, reason for lack of sperm availability, vitrification kits, and storage duration. As shown by the odds ratios (OR), the effect of the reason for lack of sperm availability was confirmed, and an effect of serum total cholesterol (TC) on survival was suggested (Supplementary Table 2).

DISCUSSION

Given that oocyte cryopreservation techniques have shifted from slow freezing to vitrification on the basis of safety and efficacy reports over the past decade (22), oocyte vitrification has been gradually introduced into assisted reproduction treatment in various clinical scenarios. In particular, oocyte vitrification is becoming an indispensable alternative technique for couples who do not have sufficient available sperm at the time of egg retrieval (18). Our study represents the findings of the largest data set from a single center in China of vitrified autologous oocytes obtained from couples who lacked available sperm at the time of egg retrieval. This report comprises 321 oocyte vitrification-warming cycles. A total of 142 healthy babies were born from fresh and frozen embryo transfers, and the cumulative live birth rate per warming cycle was 44.24%. We found that oocyte survival was better in those couples with absolute male factors.
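Before continuing, a brief illustration of the Kaplan-Meier approach described in the Statistical Analysis section: the sketch below estimates CPLB as a function of the number of oocytes thawed. It uses the Python lifelines package, which is our choice for illustration (the paper does not name its software), and the patient data are invented.

```python
# CPLB(n) = 1 - S(n), where S is the Kaplan-Meier "survival" curve over the
# number of oocytes thawed; a live birth is the event, otherwise censored.
# All data below are invented for illustration.
from lifelines import KaplanMeierFitter

oocytes_thawed = [4, 8, 12, 15, 6, 20, 9, 11, 23, 7]  # per patient
live_birth = [0, 1, 1, 0, 0, 1, 1, 0, 1, 0]           # 1 = live birth

kmf = KaplanMeierFitter()
kmf.fit(oocytes_thawed, event_observed=live_birth)
cplb = 1 - kmf.survival_function_["KM_estimate"]
print(cplb)  # cumulative probability of live birth by oocytes thawed
```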
Different oocyte sources, including cancer patients, women desiring fertility preservation, oocyte donors, and infertile patients, may have different inherent qualities that influence vitrification outcomes (3,18,21,23). The current study adds further information, which is not optimistic, regarding oocyte vitrification in infertile patients. Inconsistent with previous reports of recipient oocyte vitrification cycles (13,17,24), in the present study the high-quality embryo rate and blastocyst rate of vitrified oocytes decreased significantly compared with fresh oocytes (Table 2). Similar outcomes were found in the sibling oocyte comparison from part-oocyte-vitrified cycles (Table 5). Further evidence came from the comparison between groups with different reasons for the lack of sperm availability. The survival rate was significantly higher in the absolute male factor group; this may be because the women in this group were relatively "fertile" and might have had higher quality oocytes (Table 4). In the all-oocyte-vitrified cycles, which were divided into three groups depending on the sperm source used after oocyte warming, the donor sperm group showed better embryo quality than the husband sperm groups. The explanation here may be that both the oocytes and the sperm in this group were from relatively "fertile" individuals (Table 6). The literature reports similar results, namely that vitrification can damage the potential of oocytes from infertile women in egg-sharing or autologous oocyte vitrification programs (25,26). All these outcomes indicate that oocytes from infertile women are more vulnerable to vitrification injury and might not easily survive the vitrification-warming procedure. To obtain more useful reference information for clinical work, we sought predictors of successful outcome following oocyte vitrification. Age was considered first. However, in the present study no significant difference was found between the two age groups (≤35 vs. >35 years) in survival rate, fertilization rate, or high-quality embryo rate (Supplementary Table 1). These results were confirmed by the Kaplan-Meier analysis of CPLB according to age group, in which no significant difference was observed between the two age groups (Figure 2). This result is inconsistent with previous studies (7,9,13,17), most probably owing to the small sample size and the characteristics of the older patients in the present work. Only 50 (15.68%) patients >35 years old were included, because most advanced-age couples are more inclined to choose donor sperm when sperm is unavailable on oocyte retrieval day. The average number of retrieved oocytes in this older age group was 11.46 (95% CI 9.70-13.22), indicating a better ovarian reserve than that of their peers and further explaining the non-significant difference between the two age groups. However, we observed that the older patients' curve reached a plateau earlier than that of younger women, in agreement with other studies (9,13,17). The average survival rate in the present study was 83.13% (95% CI 81.81-86.35%), comparable to published rates ranging from 68.6 to 96.8% (12,26-29). We compared two groups divided by the median survival rate (91.67%). Statistical differences were found in serum total cholesterol, the proportion of different reasons for lack of sperm availability, and preservation time.
These outcomes were partially consistent with the multiple logistic regression analysis. As shown by the adjusted OR, the effect of the reason for lack of sperm availability was confirmed. Another parameter entered in the model was serum TC, which had not previously been analyzed in human oocyte vitrification studies. A higher serum TC level was found to be favorable for oocyte survivability after vitrification. Cholesterol is known to be the major non-polar lipid in mammalian cell membranes (30). Modulation of plasma membrane cholesterol to increase post-cryopreservation survival is currently a new topic in mammalian oocyte vitrification (31-33). Large prospective studies are needed to confirm whether serum lipid levels, or other metabolic parameters, are relevant to oocyte survivability after vitrification. Oocyte vitrification efficiency can be defined as the route to a live birth using the lowest number of vitrified oocytes. Although we obtained a cumulative live birth rate per warming cycle of 44.24%, the oocyte-to-baby rate was only 4.3% in the present study. About one third of couples (36.76%) successfully took babies home. Other studies addressing oocyte vitrification for medical indications have reported quite different oocyte-to-baby rates. Kara et al. reported a live-birth rate per mature oocyte of 3.0% in an oocyte cryopreservation group (<35 years old) (25). Doyle et al. estimated live birth per warmed oocyte at 6.5% (including predicted live births from remaining cryopreserved blastocysts) (12). The data herein provide more information for clinicians advising patients faced with unavailable sperm on oocyte retrieval day. The live delivery outcomes, including gestational age, birth weight, and congenital defects at live birth, were compared with the fresh control group, and no significant differences were noted. The limited data we collected showed that double vitrification (oocyte and embryo vitrification) or triple cryopreservation (oocyte/embryo vitrification and sperm cryopreservation) had no adverse effects on perinatal outcomes. The population in our study was very particular, and it is impossible to carry out a prospective randomized controlled trial in such situations for ethical and legal reasons. The long duration of this retrospective study might have introduced variation that affected the presented data. However, we have a relatively stable laboratory team with experienced technicians trained in oocyte vitrification. Furthermore, we included the stimulation protocols and vitrification kit parameters that changed over time in the regression model as potential confounders. Another drawback was the relatively limited sample size, as the incidence rate was only 0.3-0.5% of all IVF/ICSI cycles during the data collection years in our hospital. Finally, the couples in the present study mostly exhibited severe male factors, which could influence subsequent embryo development and pregnancy outcomes. Therefore, the results might not be representative of all medical indications for oocyte preservation.

CONCLUSIONS AND PERSPECTIVES

Oocyte vitrification is an indispensable and effective alternative when there is a lack of available sperm on oocyte retrieval day. The present study showed that oocyte survival was better in couples with absolute male factors, suggesting that oocytes from infertile women are more likely to suffer from vitrification injury.
Further studies will be necessary to clarify the correlation between serum metabolic parameters and human oocyte survival after vitrification. Our study is a preliminary contribution to the clinically important question of how to identify the female population whose oocytes have better survivability after vitrification. We hope that more data from large-scale, variable-controlled autologous oocyte vitrification studies will extend and clarify our results.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

ETHICS STATEMENT

Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
The mast cell

Introduction

Mast cells are immune cells that are arguably the cell type most closely associated with allergic reactions. They were originally discovered by Frederick von Recklinghausen in 1863 and named by Paul Ehrlich in 1877 [1]. They originate in the bone marrow but, unlike other cell types such as basophils, they mature while circulating in the body, usually on reaching a specific tissue site. Mast cells are common "participants" not only in allergic reactions but also in wound healing, immune tolerance and, importantly, in defense against pathogens [2]. As mentioned, they are similar to basophils in terms of function and appearance, and both cell types express CD34. Mast cells are present in most tissues and are especially prevalent near the skin surface, the mucosa and the conjunctiva. Allergic reactions of the eye may be the most frequently reported type of allergic reaction, simply because they are the most "visible" to both the affected person and to others, and in rare cases they may lead to blindness.

Histamine release

Histamine is involved in immune reactions and consists of an imidazole ring attached to an ethylamine chain. It is produced by mast cells (and basophils) in response to pathogens of all kinds, including food intolerance [3,4]. Mast cells (and basophils) display on their cell surface the high-affinity IgE receptor FcεRI, a tetramer consisting of an alpha chain, a tetraspan beta chain and two disulfide-linked gamma chains. Anaphylaxis may be triggered when an allergen (or other stimulant) interacts with IgE antibodies located on the surface of the mast cell [5]. This triggers degranulation with the subsequent release of inflammatory mediators, including histamine. When histamine is released in response to a pathogen interacting with IgE-sensitized mast cell surface receptors, itching, redness and vasodilation occur quickly, often within seconds [6]. Vasodilation may lower the blood pressure, and it also increases blood vessel permeability. The "flare and wheal" reaction on the skin's surface is a sign of histamine release related to the depolarization of nerve endings; this sign is maximal for about 30 minutes and resolves in about an hour. Insect bites are characterized by a noticeable bump and redness occurring within seconds of mast cell interaction with an insect allergen. Anaphylaxis is the systemic reaction to allergens and may be life threatening [5,6].
It is caused by body-wide (systemic) degranulation of mast cells, leading to vasodilation caused by histamine release.

Structure

Mast cells are similar to basophils in terms of structure [7]. Both are granulated and contain heparin, but, as mentioned, mast cells mature while circulating in the body. The mast cell nucleus is round. CD34 is expressed, as in the basophil. Mast cells mature in the presence of cytokines such as nerve growth factor (NGF). Granules in mast cells may be stained with Toluidine Blue and appear purple post-stain. In tissues, mast cell progenitor cells express the IgE receptor FcεRI, just as mature mast cells do. Migration of these immature cells is dictated by the presence of inflammation and thus of inflammatory mediators. Mast cells are especially prevalent around blood vessels. They are common in the skin, lungs and digestive tract, and especially important in the eye and nasal cavities [8].

Clinical significance

Activated mast cells play an essential and well-established role in allergy and inflammation. The presence of allergens causes mast cells to degranulate via an essentially irreversible linkage to IgE receptors. Degranulation of mast cells leads to the presence of inflammatory mediators in the immediate microenvironment of the cell. Mast cells occur in relatively high numbers in the skin, and may also occur in the intestinal submucosa and various connective tissues [9]. As inflammatory cells arrive at a given area of the skin, more cells are recruited and the reaction continues. This condition may be treated with cyclosporin or methotrexate.

Chronic urticaria (hives) is a condition of the skin and may be caused by an allergic reaction to a food or drug [10]. It is triggered by degranulation of mast cells. As intracellular inflammatory mediators are released, an acute inflammatory response is mediated by lymphocyte- and granulocyte-induced hypersensitivity. As a result, more cells are recruited, with the subsequent release of more soluble inflammatory mediators. The end results are redness, welts, bumps or raised lesions on the skin surface that cause an uncontrollable itch. Treatment is with antihistamines and, in some cases, steroids [10].

Another role of mast cells is in the pathology and development of multiple sclerosis (MS) [11]. MS is a progressive disease which causes lesions in the brain and spinal cord. Signs and symptoms include incontinence, motor dysfunction and vision problems, as well as memory loss and a general slowness in processing information or in perceiving when visual information is incorrect. Mast cells release cytokines which recruit other immune cells, such as T cells, to a given area. Mast cells can disrupt the blood-brain barrier, allowing these T cells to infiltrate the brain and interfere with myelin basic protein; mast cells can thereby damage myelin. As a result, tryptase is released, which in turn acts as a feedback mechanism that further stimulates inflammation, including the further recruitment of immune cells.

Mast cells also have a role in the development of the autoimmune disease rheumatoid arthritis (RA) [13]. RA is a systemic disease that affects about 1% of the population, and most of the major cell types have a role in this disease. A link has been shown between the presence of mast cells in synovial tissues and the development of RA. They produce inflammatory mediators.
The activation of mast cells through IgG immune complexes can initiate inflammation through the release of IL-1. TNF-alpha can induce fibroblasts to produce stem cell factor, which serves as a feedback recruitment loop for mast cells, which in turn produce more inflammatory mediators and thus perpetuate the disease in the affected areas [13].
The experimental method for testing relaxation of pneumatic elements with a rubber-cord casing

A method for the experimental study of the viscoelastic properties of pneumatic elements with a rubber-cord casing is proposed. It is based on relaxation tests in which the height of the pneumatic element is kept constant (a practically incompressible fluid is used as the working medium). The implementation of the method is illustrated by the example of the rubber-cord casing of the balloon type of model N-50. The value of the absorption coefficient (relative hysteresis) is established, and it coincides in order of magnitude with the value of the absorption coefficient of tread rubbers given in literary sources. In the process of holding the height of the pneumatic element at a constant level, the increments of the force of the pneumatic element and of the overpressure are proportional to each other, with a proportionality coefficient depending on the internal volume and height of the pneumatic element. A comparative analysis of the features of the operation of pneumatic elements in cases where the working medium is a gas or a liquid is given. The results obtained are of interest in the development of vibration protection and vibration isolation systems for technical objects whose design includes air springs, air damping shock absorbers and rubber-hydraulic vibration mounts.

Introduction

In systems of vibration protection and vibration isolation of various technical objects, pneumatic elements (PE) with rubber-cord casings are widely used, as they have a number of undeniable advantages. According to available quantitative estimates [1], the rubber-cord composite consists of 92% rubber (near the equator of the balloon-type shell of the model N-50), although almost the entire applied load is carried by the cord fibre. On the other hand, according to the data of cyclic tests of tread rubber with a strain range of about 5...6% at frequencies of 1...10 Hz, the absorption coefficient (relative hysteresis) is of the order of 0.53...0.62 [2]. In other words, the dissipated energy spent on heat generation is approximately 53...62% of the work expended during loading. On this basis, the question naturally arises of how strongly the viscoelasticity of the rubber-cord casing material affects the operation of the pneumatic element. Experimental data of this kind are not given in the scientific and technical literature [3-7].

In the mechanics of a deformable solid, the process of change of deformation in time at constant stress is called creep, and the process of change of stress in time at constant strain is called relaxation [8]. Experimental studies of creep and relaxation processes under uniaxial tension (compression) form the basis of different versions of the technical theory of creep. Following the terminology of the mechanics of a deformable solid, it is appropriate to call the process of change in time of the force of the pneumatic element (and of the overpressure) at a constant height (coordinate) of the pneumatic element relaxation, and the process of change in time of the height of the pneumatic element at constant force creep. Relaxation and creep tests are a convenient tool for the experimental determination of the viscoelastic characteristics of pneumatic elements with a rubber-cord casing. At the same time, a certain gas or liquid (usually air or water) can be used as a working medium in the tests.
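To make the absorption coefficient concrete, the short sketch below computes the relative hysteresis ψ (dissipated energy over loading work) from a sampled force-displacement loading/unloading loop by trapezoidal integration. It is a minimal illustration with synthetic data chosen to land in the quoted 0.53...0.62 range; it is not the authors' measurement code.

# Minimal sketch: absorption coefficient (relative hysteresis) of a
# loading/unloading cycle, psi = (W_load - W_unload) / W_load.
# Synthetic force-displacement data, for illustration only.
import numpy as np

x = np.linspace(0.0, 0.01, 200)                           # displacement, m
f_load = 5.0e5 * x + 1.6e3 * np.sin(np.pi * x / 0.01)     # loading branch, N
f_unload = 5.0e5 * x - 1.6e3 * np.sin(np.pi * x / 0.01)   # unloading branch, N

w_load = np.trapz(f_load, x)       # work expended during loading, J
w_unload = np.trapz(f_unload, x)   # work recovered during unloading, J
psi = (w_load - w_unload) / w_load # dissipated fraction of the loading work
print(f"absorption coefficient psi = {psi:.2f}")  # ~0.58, within 0.53...0.62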
Owing to the high compressibility of air, its use makes it possible to obtain experimentally, with high accuracy, the static (elastic or equilibrium) isobar characteristics of the pneumatic element. On the other hand, this advantage of air as a working medium becomes a disadvantage when determining the viscous properties of a pneumatic element with a rubber-cord casing: small changes in the internal volume of the pneumatic element during relaxation to equilibrium are accompanied by changes in overpressure so insignificant that they cannot be measured with sufficient accuracy. On the contrary, owing to the weak compressibility of water, even small changes in the internal volume of the pneumatic element during relaxation to equilibrium lead to significant changes in water pressure, which can be measured with high accuracy even with conventional manometers. The experimental data established in this case give reliable results when determining the viscous characteristics of a pneumatic element (and also the elastic characteristics, though with less accuracy than in the case of air as the working medium). Thus, in experimental studies of pneumatic elements with a rubber-cord casing, it is advisable to use a combined method, in which the elastic characteristics are determined from tests of the pneumatic element with some gas as the working medium (for example, air), and the viscous characteristics are found from tests with some liquid as the working medium (for example, water).

In the study of relaxation, the rubber-cord casing of the balloon type of the model N-50 was used, which is commercially available from the leading domestic company FSUE «FRPC «Progress» [9].

Experimental method

2.1. Object of study

The pneumatic element with a rubber-cord casing of the balloon type of the model N-50 has an average height (coordinate) x = 112 mm; the maximum compression and rebound stroke is Δx = ±40 mm (from the average position of the pneumatic element). Under normal operating conditions, the working medium is air supplied to the internal cavity, in the middle position of the pneumatic element, under overpressure p_u = 4 bar. The shell maintains its operability at temperatures from -45°C to +50°C.

Working medium. Boiled water was used as the working medium. At a test temperature of 20°C, the volume of water V is related to the overpressure p_u by the equation of state

V = V_0 (1 - β_θ p_u),

where β_θ = 4.525⋅10⁻¹⁰ Pa⁻¹ is the coefficient of isothermal compressibility of water [10], and V_0 is the experimentally determined volume of water poured into the pneumatic element at zero overpressure in a fixed (arbitrarily specified) position of the pneumatic element with coordinate x_0.

Testing and measuring tools

The experimental stand (figure 1) is integrated into the Instron series 8805 servo-hydraulic testing machine, whose built-in sensors automatically record and transmit to the computer the force and displacement of the pneumatic element. The Dynacell load sensor with integrated accelerometer compensates for the inertial load caused by heavy grips and fixtures, with a relative measurement error of 0.5%. The error of the displacement sensor is 0.02 mm. Instron's servo-hydraulic machine software (Bluehill 3, WaveMatrix) allows quasi-static and dynamic testing using virtually any technique, with control of load (up to 100 kN) and displacement (up to 150 mm).
Pressure measurement is carried out by the ZET 7112-I-Pressure-CAN intelligent overpressure sensor with a CAN interface, with a relative measurement error of 0.1%, a sensitivity threshold (the minimum value by which two sequentially measured values are distinguished) of 1 Pa, and a maximum total data recording frequency of 12 kHz. The position of the piston of the hydraulic cylinder and the geometric parameters of the shell shape (equatorial diameter, effective shell height and the height of the pneumatic element along the flanges) are measured with two calipers with an accuracy of 0.01 mm. The volume of liquid poured in is measured by a 500 ml measuring cylinder with an error of 2.5 ml.

Test methodology

After installation in the Instron grips, the pneumatic element is set at the initial height between the flanges x_0. By means of the Bluehill 3 software, this position is assigned as the reference point for the movements of the upper grip. Then the pneumatic element is filled with a measured volume of liquid V_0 and sealed. The movements of the upper grip are controlled via a personal computer using the Bluehill 3 software according to the three-stage program shown in figure 2.

Figure 2. Relaxation test program.

At each of the three stages, the grip is held in a fixed position for a period of time Δt = 40...60 min (a longer exposure time is needed in tests with a greater load of the pneumatic element, so that the relaxation process to equilibrium is closer to completion); the speeds of movement of the upper grip between the stages are constant and equal in absolute value. The values of the initial height x_0, fluid volume V_0, and displacement s_13 at the first and third stages are the determining parameters of a separate test and of the subsequent processing of experimental data; the auxiliary displacement value s_2 at the second stage was assigned from the condition that the overpressure during displacement must not reach the destruction pressure of the pneumatic element. Throughout the test, the force at the upper grip, the displacement of the upper grip and the overpressure were automatically recorded on a computer using the Bluehill 3 and ZETlab software (with a given data acquisition frequency). Subsequently, the force on the upper grip was corrected for the weight of the upper flange and the parts rigidly connected with it, equal to 56.3 N. The experiment is repeated for all initial heights and displacements of interest s_13, s_2.

Experiment results

We illustrate the results of the study using the test example with the initial height of the pneumatic element x_0 = 155 mm, the displacement of the first and third stages s_13 = 60 mm, and the displacement of the second stage s_2 = 68 mm. The volume of liquid poured in was V_0 = 3.709 liter, the exposure time Δt = 60 min, and the speed of the grip between the stages 0.5 mm/s.

2) At a fixed height of the pneumatic element, the equilibrium value of the force of the pneumatic element (and of the overpressure) is unique; it lies somewhere between the corresponding values for the first and third stages, possibly (or possibly not) coinciding with their average value, equal to 31.1 kN (and 9.7 bar). If we proceed from the assumption that the rubber-cord composite, in addition to its elastic properties, has plastic properties (of the dry-friction type between solids), then the first option is legitimate.
On the contrary, if we assume that the rubber-cord composite has elastic and viscous properties (like internal friction in a liquid), then the second option should be recognized as legitimate. Moreover, the effective viscosity of the rubber-cord composite is so large that, according to preliminary rough estimates, the time to reach the equilibrium state (in the test) is many orders of magnitude greater than the age of the Universe, which is about 14 billion years.

Consider the increments of the force of the pneumatic element and of the overpressure from the point in time t_0, the beginning of holding the height of the pneumatic element constant:

ΔP(t) = P(t) - P(t_0),  Δp_u(t) = p_u(t) - p_u(t_0).   (2)

Based on the time dependencies in figure 4, and assuming without loss of generality t_0 = 0, it is possible to construct graphs of the dependence between the increments (2), which are presented in figure 5. As can be seen, the functional relationship between the increment of the force of the pneumatic element and the increment of overpressure is almost linear. In other words, at each of the three stages, the increments ΔP(t) and Δp_u(t) are proportional to each other, with a proportionality coefficient equal to the slope of the corresponding straight-line segments starting from the origin (marked with a dot in figure 4). For the first and third stages the indicated proportionality coefficients coincide with high accuracy, while for the second stage the coefficient is noticeably larger in magnitude. This fact is of great importance in the mathematical modeling of the viscoelastic properties of rubber-cord casings that are part of pneumatic elements. The maximum absolute deviation of the internal volume of the pneumatic element from its initial value V_0 = 3.709 liter is extremely small, 3.778 milliliters, which allows us to speak of the constancy of the internal volume of the pneumatic element to within 0.102%. If air were used as the working medium of the pneumatic element, such small changes in volume would lead to equally small (0.102% for an ideal gas) increments of overpressure (and of the force of the pneumatic element), which can be neglected with sufficient accuracy for practice.

General conclusions

The rubber-cord composite consists of reinforcing cords and a rubber matrix. Cord nylon fabrics, widely used in the manufacture of rubber-cord casings, have pronounced elastic properties and an elongation at break of slightly less than 30% [11]. Rubber (and other elastomers) exhibit viscoelastic properties [12]. During the deformation of rubber, the internal viscous forces slightly dominate the internal elastic forces [2]. Therefore, if there is no slippage of the cord relative to the rubber matrix, the rubber-cord composite is a viscoelastic material. If slippage occurs, then in addition to the viscoelastic properties, the rubber-cord composite will also have plastic properties of the dry-friction type. To increase the durability of rubber-cord casings, special technological methods are used in their manufacture to ensure the best adhesion of the cord threads to the rubber matrix. Therefore, from a technical point of view, the rubber-cord composite is a viscoelastic material. Further, the metal reinforcement of the pneumatic element can be considered undeformable with high accuracy, and the work of the forces of interaction between the reinforcement and the rubber-cord casing at their contact points is negligible owing to the high adhesion of the coating rubber to the metal.
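As a quick numerical cross-check of the volume-constancy figure reported above and of the water equation of state quoted in the methods, the sketch below reproduces the 0.102% value and estimates the overpressure increment that such a volume change implies. It is an illustrative back-of-the-envelope calculation under the linearized equation of state, not the authors' processing code.

# Back-of-the-envelope check of the reported volume constancy (0.102%)
# and the corresponding water overpressure change under the linearized
# equation of state V = V_0 * (1 - beta * p_u). Illustrative only.
beta = 4.525e-10        # isothermal compressibility of water, 1/Pa
V0 = 3.709e-3           # initial water volume, m^3 (3.709 liter)
dV = 3.778e-6           # maximum volume deviation, m^3 (3.778 ml)

rel_dV = dV / V0
print(f"relative volume change: {rel_dV * 100:.3f} %")    # ~0.102 %

# For water, the overpressure change that corresponds to this volume change:
dp_water = rel_dV / beta
print(f"implied water overpressure change: {dp_water / 1e5:.1f} bar")

# For an ideal gas at roughly the stage pressure (~9.7 bar overpressure,
# isothermal), the same relative volume change gives only ~0.1 %:
p_abs = (9.7 + 1.013) * 1e5   # absolute pressure, Pa
dp_gas = p_abs * rel_dV
print(f"ideal-gas pressure change at the same dV: {dp_gas / 1e5:.3f} bar")

The contrast between the two printed pressure changes (tens of bar for water versus hundredths of a bar for air) is exactly why a practically incompressible fluid makes the viscous relaxation measurable.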
From the foregoing, on the basis of the study, the following two conclusions can be drawn: 1) in the process of relaxation at a fixed height of the pneumatic element, the equilibrium values of the force of the pneumatic element and of the overpressure are unique; 2) even at very low deformation rates, the work of the internal viscous forces of the rubber-cord casing of the N-50 model is about 55.2% of the work expended during loading (the latter means that the internal forces of viscous resistance exceed the internal elastic forces of the shell).

According to the theorem on the change in kinetic energy, the rate of change of kinetic energy is equal to the sum of the powers of the external and internal forces. If, for simplicity of analysis, we restrict ourselves to rather slow processes of changing the height of the pneumatic element (as in the tests) and conditionally set the absolute pressure in the environment surrounding the pneumatic element to zero, then the work of the external forces applied to the pneumatic element assembly will be equal to the work of the internal forces of the pneumatic element taken with the opposite sign. In turn, taking into account the foregoing, the work of the internal forces of the pneumatic element consists of the work of the internal forces of the rubber-cord shell and the work of the internal forces of the working medium. When the working medium in the pneumatic element is a practically incompressible fluid, as for example in rubber-hydraulic vibration mounts [13], the work of the internal forces of the working medium is zero. Therefore, the work of the external forces coincides in modulus with the work of the internal forces of the rubber-cord casing, in which the share of the work of the internal viscous forces predominates. From this we conclude that in the mathematical modeling of rubber-hydraulic vibration mounts it is necessary to take into account the inelasticity of the rubber-cord shell material in order to ensure acceptable accuracy of engineering calculations.

In most technical applications [3-7], air is used as the working medium in the pneumatic element. In this case, the work of the internal forces of the working medium is nonzero and, as a rule, significantly exceeds the work of the internal forces of the rubber-cord casing. Indeed, in the traditional method of calculating pneumatic elements [4,5], it is assumed that the rubber-cord casing is absolutely flexible and its middle surface is inextensible. Therefore, the work of the internal forces of the rubber-cord shell is zero, and the work of the external forces is completely determined by the work of the internal forces of the working medium, which have an overwhelmingly elastic character (the dynamic viscosity coefficient of air and other gases is extremely small). When compared with the experimentally obtained static (equilibrium) isobar force characteristics of pneumatic elements [7], the traditional method shows a large error, reaching 10...50% for rubber-cord shells of the balloon type [14] and 60...90% for rubber-cord casings of the sleeve type [15]. An acceptable agreement between the experimental and calculated data can be achieved by taking into account the elastic deformations of the rubber-cord casing [14,15]. Therefore, the noted relative errors indicate the order of magnitude of the correction for the contribution of the internal elastic forces of the rubber-cord membrane to the force of the pneumatic element.
On the other hand, as shown above, the internal elastic forces of the rubber-cord shell are of the same order of magnitude as, but smaller than, the internal viscous forces. It follows that, depending on the design and size of the pneumatic element, the work of the internal viscous forces of the rubber-cord shell will amount to about 10% and higher of the work of the external forces applied to the pneumatic element. Of course, this estimate is approximate, but it allows us to draw the qualitative conclusion that additional theoretical and experimental studies of the viscous properties of pneumatic elements with a rubber-cord shell are needed in order to increase the accuracy of calculations of vibration-protection and vibration-isolation systems that include pneumatic and hydraulic elements with a rubber-cord casing.

Conclusions

The proposed experimental method is based on a three-stage test program for the relaxation of rubber-cord casings of pneumatic elements using a fluid as the working medium. Owing to this, from the relaxation curves for the first and third stages of the test, at which the height of the pneumatic element has the same value but the rate of change before holding has opposite signs, it is possible to more reliably estimate the values of the force of the pneumatic element and overpressure, which are achieved after
The effects of cognitive-motor dual-task training on athletes' cognition and motor performance

Background

Cognitive-Motor Dual Task (CMDT) training has been widely utilized in rehabilitation and sports practice. However, whether CMDT training can better enhance athletes' cognitive-motor performance compared to traditional single-task (ST) training remains unclear.

Method

A systematic review that complied with PRISMA was carried out (Prospero registration number: CRD42023443594). The electronic databases used for the systematic literature search, from inception through 13 June 2023, included Web of Science, Embase, PubMed, and the Cochrane Library. After obtaining the initial literature, two researchers independently assessed it based on inclusion and exclusion criteria. Finally, the included literature was analyzed to compare the differences between ST training and CMDT training.

Results

After screening 2,094 articles, we included 10 acute studies and 7 chronic studies.

Conclusion

This systematic review shows that athletes typically show a degradation of performance in CMDT situations as opposed to ST when evaluated transversally. However, this performance decline is notably reduced following longitudinal CMDT training, indicating the effectiveness of sustained CMDT training in enhancing cognitive-motor performance under dual-task conditions. Our study provides new insights into the application of CMDT in the field of sports training. Practitioners can utilize CMDT to assess athletic skill levels or optimize the cognitive-motor performance of athletes, taking into account the specific needs of each sport.

Systematic review registration: https://www.crd.york.ac.uk/prospero, identifier CRD42023443594.

Introduction

In the sphere of athletic development, it is argued that a training regimen which mirrors, to the highest degree possible, the demands inherent to actual competition yields the most substantial transfer effects on athletes' competitive performance (Murphy et al., 2016). Consequently, optimal training is posited to be that which converges with the reality of competition (Halouani et al., 2014; Murphy et al., 2016). The rapid advancement of modern competitive sports, along with the corresponding increase in competitive intensity among athletes, has given rise to this concept. Superior performances are often the emergent properties of a multifaceted matrix that intricately intertwines components such as rigorous training (Laursen and Jenkins, 2002; Smith, 2003; Sarmento et al., 2018), honed skills (Hrysomallis, 2011; Suchomel et al., 2016), and inherent talents (Smith, 2003; Breitbach et al., 2014; Varillas-Delgado et al., 2022). The progressive strides made in the fields of sports science and sports psychology have incrementally augmented our understanding of competition-centric training. Historically, the focus of inquiry gravitated predominantly toward the tangible, physical aspects of training, such as fitness enhancement and technical skill refinement (Beattie et al., 2014; Wortman et al., 2021). However, the present-day narrative has witnessed a paradigmatic shift, with a surge in the number of researchers turning their investigative lens toward the pivotal role cognition plays within the sphere of athletic training (Broadbent et al., 2015; Slimani et al., 2016; Bühlmayer et al., 2017; Emirzeoğlu and Ülger, 2021). In the crucible of real-world competition, athletes are mandated to draw from a well-rounded skill set (Broadbent et al., 2015). This necessitates not only a sturdy
foundation of physical robustness and technical prowess but also the ability to swiftly seize evanescent opportunities amidst complex athletic environments (Fuster et al., 2021). This dexterity enables athletes to execute a variety of technical maneuvers in a timely fashion, thereby optimizing their victory potential (Sabarit et al., 2020).

Consider the paradigm of a basketball match. A point guard, tasked with both dribbling and scanning the court, must maintain a keen awareness of the positions of teammates and opponents. This situational awareness allows the point guard to distribute the ball optimally, entrusting it to the player with the greatest opportunity at a given moment, hence setting the stage for an offensive maneuver. This scenario exemplifies the characteristic features of dual-tasking (DT; Bronstein et al., 2019), a subject of growing interest in contemporary sports research. Furthermore, extending this concept to incorporate the notion of "incorporated/added DT" as proposed by Herold et al. (2018) provides a more nuanced understanding of DT in sports contexts. This approach, differing from the traditional DT framework, involves the intentional addition of an extra cognitive task alongside the primary motor activity. For instance, a point guard engaged in regular dribbling and court scanning might also be tasked with an additional memory or attention challenge. This integrated approach enables a more precise evaluation of the interplay and coordination between cognitive and motor tasks, offering a means to control and quantify cognitive load in real-time sports situations. The application of the "incorporated/added DT" methodology not only mirrors the complex realities of sports competitions but also allows for a deeper exploration into how athletes maintain a balance between motor skills and situational awareness under varying cognitive demands. Insights gained from this perspective are crucial for developing training methods that enhance cognitive-motor coordination and overall athletic performance, particularly in sports that demand high levels of strategic thinking and quick decision-making.

Traditional athletic training acknowledges the importance of the periodized arrangement of individual training tasks, such as technique, physical fitness, tactics, and psychology, for optimizing athletes' performance to the maximum extent (Issurin, 2010; Hartmann et al., 2015). However, a fundamental difference exists between the actual demands faced by athletes, who complete cognitive and motor tasks simultaneously in competitive scenarios, and a training mode that involves sequentially completing technical and tactical exercises. This discrepancy may limit the transfer effect of training. Therefore, researchers in sports science and psychology have gradually begun to pay attention to cognitive-motor dual-task (CMDT) training (Gabbett et al., 2011), which creatively combines specialized athletic techniques with cognitive tasks in the hope of enhancing athletes' performance in actual competitions.
In the field of Cognitive-Motor Dual-Task (CMDT) training, distinct streams of research have emerged, each focusing on different applications and outcomes. Athletic training research primarily seeks theoretical and methodological advancement for performance enhancement. In this domain, studies have explored how CMDT can be utilized for the simultaneous development of physical and cognitive skills in professional athletes, as in the training routines of NBA players like Jeremy Lin, who performs dribbling and arithmetic tasks concurrently. On the other hand, athletic rehabilitation research has focused more on using CMDT for post-injury recovery. Much of the current evidence for the benefits of CMDT training, surprisingly, did not originate from athletic training research but rather from the fields of athletic rehabilitation and athletic practice (Pang et al., 2018; Gallou-Guyot et al., 2020; Tuena et al., 2023). CMDT has shown promise in improving patients' neuromuscular functions and motor-cognitive abilities, aiding the recovery of normal function post-injury. This is evident in the improvement of physical function and cognitive-motor performance in individuals with conditions such as Parkinson's disease (Pereira-Pedro et al., 2022), stroke (Liu et al., 2017; Zhou et al., 2021), and falls (Lord and Close, 2018). Furthermore, clinical research has demonstrated the significant contributions of CMDT to clinical risk assessment and prognostic evaluation. For instance, CMDT approaches combining walking and cognitive tasks are used to assess concussion risk in athletes or to evaluate recovery status in concussion patients (Howell et al., 2017a,b).

Despite the accumulation of substantial evidence supporting CMDT training in areas such as rehabilitation therapy, current research on CMDT within the sports science community is still in its infancy (Moreira et al., 2021). The exploration of CMDT training in the field of sports training remains limited, and the mechanisms and temporal progression of CMDT adaptation are still not fully understood (Moreira et al., 2021). For instance, while existing evidence has affirmed the potential benefits of CMDT for motor-cognitive performance, some studies have pointed out that the execution of DT in open-skilled sports is subject to strict time constraints (Baumeister, 1984). This may impose an excessive cognitive load on individuals in a short time, causing a drastic decline in overall performance. Moreover, although previous systematic reviews have discussed the impact of DT training on athletes, the literature includes an excess of single-cognitive-type DT (Moreira et al., 2021), which clearly does not align with task characteristics during sports competition. Finally, owing to considerable heterogeneity in CMDT intervention strategies for athletes across different sports, the applicability of this method in the field of sports remains indeterminable. Thus, the objective of this article is to systematically evaluate the impact of CMDT on the cognitive functions and athletic performance of athletes, in the hope of providing a theoretical foundation for subsequent research and offering guidance for coaches and related practitioners in formulating and adjusting sports training plans. This systematic review is in alignment with the standards set by the "Preferred Reporting Items for Systematic Reviews and Meta-Analyses" (PRISMA; Moher et al., 2009; Prospero registration number: CRD42023443594), and the included literature is organized and analyzed in accordance with its
requirements. Given the observed considerable heterogeneity in the methodologies and measurement methods of the included studies (please refer to the Supplementary Figures S1-S6), we were unable to conduct a meta-analysis.

Search strategy

This systematic review encompasses all literature available up to June 2023. Researchers Junyu Wu and Peng Qiu independently searched the PubMed, Web of Science (WOS), Embase, and Cochrane Library databases to find studies relevant to the topic. The search strategy was developed based on previous systematic reviews and was improved upon (Moreira et al., 2021). It is divided into the following parts: (1) dual task and its synonyms, (2) athletes and their synonyms, (3) athletic performance and its synonyms, (4) cognitive performance and its synonyms. The third and fourth components are joined by "OR," and the remaining parts are interconnected by "AND," constituting the search equation. The specific search string is as follows: "Cognitive motor" OR "dual task paradigm" OR "dual-task" OR "dual task" OR "double task" OR "multi-task" OR "divided attention" OR "secondary task" OR "second task" AND "athletes" OR "players" OR "player" OR "athlete" AND "working memory" OR "visual" OR "decision making" OR "gaze behavior" OR "attention" OR "athletic Performance" OR "athletic performances" OR "sports performance" OR "performance, sports."

Criteria for inclusion and exclusion

This systematic review adopts the PICO principles, as espoused by the Cochrane Collaboration, to establish the criteria for document inclusion. The established criteria are as follows: (1) participants in the study comprise athletes at any competency level, emphasizing the universality of Cognitive-Motor Dual Tasking (CMDT); (2) the study concurrently reports on athletes' performance under both single-task (ST) and dual-task (DT) conditions; (3) at a minimum, either cognitive performance or athletic performance of the athletes is reported. Exclusion criteria dictate the removal of a document under any of the following circumstances: (1) the study is a biomechanical investigation of conditions under ST and DT; (2) the participants are injured, cognitively impaired, or physically handicapped; (3) the dual task does not pair a motor task with a cognitive task but merely pairs two tasks of the same type.

Data extraction

Data were extracted based on the established inclusion criteria, with the final data comprising the following elements: (1) fundamental bibliographic details, including author names, title, and year of publication; (2) sample size; (3) characteristics of the participants, including age, gender, training history, and level of skill; (4) types of intervention strategies, encompassing acute or training interventions, duration of intervention periods, frequency, volume of training, specific intervention methods, etc.; and (5) outcome measures, including primary outcome indicators and associated results. In cases of missing data within the literature, we reached out to the authors through email to request the missing data. We used WebPlotDigitizer software (Version 4.0; United States) to extract result data (mean ± standard deviation) reported only in graphic form. Two researchers independently extracted the data using tables, then merged the data. In cases of disagreement, a third researcher was consulted for a final decision.
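As a reproducibility aid for the search strategy described above, the sketch below shows one way the search equation could be assembled programmatically and run against PubMed. It assumes the Biopython Entrez module and a placeholder e-mail address; it is an illustration, not the authors' actual retrieval script, and the term lists are abbreviated.

# Sketch: assemble the review's search equation and count PubMed hits.
# Assumes Biopython (`pip install biopython`) and a placeholder e-mail;
# illustrative only, with abbreviated term lists.
from Bio import Entrez

dual_task = ['"Cognitive motor"', '"dual task paradigm"', '"dual-task"',
             '"dual task"', '"double task"', '"multi-task"',
             '"divided attention"', '"secondary task"', '"second task"']
athletes = ['"athletes"', '"players"', '"player"', '"athlete"']
outcomes = ['"working memory"', '"visual"', '"decision making"',
            '"gaze behavior"', '"attention"', '"athletic performance"',
            '"sports performance"']

query = (f'({" OR ".join(dual_task)}) AND ({" OR ".join(athletes)}) '
         f'AND ({" OR ".join(outcomes)})')

Entrez.email = "reviewer@example.org"  # placeholder address
handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
record = Entrez.read(handle)
handle.close()
print(query)
print("PubMed hits:", record["Count"])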
Long-term studies were defined as those in which the intervention plan and period were clearly reported, with ST serving as the control group and CMDT as the experimental group. If there was no explicit CMDT plan and period, or if only a one-time report of ST and CMDT performance was provided, the study was classified as an acute study. More precisely, acute studies conduct only a transversal ST/CMDT evaluation, not training. In the incorporated acute studies, if certain participants failed to fulfill the inclusion and exclusion criteria, we confined our data extraction solely to the healthy athletes who satisfied these criteria. If the included long-term studies conducted multiple tests at different time points before and after the intervention, we extracted only the baseline data before the intervention and the immediate data after the intervention. If the included long-term studies compared multiple CMDT groups with a control group (CON), we selected only the CMDT group with the lowest difficulty, to minimize the impact of CMDT difficulty on intervention effects.

Risk of bias

To minimize potential biases in our results, we rigorously controlled the quality of the included literature, with quality assessments conducted independently by two researchers (Junyu Wu and Peng Qiu). For the assessment, we adopted a modified version of the Quality Index Scale (Downs and Black, 1998), which reduced the number of evaluation questions from the original 24 to 14. This modified scale has recently been utilized and widely applied in similar studies within the field of sports (Bujalance-Moreno et al., 2019). The key dimensions assessed by the scale are: (1) clarity of the objectives; (2) clarity of the description of the primary outcomes to be measured; (3) clarity of the description of participant characteristics; (4) clarity of the description of the primary results; (5) presence of random variability estimation in the primary results; (6) clarity in the reporting of specific p-values associated with the primary results; (7) representativeness of the selected participants; (8) implementation of blinding; (9) clarity in describing data dredging, if utilized, for the primary data; (10) accuracy of the outcome measures for the primary results; (11) appropriateness of the statistical tests employed for the primary results; (12) allocation of subjects (experimental design, case-control, or cohort study); (13) random assignment of subjects to intervention groups; and (14) adjustment for confounding factors in the analysis of the main conclusions. Each question is answered in a "Yes/No" format, where each "Yes" response earns one point and a "No" response scores zero, thereby enabling scoring of the overall quality of the study. The findings of the risk-of-bias assessment are detailed in Tables 1 and 2.
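The short sketch below shows how such a 14-item checklist yields the normalized scores reported in the results (for example, 0.92 or 0.83); it is a minimal illustration in which the item keys are hypothetical abbreviations of the dimensions listed above, and the scoring simply averages the Yes/No responses.

# Minimal sketch: normalized quality score from a 14-item Yes/No checklist
# (modified Downs & Black Quality Index). Item keys are hypothetical labels.
from typing import Mapping

ITEMS = ["aim", "outcomes_described", "participants", "findings",
         "variability", "p_values", "representative", "blinding",
         "data_dredging", "outcome_accuracy", "stats_appropriate",
         "allocation", "randomized", "confounding"]

def quality_score(answers: Mapping[str, bool]) -> float:
    """Average of Yes (1) / No (0) answers over the 14 items."""
    if set(answers) != set(ITEMS):
        raise ValueError("answers must cover exactly the 14 checklist items")
    return sum(answers[item] for item in ITEMS) / len(ITEMS)

# Example: a study failing only the blinding item scores 13/14, i.e. ~0.93.
example = {item: True for item in ITEMS}
example["blinding"] = False
print(f"{quality_score(example):.2f}")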
Results

Figure 1 (flow chart of the literature search steps) illustrates the literature retrieval process. As Figure 1 indicates, our search of the aforementioned four databases yielded 2,094 articles. Duplicate entries were eliminated using Endnote 9.1X, leaving a total of 1,833 articles. An initial screening, predicated on the examination of titles and abstracts, pinpointed documents that satisfied the inclusion and exclusion criteria, leading to the selection of 96 articles that necessitated a detailed review. Ultimately, 28 studies were incorporated into the review, with 21 studies examining the acute effects of ST and DT, and 7 studies evaluating long-term effects. Two independent researchers (Junyu Wu and Peng Qiu) conducted each step of the process. In instances of disagreement, a third researcher (Youqiang Li) jointly adjudicated on the inclusion of the document.

Tables 1 and 2 present the quality assessment results of the acute and chronic studies, respectively. According to Table 1, the highest quality score among the acute studies was 1 and the lowest was 0.75. According to Table 2, the highest quality score among the chronic studies was 0.92 and the lowest was 0.83. These results indicate that the articles included in our study demonstrate a moderate to high level of quality.

Table 3 shows the cognitive-motor performance of subjects during the transversal ST and CMDT evaluations in each acute study (10 articles in total). The primary objectives of these studies can be categorized as follows: (1) to simulate a match or a critical part of a match (with a much higher cognitive load) using the CMDT in order to assess athletes' mastery of motor skills in this complex scenario; (2) to investigate the performance differences between high-level and low-level athletes in ST and CMDT situations, thereby demonstrating the superior sensitivity of CMDT acute assessments over ST. These two types of studies usually involve creating a situation highly similar to a particular sport, in which athletes complete a primary sport-related task (such as tennis, volleyball, football, table tennis, soccer, or fencing) while simultaneously undertaking a cognitive task (primarily auditory, visual, memory, or arithmetic tasks). Except for one sub-group in one study that reported superior performance under CMDT conditions compared to ST (the study of Amico and Schaefer, 2022, in which high-level tennis players achieved a higher number of hits under DT conditions than under ST), all acute studies reported superior performance under ST than under DT, regardless of whether cognitive or motor performance was measured.

Table 4 presents the basic information of the long-term studies included in this review. This systematic review incorporated seven long-term studies on the impact of ST and CMDT on the cognitive-motor performance of athletes. The purpose of all long-term studies was to improve the adaptability of athletes to CMDT, with the aim of enhancing the transfer effect on general cognitive ability or specific athletic ability, thereby improving the cognitive-motor performance of athletes. Generally speaking, all included studies reported a significant improvement in most indicators of cognitive-motor performance in athletes after the CMDT training intervention, with only a few indicators showing no statistically significant improvement compared to ST training.
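Because most of the included studies report ST and DT performance side by side, a common way to summarize them is the dual-task cost (DTC), the relative performance loss under DT. The sketch below illustrates that calculation with hypothetical values; the formula shown is the conventional one and not necessarily the exact metric used by every included study.

# Minimal sketch: dual-task cost (DTC) as the relative performance loss
# under dual-task conditions, DTC = (ST - DT) / ST * 100, for measures
# where higher is better. Conventional formula; illustrative values only.
def dual_task_cost(st_score: float, dt_score: float) -> float:
    """Percent performance decline from single-task to dual-task."""
    if st_score == 0:
        raise ValueError("single-task score must be non-zero")
    return (st_score - dt_score) / st_score * 100.0

# Hypothetical example: 48 successful passes under ST vs. 39 under DT.
print(f"DTC = {dual_task_cost(48, 39):.1f} %")   # 18.8 % decline

# A negative DTC (e.g., the elite tennis sub-group that hit more balls
# under DT) indicates better performance under dual-task conditions.
print(f"DTC = {dual_task_cost(40, 43):.1f} %")   # -7.5 %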
The seven studies individually focused on various sports (football, rugby, basketball, badminton, beach volleyball), and as a result the athletic tasks were formulated to reflect the particular skills demanded by each of these sports. In six of the seven studies, the cognitive tasks involved visual response tasks or 3D multi-target tracking tasks; only one study implemented the cognitive task through auditory stimulation. Given the different demands for CMDT in different sports, coupled with significant variance in the specific athletic tasks, considerable heterogeneity exists in the design of the intervention methods across studies, and the methods for measuring outcome indicators also varied. In the arrangement of training plans, differences are present in key variables such as intervention duration and frequency among studies (including one study that did not report the duration of single interventions and the weekly training frequency). Within the six long-term studies detailing the duration of individual interventions, the length of single training sessions ranged between 22 and 90 min; the predominant training frequency was twice a week, and the intervention periods extended from 5 to 10 weeks.

Note to Tables 1, 2 (literature quality assessment of the acute and chronic studies): U, unclear. Item 1, Is the hypothesis/aim/objective of the study clearly described? Item 2, Are the main outcomes to be measured clearly described in the Introduction or Methods sections? Item 3, Are the characteristics of the patients included in the study clearly described? Item 6, Are the main findings of the study clearly described? Item 7, Does the study provide estimates of the random variability in the data for the main outcomes? Item 10, Have actual probability values been reported (e.g., 0.035 rather than <0.05) for the main outcomes, except where the probability value is less than 0.001? Item 12, Were those subjects who were prepared to participate representative of the entire population from which they were recruited? Item 15, Was an attempt made to blind those measuring the main outcomes of the intervention? Item 16, If any of the results of the study were based on "data dredging," was this made clear? Item 18, Were the statistical tests used to assess the main outcomes appropriate? Item 20, Were the main outcome measures used accurate (valid and reliable)? Item 22, Were study subjects in different intervention groups (trials and cohort studies), or the cases and controls (case-control studies), recruited over the same period? Item 23, Were study subjects randomized to intervention groups? Item 25, Was there an adequate adjustment for confounding in the analyses from which the main findings were drawn?

Discussion

This systematic review amalgamates and analyzes the relevant literature, revealing that athletes typically experience a degradation in performance under CMDT compared to ST when assessed transversally. However, the implementation of long-term CMDT has been observed to augment cognitive-motor performance in athletes. Within the body of literature investigated in this review, acute CMDT studies are primarily employed to evaluate athletes' tactical skill levels. Conversely, long-term CMDT is treated as a supplementary training modality designed to induce positive adaptation in athletes through sustained stimuli, thereby bolstering cognitive-motor performance in specified contexts. These findings substantiate the long-term advantages of CMDT in the domain of athletic training. Based on the existing body of evidence, CMDT emerges as a potent adjunct training tool within the sphere of sports training, poised to enhance the cognitive-motor performance of athletes engaged in cognitively demanding sports. Additionally, these insights lay the groundwork for sports training professionals, including coaches and athletes, to acquire a more nuanced comprehension of the time-related dynamics and evolutionary trends in CMDT training. This knowledge will empower them to craft or refine training regimens to optimize athletes' performance.
Our findings are consistent with those of previous studies, which concluded that transversal CMDT evaluations typically lead to a sharp decline in athletes' performance compared to ST (Moreira et al., 2021). However, as athletes gradually acclimatize to this unique stimulus, sustained exposure to CMDT ultimately leads to an improvement in their cognitive-motor performance. The abrupt reduction in performance in response to an acute CMDT can be accounted for by cognitive load theory (Baumeister, 1984; Fuster et al., 2021). According to this theory, an individual's working memory capacity is finite. In this context, type 2 processing refers to slow, deliberate, and effortful cognitive activities, which are more resource-intensive and can only manage a limited amount of information within a specified period (Furley et al., 2015). In a CMDT scenario, when an ancillary task abruptly elevates the cognitive load, a "choking" effect ensues, ultimately resulting in a sharp decline in performance (Baumeister, 1984; Moher et al., 2009). This performance drop appears to be closely tied to the athlete's level of training and the complexity of the CMDT. For example, studies have shown that athletes of higher competence deliver superior performance under CMDT conditions (Gabbett and Abernethy, 2012; Schaefer and Scornaienchi, 2020; Amico and Schaefer, 2022). Interestingly, in Amico and Schaefer's study, elite tennis players even hit the ball more often in the DT than in the ST situation (Amico and Schaefer, 2022). According to the DT effect model described by Plummer et al. (2014), the exceptional performance observed under CMDT conditions in that study may indicate the elite athletes' ability to optimize task management and resource allocation, resulting in enhanced performance. Notably, Gabbett et al. (2011) even used a specialized CMDT test in rugby as a tool to assess the technical level of national-grade rugby athletes. While earlier studies suggested that athletes with a wealth of professional experience can excel under CMDT conditions thanks to their superior working memory capacity, recent studies indicate that a superior working memory capacity does not invariably lead to improved DT performance (Laurin and Finez, 2020). Although a majority of studies confirm the importance of working memory capacity in enhancing DT performance (Baumeister, 1984; Furley et al., 2015; Moreira et al., 2021), additional studies are necessary to unravel this intricate mechanism. Further, there is a discernible correlation between the complexity of CMDT and performance (Gabbett et al., 2011; Gabbett and Abernethy, 2012). In an assessment of this correlation, Gabbett et al. (2011) compared the CMDT performance of national-level rugby players under 2 vs. 1, 3 vs. 2, and 4 vs. 3 passing scenarios, revealing a decline in performance as the offense-defense scenarios grew increasingly complex. Importantly, their series of studies found that, under real match conditions, the frequency of utilizing these techniques in 2 vs. 1, 3 vs. 2, and 4 vs. 3 scenarios progressively decreases. Due to a high turnover rate, athletes barely employ this technique in 4 vs.
3 situations. In actual sports competitions, the influence of acute Cognitive-Motor Dual-Task (CMDT) conditions on the cognitive-motor performance of athletes is more intricate than initially apparent. It is not only subject to interference from the surge in cognitive load under DT conditions; the physiological load on the athletes also impacts their performance (Schapschröer et al., 2016). As athletes grow increasingly fatigued, their cognitive function correspondingly declines, leading to a rise in decision-response time and error rate (Schapschröer et al., 2016). Conversely, when the cognitive load on an athlete surges, type 2 processing allocates more working memory to the cognitive task. The scattered attention subsequently results in a significant drop in the execution efficiency of the motor task, culminating in an overall performance decline (Baumeister, 1984). Given that the cognitive-motor performance of athletes on the field is influenced by the interplay of physiological load and cognitive load, we posit that it is necessary to introduce CMDT as a supplementary training regimen in sports that demand high cognitive loads, such as team ball games. This strategy will help athletes better manage the intricacies of performing simultaneous cognitive and motor tasks during competition, potentially leading to improved performance. Furthermore, an athlete's capability to swiftly and accurately interpret the dynamic elements of the game (Piras et al., 2014; Roca et al., 2018; Li et al., 2023), such as the displacement direction and velocity of teammates, opponents, and objects like the ball, is pivotal. Rapidly adapting to these ever-changing spatial and temporal factors is a critical aspect of cognitive-motor coordination (Li et al., 2023). In team ball sports, for example, players must not only be cognizant of the present positions of others but also adept at predicting and responding to their potential trajectories and speeds. This heightened spatial-temporal awareness is essential for making strategic decisions and executing precise physical actions (Voyer and Jansen, 2017). Consequently, incorporating training elements into CMDT that emphasize skill development in perceiving and responding to these dynamic displacements is vital for optimizing cognitive and motor task performance.

While previous studies have supported the potential benefits of long-term DT training for athletes, the systematic review of Moreira et al. (2021) did not specifically discuss the application of CMDT in the field of sports training. Considering the cognitive-motor demands and the interaction between physiological and cognitive loads in athletes' real-life competitive scenarios, we excluded all DT studies that focused on a single cognitive or motor task; the results remained consistent. However, among all the included long-term studies, only a few quantified athletes' cognitive load and physiological load, and none objectively quantified physiological load using specific metrics. This lack of quantification of physiological load poses challenges in explaining the long-term effects of CMDT. Subsequent research should include pertinent measures to gauge load, enhancing the understanding of the sustained effects of CMDT.
Currently, the underlying mechanisms through which CMDT enhances cognitive-motor performance in athletes remain unclear. Prior research indicates that DT training enhances the evolution of perceptual-cognitive strategies by augmenting attentional distribution and aiding in the discernment of crucial details relevant to the task (Bherer et al., 2008). For instance, Ducrocq et al. (2017) found that DT training significantly increased the duration of fixations, thereby providing more informative cues for tactical analysis and decision-making. Additionally, the Allocation and Scheduling Hypothesis (Strobach, 2020), a classical theory explaining the long-term training effects of CMDT, offers another perspective on athletes' performance improvements following CMDT training. This hypothesis posits that CMDT training enhances the allocation and scheduling of cognitive resources in integrated tasks, thereby enhancing CMDT performance. For example, Fleddermann and Zentgraf (2018) observed improvements in sustained attention and processing speed, contributing to enhanced CMDT performance. Furthermore, recent studies by Lucia et al. (2021, 2023), utilizing event-related potentials in a series of investigations involving semi-professional adolescent basketball players, suggest that the potential mechanisms underlying the long-term effects of CMDT may involve enhanced anticipatory brain processing capabilities in the prefrontal cortex along with increased post-perceptual activity associated with decision-making. They propose that CMDT can enhance cognitive functioning through neuroplasticity processes in the brain to achieve specific sport-related goals (Lucia et al., 2021).

Although this systematic review provides new insights into the application of CMDT in sports training, it has several limitations. Firstly, the included studies generally lacked detailed descriptions of key training variables. For instance, intensity, inter-set rest periods, cognitive load, and physiological load were often inadequately reported. Even when some studies described athletes' physiological and cognitive loads, the measurement methods were often subjective, lacking objective indicators. This makes it difficult to discern the relationship between load and adaptation while also reducing the practical applicability of research findings in real-world settings. Secondly, the majority of studies, especially those investigating acute effects, tended to be conducted in laboratory settings, which presents a challenge in simulating game elements as closely as possible. To promote the widespread adoption of CMDT training methods in sports, future studies should be conducted in sports-specific environments. Previous studies have suggested that conducting small-sided games or game simulations on sports fields helps replicate real tactical and technical situations (Davids et al., 2013), which would be more meaningful in the context of sports training. Lastly, the notable variation in participant characteristics, such as age, gender, and sports proficiency, combined with diverse methodological approaches in the field, has presented challenges in synthesizing research findings. To address this issue, future studies should focus on minimizing these differences by adopting more uniform and standardized methods in CMDT training research. Indeed, given the different demands for CMDT across sports, coupled with significant variance in specific athletic tasks, considerable heterogeneity exists in the design of intervention methods across studies, and the methods for measuring outcome indicators also varied. In the arrangement of training plans, differences are present in key variables such as intervention duration and frequency (including one study that did not report the duration of single interventions and weekly training frequency). Within the six long-term studies detailing the duration of individual interventions, the length of single training sessions ranged from 22 to 90 min; the predominant training frequency was twice a week, and intervention periods extended from 5 to 10 weeks.

In summary, future CMDT experimental research aiming to enhance athletes' cognitive-motor performance should be conducted as much as possible in real sports settings, with an emphasis on detailed reporting of key training variables to better facilitate optimal cognitive-motor performance in athletes.
Conclusion

This systematic review posits that athletes generally exhibit a decline in cognitive-motor performance when assessed transversally under CMDT, as compared to ST. However, in contrast to ST training, athletes demonstrate a more pronounced improvement in cognitive-motor performance following prolonged CMDT training. Our study provides new insights into the application of CMDT in the field of sports training. Practitioners can utilize CMDT to assess athletic skill levels or optimize the cognitive-motor performance of athletes, taking into account the specific needs of each sport.

FIGURE 1 Flow chart of literature search steps.

TABLE 1 Literature quality assessment of acute effects studies.
TABLE 2 Literature quality assessment of chronic effects studies. Item 1, Is the hypothesis/aim/objective of the study clearly described?; Item 2, Are the main outcomes to be measured clearly described in the Introduction or Methods sections?; Item 3, Are the characteristics of the patients included in the study clearly described?; Item 6, Are the main findings of the study clearly described?; Item 7, Does the study provide estimates of the random variability in the data for the main outcomes?; Item 10, Have actual probability values been reported (e.g., 0.035 rather than <0.05) for the main outcomes, except where the probability value is less than 0.001?; Item 12, Were those subjects who were prepared to participate representative of the entire population from which they were recruited?; Item 15, Was an attempt made to blind those measuring the main outcomes of the intervention?; Item 16, If any of the results of the study were based on "data dredging", was this made clear?; Item 18, Were the statistical tests used to assess the main outcomes appropriate?; Item 20, Were the main outcome measures used accurate (valid and reliable)?; Item 22, Were study subjects in different intervention groups (trials and cohort studies), or were the cases and controls (case-control studies), recruited over the same period?; Item 23, Were study subjects randomized to intervention groups?; Item 25, Was there an adequate adjustment for confounding in the analyses from which the main findings were drawn?

TABLE 3 Effects of acute CMDT on athletes' cognitive-motor performance. In this table, we have provided a concise summary of the statistical values pertaining to the main effects of task types; for a more comprehensive set of statistical details, we recommend referring to the original text. ST, single-task; DT, cognitive-motor dual-task; DPB, dynamic postural balance; SPB, static postural balance. For acute studies, a common approach is conducting mixed-design analysis of variance (ANOVA).

TABLE 4 Effects of chronic CMDT on athletes' cognitive-motor performance. ST, single-task; DT, cognitive-motor dual-task; 3D-MOT, three-dimensional multiple object tracking; TOL, Tower of London; 5 kinds of basketball dribbling including crossover, double crossover, between legs, crossover + between legs, between legs + behind.
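To make the mixed-design ANOVA mentioned in the TABLE 3 note concrete, the sketch below shows how such an ST-vs-DT comparison could be run in Python with the pingouin library; the data frame and its column names (subject, group, task, score) are hypothetical placeholders, not data from any included study.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: each athlete performs under ST and DT
# (within-subject factor) and belongs to a control or CMDT-trained group
# (between-subject factor). Scores are made up for illustration only.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":   ["control"] * 6 + ["trained"] * 6,
    "task":    ["ST", "DT"] * 6,
    "score":   [90, 78, 88, 75, 92, 80, 91, 85, 89, 86, 93, 88],
})

# Mixed-design ANOVA: task (within) x group (between).
aov = pg.mixed_anova(data=df, dv="score", within="task",
                     between="group", subject="subject")
print(aov[["Source", "F", "p-unc"]])
```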
Alpha-enolase promotes cell glycolysis, growth, migration, and invasion in non-small cell lung cancer through the FAK-mediated PI3K/AKT pathway

Background: During tumor formation and expansion, increased glucose metabolism is necessary for the unrestricted growth of tumor cells. Expression of the key glycolytic enzyme alpha-enolase (ENO1) is controversial and its modulatory mechanisms are still unclear in non-small cell lung cancer (NSCLC). Methods: The expression of ENO1 was examined in NSCLC and non-cancerous lung tissues, NSCLC cell lines, and an immortalized human bronchial epithelial cell line (HBE) by quantitative real-time reverse transcription PCR (qRT-PCR), immunohistochemistry, and Western blot, respectively. The effects and modulatory mechanisms of ENO1 on cell glycolysis, growth, migration, invasion, and in vivo tumorigenesis and metastasis in nude mice were also analyzed. Results: ENO1 expression was increased in NSCLC tissues in comparison to non-cancerous lung tissues. Similarly, the NSCLC cell lines A549 and SPCA-1 also express higher ENO1 than the HBE cell line at both mRNA and protein levels. Overexpressed ENO1 significantly elevated NSCLC cell glycolysis, proliferation, clone formation, migration, and invasion in vitro, as well as tumorigenesis and metastasis in vivo, by regulating the expression of glycolysis, cell cycle, and epithelial-mesenchymal transition (EMT)-associated genes. Conversely, ENO1 knockdown reversed these effects. More importantly, our further study revealed that stably upregulated ENO1 activated FAK/PI3K/AKT and its downstream signals to regulate the glycolysis, cell cycle, and EMT-associated genes. Conclusion: This study showed that ENO1 is responsible for NSCLC proliferation and metastasis; thus, ENO1 might serve as a potential molecular therapeutic target for NSCLC treatment. Electronic supplementary material: The online version of this article (doi:10.1186/s13045-015-0117-5) contains supplementary material, which is available to authorized users.

Introduction

Lung cancer arises from the bronchial mucosal epithelium and is the leading cause of cancer mortality worldwide. Non-small cell lung cancer (NSCLC) is the most commonly diagnosed type of lung cancer, accounting for approximately 85% of all cases. Although continuous progress has been made in surgical resection, chemotherapy, and radiation therapy [1-3], prognoses have not significantly improved. In recent years, molecular targeted therapy [4,5] has become the most prevalent approach. Therefore, understanding the molecular alterations in NSCLC and their pathways is significant for molecular targeted therapy. During tumor formation and expansion, increased glucose metabolism is necessary for the unrestricted growth of tumor cells [6]. Distributed in a variety of tissues, α-enolase (ENO1) was originally described as an enzyme responsible for the glycolytic pathway [7]. In addition to its glycolytic function, accumulating evidence has demonstrated that ENO1 is a multifunctional protein involved in several biological and pathophysiological processes depending on its cellular localization [8]. The molecular weight of the ENO1 protein is 48 kDa. It is expressed in the cytoplasm and considered an oncogene in tumor pathogenesis. However, another transcript of ENO1 can be translated into a 37-kDa c-Myc promoter-binding protein (MBP-1), which represses transcription and is localized in the nucleus [9-11]. Overexpression of ENO1 has been previously demonstrated in several types of tumors including NSCLC [12].
However, investigators have reported conflicting results. Some researchers have shown that the expression of ENO1 was upregulated in NSCLC tissues and was associated with poorer clinical outcomes [13,14]. On the contrary, Chang Y.S. et al. demonstrated that the levels of ENO1 protein were significantly decreased in NSCLC [15] and that overexpression of ENO1 inhibited epithelial-mesenchymal transition (EMT) in the A549 cell line [16]. Therefore, neither the expression nor the functional mechanisms of ENO1 in NSCLC have been clearly established. In order to further validate the role of ENO1 and its molecular basis in NSCLC, we analyzed the expression of ENO1 in human NSCLC tissues and cell lines, as well as its effects on cell glycolysis, growth, migration, and invasion in vitro and tumorigenicity and metastasis in vivo. Our study showed that ENO1 is overexpressed in NSCLC tissues, and upregulated ENO1 promotes cell glycolysis, proliferation, migration, invasion, and tumorigenicity via the FAK/PI3K/AKT pathway. This is the first report of the molecular mechanisms of ENO1 in NSCLC, even more in-depth than our previous report of ENO1 in glioma [17].

ENO1 is highly expressed in NSCLC

Quantitative real-time reverse transcription PCR (qRT-PCR) was used to measure the expression of ENO1 mRNA in 26 fresh primary NSCLC tissues (T), their corresponding para-cancer lung tissues (P), and their corresponding non-cancerous lung tissues (N). The ENO1 mRNA expression level was increased in NSCLC tissues in comparison to non-cancerous lung tissues (P < 0.05) (Figure 1A). The expression levels and subcellular localization of ENO1 protein in 55 paraffin-embedded primary NSCLC specimens and 17 paraffin-embedded non-cancerous lung specimens were measured by immunohistochemical staining (Figure 1B). Expression of ENO1 in both the cytoplasm and the nucleus (MBP-1) was observed in NSCLC tissue, but as ENO1 is only known to localize in the cytoplasm, only this specific staining was evaluated. ENO1 protein was highly expressed in NSCLC tissues compared to non-cancerous lung samples (P = 0.019) (Table 1). Further, ENO1 was observed to be expressed in the cytoplasm but not in the nucleus of NSCLC A549 and SPCA-1 cells by immunofluorescence assay (Figure 1C), and its upregulated expression at both mRNA and protein levels was also found in both cell lines compared to the immortalized human bronchial epithelial cell line HBE (Figure 1D).

Stable ENO1-overexpressed and ENO1-suppressed NSCLC cells as well as transient ENO1-suppressed NSCLC cells were constructed

Since ENO1 expression is higher in SPCA-1 than in A549 (Figure 1D), we first used lentivirus-mediated full-length ENO1-GFP (ENO1) to constitutively overexpress ENO1 in A549 cells in order to assess its role in NSCLC. The result showed that ENO1 expression was markedly upregulated in A549-ENO1 cells compared to control PLV-Ctr cells, and the expression of MBP-1 was not observed (Figure 2A). Further, three lentiviral short hairpin RNA (shRNA) vectors were used to specifically and stably knock down the expression of ENO1 in the SPCA-1 cell line, and the expression levels of ENO1 and MBP-1 were determined by qRT-PCR and Western blot. The result indicated that ENO1 expression was markedly downregulated in shENO1-B and shENO1-C cells compared to their control PLV-shCtr (scrambled control shRNA) cells (Figure 2B). Similarly, the expression of MBP-1 was not observed in SPCA-1 cells (Figure 2B).
To further evaluate the functional significance of ENO1 in NSCLC, small-interfering RNA (siRNA) was used to transiently silence ENO1 in A549 and SPCA-1 cells, and the expression of ENO1 was validated by qRT-PCR and Western blot (Figure 2C).

ENO1 regulates glycolysis in NSCLC cells

To assess the glycolysis changes triggered by ENO1, we used Western blot to detect the expression of lactate dehydrogenase A (LDHA) in ENO1-overexpressed A549 cells and ENO1-suppressed SPCA-1 cells. We found that the protein level of LDHA was markedly increased in ENO1-overexpressed A549 cells. In contrast, the expression of LDHA was clearly decreased in ENO1-suppressed SPCA-1 cells. To further confirm our results, we examined the level of lactate production in ENO1-overexpressed A549 cells and ENO1-suppressed SPCA-1 cells. Consistent with the Western blot results, ENO1-overexpressed A549 cells produced a greater amount of lactate compared to control and untreated cells. Conversely, the production of lactate was significantly lower in ENO1-suppressed SPCA-1 cells than in control and untreated cells, suggesting the involvement of ENO1 in inducing glycolysis in NSCLC (Figure 2D).

ENO1 promotes cell proliferation and clone formation in vitro and tumorigenicity in vivo

Next we assessed the effect of ENO1 expression on A549 cell growth in vitro. The growth curves determined by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assays showed that overexpressed ENO1 significantly elevated cell viability compared to control and untreated cells. MTT assays also showed that transiently suppressed ENO1 significantly decreased cell viability in A549 cells (Figure 3A). Colony formation assays showed that overexpressed ENO1 significantly increased cell proliferation compared to control and untreated cells (Figure 3C). On the contrary, suppressed ENO1 expression in SPCA-1 cells significantly inhibited cell viability (Figure 3B) and clone formation (Figure 3D). To confirm the growth effect of ENO1 in vivo, we performed an in vivo tumorigenesis study by inoculating A549 cells with or without ENO1 overexpression and SPCA-1 cells with or without ENO1 knockdown into nude mice. Mice were sacrificed 15 days after inoculation, with average tumor weights of 0.059 ± 0.016 vs 0.73 ± 0.12 g in the PLV-Ctr vs A549-ENO1 groups and 0.95 ± 0.13 vs 0.435 ± 0.051 g in the PLV-shCtr vs shENO1-B groups, respectively (P < 0.01) (Figure 3E). These results suggest that ENO1 significantly promotes cell growth in vitro and in vivo.

ENO1 promotes cell migration and invasion

To examine the effect of ENO1 on cell migration and invasion, a transwell apparatus and a Boyden chamber coated with Matrigel were used. After 10 h of incubation, an elevated number of migrated cells was observed in A549-ENO1 compared to control and untreated cells (P < 0.01) (Figure 4A). On the contrary, stably suppressed ENO1 expression in SPCA-1 cells inhibited cell migration and invasion in both shENO1-B and shENO1-C cell groups compared to their respective control and untreated cells (P < 0.01) (Figure 4C). Furthermore, similar results were also observed with siRNA-mediated suppression of ENO1 in NSCLC cells (Figure 4B, D). To further assess the effect of ENO1 on NSCLC metastasis in vivo, ENO1-overexpressed A549 cells, ENO1-suppressed SPCA-1 cells, and their control cells were independently injected into the spleens of nude mice.
Fluorescence images showed that a large number of intra-liver metastatic nodules was generated in the mice injected with A549-ENO1 cells, while only a few small clusters were observed with A549 PLV-Ctr cells. In addition, a few small nodules were observed with SPCA-1 shENO1-B cells, while a variety of large clusters were observed with SPCA-1 PLV-shCtr cells. These findings are consistent with the hematoxylin and eosin (H&E)-stained liver sections (Figure 4E). Similar to the results in vitro, ENO1 promotes the metastasis of NSCLC cells.

ENO1 regulates the expression of cell cycle and EMT-associated genes in NSCLC

To further study the mechanism by which ENO1 regulates cell proliferation, migration, and invasion, the protein levels of cell cycle and EMT-associated genes were examined in A549 and SPCA-1 cells with stably overexpressed or suppressed ENO1. In ENO1 stably overexpressing A549 cells, activation of p-Rb (ser 780) was increased, as was the expression of cyclin D1, cyclin E1, and c-Myc. In contrast, the expression of p21 was inhibited. Stably knocking down endogenous ENO1 expression in SPCA-1 inhibited the activation of p-Rb (ser 780), and the expression of cyclin D1, cyclin E1, and c-Myc was decreased, while levels of p21 were upregulated (Figure 5A). We also found that upregulated ENO1 expression elevated the expression of EMT marker genes including snail, vimentin, and N-cadherin, yet inhibited E-cadherin in A549 cells. Conversely, downregulated ENO1 expression in SPCA-1 cells inhibited the expression of these proteins and elevated E-cadherin expression (Figure 5B). Similar changes in the cell cycle regulators cyclin D1 and p21 as well as EMT-associated genes including E-cadherin, N-cadherin, and vimentin were observed in tumor tissues by IHC (Figure 6A, B). However, stably downregulated or upregulated ENO1 did not induce any epithelial-to-mesenchymal morphology transition changes in A549 or SPCA-1 cells (Additional file 1: Figure S1).

ENO1 regulates the FAK-mediated PI3K/AKT pathway to promote cell glycolysis, proliferation, migration, and invasion

PI3K/AKT has been reported to be a key signaling pathway promoting cell proliferation and EMT and can be modulated by FAK [18]. We found that overexpression of ENO1 significantly increased levels of β-catenin and phosphorylated FAK, PI3K, and AKT, but not their total protein levels (Figure 5C). Suppression of ENO1 had the opposite effect on the FAK/PI3K/AKT pathway. To further study the mechanism by which ENO1 regulates cell glycolysis, proliferation, migration, and invasion, ENO1-suppressed SPCA-1 cells were treated with human angiotensin II (Ang II) to induce the phosphorylation of FAK [19]. Ang II treatment reversed the effects of ENO1 knockdown on cell glycolysis, viability, migration, and invasion (Figure 7A-C). We observed a consistent effect on the FAK/PI3K/AKT pathway after Ang II treatment of ENO1-suppressed SPCA-1 cells, whereby levels of p-AKT, LDHA, cyclin D1, c-Myc, p21, and β-catenin were restored (Figure 5D). These results implied that ENO1 is an upstream signaling factor modulating the FAK/PI3K/AKT pathway in NSCLC, and that ENO1 regulates the FAK/PI3K/AKT pathway to promote cell glycolysis, proliferation, migration, and invasion.

Discussion

Upregulated expression of ENO1 has been detected in several cancers, such as glioblastoma [20], head and neck cancer [21], pancreatic cancer [22], and prostate cancer [23]. However, the role of ENO1 in NSCLC is still controversial [13-16] and needs to be further clarified.
In this study, we confirmed that the expression of ENO1 mRNA and protein was frequently increased in NSCLC tissues compared to non-cancerous lung tissues, as well as in NSCLC cells compared to HBE cells. These results are consistent with the report of Chang et al. supporting an oncogenic role for ENO1 in NSCLC [13], but not with the study of Chang Y.S. et al. [15]. In order to evaluate the function of ENO1 and eliminate the influence of MBP-1 in NSCLC, we first performed immunofluorescence and observed that ENO1 was expressed in the cytoplasm but not in the nucleus (MBP-1) in A549 and SPCA-1 cells. Furthermore, we also found by Western blot assay that MBP-1 was not expressed in A549 and SPCA-1 cells. These results suggested that both cell lines could be used as well-defined models to evaluate the function of ENO1 in NSCLC. Further, stable ENO1-overexpressed A549 cells and stable ENO1-suppressed SPCA-1 cells, as well as transient ENO1-suppressed A549 and SPCA-1 cells, were constructed and used to investigate the role of ENO1 in NSCLC. ENO1 was originally described as an enzyme responsible for the glycolytic pathway. To further assess the effect of ENO1 on NSCLC cells, we analyzed the glycolysis changes triggered by ENO1 and found that overexpressed and suppressed ENO1 respectively increased and decreased the production of lactate. These data suggested that ENO1 is involved in inducing glycolysis in NSCLC. The biological functions of ENO1 found in this study provide a mechanistic basis for the pathological and clinical observations. When we examined the key regulators of glycolysis and of the cell cycle at the G1-S phase transition, we discovered that suppression of ENO1 inhibited the expression of LDHA, c-Myc, cyclin D1, p-Rb, and cyclin E1 while elevating the expression of p21, suggesting that these regulators mediate the ENO1-driven glycolysis and proliferation of NSCLC. EMT is regarded as a key event in tumor migration and invasion progression. In this study, we further examined the expression of EMT marker genes and found that knocking down ENO1 expression induced the protein levels of E-cadherin while suppressing the expression of snail, vimentin, and N-cadherin in NSCLC cells. These results are consistent with our previous report of ENO1 in glioma [17]. However, ENO1 overexpression did not lead to any morphological changes from epithelial to mesenchymal transition in NSCLC cells. PI3K/AKT is a key signaling mediator during carcinogenesis [35,36], and its activation induces glycolysis [37-39] and c-Myc-mediated cell cycle transition [40] and promotes the progression of EMT [37,41]. In addition, c-Myc has also been shown to regulate energy metabolism by regulating LDHA in tumors [42]. We hypothesized that oncogenic ENO1 functions through the PI3K/AKT pathway in NSCLC. We found that suppressed ENO1 significantly decreased the protein levels of β-catenin and phosphorylated PI3K and AKT, but not their total protein levels, in SPCA-1 cells, which is similar to our previous report in glioma [17]. Interestingly, we examined the protein levels of FAK, an upstream signaling factor of the PI3K/AKT pathway, and found that suppressed ENO1 significantly decreased levels of phosphorylated FAK, but not its total protein level. We speculated that ENO1 regulates cell glycolysis, proliferation, migration, and invasion through the FAK-mediated PI3K/AKT pathway in NSCLC. To further clarify the specific mechanism, Ang II, an activating agent of phosphorylated FAK [19], was used to treat ENO1-suppressed SPCA-1 cells.
We observed that not only were the production of lactate, cell viability, migration, and invasion restored, but the expression levels of p-FAK, p-AKT, LDHA, cyclin D1, c-Myc, p21, and β-catenin were also rescued. These results demonstrated that suppressed ENO1 inhibits cell glycolysis, proliferation, migration, and invasion by inactivating the FAK-mediated PI3K/AKT pathway in NSCLC. Thus, ENO1 may be a potential therapeutic target for NSCLC treatment. Furthermore, nanotechnology has provided a good platform for cancer-targeted therapy based on the unique properties of nanoparticles. Therefore, we wish to develop a nanoparticle formulation modified with a tumor-targeting single-chain antibody fragment (scFv) for systemic delivery of siRNA-ENO1 in the future [43], which may make it possible for ENO1 to serve as a molecular therapeutic target for NSCLC treatment.

Conclusions

In summary, ENO1 is overexpressed in NSCLC, promoting cell glycolysis, proliferation, migration, invasion, and tumorigenesis by activating the FAK-mediated PI3K/AKT pathway and further modulating its downstream signaling molecules. To our knowledge, this is the first report of the molecular mechanisms of ENO1 in NSCLC, even more in-depth than our previous report of ENO1 in glioma [17]. Our study demonstrates that ENO1 may be a potential therapeutic target for NSCLC treatment.

Materials and methods

Cell culture and sample collection

Cells were cultured in a 5% CO₂ chamber at 37°C. Twenty-six surgically resected fresh primary NSCLC tissues and paired para-cancer lung tissues, as well as non-cancerous lung tissues (5 cm away from the tumor edge), 55 paraffin-embedded primary NSCLC specimens, and 17 paraffin-embedded non-cancerous lung specimens were obtained from the Third Affiliated Hospital of Kunming Medical University (Yunnan, China). Patients with a diagnosis of relapse and those who had received preoperative radiation, chemotherapy, or biotherapy were excluded from the study to avoid any changes in tumor marker determination due to the effect of treatment. The clinical procedures were approved by the Ethics Committees of the Third Affiliated Hospital of Kunming Medical University, and patients provided informed consent. Demographic and clinical data were obtained from the patients' medical records.

RNA isolation, RT-PCR, qRT-PCR, and primers

Total RNA was extracted from the cell lines and lung tissues using Trizol (Takara, Shiga, Japan). RNA (1 μg) was reverse transcribed into cDNA, and the cDNA was used as a template for amplification with specific primers (sense: 5′-TCAATGGCGGTTCTCATGCT-3′; antisense: 5′-GCAGCTCCAGGCCTTCTTTA-3′). ARF5 was used as an internal control with primers (sense: 5′-ATCTGTTTCACAGTCTGGGACG-3′; antisense: 5′-CCTGCTTGTTGGCAAATACC-3′). Experiments were performed according to the manufacturer's instructions (Takara, Shiga, Japan). PCR conditions were 95°C for 10 min to activate the DNA polymerase, followed by 45 cycles of 95°C for 15 s, 60°C for 15 s, and 72°C for 15 s. Specificity of the amplification products was determined by melting curve analysis. The qRT-PCR reactions for each sample were repeated three times, and independent experiments were done in triplicate.

Immunohistochemistry scoring

Stained tissue sections were reviewed and scored independently by two investigators blinded to the clinical data. For cytoplasmic staining, the score was based on the sum of the cytoplasmic staining intensity and the percentage of stained cells.
The staining intensity was scored as previously described (0-3) [44,45], and the percentage of positively stained cells was scored on a scale of 0-3 (0: <10%; 1: 10%-25%; 2: 26%-75%; 3: >76%). For nuclear staining, the score was based on the sum of the nuclear staining intensity and the percentage of positive nuclear staining. Positive nuclear staining scores were defined as follows: 0: <20%; 1: 20%-49%; 2: 50%-79%; 3: >80%. The sum of the staining intensity and staining extent scores (0-6) was used as the final staining score. For statistical analysis, final staining scores of 0-2 and 3-6 in the cytoplasm, or 0-3 and 4-6 in the nucleus, were considered negative and positive expression levels, respectively. Expression of ENO1 in the nucleus was observed, but since ENO1 localizes in the cytoplasm, only the cytoplasmic staining was evaluated.

Immunofluorescence

Immunofluorescence was performed according to a previous study [46]. NSCLC cells were seeded on coverslips in a six-well plate and cultured overnight. Subsequently, cells were fixed in 3.5% paraformaldehyde and permeabilized in KB solution and 0.2% Triton X-100 at room temperature. After the blocking solution was washed out, cells were incubated with a primary antibody (ENO1) (diluted in KB) for 30-45 min at 37°C and subsequently washed with KB twice. After incubating for 30-45 min at 37°C with a secondary antibody (diluted in KB) and washing with KB again, the coverslips were mounted onto slides with mounting solution containing 0.2 mg/ml DAPI and sealed with nail polish. Slides were stored in a dark box and observed under a fluorescence microscope.

Transfection and infection

The full-length ENO1-GFP (ENO1) and GFP empty vector (PLV-Ctr) lentiviruses were designed by Shanghai Genechem (Genechem, Shanghai, China). The preparation of lentiviruses expressing human ENO1 short hairpin RNA (shENO1-A, B, C) (Table 2) was performed using the pLVTHM-GFP lentiviral RNAi expression system [40]. The NSCLC cell line A549 was infected with full-length ENO1-GFP or GFP empty vector lentiviruses. SPCA-1 cells were infected with shENO1-A, B, C or PLV-shCtr lentiviruses, and polyclonal cells with GFP signals were selected for further experiments using FACS flow cytometry. Total RNA was isolated, and levels of ENO1 mRNA were measured using real-time PCR analysis.

Metabolic profiling

Metabolic profiles were obtained to assess the relative distribution of various cellular metabolites of NSCLC cells. Cells were collected and quickly frozen. Further sample preparation, metabolic profiling, peak identification, and curation were performed by Metabolon (Durham, NC, USA) using their described methods [48].

MTT assay

The viability of cell proliferation was assessed using the MTT assay according to our previous study [46]. Cells were seeded in 96-well plates at a density of 1,000 cells/well. Every 24 h for 7 days, 20 μl of MTT (5 mg/ml) (Sigma-Aldrich, St. Louis, MO) was added to each well and incubated for 4 h. Supernatants were removed, and 150 μl of dimethyl sulfoxide (DMSO) (Sigma-Aldrich, St. Louis, MO) was added to each well. The absorbance value (OD) of each well was measured at 490 nm. For each experimental condition, five parallel wells were assigned to each group. Experiments were performed three times.

Clone formation assay

The clone formation assay was performed according to our previous study [46]. Cells were seeded in 6-well culture plates at 100 cells/well. Each cell group had three parallel wells.
After incubation for 14 days at 37°C, cells were washed twice with Hank's solution and stained with hematoxylin solution. The number of colonies containing ≥50 cells was counted under a microscope. The clone formation efficiency was calculated as (number of colonies/number of cells inoculated) × 100%.

Cell migration and invasion assays

In vitro cell migration and invasion assays were performed according to our previous study [46]. For cell migration assays, 1 × 10⁵ cells in 100 μl of medium without serum were seeded on a fibronectin-coated polycarbonate membrane insert in a transwell apparatus (Corning, USA). In the lower chamber, 500 μl of DMEM with 10% FBS was added as a chemoattractant. After the cells were incubated for 10 h at 37°C in a 5% CO₂ atmosphere, Giemsa-stained cells adhering to the lower surface were counted under a microscope in five predetermined fields (100×). All assays were independently repeated at least three times. For cell invasion assays, the procedure was similar to the cell migration assay, except that the transwell membranes were pre-coated with 24 μg/ml Matrigel (R&D Systems, USA).

In vivo tumorigenesis in nude mice

According to our previous study [17], a total of 1 × 10⁶ logarithmically growing A549 cells transfected with full-length ENO1 or the PLV-Ctr vector, and SPCA-1 cells transfected with shENO1-B or the control PLV-shCtr vector (N = 6 per group), in 0.1 ml Hank's solution were subcutaneously inoculated into the left-right symmetric flanks of 4-6-week-old male BALB/c-nu/nu mice. The mice were maintained in a barrier facility on HEPA-filtered racks and fed an autoclaved laboratory rodent diet. All animal studies were conducted in accordance with the principles and procedures outlined in the National Institutes of Health Guide for the Care and Use of Animals under assurance number A3873-1. After 15 days, the mice were sacrificed, and their tumors were excised, weighed, and processed for histology.

In vivo metastasis assays

In vivo metastasis assays were performed according to a previous study [46]. A total of 5 × 10⁶ cells were injected into nude mice (n = 5 for each group) through the spleen. Optical fluorescence images were visualized to monitor primary tumor growth and the formation of metastatic lesions. Forty days later, all mice were killed, individual organs were removed, and metastatic tissues were analyzed by H&E staining.

Statistical analysis

All data were independently repeated at least three times. SPSS 13.0 and GraphPad Prism 5.0 software were used for statistical analysis. One-way ANOVA or two-tailed Student's t-test was applied to determine differences between groups in in vitro analyses. The chi-squared test was used to determine differences in ENO1 protein expression between NSCLC tissues and non-cancerous lung tissues. A p value of less than 0.05 was considered statistically significant.

Additional file

Additional file 1: Figure S1. Stably upregulated ENO1 (A) or downregulated ENO1 (B) did not induce obvious epithelial-to-mesenchymal morphology transition changes in SPCA-1 or A549 cells.
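As an illustration of the immunohistochemistry scoring scheme described in the methods above, here is a minimal sketch in Python; the function name is ours, and the bin boundaries follow the paper's stated cut-offs (the unstated 75-76% interval is assigned to the highest extent score here as an assumption).

```python
def cytoplasmic_ihc_call(intensity: int, percent_positive: float) -> str:
    """Cytoplasmic ENO1 score = staining intensity (0-3) + extent (0-3);
    final scores of 0-2 are negative and 3-6 positive (paper's cut-offs)."""
    if not 0 <= intensity <= 3:
        raise ValueError("staining intensity must be scored 0-3")
    if percent_positive < 10:
        extent = 0
    elif percent_positive <= 25:
        extent = 1
    elif percent_positive <= 75:
        extent = 2
    else:
        extent = 3  # paper states >76%; the 75-76% gap is resolved upward here
    return "positive" if intensity + extent >= 3 else "negative"

# Example: moderate staining (2) in 60% of cells -> 2 + 2 = 4 -> positive.
print(cytoplasmic_ihc_call(2, 60.0))
```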
Correction of Atmospheric Haze of IRS-1C LISS-III Multispectral Satellite Imagery: An Empirical and Semi-Empirical Based Approach

Atmospheric effects greatly affect the quality of satellite data and are mostly found, to a great extent, over polluted urban areas. In this paper, atmospheric correction has been carried out on an IRS-1C LISS-III multispectral satellite image of Raipur city, India. The atmospheric conditions during satellite data acquisition were very clear; hence, the very clear relative scattering model of the improved dark object subtraction method has been applied to correct the atmospheric effects in the data and produce realistic results. The haze values (HV) for the green band (band 2), red band (band 3), NIR band (band 4) and SWIR band (band 5) are 79, 53, 54 and 124, respectively; these were used for the correction of haze effects using the simple dark object subtraction (SDOS) method. But the final predicted haze values (FPHV) for these bands are 79, 49.85, 21.31 and 0.13, and these were used for the correction of haze effects applying the improved dark object subtraction (IDOS) method. We found that the IDOS method produces very realistic results when compared with the SDOS method for urban land use mapping and change detection analysis. Consequently, the ATCOR2 model provides better results than both SDOS and IDOS in this study.

Introduction

Remote sensing data are widely used in studies of groundwater (Mukherjee et al. 2007; Singh et al. 2010a; Singh et al. 2013; Singh et al. 2015a), river water quality (Srivastava et al. 2011), coastal water (Kumar et al. 2015), lakes and wetlands (Thakur et al. 2012a; Thakur et al. 2012b; Amin et al. 2014; Singh et al. 2016a), land use/land cover mapping (Singh et al. 2010b; Singh et al. 2013; Singh et al. 2014a; Singh et al. 2014b), land use change trajectories (Srivastava et al. 2013), land use/land cover modeling (Singh et al. 2015b; Mustak et al. 2015), crop suitability (Mustak et al. 2015), urban land use dynamics (Amin et al. 2012), hydrological modeling (Narsimlu et al. 2015), forest mapping (Singh et al. 2012), cyclone tracking (Islam et al. 2015), soil characterization (Paudel et al. 2015), climate change (Srivastava et al. 2015), slope estimation (Szabó et al. 2015), landscape ecology (Singh et al. 2016b), ocean studies (Pandey and Singh 2010a; Pandey and Singh 2010b) and watershed management (Yadav et al. 2014). Raw data are affected by panoramic distortion, earth curvature, sensor detector failure and detector line losses, which are primarily corrected by data providers. Generally, however, the atmospheric effect is not corrected by data providers; it should be handled by the users as a preprocessing task. The correction is required in satellite imagery because visible bands of shorter wavelength are highly affected by atmospheric scattering, especially Rayleigh scattering, which is caused by suspended gases, water vapor and aerosols (Yong et al. 2001; Chen 2004; Saha et al. 2005; Gong et al.
2008; Norjamaki and Tokla 2007; Tyagi and Udhav 2011). These effects are added as a hazy radiance value on top of the actual radiance value, which reduces the scene reality of the remotely sensed data; such an addition is called the additive effect of the atmosphere on remote sensing data. Atmospheric effects are mostly found, to a great extent, over polluted urban areas, and the correction of such atmospheric effects is mostly carried out for land use and land cover mapping and change detection analysis. The removal of atmospheric additive effects can be done by the simple dark object subtraction (SDOS) method and the improved dark object subtraction (IDOS) method for multi-band satellite images. The SDOS method is a first-order atmospheric correction, which is better than no correction at all (Chavez 1988). In this method, a constant haze value (DN) for each individual spectral band is selected as the minimum DN value in the histogram of the entire scene; this value is attributed to the effect of the atmosphere and is subtracted from each spectral band (Chavez 1989). The IDOS method corrects the haze in terms of atmospheric scattering and path radiance based on the power law of the relative scattering effect of the atmosphere (Lillesand and Kiefer 2000). The effects of the corrections were studied in an urban environment, and image attributes were used for comparing the performance of the correction methods. Overall, the ATCOR2 method performed better than IDOS. According to Teillet (1986), the reflectance of objects recorded by satellite sensors is generally affected by atmospheric absorption and scattering, sensor-target-illumination geometry and sensor calibration. These affect the actual reflectance of the objects, which subsequently affects the extraction of information from satellite images. There has been considerable attention in research on the need for, and the ways of, correcting satellite data for atmospheric effects (Mustak 2013; Song et al. 2000; Chavez 1988; Chavez 1996; Mahiny and Turner 2007). In addition, the COST model is an image-based absolute correction method; it uses only the cosine of the sun zenith angle (cos(TZ)) as an acceptable parameter for approximating the effects of absorption by atmospheric gases and Rayleigh scattering (Mahiny and Turner 2007). The 6S model predicts the reflectance of objects at the top of the atmosphere using information about the surface reflectance and atmospheric conditions (Mahiny and Turner 2007). The meteorological visibility, type of sensor, sun zenith and azimuth, date and time of image acquisition, and latitude and longitude of the scene center are needed to run the 6S model. The ATCOR2 model needs the path radiance, the reflected radiation from the viewed pixel and the radiation from the neighborhood; the atmospheric conditions (water vapor content, aerosol type, visibility) for a scene can be estimated using the SPECTRA module, and finally the surface reflectance spectrum of a target in the scene can be viewed as a function of the selected atmospheric parameters. In this paper, the IDOS method with the very clear relative scattering model (RSM) is applied for haze correction, which produced more realistic results than the SDOS method. Similarly, ATCOR2 provides better results compared to the two methods mentioned above. The study has the following objectives: (i) to find out the haze values in the data, and (ii) to remove the haze values and improve the scene reality of the data for urban land use mapping and change detection analysis.
Study Area

The whole of Raipur city, including the standard urban area of old Raipur city and the standard urban area of Naya Raipur city, situated in the Dharsiwa tehsil and in parts of the Arang and Abhanpur tehsils of Raipur district, has been selected as the study area. The study area extends between 21°04'N to 21°26'N latitude and 81°30'E to 81°52'E longitude, covering an area of 831.49 km² on the Chhattisgarh plain with an average elevation of 236 metres above mean sea level. The city occupies the north-western part of Raipur district and is located on the eastern part of the Mahanadi basin, in the fertile valley of the river, which is the principal river of this city. The temperature is 34.3°C in summer and falls to 19°C in winter, with an average rainfall of 1400 mm. The climate of this region is characterized by a hot and dry summer and well-distributed rains in the monsoon season. The population of Raipur city is 1,428,623 persons, of whom 71.95% are literate (Census 2011). The city is a fast-developing, important industrial centre and is well connected with major cities of India by road, rail and air transport systems. Raipur is situated along the Mumbai-Nagpur-Howrah mainline. In these regards, Raipur is an important city of Chhattisgarh as well as of India, and so it has been selected as the study area.

Database and methodology

The study has been carried out on IRS-1C LISS-III multispectral satellite data, collected from the National Data Centre (NDC), National Remote Sensing Centre (NRSC), Indian Space Research Organization (ISRO), Balanagar, Hyderabad, Andhra Pradesh, India. The climatic data were collected from Indira Gandhi Agricultural University, Raipur, Chhattisgarh, India and the District Statistical Handbook, Raipur. The climatic data were used to determine the atmospheric conditions during satellite data acquisition, and they suggest that the study area had very clear atmospheric conditions at that time. The details of the satellite data and method are given in Tables 1 and 2 and Figures 1a and 1b.

Simple Dark Object Subtraction (SDOS)

SDOS is a very simple image-based method of atmospheric correction which assumes that there are at least a few pixels within an image which should be black (0% reflectance); such black reflectance is termed a dark object, typically clear water bodies and shadows, whose DN values are zero or close to zero in the image (Chavez 1988). This method is widely used for classification and change detection applications (Spanner et al. 1990). SDOS is a first-order atmospheric correction, which is better than no correction at all (Chavez 1988). In this method, a constant haze value (DN) for each individual spectral band is selected as the minimum DN value in the histogram of the entire scene, attributed to the effect of the atmosphere, and is subtracted from each spectral band (Chavez 1989).

Improved Dark Object Subtraction (IDOS)

The IDOS method corrects the haze in terms of atmospheric scattering and path radiance based on the power law of the relative scattering effect of the atmosphere (Lillesand and Kiefer 2000). IDOS is an improvement over the SDOS method to minimize the chances of overcorrection of DN in the scene. The IDOS method is based on two sub-models, described below.

Fig. 1b. Flow chart of adopted methodology.

Histogram Method

This method is used to select the starting haze value (SHV); in this image, visible band 2 (green) is selected with an SHV of 79 (Table 3).
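A minimal sketch of the SDOS correction described above, assuming the bands are available as NumPy arrays of 8-bit DNs; the haze values are those selected by the histogram method (Table 3), while the array handling itself is our own illustration rather than the authors' ERDAS workflow.

```python
import numpy as np

# Haze DNs picked as the minimum histogram value of each band (Table 3).
HAZE_DN = {"band2_green": 79, "band3_red": 53, "band4_nir": 54, "band5_swir": 124}

def sdos_correct(band_dn: np.ndarray, haze_dn: int) -> np.ndarray:
    """Simple dark object subtraction: subtract the band's constant haze
    DN from every pixel and clip negative results to zero."""
    corrected = band_dn.astype(np.int32) - haze_dn
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Usage on a small hypothetical green-band tile:
green = np.array([[79, 120], [200, 83]], dtype=np.uint8)
print(sdos_correct(green, HAZE_DN["band2_green"]))
```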
Relative Scattering Model (RSM)

This model is based on two important relative scattering models, the Rayleigh (particle size less than the wavelength) and Mie (particle size similar to the wavelength) scattering models (Slater et al. 1983). Both follow a power law: the Rayleigh scattering effect of the atmosphere acts on the wavelength in imaging systems as inversely proportional to the fourth power of the wavelength (λ^-4), which means that shorter wavelengths of the spectrum are scattered much more than longer wavelengths. This type of scattering is caused by gas molecules which are much smaller than the wavelength of light. The Mie scattering effect of the atmosphere acts on the wavelength as a power varying from λ^0 to λ^-4, with λ^-1 for a moderate atmosphere and λ^0 for complete cloud cover. But the relative scattering that usually occurs in a real atmosphere that is clear seems to follow more of a λ^-2 to λ^-0.4 relationship, neither Rayleigh nor Mie (Curcio 1961; Slater et al. 1983). Taking this information into account, the relative scattering that occurs in a hazy atmosphere can be approximated as λ^-0.7 to λ^-0.5 if similar power law relationships are used. The critical aspect of the method proposed in this paper is that the haze correction DN values used by SDOS techniques be computed using an RSM to ensure that the haze values represent, or better approximate, true atmospheric possibilities. Using the information supplied by Curcio (1961) and Slater et al. (1983), and extrapolating to very clear and very hazy atmospheres, one possible set of RSMs is given by Chavez (1988) (Tables 4 and 5). The principles of the RSM are used to predict haze values (PHV) for each spectral band based on the SHV in the IDOS method for the correction of atmospheric haze under specific atmospheric conditions. The study area had very clear atmospheric conditions during satellite data acquisition, and hence the correction of atmospheric haze has been carried out based on the principle of the very clear RSM. The computation of the PHV for the different spectral bands from the SHV of the selected band is based on the normalized values of the RSM, termed multiplicative factors, shown in Table 6. The PHV for the different spectral bands is computed based on the following equation (1):

PHV (DN) = IPHV of band i × multiplicative factor of the next band (1)

The initial predicted haze value (IPHV) is calculated by subtracting the offset value from the SHV; the gain and offset values are shown in Table 7. In this paper, the SHV is 79 for band 2 and the IPHV is 77.24 (SHV − 1.76) for this band, while the PHV for the next spectral bands are 40.94 (77.24 × 0.53) for band 3, 16.99 (77.24 × 0.22) for band 4 and 0.77 (77.24 × 0.01) for band 5, as shown in Table 8. The haze values computed using the RSM are not yet the correct haze values to remove the haze effects from the satellite image. To compute the correct, or final, predicted haze values (FPHV), the different gain (Lmax) and offset (Lmin) values of the imaging system have to be applied to the PHV by means of the addition of the offset value and multiplication by the normalized gain value. The calculation of the normalized gain values, along with the offset values, is given in Table 7.
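The PHV computation of equation (1) can be checked with a few lines of Python; the multiplicative factors below are the very clear RSM values implied by the worked numbers (Table 6), and the band 2 offset of 1.76 comes from the IPHV calculation quoted above.

```python
# Very clear RSM: predicted haze values derived from the band 2 SHV.
BAND2_OFFSET = 1.76                     # offset (Lmin) of the SHV band, Table 7
FACTORS = {"band3": 0.53, "band4": 0.22, "band5": 0.01}   # Table 6

shv = 79                                # starting haze value, band 2
iphv = shv - BAND2_OFFSET               # 77.24
phv = {band: round(iphv * f, 2) for band, f in FACTORS.items()}

print(iphv)  # 77.24
print(phv)   # {'band3': 40.94, 'band4': 16.99, 'band5': 0.77}, matching Table 8
```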
The calculation of the FPHV is performed using the following equation (2):

FPHV (DN) = NORᵢ × PHV + Offset (2)

The FPHV, along with the haze values of the individual bands and the PHV, are shown in Table 8. The FPHV is subtracted from each spectral band, and the haze-corrected bands are then stacked to prepare a corrected false colour composite image (Figure 2) in the ERDAS Imagine modeler. The whole work of atmospheric correction was done in ERDAS Imagine version 9.2. The IDOS method is based on the RSM, which has been used to compute PHV that are wavelength dependent and highly correlated to each other, and finally generates realistic results with proper gain and offset normalization. The correction of atmospheric haze has been carried out in this paper by the principle of Rayleigh scattering of the very clear RSM (Table 8; Figure 2). The haze values selected by the histogram method for the green band (band 2), red band (band 3), NIR band (band 4) and SWIR band (band 5) are 79, 53, 54 and 124, which are used for the correction of haze effects in SDOS. But the PHV for bands 2, 3, 4 and 5 are 77.24, 40.94, 16.99 and 0.77, and the FPHV for these bands are 79, 49.85, 21.31 and 0.13, which are used for the correction of haze effects using IDOS for realistic results.

There are quite dramatic changes between the haze values resulting from the SDOS method and the PHV and FPHV of the IDOS method. Therefore, the results of the SDOS and IDOS methods are quite different, and unrealistic haze correction would occur if the SDOS method were used without considering the principle of relative scattering of the atmosphere. The NDVI images (Figure 3) show clear improvement after applying the ATCOR2 method using ERDAS Imagine.

Conclusion

The atmospheric haze correction methods have been applied to the original data in DN counts. However, normalizing the predicted haze values for gain and offset allows the corrections to be applied without converting the entire image's DNs into radiance values (Chavez 1988). The correction of atmospheric scattering is very important, especially for the shorter visible wavelength bands, because path radiance has serious effects on them (Lu et al. 2002). Raipur city contains a polluted urban area; hence, atmospheric haze has a dominant effect on the visible bands of the remotely sensed image (IRS-1C LISS-III multispectral image, 20 Feb. 2001), which was unable to produce scene reality for urban land use mapping and change detection analysis. In this regard, the IDOS method, based on the very clear RSM, was used to produce more realistic results than the SDOS method. Moreover, a better result was achieved using ATCOR2 compared with the SDOS and IDOS methods. Therefore, the ATCOR2 model is suggested for better atmospheric correction of satellite imageries.

Figure 3. (a) NDVI image (from the uncorrected image); (b) NDVI of haze-corrected satellite imagery using IDOS; (c) NDVI of haze-corrected satellite imagery using ATCOR2.

Table 3. Radiometric details of satellite data and selection of haze values using the histogram method.
Table 4. Principle of the Relative Scattering Model of atmospheric effects (Source: Chavez, 1988).
Table 5. Principle of the Relative Scattering Models as percent (%) contributed for each spectral band.
Table 6. Multiplication factors of the Relative Scattering Models used to predict haze values for the other bands; band 2 is selected with an SHV of 79.
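Tying the steps together, the sketch below applies equation (2) and the final subtraction; the FPHV values are those reported in Table 8, the band 2 check (NOR = 1.0, offset = 1.76) is inferred from the worked numbers rather than stated in the paper, and the NDVI helper reflects the standard (NIR − Red)/(NIR + Red) formula, not code published with the paper.

```python
import numpy as np

# Final predicted haze values reported in Table 8 (bands 2-5).
FPHV = {"band2": 79.0, "band3": 49.85, "band4": 21.31, "band5": 0.13}

def fphv(nor_i: float, phv_i: float, offset_i: float) -> float:
    """Equation (2): FPHV(DN) = NOR_i * PHV + offset_i."""
    return nor_i * phv_i + offset_i

# Band 2 sanity check: NOR = 1.0 and offset = 1.76 reproduce FPHV = 79.
assert round(fphv(1.0, 77.24, 1.76), 2) == 79.0

def idos_correct(band_dn: np.ndarray, fphv_i: float) -> np.ndarray:
    """IDOS correction: subtract the band's FPHV and clip negatives."""
    return np.clip(np.asarray(band_dn, dtype=np.float64) - fphv_i, 0.0, None)

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed on corrected bands (Figure 3)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out
```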
Effects of maternal epidural analgesia on the neonate - a prospective cohort study

Background: Epidural analgesia is one of the most popular modes of analgesia for childbirth. There are controversies regarding the adverse effects and safety of epidural analgesia. This study was conducted to examine the immediate effects of maternal epidural analgesia on the neonate during the early neonatal phase. Methods: In a prospective cohort study, 100 neonates born to mothers administered epidural analgesia were compared with 100 neonates born to mothers not administered epidural analgesia in terms of passage of urine, initiation of breast feeding, birth asphyxia and incidence of instrumentation. Results: There was a significant difference between the two groups in the passage of urine (P value 0.002) and the incidence of instrumentation (P value 0.010), but there was no significant difference with regard to initiation of breast feeding and birth asphyxia. Conclusions: Epidural analgesia does not have any effect on newborns with regard to breast feeding and birth asphyxia, but it did have effects such as delayed passage of urine and an increased incidence of instrumentation.

Background

A safe neonatal outcome is the ultimate aim of any delivery. Pain management is a major issue and part of normal labours. Among the various modes of pain management, epidural analgesia is considered a very safe and popular mode of analgesia for childbirth [1-3]. Considering the use of various types of analgesia, it is desirable to know the adverse effects of any analgesia being used. Epidural analgesia is one of the most extensively studied modes of analgesia in labour. Most studies conducted on epidural analgesia primarily focus on maternal parameters [3-6]. Despite its popularity, epidural analgesia has remained controversial in regard to its safety [4-6]. Meta-analyses regarding the safety of epidural analgesia have remained inconclusive [7,8]. Considering the controversial aspects of epidural analgesia, we intended to study the immediate effects of epidural analgesia on newborns born to mothers with epidural analgesia and compare them with newborns born to mothers without epidural analgesia.

Methods and methodology

Methods

100 consecutive mothers who were given epidural analgesia and 100 mothers who were not given epidural analgesia for normal labours were enrolled in the study. The neonates born to the two groups of mothers were compared in regard to the time of passage of urine, the initiation of breast feeding, birth asphyxia and instrumentation in the form of vacuum or forceps delivery.

Inclusion and exclusion criteria

Mothers who were regularly followed up in our antenatal clinic were included in the study after informed consent was taken. Caesarean sections, preterms (less than 37 weeks of completed gestation), low birth weights (less than 2.5 kg), antenatally detected major congenital anomalies, multiple gestations, high-risk antenatal factors such as gestational diabetes, pregnancy-induced hypertension and recurrent abortions, elderly primigravidae (above 40 years), and those who did not provide consent were excluded from the study.

Study design, sample size and place of study

The incidence of epidural analgesia is around 3-5% of all labours in our institute. Considering the delivery rate of around 3000 per year, a sample size of 100 cases with epidural analgesia and 100 controls without epidural analgesia was determined.
The prospective cohort study was conducted in a tertiary care teaching hospital in India between Jan 2012 and Jan 2013. The study was approved by the institutional review board committee of Maharashtra University of Health Sciences, Nashik, India. Methodology Epidural analgesia is given voluntarily to normal delivery cases in our institute. Informed written consent was taken from the participants. Pregnant women who requested analgesia for labour pain were provided with epidural analgesia consisting of 10 ml of 0.125% bupivacaine and 20 mcg fentanyl. The neonates born were followed up for 3 days to note the various study parameters, including passage of urine, onset of breast feeding, birth asphyxia, and instrumental interventions, if any. The study proformas were filled in by the duty resident every day during the morning and evening rounds. During the same period, neonates born to mothers without epidural analgesia were also followed up and the same parameters noted. The results were compared between the two groups; statistical analysis was done using the software Epi Info 3.5.1, with P values calculated by the Chi square test and Fisher's exact test, and P < 0.05 was considered statistically significant. Results Table 1 presents the comparative baseline maternal demographic data of the two groups with regard to parity, age group, address, and religion; the differences between the two groups were non-significant. Table 2 presents the sexes and weight groups of the newborns in the two groups, which were comparable to each other. The timing of passage of urine by the newborns born with and without epidural analgesia is presented in Table 3. Passage of urine was recorded in three intervals: within the first six hours, between six and 24 hours, and after more than 24 hours. 42 newborns in the epidural analgesia group and 58 newborns in the non-epidural analgesia group passed urine in the first 6 hours. In the six-24 hours interval, there were 49 newborns in the epidural group and 42 in the non-epidural group. A total of 9 newborns passed urine beyond 24 hours, all of them in the epidural analgesia group. The difference between the two groups was highly significant (P value 0.002). Thus, the results show that newborns born to mothers with epidural analgesia tend to pass urine later than newborns born without epidural analgesia. The timing of initiation of breast feeding among the newborns born to mothers with and without epidural analgesia is shown in Table 4. The timing of breast feeding was divided into 3 groups: 0-six hours, six-24 hours, and more than 24 hours. In the epidural and non-epidural groups, 96 and 98 newborns, respectively, established breast feeding successfully within six hours. One newborn in each group established breast feeding between six and 24 hours. In the epidural group there were three cases, and in the non-epidural group only one case, in which breast feeding was established after 24 hours. The difference between the two groups was not significant (P value 0.60). The cases of birth asphyxia that occurred are tabulated in Table 5. In the epidural analgesia group, three newborns had birth asphyxia, and in the non-epidural group, only one. Although more cases of birth asphyxia occurred in the epidural group, the difference was not statistically significant (P value 0.621). The number of instrumental deliveries in the two groups is depicted in Table 6.
Of the total 13 instrumental deliveries, which included both vacuum and forceps, 11 were from the epidural analgesia group and only two from the non-epidural analgesia group. The difference between the two groups was highly significant (P value 0.010). Discussion We studied various parameters in the newborns born to the epidural analgesia group and compared them with the newborns born to mothers without epidural analgesia. The parameters included the timing of passage of first urine, onset of breast feeding, birth asphyxia, and instrumental delivery. The timing of passage of urine was divided into 3 groups: a) within the first six hours, b) between six and 24 hours, and c) more than 24 hours. In the study, fewer of the newborns who passed urine within the first six hours were in the epidural analgesia group (42 of 100, 42%) than in the non-epidural analgesia group (58 of 100, 58%), whereas more of the neonates who passed urine between six and 24 hours were in the epidural analgesia group (49 of 91, 53.8%) than in the non-epidural group (42 of 91, 46.2%). All nine newborns who passed urine only after 24 hours were in the epidural analgesia group. Although the passage of urine was delayed, it remained within the physiological period of 48 hours. The delay in passage of urine was highly significant among the newborns in the epidural analgesia group (P value 0.002). Epidural analgesia is known to cause urinary retention in mothers post partum due to the effects of fentanyl [3,6,7,9,10], although most studies are unable to explain the exact mechanism of post-partum urinary retention in mothers with epidural analgesia. There has been no documentation in the literature of urinary retention in newborns born to mothers with epidural analgesia; this study is the first to report the finding. Possibly, maternal drug transferred to the newborn led to urinary retention, as we used fentanyl in the mothers for epidural analgesia; however, urinary parameters in the mothers were not recorded in our study. Difficulty in the establishment of breast feeding is another controversial issue in the field of epidural analgesia. Successful breast feeding is one of the aims of a successful labour. In our study, the majority of mothers initiated and established breast feeding confidently within six hours of birth (194 out of 200 or 97%). Only six babies had a delayed onset of breast feeding, because of birth asphyxia and other medical conditions for which oral feeds were withheld. There was no significant difference between the case and control groups, with a P value of 0.60. Various studies have reported that epidural analgesia may lead to difficulty in establishing early breast feeding [11,12]; other studies refute such a relationship [1,[13][14][15]. Epidural analgesia per se should have no bearing on the initiation of breast feeding in the newborns, unless overdosing of the analgesia makes the mother drowsy and delays the establishment of breast feeding. With an appropriate dose of epidural analgesia, there should be no effect on the initiation of breast feeding, as seen in this study. Epidural analgesia has also been implicated in associations with prolonged labour, respiratory distress, and lower APGAR scores in neonates [16][17][18][19]. At the same time, other studies do not support such associations [1][2][3]20,21]. In our study, we examined the incidence of birth asphyxia in the epidural and non-epidural groups.
We considered an APGAR score of less than six at five minutes as birth asphyxia. In the epidural analgesia group, three babies had birth asphyxia, and in the non-epidural group, only one baby suffered birth asphyxia. However, the difference was not statistically significant (P value 0.621). Epidural analgesia may be implicated in prolonging labour by reducing pain and thus reducing bearing-down efforts; however, it does not directly contribute to birth asphyxia per se. Another controversial aspect of epidural analgesia that we intended to study was the higher number of instrumental deliveries associated with it. Some studies report a higher incidence of instrumental deliveries, including Caesarean delivery, with epidural analgesia [19][20][21][22][23], while others show no such relationship [1,2,24]. Caesarean sections were excluded from our study, so we cannot comment on their incidence. In our study, a total of 13 cases required instrumental interventions, including both forceps and vacuum. Of these, 11 were from the epidural analgesia group and only two from the non-epidural group. The difference between the two groups was statistically significant (P value 0.010). This may perhaps be explained by poorer bearing-down efforts in mothers with epidural analgesia. At the same time, it should be acknowledged that more complicated pregnancies are more likely to be assisted with analgesia and thus may more often end in instrumental deliveries. Confounding factors such as prolonged second stage, dystocias, delayed pushing, more complicated pregnancies, parity, and age may also explain part of the difference. Summary and conclusions A prospective cohort study was conducted to examine the effects of maternal epidural analgesia on the neonate during the early neonatal phase. 100 newborns born to mothers who were administered epidural analgesia and 100 newborns born to mothers who were not were compared across the various study parameters. The newborns born to mothers with epidural analgesia passed urine significantly later than those in the non-epidural group, and there was a significantly higher incidence of instrumental deliveries in the epidural group than in the non-epidural group. However, no immediate effects on breast feeding or birth asphyxia were seen in our study. The effect of epidural analgesia on the neonate is of immense significance and should be further explored in the future with more elaborate randomized controlled multi-centre studies.
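As a consistency check on the reported statistics, the group comparisons can be reproduced from the published tables. The sketch below uses scipy in place of Epi Info 3.5.1; the counts are exactly those stated in the Results, and the printed P values match the reported 0.002, 0.010, and 0.621.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Rows: epidural group, non-epidural group (n = 100 each).
urine = [[42, 49, 9], [58, 42, 0]]    # <6 h, 6-24 h, >24 h (Table 3)
instrumental = [[11, 89], [2, 98]]    # instrumental vs. spontaneous (Table 6)
asphyxia = [[3, 97], [1, 99]]         # birth asphyxia vs. none (Table 5)

chi2, p, dof, _ = chi2_contingency(urine)
print(f"urine: chi2={chi2:.2f}, df={dof}, p={p:.3f}")      # p ~ 0.002

# Pearson chi-square without Yates continuity correction reproduces
# the reported p ~ 0.010 for instrumentation.
chi2, p, dof, _ = chi2_contingency(instrumental, correction=False)
print(f"instrumentation: chi2={chi2:.2f}, p={p:.3f}")

# Fisher's exact test suits the sparse asphyxia table; reported p = 0.621.
_, p = fisher_exact(asphyxia)
print(f"asphyxia: p={p:.3f}")
```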
v3-fos-license
2018-04-03T03:03:46.672Z
2013-07-24T00:00:00.000
8545252
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1422-0067/14/8/15330/pdf", "pdf_hash": "8cd8335b0c54fd50fe12f8ccc215c4f4cddb454e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42829", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "8cd8335b0c54fd50fe12f8ccc215c4f4cddb454e", "year": 2013 }
pes2o/s2orc
Identification and Phylogenetic Analysis of a CC-NBS-LRR Encoding Gene Assigned on Chromosome 7B of Wheat Hexaploid wheat displays limited genetic variation. As a direct A and B genome donor of hexaploid wheat, tetraploid wheat represents an important gene pool for cultivated bread wheat. Many disease resistance genes contain the conserved domains of the nucleotide-binding site and leucine-rich repeats (NBS-LRR). In this study, we isolated a CC-NBS-LRR gene located on chromosome 7B from the durum wheat variety Italy 363 and designated it TdRGA-7Ba. Its open reading frame was 4014 bp, encoding a 1337 amino acid protein with a complete NBS domain and 18 LRR repeats, sharing 44.7% identity with the PM3B protein. TdRGA-7Ba expression was continuously seen at low levels and was highest in leaves. TdRGA-7Ba has another allele, TdRGA-7Bb, with a 4 bp deletion at position +1892 in other cultivars of tetraploid wheat. In Ae. speltoides, the B genome progenitor, both TdRGA-7Ba and TdRGA-7Bb were detected. In all six species of hexaploid wheats (AABBDD), only TdRGA-7Bb existed. Phylogenetic analysis showed that all TdRGA-7Bb type genes were grouped in one sub-branch. We speculate that TdRGA-7Bb was derived from a TdRGA-7Ba mutation, and that this happened in Ae. speltoides. Both types of TdRGA-7B participated in tetraploid wheat formation; however, only TdRGA-7Bb was retained in hexaploid wheat. Introduction NBS-LRR genes are one of the largest families of resistance genes (R genes) in plants. They encode proteins that have a central nucleotide-binding site (NBS) and a C-terminal leucine-rich repeat (LRR) [1]. In the Arabidopsis genome there are 200 NBS-LRR class homologues [2]; in the rice genome, there are 600 [3]. Based on the secondary structure of the N-terminus, NBS-LRR proteins are subdivided into two classes: one class carries an N-terminal Toll-interleukin 1 receptor (TIR) domain (TIR-NBS-LRR), and the other has a putative coiled-coil domain (CC-NBS-LRR). Only CC-NBS-LRR is present in monocotyledonous plants [4]. The function of NBS-LRR genes is to participate in plant resistance to pathogens by directly or indirectly interacting with the pathogen's effectors. The relatively conserved NBS domain has ATP or GTP binding activity and plays a significant role in plant defense signaling [5]. The LRR domain is a major determinant of resistance specificity and acts as a versatile structural framework for the formation of protein-protein interactions with pathogen effectors [6]. Many NBS-LRR genes appear to be tightly linked in clusters within plant genomes. These gene clusters and the repeat structure of the LRR domain provide a greater possibility for recombination and gene conversion, and contribute to faster generation of novel resistance alleles. At the same time, some pseudogenes are produced by recombination or mutation; these still have the NBS structure and are also expressed in the plant before accumulating enough mutations in their promoters. Most of them are expressed constitutively at a very low level with a variety of tissue specificities and are not induced by treatment with defense signals [7]. Disease responses triggered by NBS-LRR genes change plant metabolism and consume considerable energy [8]. It is also believed that there are fitness costs associated with the expression of NBS-LRR genes and activation of defense response pathways in the absence of pathogens [9].
It has often been observed that activation of a defense-related gene causes a defect in plant growth [10,11]. Some NBS-LRR genes might lose function through mutation in the absence of pathogenic stress; non-functional genes can also give rise to new functional genes through intragenic recombination [12]. Population genetic studies have shown that, due to balancing selection, NBS-LRR genes and their mutant forms are widespread in natural populations of plants [13]. The plant balances the penalty and the necessity of a resistance gene through the death and reuse of NBS-LRR genes. NBS-LRR genes undergo alternative splicing [14], and different splicing products collaboratively play a role in the disease resistance process [15]. There are many reports on the cloning of plant NBS-LRR genes, their functional analysis, genomic distribution, and phylogeny [21][22][23]. However, analysis of wheat NBS-LRR genes has focused on the cloning of important functional resistance genes [24][25][26], and the structure and evolution of NBS-LRR genes in wheat remain largely unknown. In this paper, we cloned an NBS-LRR gene, TdRGA-7Ba, from tetraploid wheat Italy 363. Analysis of the sequence of TdRGA-7B from wheats of different ploidy showed that its polymorphism was greatly narrowed during allopolyploidization. Amplification and Cloning of TdRGA-7Ba from Italy 363 Using the primer pair PM3b-1880F and Pm3b-3040R, a band of approximately 750 bp was amplified by PCR from the cDNA of Italy 363. The fragment was inserted into the PGEM-T cloning vector, and twenty clones were subsequently sequenced. A homology search was carried out for these sequences using the nucleotide BLAST search available from NCBI. One sequence was found to have >90% sequence similarity with Pm3-like genes. In this paper, we focused only on this sequence and named it TdRGA-7Ba. We obtained the full-length sequence of TdRGA-7B using a combination of 5'-RACE and 3'-RACE (Figure 1a). The TdRGA-7Ba ORF is 4014 bp long, has a GC content of 46%, and encodes a protein of 1337 amino acids. Compared with the cDNA sequence, the TdRGA-7Ba gene consists of 3946 bp and 68 bp exons and a 206 bp intron from the start to the stop codon, plus a 26 bp 5' UTR and a 370 bp 3' UTR. At 27 bp after the position of the stop codon, there is a 103 bp intron in the 3' UTR. BLAST analysis revealed that the amino acid sequence of TdRGA-7Ba had high similarity with other NBS-LRR proteins: it shared 44.7% identity with the wheat powdery mildew resistance protein PM3B (AAQ96158), 16.0% identity with the rice bacterial blight resistance protein XA1 (BAA25068), and 15.5% homology with the Arabidopsis Pseudomonas syringae resistance protein RPM1 (NP187360). Analysis with the protein prediction websites InterProScan (http://www.ebi.ac.uk/Tools/InterProScan) and Pfam (http://pfam.sanger.ac.uk/search) revealed that TdRGA-7Ba contained the full NBS domain (kinase 1a, kinase 2, and kinase 3 motifs) in its central part, and 18 LRR repeats in the C-terminal part. Analysis with the COILS software program (http://www.ch.embnet.org/software/COILS_form.html) revealed a coiled-coil domain at the N-terminus. Therefore, TdRGA-7Ba is a CC-NBS-LRR gene (Figure 1b).
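The figures reported above are internally consistent; as a minimal illustration (plain Python; the function is generic, and only the 4014 bp and 46% values come from the text), the codon arithmetic can be checked as follows.

```python
def orf_stats(orf: str) -> dict:
    """Length, GC content, and expected protein length for an ORF given
    5'->3' and including the stop codon, as reported for TdRGA-7Ba."""
    orf = orf.upper()
    gc = orf.count("G") + orf.count("C")
    return {
        "length_bp": len(orf),
        "gc_percent": round(100 * gc / len(orf), 1),
        "protein_aa": len(orf) // 3 - 1,  # all codons minus the stop codon
    }

# 4014 bp / 3 = 1338 codons -> 1337 amino acids plus a stop codon,
# matching the protein length given in the abstract.
assert 4014 // 3 - 1 == 1337
```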
Expression Analysis of TdRGA-7Ba To detect the expression pattern of TdRGA-7Ba in Italy 363, the primer pair R-EX-F and R-EX-R was designed to amplify gene products from Italy 363 cDNA, and the PCR products were then cloned into the TA vector and sequenced. Sequence analysis showed that the products had only a single sequence type, identical to the original sequence, suggesting that the primer pair could be used to test the expression levels of TdRGA-7Ba. Transcription levels showed that TdRGA-7Ba was present in all tested organs (root, leaf, culm, and spikelet) but was expressed at a higher level in the leaf and at lower levels in the root and spikelet (Figure 2a). One-week-old seedlings of Italy 363 were inoculated with the powdery mildew isolate E18 and harvested for RNA isolation at 0, 6, 12, 16, 24, 48, 96 h and 7 days later. Expression tests showed no difference in expression between the samples (Figure 2b), indicating that TdRGA-7Ba was not induced by the powdery mildew isolate E18. In this respect it resembles most NBS-LRR genes, which are not induced by treatment with defense signals [7]. Chromosomal Assignment of the TdRGA-7B Gene To determine the chromosomal location of the TdRGA-7B sequence, we amplified the specific band from genomic DNA of the diploid wheats and Aegilops speltoides using the primer pair R-EX-F and R-EX-R. Only the B genome source, Aegilops speltoides, yielded the band. All the nulli-tetrasomic (NT) lines of Chinese Spring yielded the band except the nulli-7B lines. The result showed that TdRGA-7B is located on chromosome 7B (Figure 3a). Alternative Splicing of TdRGA-7Ba When we amplified the full length of TdRGA-7Ba from cDNA of Italy 363 using the primers RLF and RLR, we found several short bands on the agarose gel (Figure 4a). All of the approximately 10 bands were cloned and sequenced, and fragments of six different lengths represented different splice variants of the TdRGA-7Ba gene. The longest fragment was 4137 bp and the shortest 2179 bp (from the start codon to 226 bp after the stop codon, including the intron in the 3' UTR) (Figure 4b). All the fragments included the CC and NBS domains, but the LRR domain varied from 0 to 18 repeats. Genetic Variation and Phylogenetic Analysis of TdRGA-7B To analyze the variation of the TdRGA-7B gene, PCR was performed on 21 accessions of tetraploid wheat from all eight species with the AABB genome (Table 1) to detect TdRGA-7B gene sequences using the primers R-EX-F and R-EX-R. Comparison of the genomic sequences revealed two types of variation of the TdRGA-7B gene in tetraploid wheat. In 14 accessions, the TdRGA-7B gene sequences were similar to that in Italy 363; this was named the TdRGA-7Ba type. In the other 6 accessions, there were 21 bp and 4 bp deletions at positions +1670 and +1892, respectively. The 4 bp deletion causes a frameshift that introduces a premature termination at position +1957 within the transcript, turning TdRGA-7B into a pseudogene; this was named the TdRGA-7Bb type. The 6 materials with TdRGA-7Bb belonged to four species: T. dicoccoides, T. turanicum, T. durum, T. turanicum (Figure 5). To further study the origin and evolution of the TdRGA-7B gene and its distribution in wheat species of different ploidy, we examined TdRGA-7B variation in 20 accessions of the B genome donor Ae. speltoides and 18 accessions of hexaploid wheat. In Ae. speltoides, we found both TdRGA-7Ba (16 accessions) and TdRGA-7Bb (4 accessions), but in 18 accessions representing all 6 species with the AABBDD genome of hexaploid wheat, all the samples were of the TdRGA-7Bb type. A phylogenetic tree was constructed using MEGA5.1 software.
The tree was constructed with a black oat sequence (FJ829744) as the outgroup, this being the sequence most similar to TdRGA-7B found by BLAST searching in NCBI. The phylogenetic tree indicated that TdRGA-7Ba genotypes were more divergent, while all TdRGA-7Bb genotypes were highly similar (Figure 6). Therefore, the TdRGA-7Bb type might have emerged relatively late in the process of evolution; in other words, it came from a deletion event in the TdRGA-7Ba gene. Development of an SSR Molecular Marker for TdRGA-7B In the NBS domain of TdRGA-7B, a trinucleotide AAG repeat varied from 8 to 16 copies among our materials. It can be used as a gene-derived Simple Sequence Repeat (SSR) marker to track this gene. A pair of SSR primers, R7B-SSR-F and R7B-SSR-R, was designed based on the sequences flanking the AAG repeats with the online software Primer 3.0. It amplified a band of 422 to 446 bp in different wheat materials (Figure 3b). Discussion In this paper, we have cloned an NBS-LRR gene, TdRGA-7B, from tetraploid wheat Italy 363; it is located on chromosome 7B. Many R genes have been assigned to chromosome 7B, such as the powdery mildew resistance genes Pm5 [27][28][29][30] and Pm47 [31], the yellow rust resistance genes Yr2 [32] and Yr6 [33], the stem rust resistance gene Sr17, and the leaf rust resistance gene Lr14 [34]. Chromosome 7B of wheat may therefore contain a region enriched in resistance genes, and TdRGA-7B may be one of these resistance genes or located near them. We developed an SSR marker based on the TdRGA-7B sequence, which can be used as a co-dominant marker to track TdRGA-7B itself and the genes near it. In eukaryotes, alternative splicing (AS) contributes to the complexity and diversity of gene expression [35]. Alternative splicing has been investigated most comprehensively in humans and animals; microarray assays indicate that about 70%-80% of human genes undergo alternative splicing [36]. It may change protein domain organization, protein activity, and localization, and might influence interactions between protein subunits and post-transcriptional regulation. AS might also produce non-functional proteins [37]. NBS-LRR genes have been reported to undergo alternative splicing [14], and some of the different splicing products collaboratively play a role in the disease resistance process [15]. In our study, TdRGA-7Ba had six different splice variants, all of which included the complete CC and NBS domains. The functions of these splicing variants need further work to establish. Hexaploid wheat was formed only about 10,000 years ago from a natural hybridization of tetraploid wheat with the diploid goatgrass Aegilops tauschii. Newly formed allopolyploids are often characterized by limited genetic variation, called the "polyploidy bottleneck" [20]. However, R genes are expected to be variable in their ability to cope with rapidly evolving pathogens, and tetraploid wheat can be used as a gene pool for wheat. The main kinds of R genes are conserved in their NBS domains, which offers a way to isolate such sequences by PCR using degenerate primers designed from the conserved domains. Using this approach, R Gene Analogs (RGAs) have been isolated extensively, for example in soybean [38], lettuce [39], barley [40], coffee [41], sunflower [42], strawberry [43], ginger [44], and cucumber [45]. In wheat, an Mla homologue, TaMla1, was cloned from Triticum monococcum and shown to have a conserved function against powdery mildew [46].
Many cloned RGAs are either closely linked to known R gene loci or arranged in clusters similar to R genes; however, few studies have focused on their evolution. We tested TdRGA-7B in diploid, tetraploid, and hexaploid wheat. Both the TdRGA-7Ba and TdRGA-7Bb types were detected in Ae. speltoides, which shows that the differentiation between TdRGA-7Ba and TdRGA-7Bb occurred before the formation of tetraploid wheat, about 0.5 million years ago [47]. The phylogenetic tree indicated that the TdRGA-7Bb sequences assemble in one sub-branch, so we speculate that the formation of TdRGA-7Bb was the result of a single mutation. The fact that both types of TdRGA-7B occur in tetraploid wheat demonstrates that both types participated in the formation of tetraploid wheat; that is to say, at least two independent hybridization events happened at that time. However, only the TdRGA-7Bb type is found in hexaploid wheat (AABBDD). This suggests that either only TdRGA-7Bb participated in the formation of hexaploid wheat, or TdRGA-7Ba also participated in the process and was subsequently lost. We speculate on two reasons why only TdRGA-7Bb exists in hexaploid wheat. First, hybrid incompatibility: one form of hybrid incompatibility is hybrid necrosis, and the resistance response to pathogens involves hypersensitive necrosis. Resistance genes can induce necrosis in the hybrid as a by-product; because resistance genes recognize pathogen effectors, they are more likely than other proteins to block distant hybridization by recognizing foreign proteins [48]. An NBS-LRR-type disease resistance (R) gene was necessary and sufficient for induction of hybrid necrosis in intraspecific crosses of Arabidopsis thaliana [49]. The functional TdRGA-7Ba might have blocked the hybridization of tetraploid wheat and goatgrass and thus been excluded from hexaploid wheat. Second, fitness costs: constitutively expressed NBS-LRR genes are not particularly useful in an environment without the corresponding pathogen, and hexaploid wheat often has functional gene redundancy because of its tripled genomes. Such genes tend to be lost or become pseudogenes to avoid a fitness cost to the host species [50]. TdRGA-7Ba might have lost its value in hexaploid wheat and thus been evolutionarily excluded. Plant Materials T. durum Italy 363 was kindly provided by Dr. Fangpu Han, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences (Beijing, China). The wheat line Chancellor and the powdery mildew isolate E18 were kindly supplied by Dr. Xiayu Duan, Institute of Plant Protection, Chinese Academy of Agricultural Sciences (Beijing, China). All other diploid, tetraploid, and hexaploid wheats used are listed in Table 1. DNA, RNA Extraction and cDNA Synthesis All wheat seedlings were grown in a growth chamber under a 16 h/8 h, 20 °C/18 °C day/night cycle with 70% relative humidity. One-week-old seedlings from all plant materials were harvested, frozen immediately in liquid nitrogen, and stored at −80 °C. Genomic DNA was extracted by the CTAB method [51]. Total RNA was extracted from leaves and other organs with TRIZOL (Invitrogen, Carlsbad, CA, USA) following the manufacturer's instructions. First-strand cDNA was reverse transcribed using oligo(dT)18 primers (TaKaRa, Shiga, Japan) and reverse transcriptase (M-MLV, Promega, Madison, WI, USA) at 42 °C for 1.5 h.
Control reactions included a positive RT-PCR control with tubulin-specific primers (tubF-345: 5'-TGAGGACTGGTGCTTACCGC-3' and tubR-852: 5'-GCACCATCAAACCTCAGGGA-3', designed according to the Triticum aestivum alpha-tubulin cDNA (TAU76558)) used to amplify the cDNA, and a negative control with the tubulin primers used to amplify the RNA to test for genomic DNA contamination. The full-length cDNA sequence of TdRGA-7Ba was obtained using the SMARTer™ RACE cDNA Amplification Kit (CLONTECH, Palo Alto, CA, USA), following the product user manual. According to the sequence of the PCR fragment, gene-specific primers (GSPs) were designed. 5'-RACE PCR (Rapid Amplification of cDNA Ends) was performed with the general primer UPM and the 5' GSP (5'-TCACTGAGATCCTTCTTGTTTCCAAGG-3'), and 3'-RACE with the general primer UPM and the 3' GSP (5'-GTTTATGAGCAATTGTGGAAAGTTGGTAG-3'). The Expression Pattern of TdRGA-7Ba The expression pattern of TdRGA-7Ba was tested by semi-quantitative PCR. The gene-specific primer pair R-EX-F (5'-ATGTGGATACTCTGGCTC-3') and R-EX-R (5'-AGCTGGAGAGCTGTTATCC-3') was designed according to the coding region of TdRGA-7Ba. PCR products of tubulin (TAU76558) in wheat were used as the internal control. The volume of the PCR reaction was 50 μL with 5 μL of first-strand cDNA as the template. Reactions were performed with EX Taq Polymerase (TaKaRa), using the following profile: 94 °C for 5 min; 27-32 cycles of 30 s at 94 °C, 30 s at 58 °C, and 1.5 min at 72 °C; and a final extension at 72 °C for 10 min. The tubulin PCR assay was performed with 27 cycles and the TdRGA-7Ba PCR assay with 32 cycles. The PCR products were separated on 1.0% (w/v) agarose gels. Chromosomal Assignment of the TdRGA-7B Gene Sequence The TdRGA-7B-specific PCR band was used to test for the existence of TdRGA-7B in Chinese Spring, T. urartu Thum, Aegilops speltoides (Tausch) Gren., and a series of Chinese Spring nulli-tetrasomic (CS-NT) lines. The PCR assay was performed using the primers R-EX-F and R-EX-R, and the products were separated on 1.0% (w/v) agarose gels. Phylogenetic Analysis of TdRGA-7B The TdRGA-7B fragment was amplified with the primer pair R-EX-F and R-EX-R from every material described in Table 1. PCR was performed three separate times for every material, and at least three clones were sequenced from every PCR product to reduce experimental error. Phylogenetic trees were constructed from CLUSTALW alignments of the genomic DNA sequences of TdRGA-7B using the Maximum-Likelihood method available in the MEGA5.1 software program (http://www.megasoftware.net/). Confidence values for nodes were calculated using 1000 bootstraps. Development of the SSR Molecular Marker for TdRGA-7B In the NBS domain of TdRGA-7B, a trinucleotide AAG repeat varied from 8 to 16 copies among our materials. A pair of SSR primers, R7B-SSR-F (5'-GAAAGACCACGTTAGCAC-3') and R7B-SSR-R (5'-TTCCCAAACATCATCCAG-3'), was designed based on the sequences flanking the AAG repeat using the online software Primer 3.0. The volume of the PCR reaction was 20 μL with 2 μL of DNA as template. Reactions were performed with EX Taq Polymerase (TaKaRa), using the following profile: 94 °C for 5 min; 40 cycles of 30 s at 94 °C, 30 s at 58 °C, and 30 s at 72 °C; and a final extension at 72 °C for 10 min. The products were separated on 8% polyacrylamide gels. Conclusions In the current work, we cloned an NBS-LRR gene, TdRGA-7Ba, from the tetraploid wheat cultivar Italy 363.
Analysis of the sequence of TdRGA-7B from wheats of different ploidy showed that there were two types of TdRGA-7B in diploid and tetraploid wheat, but only the mutated TdRGA-7Bb type existed in all six species of hexaploid wheats (AABBDD). TdRGA-7Bb is a mutant of TdRGA-7Ba and was formed in Ae. speltoides. Both types of TdRGA-7B participated in the formation of tetraploid wheat, but only the TdRGA-7Bb form was retained in hexaploid wheat. The polymorphism of this gene was greatly diminished during allopolyploidization.
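The AAG-repeat marker described in the Methods lends itself to a simple in silico screen. The sketch below is illustrative only: the two flanks are the published primer R7B-SSR-F and the reverse complement of R7B-SSR-R, but the toy alleles (with no spacer sequence between primer sites and repeat) are placeholders rather than the real TdRGA-7B locus.

```python
import re

def longest_aag_run(seq: str) -> int:
    """Largest number of consecutive AAG units found anywhere in seq."""
    runs = re.findall(r"(?:AAG)+", seq.upper())
    return max((len(run) // 3 for run in runs), default=0)

fwd = "GAAAGACCACGTTAGCAC"     # R7B-SSR-F
rev_rc = "CTGGATGATGTTTGGGAA"  # reverse complement of R7B-SSR-R
allele_8 = fwd + "AAG" * 8 + rev_rc
allele_16 = fwd + "AAG" * 16 + rev_rc

print(longest_aag_run(allele_8), longest_aag_run(allele_16))  # 8 16
# Copy-number differences shift the amplicon by 3 bp per AAG unit,
# consistent with the reported 422-446 bp range for 8-16 repeats.
print(len(allele_16) - len(allele_8))  # 24
```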
v3-fos-license
2022-03-26T15:17:34.345Z
2022-03-24T00:00:00.000
247686489
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "7128166bfb57e1fd79899caa7262e6c66c651461", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42831", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "sha1": "0a8d903c4ecd9b86335c7cd6c88668ca498302fa", "year": 2022 }
pes2o/s2orc
First fossil-leaf floras from Brunei Darussalam show dipterocarp dominance in Borneo by the Pliocene The Malay Archipelago is one of the most biodiverse regions on Earth, but it suffers high extinction risks due to severe anthropogenic pressures. Paleobotanical knowledge provides baselines for the conservation of living analogs and improved understanding of vegetation, biogeography, and paleoenvironments through time. The Malesian bioregion is well studied palynologically, but there have been very few investigations of Cenozoic paleobotany (plant macrofossils) in a century or more. We report the first paleobotanical survey of Brunei Darussalam, a sultanate on the north coast of Borneo that still preserves the majority of its extraordinarily diverse, old-growth tropical rainforests. We discovered abundant compression floras dominated by angiosperm leaves at two sites of probable Pliocene age: Berakas Beach, in the Liang Formation, and Kampong Lugu, in an undescribed stratigraphic unit. Both sites also yielded rich palynofloral assemblages from the macrofossil-bearing beds, indicating lowland fern-dominated swamp (Berakas Beach) and mangrove swamp (Kampong Lugu) depositional environments. Fern spores from at least nine families dominate both palynological assemblages, along with abundant fungal and freshwater algal remains, rare marine microplankton, at least four mangrove genera, and a diverse rainforest tree and liana contribution (at least 19 families) with scarce pollen of Dipterocarpaceae, today’s dominant regional life form. Compressed leaves and rare reproductive material represent influx to the depocenters from the adjacent coastal rainforests. Although only about 40% of specimens preserve informative details, we can distinguish 23 leaf and two reproductive morphotypes among the two sites. Dipterocarps are by far the most abundant group in both compression assemblages, providing rare, localized evidence for dipterocarp-dominated lowland rainforests in the Malay Archipelago before the Pleistocene. The dipterocarp fossils include winged Shorea fruits, at least two species of plicate Dipterocarpus leaves, and very common Dryobalanops leaves. We attribute additional leaf taxa to Rhamnaceae (Ziziphus), Melastomataceae, and Araceae (Rhaphidophora), all rare or new fossil records for the region. The dipterocarp leaf dominance contrasts sharply with the family’s <1% representation in the palynofloras from the same strata. This result directly demonstrates that dipterocarp pollen is prone to strong taphonomic filtering and underscores the importance of macrofossils for quantifying the timing of the dipterocarps’ rise to dominance in the region. Our work shows that complex coastal rainforests dominated by dipterocarps, adjacent to swamps and mangroves and otherwise similar to modern ecosystems, have existed in Borneo for at least 4–5 million years. Our findings add historical impetus for the conservation of these gravely imperiled and extremely biodiverse ecosystems. The celebrated history of biological exploration and research in Malesia began in the 17th century with the expeditions of Georg Eberhard Rumphius (de Wit, 1952). Later, Alfred Russel Wallace's seminal observations in the region formed the basis of the field of biogeography and his independent discovery of evolution by natural selection (Darwin & Wallace, 1858; Wallace, 1860; Wallace, 1869).
Wallace (1869) posed an important, apparently overlooked conjecture by observing (italics ours) "the existence of extensive coal-beds in Borneo and Sumatra, of such recent origin that the leaves which abound in their shales are scarcely distinguishable from those of the forests which now cover the country." Nearly all recent paleobotanical work in Malesia comes from the Permian of Sumatra (Indonesia; van Waveren et al., 2018; van Waveren et al., 2021) and the Late Triassic of Bintan Island (Indonesia; Wade-Murphy & van Konijnenburg-van Cittert, 2008). Research on Cenozoic regional floras falls into three preservational domains: compression floras dominated by leaves, fossil woods, and pollen. Work on compression floras largely dates to the 19th and early 20th centuries, starting even before Wallace's time, and covers material from the Paleogene of Sumatra (Indonesia; Heer, 1874; Heer, 1881) and South Kalimantan (Indonesia; Geyler, 1875; see also von Ettingshausen, 1883b); the Neogene of Sumatra (Kräusel, 1929a), Java (Indonesia; Göppert, 1854; von Ettingshausen, 1883a; Crié, 1888), and Labuan Island (offshore Malaysian Borneo; Geyler, 1887); and a few other areas (see summaries in Kräusel, 1929b; Bande & Prakash, 1986; van Gorsel, 2020). We are not aware of any significant revisions of these important early reports (van Konijnenburg-van Cittert, van Waveren & Jonkers, 2004). As for comparable works on compression floras from the discovery era elsewhere in the world (see Dilcher, 1971; Hill, 1982), we must assume that many of the historical identifications are inaccurate, pending restudy of the type collections (van Konijnenburg-van Cittert, van Waveren & Jonkers, 2004). The body of work focused on Malesian fossil woods is more botanically informative than for compressions and includes Neogene records of many plant families that are extant in the region. The wood literature encompasses historical to comparatively recent studies of apparently ex-situ specimens from the Neogene of Sumatra, Borneo, and Java (Kräusel, 1922; Kräusel, 1926; den Berger, 1923; den Berger, 1927; Schweitzer, 1958; Kramer, 1974a; Kramer, 1974b; Mandang & Kagemori, 2004; see also Ashton, 1982; Wheeler, 2011). From the evidence at hand, there is a consensus that dipterocarps became dominant in everwet rainforests of the Malay Archipelago after about 20 Ma (Ashton, 1982; Morley, 2000; Heinicke et al., 2012). However, there has been no comparative use of compression floras in assessing past dipterocarp abundance. Compression floras often provide taxonomic resolution below the family level, and unbiased collections of fossil leaves are widely used in paleoecology to evaluate diversity and relative abundance at a far more local scale than is possible from pollen or ex-situ woods (e.g., Chaney & Sanborn, 1933; Burnham, 1994b; Wing & DiMichele, 1995; Wilf, 2000). Significantly, relative leaf area and leaf counts in modern litter samples correlate directly with relative stem basal area in source forests (Burnham, Wing & Parker, 1992; Burnham, 1994a; Burnham, 1997). Thus, leaf and pollen data from the same strata, when both are based on unbiased ("quantitative") collections, can be compared directly, as we attempt here. Most outcrops visited had abundant but degraded or hashy compressed plant remains, primarily of twigs and small leaf fragments with little potential for identification, as noted in various geological studies (Tate, 1976; Kocsis et al., 2018).
We found and excavated macrofossils suitable for larger-scale collection at two localities, Berakas Beach and Kampong Lugu. Berakas Beach locality The natural fossiliferous outcrop at Berakas Beach is located in the gullies and creeks that cut the sea cliffs running along the coast in the Berakas Forest Reserve, next to the Muara-Tutong highway (Figs. 1, 2). The outcrop exposes rocks belonging to the Berakas Member of the Liang Formation (Sandal, 1996) that top up the core of the Berakas syncline. The age of the Liang Formation has been proposed as Pliocene and the youngest beds possibly Pleistocene, based on the late Miocene age of the underlying Seria Formation and the overlying Pleistocene terraces (Liechti et al., 1960; Wilford, 1961); however, some reports stretch the lower age limit to the latest Miocene (Sandal, 1996). The Liang Formation overlies late Miocene, dominantly marine successions that crop out in the coastal areas of northern Borneo with locally varying depositional settings (Liechti et al., 1960; Sandal, 1996; Wannier et al., 2011). The sediments included in the Berakas syncline were deposited in a protected embayment and partly influenced by tidal processes; however, the younger Liang Formation strata are dominated by deposits of meandering rivers that filled up the bay and cut through the tidal and distributary sediments (Wannier et al., 2011). At the base of these river deposits, conglomeratic channel-lag beds containing fossil wood (not studied here) are well exposed due to recent weathering and erosion that formed gullies and small canyons in the area (Fig. 2). Otherwise, the lithology is dominated by fine- to medium-grained sandstone that is often cross-bedded, with intercalated claystone beds of high organic content containing leaf compressions. Each sedimentary unit yielded amber fragments of dipterocarp origin; pollen from these outcrops, at a somewhat higher stratigraphic position than the pollen samples studied here (marked in Fig. 2), was recently analyzed as part of a separate study (Roslim et al., 2021: their sample 16). The fluvial layers in the investigated Berakas outcrops dip northwest at 5-30°. The ca. 13 m of logged strata contain compressed leaves and rare reproductive plant fossils at their clay-rich base (Fig. 2). Plant fossils were sampled from the locations shown in Fig. 2 as logs 1L and 1R, collectively as locality PW1501 (N 4.99404°, E 114.92297°), all from a single layer exposed on opposite banks of a small gully. About 30 m to the northeast (Fig. 2: log 2R) sits fossil locality PW1502 (N 4.99423°, E 114.92316°), where a similar clay-rich succession crops out with variable, sandier intercalations. Most of the section has sedimentary structures that point to deposition in a river system characterized by episodic increases in water flow. The presence of tree trunks, often fully embedded into clay deposits, indicates elevated bed and suspended loads. The sandier portions with cross stratifications and abundant reactivation surfaces point to calmer conditions that could have alternated between anastomosing and meandering fluvial systems. This variety of environmental conditions is plausible, considering the general likelihood of significant runoff events in the wet tropics. Possibly, the fossil-leaf-rich clay layers accumulated as part of fine-grained crevasse splay deposits.
Several clay samples were taken, and one was processed from each of the two macrofossil localities for palynological study of age, paleoenvironment, and floral composition (see Palynology). Kampong Lugu locality The Kampong Lugu fossil locality is situated southwest of Jerudong on the eastern flank of the Belait syncline (e.g., Sandal, 1996; also the western flank of the Jerudong anticline), and the fossiliferous strata are located in a new lithologic unit that we discovered, exposed mainly due to excavation (Figs. 1, 3). At the site (Fig. 3), the late Miocene marine sediments of the Miri Formation are exposed and dip northwest at 30° (e.g., Wilford, 1961). The age of the Miocene sediment series ranges from 10-12 Ma (east-southeast of the site; Back et al., 2005) to 6-8 Ma (north-northeast of the site, near Tutong; Kocsis et al., 2018; Roslim et al., 2020). The new fossiliferous unit, a 7-7.5 m thick claystone, onlaps the Miocene marine series horizontally, showing a sharp, 30° angular unconformity (Fig. 3) that presumably represents a significant hiatus in the depositional system. A much younger, possibly Plio-Pleistocene age of the fossiliferous claystone is likely on that basis alone. The claystone is grey and sometimes light brown, relatively uniform but with several alternations and intercalations of sandy lenses toward the lower part of the section. The claystone is rich in leaf compressions at several levels, although no reproductive structures were found. The entire exposure was sampled opportunistically as a single fossil locality, PW1503 (N 4.87582°, E 114.80229°), along with two pollen samples (see Palynology). Bioturbation is very rare in the fossiliferous unit. The presence of thin, sandy layers could represent occasional, low-energy fluvial input, but there are no sedimentary structures indicating major fluvial forms such as channels, meanders, and oxbow lakes. Palynology From the leaf-bearing horizons at each site, we took several pollen samples from the freshest rock available, of which two were processed per site. At Berakas Beach, one sample each was processed from fossil localities PW1501 (Fig. 2: sample S6) and PW1502 (Fig. 2: sample S8), and at Kampong Lugu locality PW1503, samples S1 and S2 came from the respective positions shown in Fig. 3. The samples were processed using standard palynological techniques. Approximately 20 g of a crushed, washed, and dried sample were first reacted with 20% hydrochloric acid to dissolve and disaggregate the carbonates. Once all chemical reactions had ceased, the sample was neutralized with water, then reacted with hydrofluoric acid (40%) to dissolve and disaggregate the silicates. The sample was then sieved with a 10 µm mesh nylon sieve, using water to neutralize. Any fluoride precipitates were removed by warming the residue in 20% hydrochloric acid, then re-sieving the samples with water to neutralize. The now-concentrated organic fraction was examined under the microscope to assess the need for oxidation using concentrated nitric acid or for mechanical separation using ultrasonic vibration. Representatives of the oxidized and unoxidized organic fractions were mounted onto cover slips, and these were glued to glass slides using Norland Optical Adhesive No. 63. The palynology samples were logged quantitatively (Appendix 1). Palynomorph recovery in all samples proved very high, and it was possible to make counts of 300 specimens for each, after which the slides were scanned for additional taxa logged as presence-absence (Appendix 1).
The remaining cover slips were observed for other significant taxa. All palynomorph groups were recorded, including spores, pollen, freshwater algae, fungal bodies, and marine microplankton. A semi-quantitative assessment was also made of the kerogen, not intended as a detailed kerogen study but rather a determination of the main kerogen types and their derivation. Corel Draw (Corel, Ottawa, ON, Canada) was used to compose an illustration of light micrographs for representative palynomorphs (Fig. 4). Macrofossils The fossiliferous outcrops were saturated with water and often overgrown, making the standard bench-quarrying techniques and enormous sample sizes of dryland paleobotany impossible. Quarrying generally was shallow and laterally extended to prioritize the driest or least weathered blocks, split using a rock hammer or pocket knife. All potentially identifiable macrofossils were collected and later lab-tallied (Table 1; Appendix 2) to ensure an unbiased sample, a first, to our knowledge, for a Cenozoic paleobotanical collection in the Malesian region. Unidentifiable material often appeared as hash or other tiny fragments ("Un" in Appendix 2). Macrofossils were trimmed, usually with a pocket knife due to the soft, wet matrix, provided a unique field number (Appendix 2) with letter suffixes indicating parts and counterparts, and field-photographed (when conditions permitted) to create an immediate visual record. The total macrofossil collection consists of 339 compression specimens (slabs, some containing multiple fossils; Appendix 2), 136 from Berakas Beach and 203 from Kampong Lugu (Table 1). Each specimen was field-wrapped in plastic film to slow drying and thus avoid catastrophic cracking, then in sanitary paper to increase protection and wick away moisture. The specimens were packed into suitcases for shipping, and these were stored in air-conditioned rooms at Universiti Brunei Darussalam to dry for several months. The repository of the fossils is the Herbarium of Universiti Brunei Darussalam (UBDH). The specimens were removed from suitcases and inspected on receipt of the loaned material at the Penn State Paleobotany Laboratory (loan export approved by Muzium Brunei 19 September 2015, reference JMB/654/86/17). Although there was mold growth on the wrapping paper, and some fragile specimens had minor breakage, nearly all fossil material was undamaged. Moldy material was removed and the collection left, still wrapped, for about two more months to ensure complete drying. The dried fossils were more friable and considerably lighter in color since the time of collection, but they were undamaged and stable for the most part. We completely unwrapped them and labeled the slabs with their field numbers (Appendix 2), using a Brady Lab Pal handheld label printer (Brady, Milwaukee, WI, USA). In-situ cuticle was rare and unusably degraded, although dispersed cuticle found in the palynological analyses holds potential for future study (see Results). Following vetting of the collection, a UBDH collection number was assigned to each field-numbered slab (Appendix 2). Both UBDH numbers and field numbers (Appendix 2) are referenced in our descriptions for faster correlation to field data, specimen labels, and photographs. Specimens were prepared and cleaned with standard air tools (Paleotools Micro Jack 2 and PaleoAro; Paleotools, Brigham, UT, USA) and precut Paleotools needles mounted on pin vises.
The collection was photographed with Nikon D90 and D850 DSLR cameras under polarized light (Nikon USA, Melville, NY, USA). The specimens were usually lit only from one side of the copy stand to increase surface contrast and relief capture. Reflected-light and epifluorescence microscopy were done on a Nikon SMZ-1500 stereoscope with a DS-Ri1 camera and Nikon NIS Elements v. 2 and 3 software. However, due to the generally limited preservation of detail in the fossils and low contrast with the matrix, DSLR photography almost always showed fine features better than microscope photography. Many specimens were photographed at multiple focal points to increase detail capture from uneven fossil surfaces, and the series of highest interest were then z-stacked in Adobe Photoshop CC (Align and Blend functions; Adobe, San Jose, CA, USA) to increase the depth of field. A few photos were laterally merged (stitched) from overlapping panels, using the Photomerge macro in the same application. The resulting image library was organized in parallel with the physical specimens using the Adobe Bridge CC visual browser, using the keyword functions to develop a set of searchable and filterable metadata attributes for each fossil. Standard leaf architectural terms (Ellis et al., 2009) were applied hierarchically as Bridge keywords (characters) and sub-keywords (character states). This simple, intuitive system allows rapid filtering and visual comparisons using the Bridge filter and search functions and a smooth workflow from Bridge to Adobe Camera Raw and Photoshop. Camera Raw was used for reversible whole-image adjustments of crop, alignment, contrast, grey levels, and color temperature to increase the visibility of features, then to export the images to Photoshop as smart layers with a standard sRGB color profile. Photoshop was used to compose the macrofossil plates at 1,200 dpi, using the Smart Layers feature for continued reversible editing of the layers in Camera Raw (launched directly from the Photoshop layers palette) and maintenance of full image resolution. One dipterocarp fruit fossil was selected for CT scanning to ascertain if any taxonomically informative features were hidden in the sediment. Scanning was done by Whitney Yetter and Timothy Stecko at the Penn State Center for Quantitative Imaging, using a General Electric v|tome|x L300 system at 20 μm resolution (see https://iee.psu.edu/labs/center-quantitative-imaging/generalelectric-micro-nano-ct-system). Scan data were post-processed using ImageJ Fiji (open access at https://imagej.net/software/fiji/downloads) and Avizo (Thermo Fisher Scientific, Hillsboro, OR, USA) software. The complete image library of the Brunei macrofossil collection is available at full resolution on Figshare Plus, DOI 10.25452/figshare.plus.16510584, providing open access to far more material than we can illustrate in this article. The image library includes all field and lab images of the macrofossils, converted only once to jpeg format with minimal compression from camera raw or tiff format; high-resolution (1,200 dpi) versions of the composed macrofossil plates; CT reconstruction animations; and an archive of the CT raw image stacks and acquisition data.
The leaf architecture data generated for each specimen were applied to segregate the material into distinctive morphotypes, each with a two-letter prefix (here, "BR"; Table 1; Appendix 2) and a designated exemplar specimen, as long practiced in angiosperm leaf paleobotany (e.g., Johnson et al., 1989; Ash et al., 1999; Iglesias et al., 2021). The morphotype system is informal by nature and widely used as a prelude to formal systematic work, with the exemplar specimens as potential future type specimens. We organized the morphotypes systematically to the degree possible and described them informally (see Morphotype Descriptions). We use "species," "morphotypes," and the "BR" morphotype codes interchangeably in the text for convenience and readability. The nature of the material limited the number of recognizable entities, and we assume that the number of species that contributed to the assemblage was much higher than the number of recognized morphotypes. We only assigned morphotypes based on distinctive preserved characters, and the majority of the specimens were unidentifiable (Table 1). Several morphotypes probably represent multiple biological species with similar features. Most leaves had similar physiognomy typical of tropical rainforest assemblages, such as elliptic and untoothed blades. No new nomenclature or type specimens are declared at this time due to the nature of the work, which intends to survey the whole flora so far collected and to lay a foundation for future paleobotanical research in a new area; additional specimens are likely to increase understanding of the fossil taxa and support formal treatments. In addition, many of the fossils could represent extant species or do not preserve diagnostic characters that differentiate them from living taxa. For readability, botanical authorities are listed only in the Morphotype Descriptions section. Nomenclature and authorities follow World Flora Online (http://www.worldfloraonline.org). The conservation status for various taxa discussed comes from the IUCN Red List (IUCN, 2021), Bartholomew et al. (2021), and other sources as cited. Reference material included a variety of physical and digital herbarium specimens and cleared leaves, in addition to the literature cited. Digital resources that were especially useful, among many, for comparative material from the region included the websites for the Naturalis Biodiversity Center, Leiden (L, https://bioportal.naturalis.nl), the Herbarium of Universiti Brunei Darussalam (UBDH, http://ubdherbarium.fos.ubd.edu.bn), Muséum National d'Histoire Naturelle, Paris (P, https://science.mnhn.fr/institution/mnhn/collection/p//list), the Herbarium of the Arnold Arboretum, Harvard University Herbaria (A, https://huh.harvard.edu), and JStor Global Plants (https://plants.jstor.org). For cleared leaves, we consulted slides and images from the National Cleared Leaf Collection (NCLC) held in the Division of Paleobotany of the National Museum of Natural History, Washington, D.C., including both the Jack A. Wolfe (NCLC-W) and Leo J. Hickey (NCLC-H) contributions. These collections are also online at http://clearedleavesdb.org (NCLC-W) and https://collections.peabody.yale.edu/pb/nclc (NCLC-H) and were recently consolidated digitally as part of an open access leaf-image dataset. The order of morphotype listing is 22 "dicots" (non-monocot angiosperms), three monocots, then one morphotype with unknown affinities.
Length and width measurements are estimated if 1 cm or less of the missing dimension was inferred by eye. If this was not possible, or if the largest dimension was found in a fragmented specimen, the largest measurable dimension for the species is given as a minimum length or width denoted as an inequality (i.e., "length > 5 cm"). Insect-feeding damage was recorded (Appendix 2) following the damage type (DT) system of Labandeira et al. (2007) and is included in the morphotype descriptions. Palynoflora Palynological results are summarized for the Berakas Beach and Kampong Lugu sites in Appendix 1, Table 2, and Fig. 4. Fern spores and fungal bodies dominate both assemblages. Mangrove pollen is common at Kampong Lugu but comparatively rare at Berakas Beach. Uncommon tree and liana pollen at both sites record deposition primarily from the adjacent lowland rainforests, which were the sources of the abundant leaves that were transported into the depocenters and fossilized, and to some extent from more distant slope forests. Dipterocarp pollen was notably rare, even though the family dominated all the leaf assemblages (see Macroflora). Additional microfossil elements include freshwater and marine algae, dinocysts, and foraminifera. Age constraints Age-specific palynomorphs are rare in the assemblages. Berakas Beach A very high abundance assemblage of well-preserved palynomorphs was recorded from both Berakas Beach samples, which are very similar to each other (Appendix 1). The assemblage is dominated by terrestrially derived miospores and fungal bodies and includes rare marine microplankton and freshwater algae. Marine microplankton (<1% of the total sample) includes the marine alga Tasmanites sp., the dinocysts Operculodinium sp. and Spiniferites sp., and a foraminiferal test lining. The freshwater alga Botryococcus sp. is also present, its specimens structureless or nearly so. Fungal bodies occur at very high abundance. These include very common fungal hyphae, abundant and diverse fungal spores (mainly Inapertisporites and Mediaverrunites, with Alleppeysporonites, Brachysporosporites, Dicellaesporites, Diporisporites, Dyadosporites, Fusiformisporites, Multicellaesporites, Multicellites, and Pluricellaesporites), and common fungal fruiting bodies (mainly Phragmothyrites). Kerogenaceous material is predominantly of terrestrial origin, including abundant plant cuticle, common degraded vitrinite, relatively common structured inertinite and structured dark vitrinite, and rare plant tracheids. The cuticle occurs as fragments of ca. 50 to >300 micron size, some showing good cell structure and stomata with potential for future study through a dedicated maceration effort. Reworked or transported material includes one specimen each of the bisaccate gymnosperm pollen Pityosporites from Mesozoic or older sediments, Spinizonocostites echinatus (Maastrichtian-Eocene), and Cicatricosisporites (Cretaceous or younger), as well as taxodioid conifer pollen presumably transported from mainland Asia. Palynotaxa reported from Berakas Beach at a somewhat higher stratigraphic level are treated by Roslim et al. (2021). Overall, the Berakas Beach palynological assemblage is dominated by ferns and marsh- or swamp-derived fungi, along with freshwater algae, rare mangrove pollen, and low numbers of lowland forest and slope-forest pollen. The data indicate that the depositional environment was a lowland, fern-dominated swamp with some restricted marine influence, into which occasional palynomorphs (and abundant leaves) were deposited from the adjacent lowland tropical rainforest, with additional pollen input from more distant areas.
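The 300-specimen counts convert directly into the relative abundances quoted in this section. A minimal tallying sketch follows; the counts below are hypothetical placeholders for one sample, not values taken from Appendix 1.

```python
from collections import Counter

# Hypothetical 300-grain count for one palynology sample.
count = Counter({"fern spores": 152, "fungal bodies": 98,
                 "rainforest tree and liana pollen": 22,
                 "freshwater algae": 17, "mangrove pollen": 9,
                 "marine microplankton": 2})

total = sum(count.values())  # 300 by design of the counting protocol
for taxon, n in count.most_common():
    print(f"{taxon}: {n} ({100 * n / total:.1f}%)")
# e.g., marine microplankton: 2 (0.7%) -- i.e., <1% of the total count
```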
The data indicate that the depositional environment was a lowland, fern-dominated swamp with some restricted marine influence, into which occasional palynomorphs (and abundant leaves) were deposited from the adjacent lowland tropical rainforest, with additional pollen input from more distant areas.

Kampong Lugu

At Kampong Lugu, a very high abundance assemblage of well-preserved palynomorphs was found in both samples, which are very similar to each other (Appendix 1). Terrestrially derived miospores and fungal bodies dominate this assemblage, which also includes rare marine microplankton such as dinocysts (Impletosphaeridium, Operculodinium, and Spiniferites), the marine algae Leiosphaeridia and Tasmanites, and two foraminiferal test linings. Freshwater algae include common to very common Chomotriletes and abundant Botryococcus (typical of still-standing water). The specimens of Botryococcus, as at Berakas Beach, are structureless or almost so, a type of preservation suggesting stressed, possibly brackish environmental conditions (Guy-Ohlson, 1992). In summary, the Kampong Lugu palynological assemblage is dominated by ferns and mangroves, with marsh- and swamp-derived fungi, freshwater algae, and rare but diverse additions from the nearby tropical lowland rainforest and potentially more distant hill forest. The sample indicates that the depositional environment was a mangrove swamp with a restricted marine influence, into which occasional palynomorphs were deposited from the nearby lowland tropical rainforest (also the source of the abundant leaves) and from more distant hill forests.

Macroflora

The macrofloral collection from Berakas Beach and Kampong Lugu, combined, totaled 339 fossiliferous slabs, each containing one or more fossil leaves or other plant specimens. Of these, 136 slabs (40%) preserved material of 142 individual plant fossils that we sorted into 25 distinct morphotypes (Table 1; Appendix 2). Fossils on the remaining 60% of slabs lacked distinctive features and were categorized as unidentifiable ("Un"; Appendix 2). Preservation was significantly better at Kampong Lugu, where a majority of slabs had identifiable material, compared with less than a fourth at Berakas Beach (Table 1). However, despite the much smaller sample size of identifiable material at Berakas Beach, morphotype diversity there was a third higher, including more unique morphotypes than at Kampong Lugu and the only reproductive material in the collections (Table 1). Nearly all specimens identified to morphotype are "dicot" (non-monocot angiosperm) leaves, with rare reproductive structures and monocot leaf fragments. Even though the total sample size was limited by preservation, it is clear (Table 1) that dipterocarp remains overwhelmingly dominate both assemblages by relative abundance, comprising 79% of total identified specimens (42% at Berakas Beach and 90% at Kampong Lugu); the four dipterocarp morphotypes also occupy the four highest-ranked abundances overall. Dipterocarps also rank highest in observed diversity, with evidence for at least four species in three genera (Dipterocarpus, Dryobalanops, and Shorea). Dryobalanops leaves were by far the most abundant taxon in the whole collection, especially at Kampong Lugu, where they comprised 78% of specimens (Table 1). Several other morphotypes (Table 1) and many unidentified leaves also appear to represent dipterocarps, although we did not assign them to the group.
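The census percentages above are straightforward tallies. Purely as an illustration of the arithmetic (the per-morphotype counts below are hypothetical placeholders, not the values in Table 1; only the slab totals of 339 and 136 are taken from the text):

```python
# Illustrative relative-abundance tally; per-morphotype counts are hypothetical.
identified = {
    "BR01 Dipterocarpus": 7,
    "BR02 Dipterocarpus": 5,
    "BR03 Dryobalanops": 80,
    "BR04 Shorea": 10,
    "BR09 Ziziphus": 2,
    # ... remaining morphotypes would follow
}
total = sum(identified.values())
dipterocarps = [k for k in identified if k.startswith(("BR01", "BR02", "BR03", "BR04"))]
share = sum(identified[k] for k in dipterocarps) / total
print(f"dipterocarp share of identified specimens: {share:.0%}")

slabs_total, slabs_identifiable = 339, 136  # stated collection totals
print(f"identifiable slabs: {slabs_identifiable / slabs_total:.0%}")  # ~40%
```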
Other groups present include Melastomataceae, Rhamnaceae (Ziziphus), Araceae (Rhaphidophora), and probable Malvaceae, Myrtaceae, and Arecaceae. Notably, the dipterocarp-rich macrofloras contain not a single fern fossil and are strikingly different from the fern-dominated, dipterocarp-poor palynofloras derived from the same strata, reflecting significant differences in preservational filters and pathways. Several factors inhibit the preservation of tropical fern macrofossils, including low biomass, lack of dehiscence, and the low potential of epiphytic fern remains to reach depocenters before they decompose (Scheihing & Pfefferkorn, 1984; see Introduction regarding dipterocarp pollen preservation). Insect damage was rare (Appendix 2), presumably because of overall preservation quality, and included several types of external feeding (hole feeding and skeletonization) as well as possible galls and a possible mine (Appendix 2). No domatia were observed on fossil dipterocarp leaves (Guérin, 1906), probably due to preservation limitations.

Morphotype Descriptions

Distinguishing features. Morphotype BR01 (= Dipterocarpus sp. BR01) has a thick midvein with strong relief, a longitudinally well-folded blade, and plicate vernation, each of these features making marked impressions in the sediment (Fig. 5). Depending on the amount of compression, the plications preserve with a strongly corrugated texture (Fig. 5A) or, more commonly, as subtle to pronounced longitudinal bulges in the intercostal areas (Figs. 5B-5F). The petiole is stout (Fig. 5C). Secondary veins are robust, regular, and in up to at least 11 pairs; their course is eucamptodromous and unbranched, and their angle gradually becomes more acute apically. Tertiaries are numerous, thin, and opposite percurrent (Fig. 5F). The margin is entire and not visibly sinuate.

Remarks. Features of the fossils commonly found in Dipterocarpaceae (Ashton, 1964, 1982) include their elliptic blades with thick, high-relief midveins, prominent longitudinal folding, and thickened petioles; regular, unbranched secondaries; and dense, opposite percurrent tertiary veins. Combined with the well-preserved plications, these features place the fossils with high confidence in Dipterocarpaceae. The two living dipterocarp genera that often have plicate vernation are Dipterocarpus and Parashorea Kurz, which nests within Shorea Roxb. ex C.F. Gaertn. in molecular phylogenetic analyses (Heckenhauer et al., 2017; Ashton et al., 2021). Ashton (1982) described Parashorea species as having unthickened or barely thickened petioles, unlike the prominently swollen petioles in Dipterocarpus and the fossils. Although the fossils do not have sinuate margins, many living species of Dipterocarpus also lack this feature. From all the evidence, we consider the fossils to represent one or perhaps more species of Dipterocarpus. Specimens from Berakas Beach and Kampong Lugu have no discernible differences, although they may well have originated from different species with similar leaf morphology. See Remarks for morphotypes BR02 (Dipterocarpus sp. BR02), BR13, and BR18 for additional comparisons within the fossil assemblage. Dipterocarpus (Keruing) is a widespread genus of medium to large trees, with ca. 70 species ranging from Sri Lanka through India and Indochina and the Malay Archipelago to the Philippines; in Brunei, there are ca. 26 species, most occurring below ca. 900 m altitude, especially in habitats with high insolation such as riparian corridors, heath forests, and ridges (Ashton, 1964; Ashton, 2004; Coode et al., 1996).
Borneo is the center of diversity and endemism for the genus (Ashton, 1982).

Additional material. Four specimens from Kampong Lugu (Fig. 6D; Appendix 2).

Distinguishing features. Morphotype BR02 has very large leaves, along with brochidodromous, widely spaced major secondaries and a prominently sinuous (per Ashton, 1964) margin following the secondary loops, combined with plications preserved as longitudinal furrows in the intercostal areas (Figs. 6A, 6B). Tertiary veins are strongly opposite percurrent with convex course, prominent, and regular (Fig. 6B). The lamina and secondary veins (and not the adjacent sediment) are covered with minute pits inferred to be hair-base impressions (Fig. 6C).

Description. Midvein stout with high relief, blade preserving plications; length not measurable; width on one specimen inferred as >>22 cm, the largest in the collection (Fig. 6D); margin strongly sinuous following secondary loops. Primary venation pinnate; major secondaries simple brochidodromous with strong loops closely aligned with the marginal sinuations, spacing regular, wide (to ca. 2 cm), angle smoothly decreasing apically. Intercostal tertiary veins prominent, opposite percurrent, course convex, obtuse to midvein; vein angle consistent; epimedial tertiary veins perpendicular to midvein at departure, then parallel to intercostal tertiaries. Quaternary veins mixed percurrent, quinternary veins regular reticulate. Base not preserved, apex poorly preserved. Hair-base pits ubiquitous on the major veins and laminar surface of the presumably once-tomentose blade.

Remarks. The combination of a plicate, tomentose blade, a prominently sinuous margin that follows strong brochidodromous secondary loops, a thick and raised midvein, regular stout secondaries, and regular opposite percurrent tertiaries clearly points to affinity with Dipterocarpus, even in the absence of a preserved leaf base. Some specimens have very widely spaced secondary veins, indicating notably large leaf sizes based on scaling relationships (Sack et al., 2012). Even as a fragment, the exemplar specimen is already one of the largest fossils in the collection (Fig. 6A), and we estimate its original leaf area at ca. 16,000 mm² (large mesophyll) based on the vein-scaling method of Sack et al. (2012). Among living Dipterocarpus species in Brunei, the preserved features of the fossil, including size, brochidodromy, prominent percurrent tertiaries, and conspicuous marginal sinuations, are most similar to those of three species noted by Ashton (1964) for their very large leaves: D. confertus Slooten, D. elongatus Korth. (D. apterus Foxw.), and D. humeratus Slooten. The marginal curvature and strong sinuations observed in the fragmentary fossils seem incompatible with the elongate, comparatively straight-sided leaves and relatively shallow sinuations of D. elongatus. However, the fossils are very similar in their preserved architecture to the other two species listed. Of those, D. confertus has the more conspicuously hairy leaf surface, especially on the veins, corresponding to the numerous hair bases preserved in the fossils (Fig. 6C). Dipterocarpus confertus is a Near Threatened, Borneo-endemic species to 50 m tall, occurring in mixed dipterocarp forests below 800 m (Ashton, 1964; Ashton, 1982; Ashton, 2004).
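For readers unfamiliar with size terms such as "large mesophyll," binning an estimated blade area into leaf-size classes is simple arithmetic. The sketch below is illustrative only: the class boundaries follow the widely used Raunkiaer-Webb scheme (Webb, 1959), which is an assumption on our part rather than a method stated in this paper, and the input value is the ca. 16,000 mm² estimate quoted above.

```python
# Raunkiaer-Webb leaf-size classes (Webb, 1959); upper bounds in mm^2.
SIZE_CLASSES = [
    ("leptophyll", 25), ("nanophyll", 225), ("microphyll", 2_025),
    ("notophyll", 4_500), ("mesophyll", 18_225), ("macrophyll", 164_025),
]

def size_class(area_mm2: float) -> str:
    """Return the Raunkiaer-Webb size class for a blade area in mm^2."""
    for name, upper_bound in SIZE_CLASSES:
        if area_mm2 <= upper_bound:
            return name
    return "megaphyll"

# 16,000 mm^2 falls near the top of the 4,500-18,225 mm^2 mesophyll bin,
# hence the informal qualifier "large mesophyll."
print(size_class(16_000))  # -> mesophyll
```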
Morphotypes BR01 and BR02, both assigned to Dipterocarpus, are distinguished by the non-sinuate margin and mostly eucamptodromous secondaries in BR01, compared with the strongly sinuate margin, brochidodromous secondaries, and numerous surficial hair-base pits in BR02. The widely spaced secondaries and very large leaf size of some BR02 specimens further distinguish it from BR01.

Distinguishing features. Morphotype BR03 is an elliptic to ovate-lanceolate microphyll with a straight (cuneate) base, a prominent midvein with high relief (Figs. 7B, 7C), and a broad-acuminate apex (Fig. 7D). The blade is longitudinally folded (Figs. 7B, 7C), making a strong impression in the sediment along with the midvein. Major secondaries (Figs. 7E-7G) are numerous, very thin, unbranched, high-angled, and closely spaced, entering an intramarginal vein that originates near the base and runs barely inside the margin (Fig. 7C); the secondaries alternate with very thin intersecondaries that are more deflected than, but nearly as long as, the secondaries. Tertiaries are regular reticulate, in small, well-defined rectangular or other polygonal fields, the fields packed in ca. two rows per secondary-intersecondary pair. The fine vein mesh is often pushed through with tiny sediment plugs (Fig. 7G). The blade is often coalified; cracking in the coal produces artifactual patterns resembling venation, and the true venation impression is best seen where the coal has flaked off or is manually removed with a needle.

Description. Blade attachment marginal. Petiole length >9.3 mm, width to ca. 1.6 mm (n = 2); petiole not thickened at insertion and often preserved in a microstratigraphically offset position from the well-impressed, folded blade and thickened, high-relief midvein. Blade apparently coriaceous, based on the extensive coalification observed. Midvein prominent with strong relief and blade longitudinally folded, each feature impressing the sediment. Lamina length 4.0-7.7 cm (n = 6); width 0.7-5.0 cm (n = 44); L:W ratio 2.8:1 (n = 6); lamina shape elliptic or ovate-lanceolate, symmetrical. Margin unlobed and entire, thickened, and slightly revolute; basal margin not inrolled. Base angle acute; base shape straight with slight decurrence at insertion. Apex broad-acuminate. Primary venation pinnate. Major secondaries parallel, very thin, dense, >60 pairs, diverging from the primary at a high angle, uniformly spaced, course without branching, then looping tightly barely inside the margin to join a slightly irregular intramarginal vein that arises near the leaf base and is difficult to discern from the margin. Intersecondaries parallel to major secondaries, nearly as long as the secondaries, then reticulating toward the margin, course deflected by tertiaries, frequency usually one per intercostal area. Tertiary venation conspicuously regular reticulate, making small, densely packed rectangular or other polygonal fields of somewhat variable size, packed in ca. two rows between each secondary-intersecondary pair. Quaternary and quinternary veins indistinct, apparently reticulate. Sediment plugs often push through and slightly distort the appearance of the vein mesh. Elongate slot feeding was observed, oriented parallel to the secondary veins (DT8).

Remarks. Morphotype BR03 is by far the most common form at Kampong Lugu and in the whole collection (Table 1), also occurring as a single specimen at Berakas Beach.
The distinctive venation pattern makes the morphotype easily recognizable, even from small fragments. At Kampong Lugu, numerous additional fragments of the morphotype occur in the sediment, attesting to even higher dominance than we could reliably tabulate. Characters of living Dryobalanops species (Ashton, 1964) match many or all features of the fossils, including microphyll size, elliptic to lanceolate shape, prominent midveins, folded blades, broad-acuminate apices, dense and parallel major secondaries alternating with intersecondaries, intramarginal veins almost on the margin, and tertiary veins in small, regular fields. Although most Dryobalanops species are described as lacking intersecondary veins, which may be obscure in fresh or herbarium material, the intersecondaries are noted in older literature (van Slooten, 1932) and are easily visible in cleared-leaf specimens. When parallel secondary venation occurs in other dipterocarps, namely Cotylelobium Pierre and Hopea Roxb., it is considerably less dense and entirely different in appearance from Dryobalanops (Ashton, 2004) and the fossils. The general combination of thin, dense secondary venation and intramarginal veins occurs in several unrelated plant families (Hickey & Wolfe, 1975). Examples in the Brunei flora include Myrtaceae (e.g., Syzygium P. Browne ex Gaertn. spp.), Sapotaceae (Payena A. DC.), Ochnaceae (Ouratea Aubl.), Moraceae (Ficus L.), and Calophyllaceae (Calophyllum L.; Coode et al., 1996). However, those families and others with some comparable leaves (e.g., Anacardiaceae, Vochysiaceae) lack most or all key characters of the fossils, especially the high-rank reticulate tertiary mesh. For example, in Myrtaceae and Sapotaceae, the major secondaries are not nearly so densely spaced as in the fossils and living Dryobalanops and are less regular, the tertiaries are much less organized, and the intramarginal vein is located farther from, and is easily distinguished from, the margin (see also possible myrtaceous morphotypes BR07 and BR08). Other genera with species that are superficially similar to the fossils, such as Calophyllum and Ouratea spp., do not usually have a broad-acuminate apex, and their tertiaries are less organized and do not form a similar mesh (a leaf fragment similar to Calophyllum was found at Berakas Beach: Appendix 2). Dryobalanops (Kapur) is a Malesian genus of very tall to emergent (to 65 m tall), large-crowned trees with seven species; northern Borneo is the center of diversity and endemism (van Slooten, 1932; Meijer & Wood, 1964; Ashton, 1982). Four species are found in Brunei, all below ca. 800 m altitude but each in different habitats (Ashton, 1964; Coode et al., 1996): D. aromatica C.F. Gaertn., D. beccarii Dyer, D. lanceolata Burck, and D. rappa Becc. The distinctive, well-organized tertiary vein fields of the fossils are most comparable with those of D. aromatica (also in Sumatra, Peninsular Malaysia, and elsewhere in northern Borneo), as well as two species with ranges nearby: D. fusca Slooten (Sarawak and West Kalimantan) and D. oblongifolia Dyer (Sumatra, Kalimantan, Sarawak, and Peninsular Malaysia). Of those three species, D. aromatica and D. oblongifolia have very different leaf shapes (orbicular and oblong, respectively) from the fossils (elliptic to ovate-lanceolate), and the intramarginal vein of D. aromatica often arises from a pair of secondaries that diverge noticeably above the leaf base, unlike the near-basal divergence in the fossils (Fig. 7C).
Overall, the fossils' general features, including size range, petiole length, base and blade shapes, and details of the tertiary reticulation, are closest to D. fusca, a Critically Endangered, low-elevation kerangas species (van Slooten, 1932; Ashton, 1982; Ashton, 2004; Randi et al., 2019). However, the fossils lack, or did not preserve, the characteristic dense hairs found on the lower leaf surface of D. fusca.

Distinguishing features. The exemplar specimen (Fig. 8) is a winged fruit consisting of a nut with an ovoid body and two attached, apparently subequal, obovate wings (calyx lobes) visible on the surface, each with ca. nine parallel veins. A third wing of similar size and shape is preserved within the sediment, visible under CT scan (Fig. 8C). The wings extend basally, adpressed around the nut. The isolated wing fragments (Figs. 9A-9D) are large (to ca. 9 cm length, 1.4 cm width), obovate, with ca. 10 parallel veins joined by percurrent, variably oriented and curved cross veins. All wing apices, when preserved, are asymmetrically rounded.

Description. Nut ovoid, 9.3 by 6.8 mm, making a rounded impression in the sediment, with traces of the nut wall preserved. All internal material degraded and coalified; apiculus and styles not preserved. Two attached wings (calyx lobes) of the exemplar specimen visible at the surface, apparently subequal, one (at left in Fig. 8A) with its apical portion broken off, the other relatively complete. Wings adpressed to the nut body near the nut apex, following the remaining positive relief of the nut body basally (where still preserved) along one margin for more than 60% of the nut length, then broken off preservationally over most of the nut body (Fig. 8, white arrows). Wings obovate, length of free portion 5.8 cm, maximum width 0.9 cm at ca. 80% of the wing length, with asymmetrical, subrounded apices and ca. nine parallel veins, each arising separately from the base. Cross veins percurrent, with variable course, angle, and spacing. One wing preserved within the sediment directly underneath the broken surface wing, visible under CT (Fig. 8C), subequal to the surface wings in dimensions, shape, and venation, the base closely adpressed to the obverse face of the nut (Fig. 8C, inset). Isolated wings obovate, preserved length to 8.7 cm, width 1.0-1.4 cm (n = 3), gently tapered basally and apically. Base not preserved; apex acute, subrounded, and slightly asymmetrical. Venation well preserved in only one specimen (Figs. 9A, 9B). Parallel veins ten or more; two parallel veins run close to the margin, thinner than the medial parallel veins. Cross veins percurrent, closely spaced, with angle, spacing, and course variable. Higher-order venation reticulate.

Remarks. The configuration of the exemplar specimen is unique to Dipterocarpaceae, including obovate, subequal, parallel-veined wings attached laterally to, and clasping, an ovoid nut (e.g., Ashton, 1982).
[Figure 8 caption, in part: (A, B) The exemplar specimen, showing an ovoid nut with two clasping, obovate, apparently subequal fruit wings (calyx lobes); the wing at left is missing its apex (fragments distal to the wing are dark-stained matrix, not fossil), but the wing at right is relatively complete. (C) Rotational views of CT scans, showing an additional large wing (dark color) embedded in the sediment directly underneath the broken wing at the surface, subequal in size and shape to the more complete wing at the surface. Inset: initial scan that captured fragments of the embedded wing (dark) clasping the obverse surface of the nut well toward the base. DOI: 10.7717/peerj.12949/fig-8]

For the dispersed wing fragments, the large size, numerous parallel veins, variable-percurrent cross venation, and presence of visually distinct higher-order venation also distinguish them as dipterocarpaceous and separate them from a suite of other extant and fossil taxa with fruit wings derived from perianth lobes, as detailed previously with regard to other dipterocarp fruit fossils (Shi & Li, 2010; Feng et al., 2013; Shi, Jacques & Li, 2014). The cross veins and lack of conspicuous resin ducts also distinguish the isolated fossils from leaves of parallel-veined gymnosperms in the region, such as Agathis Salisb.; the high cross-vein variability, combined with the obovate shape, is not found in any monocot leaves to our knowledge. The exemplar specimen and the isolated wings are generally similar in having a large number of parallel veins (at least 9-10), which among the dipterocarps is typical only of Hopea and Shorea species (Shi & Li, 2010). Molecular analyses have resolved Hopea as a derived subclade of Shorea (e.g., Heckenhauer et al., 2017; Ashton et al., 2021). Both traditional genera have their wing bases adpressed to the nut body and no calyx tube, as in the exemplar specimen, and are usually five-winged. In most Hopea species, there are two extended, prominent, subequal outer calyx lobes (wings) and three reduced, non-aliform inner lobes that are mostly adpressed to the nut (and would not be preserved in these specimens). In contrast, Shorea most often has two small inner wings with very narrow bases and three larger outer wings with wider bases, all well extended beyond the nut body. The exemplar specimen was found in the field with only the apex of one wing visible. After mechanical preparation, the two subequal wings and attached nut were revealed at the surface (as in Fig. 8A), appearing from all visible cues to represent Hopea. However, because the isolated wings from the same site (Figs. 9A-9D) appeared to represent Shorea, we used CT scanning to test the idea that the exemplar specimen might also represent Shorea, which would be the case if additional wings were present. One large wing was recovered by CT scanning (Fig. 8C), with the same size and venation as the better-preserved wing at the surface. This fortuitous discovery eliminated the possibility of Hopea and validated the Shorea hypothesis, which requires only the further, likely presumption that the two smaller wings were lost to preservation or not detected in the CT scan. We considered the possibility that the broken surface wing (Fig. 8A, left), which appears small in CT scans (Fig. 8C), is a small inner wing; however, in surface view its preserved width (including at the base), its vein spacing, and its attachment to the nut are nearly the same as those of the more complete surface wing.
Shorea is favored over Hopea for the dispersed wings as well as the articulated exemplar, although the dispersed wings are somewhat larger, and thus more than one Shorea species could be present. In our observation of extant material, Hopea cross veins are sparse, whereas Shorea cross veins are denser, as in these (Fig. 9B) and other Shorea fossils (e.g., Shi, Jacques & Li, 2014). In addition, the large wing size is far more typical of Shorea than of Hopea species. The enlarged dispersed wings and their well-marked, densely percurrent and variable cross veins resemble some living Brunei species in the Red Meranti group (S. subgenus Rubroshorea Meijer; see Ashton et al., 2021) that have broad wing bases (e.g., S. ferruginea Dyer ex Brandis; see Ashton, 1964). A few leaves with potential affinity to Shorea were found at Kampong Lugu (see morphotype BR13; Figs. 13D-13F). Both Hopea (Selangan) and Shorea (no common name applies to the whole genus; Ashton, 1964) are widespread and diverse in Brunei (Ashton, 1964; Coode et al., 1996) and beyond, although their numbers are drastically reduced due to anthropogenic pressures (Ashton, 2014). Borneo is the center of diversity and endemism for both genera (Ashton, 1982). Hopea is a lowland genus of the subcanopy to canopy with over 100 species in total, usually occurring below 800 m elevation from southern India and southern China into the Malay Archipelago to New Guinea; Shorea has nearly 200 species, often of dominant to emergent trees in varied lowland habitats (mostly below 1,200 m), from India to the Philippines, Java, and Wallacea (e.g., Ashton, 2004).

Description. Blade attachment marginal. Lamina length > 12 cm, width > 9.7 cm; base cordate; margin not preserved; toothing, lobing, and symmetry unclear. Primary venation basal actinodromous with seven primary veins; agrophic veins compound, with robust, apparently unbranched minor secondaries directed toward the margin. One pair of non-interior major secondaries preserved, diverging far above the base. Interior secondaries, becoming tertiary veins distally, are thin, closely spaced, and opposite percurrent, appearing concentric because they join the primaries at right angles. Higher-order venation reticulate.

Remarks. Morphotype BR05 matches the general features of the family Malvaceae Juss., which has many species with cordate bases, basally actinodromous primaries, compound agrophic veins, and opposite percurrent, concentric tertiaries (Carvalho et al., 2011). However, the family identification is not definite without a sufficiently preserved margin to detect potential teeth or lobes, which have distinctive characters in Malvaceae that allow separation from other families with broadly similar features (Carvalho et al., 2011). Species with similar leaves are found in several malvaceous genera in the region today, including Firmiana Marsili, Grewia L., Sterculia L., and Trichospermum Blume.

Distinguishing features. Morphotype BR06 has an elliptic blade with five basal perfect-acrodromous, unbranched primaries that reach the apex (Fig. 10C) and numerous, closely spaced, percurrent, convex interior secondaries departing the primaries at acute angles. The marginal venation is looped (Fig. 10B).

Description. Blade attachment marginal. Lamina length to > 8.7 cm (n = 1), width to > 5.6 cm (n = 4); estimated L:W ratio 3.3:1 (n = 1); lamina symmetrical. Margin unlobed and entire. Base and apex angles acute; base shape concavo-convex.
Primary venation basal perfect-acrodromous, with five primaries extending from base to apex, the lateral primaries close to the margin and much thinner than the medials. Secondaries interior, percurrent (scalariform), unbranched, course slightly to markedly convex, spacing slightly irregular, angle to the midvein acute and slightly irregular. Tertiary and higher-order venation irregular reticulate. Marginal ultimate venation in ca. two series of small, well-developed loops flattened inside the margin, in places forming a weak intramarginal vein.

Remarks. Morphotype BR06 has several well-known characters of Melastomataceae subfamily Melastomatoideae, which has many species with perfect-acrodromous primaries extending to the apex, ladder-like interior secondaries (transverse veins), and looped marginal venation or an intramarginal vein. Those features, along with a non-cordate base, do not occur together in other families (Carvalho et al., 2021) and make many of the melastomes instantly recognizable in the field (Gentry, 1993). The family is very diverse in Brunei today, with about 25 genera (Coode et al., 1996). Among those, species of Pternandra Jack show some similarities to the fossils in general aspect, including their slightly irregular interior-secondary venation patterns (M. Carvalho, 2021, personal communication). Outside of Melastomataceae, the most similar taxon in the living Brunei flora is probably Anisophyllea R. Br. ex Sabine (Anisophylleaceae), which also has acrodromous venation with five or more primaries. However, the venation in that genus is much less organized than in Melastomataceae, including irregular basal offsets of the primaries that are very different from the perfect-acrodromous fossils. See Ziziphus sp. BR09 for additional comparisons within the fossil assemblage.

Distinguishing features. Morphotype BR07 has an oblong blade and a straight base with a ca. 90° angle. It has thin, closely spaced major secondary veins, with flattened loops near the margin forming a weak intramarginal vein, and intersecondary veins with a frequency of usually one per intercostal area and length less than 50% of the subjacent secondaries. Tertiaries are irregular and reticulate.

Distinguishing features. The specimens in morphotype BR08 are likely to be fragments of long-elliptic leaves. The thin, dense major secondaries have irregular spacing and a nearly uniform angle to the midvein, terminating in a well-marked intramarginal vein. Intersecondaries are present but difficult to distinguish from the random reticulate tertiaries.

Remarks. Comparing morphotype BR08 with BR07 (the other possible Myrtaceae), BR07 has brochidodromous major secondaries, stronger intersecondaries, and a weaker intramarginal vein, whereas BR08 lacks distinct secondary loops, has weaker intersecondaries, and has a stronger intramarginal vein that is closer to the margin; BR08 is also larger than BR07 and apparently long-elliptic. Compared with BR03 (Dryobalanops), both BR07 and BR08 have intramarginal veins that are more distinct from the margin and significantly looser tertiary-vein organization. As for BR07, other familial assignments are possible. See morphotype BR21 for additional comparisons.

Distinguishing features. Morphotype BR09 has three basal perfect-acrodromous primary veins and is markedly asymmetrical (Figs. 12A, 12D), with tiny marginal serrulations (Fig. 12C).
The lateral primaries form agrophic vein complexes with weakly looping minor secondaries; the agrophic-vein field is larger on one side of the blade than the other (Figs. 12A, 12D). All major secondary veins are interior (Fig. 12B), percurrent, thick, well spaced, and slightly irregularly angled; their course is convex, deflected by tertiaries, and nearly perpendicular to the primaries at departure. Tertiary and higher-order venation is clearly visible.

Description. Blade attachment marginal, insertion area not preserved. Lamina length to > 10.6 cm, width 3.3-6.6 cm (n = 2); lamina shape elliptic and strongly asymmetrical. Margin unlobed and serrulate. Base angle acute; base shape convex. Primary venation basal perfect-acrodromous with three strong primary veins; agrophic veins simple, prominent, weakly looped, developed into a larger field on one side of the blade with more than nine minor secondaries per field. Major secondaries all interior, percurrent, unbranched or branched, thick and apparently raised, departure from primaries nearly perpendicular or slightly acute, spacing wide (ca. 2.5-5.0 mm), angle somewhat irregular, course slightly to markedly convex and deflected at junctions with the tertiaries. Non-interior major secondaries and intramarginal vein absent. Tertiary, quaternary, and quinternary veins irregular reticulate, clearly visible, apparently raised. Marginal ultimate venation a series of small loops inside the margin, giving off short exterior tertiary veins that connect the loops to a thin fimbrial vein. Serrulations minute, closely spaced (ca. 5 per cm), shape straight-convex or convex-convex, vascularized by the exterior tertiary veins and fimbrial vein. Laminar glands, hairs, domatia, and marginal callosities not observed, presumably due to preservation. Small hole-feeding marks present, to ca. 3 mm length (DT1, DT2; Figs. 12A, 12B).

Remarks. The markedly asymmetrical blade with three strong acrodromous primaries, well-developed, percurrent interior secondaries, looping agrophic veins, and a serrulate margin presents a distinctive combination that diagnoses the fossils as Rhamnaceae. These features are considered typical of the ziziphoid genera Ziziphus, Paliurus Mill., Ceanothus L., and a few others, although there is a consensus that these taxa cannot be distinguished using leaf architecture alone (Meyer & Manchester, 1997; Burge & Manchester, 2008; Jud et al., 2017). Nevertheless, Ceanothus and Paliurus species almost always have some non-interior major secondaries, and their venation is thus quite different from the strongly percurrent, entirely interior major secondaries of the fossils and many Ziziphus species; instead, major secondaries of Ceanothus and Paliurus may be eucamptodromous or reticulodromous, not reaching the lateral primaries except through reticulation. Among the living species treated by Cahen, Rickenback & Utteridge (2021), the fossils most closely resemble Z. kunstleri King (including Z. cupularis Suess. & Overkott by synonymy) in having very similar well-spaced, often-branching interior secondary veins (transverse veins) and well-marked higher-order venation that deflects the interior secondaries at the junctions. Additional shared features include closely comparable blade size and shape, marginal ultimate venation, and density and type of marginal serrulations, as well as the lack of an intramarginal vein. The only significant differences appear to be that the blade of Z. kunstleri is more symmetrical and has more numerous minor secondary veins in the agrophic complexes than the fossils.
Ziziphus kunstleri is a Near Threatened liana distributed in the lowlands of Borneo (including Brunei), Peninsular Malaysia, and Thailand (Cahen, Rickenback & Utteridge, 2021). Consistent with the idea that these leaf fossils could represent lianas, leaf lengths less than 20 cm, as in both fossil specimens, are found in the climbing but not the arborescent Ziziphus species of Borneo (Cahen, Rickenback & Utteridge, 2021). Within this study, morphotype BR09 is superficially similar, because of its perfect-acrodromous primary venation, only to morphotype BR06 (Melastomataceae). However, BR09 is asymmetrical with three primaries, whereas BR06 is symmetrical with five, and BR06 has lateral primaries near the margin, which is entire; in BR09, well-developed, looping agrophic veins dominate the lateral venation, and the margin is serrulate.

Distinguishing features. Morphotype BR10 is long-elliptic, with eucamptodromous, regularly spaced, numerous (up to at least 12 pairs) major secondaries whose angle to the midvein decreases smoothly proximally.

Remarks. Morphotype BR10 is a generalized category representing the combination of near-oblong shape, numerous eucamptodromous and unbranched major secondary veins, and opposite percurrent tertiary veins. Based on these features, BR10 is likely to represent one or more dipterocarp species of uncertain generic affinities.

Exemplar and only specimen. UBDH F00037a,b (PW1501-37a,b, from Berakas Beach; Fig. 13A).

Distinguishing features. Morphotype BR11 has an asymmetrical, subrounded base, with two pairs of basal major secondary veins at an abruptly large angle to the midvein, more so on one side than the other, and thin, opposite percurrent tertiary veins.

Distinguishing features. Morphotype BR12 is a fragment of a pinnate leaf preserving irregularly spaced and angled secondary veins and strong, numerous, opposite percurrent but irregularly spaced and curved intercostal and epimedial tertiary veins. The tertiaries are nearly perpendicular to the midvein at departure and increase in angle only slightly exmedially.

Remarks. Even though the single specimen of morphotype BR12 is only partially preserved, it is distinctive within the collection for its strong, numerous, straight, opposite percurrent intercostal and epimedial tertiaries and its irregular secondaries, which must be carefully distinguished from linear, vein-like impressions of other material (probably small twigs) in the fossil. The most similar morphotype here is Dipterocarpus sp. BR02, which has regular and much wider secondary and tertiary-vein spacing and a strongly impressed midvein, features lacking in morphotype BR12.

Additional specimen. One specimen from Kampong Lugu (Fig. 13F).

Distinguishing features. Morphotype BR13 has strong, regular, eucamptodromous, unbranched major secondaries upturned near the margin, with the angle to the midvein increasing proximally and the spacing decreasing proximally. Short intersecondary veins are variably present. The tertiary veins have a very high angle and nearly parallel the midvein (Figs. 13E, 13F). The midvein is thick, especially near insertion, suggesting that a broad petiole was present.

Exemplar and only specimen. UBDH F00056 (PW1501-56, from Berakas Beach; Fig. 15A).

Distinguishing features. Morphotype BR17 has a markedly decurrent base and a convex, smoothly curved, almost rounded margin with a short-acuminate apex; the blade is basally and medially asymmetrical. The major secondaries are brochidodromous with flattened exterior loops.
Distinguishing features. Morphotype BR18 is narrow-elliptic with an acute base and apex, and it has strong, regular, eucamptodromous major secondaries with regular spacing and a uniform, low angle to the midvein. The tertiaries are thin and opposite percurrent (Fig. 15B).

Remarks. Based on its regular, robust secondary veins and dense, opposite percurrent tertiaries, the specimen is probably a small leaf of Dipterocarpaceae and could have affinities with Dipterocarpus sp. BR01. We report morphotype BR18 separately because its secondary-vein angle is lower, no traces of plications are visible, and the blade is much smaller and narrower than in BR01.

Distinguishing features. Morphotype BR20 is microphyll in size and elliptic to egg-shaped, with a smoothly curved margin. It has numerous weak, thin major secondaries with irregular courses and angles that reticulate approaching the margin.

Distinguishing features. Morphotype BR21 has a long-acuminate apex. Major secondaries are thin and brochidodromous, with flattened loops close to the margin. The tertiary fabric is reticulate.

Remarks. Although only the apical portion of a single specimen is preserved and its affinities are unknown, morphotype BR21 is distinct from comparable morphotypes, such as BR07 and BR08, because of its long-acuminate apex and thin, widely spaced, brochidodromous major secondaries with flattened loops near the margin. Morphotype BR17 also has some similarities, but its secondary veins are denser and higher-angled, and its apex is shorter than in BR21.

Distinguishing features. The blade of BR22 is long-elliptic and unlobed, with a thick costa and quilted relief; primary veins (per Boyce, 2001, also termed lateral veins) are moderately spaced, separated by numerous subparallel to reticulating interprimary veins that are deflected by and merge with random reticulate higher-order veins (Figs. 16A-16D). The primary veins, interprimary veins, and reticulum terminate in the fimbrial vein (Fig. 16D). Elliptical perforations (fenestrae) are present near the costa (Figs. 16A-16C), located between or interrupting the primary veins, their orientation roughly parallel to the primary veins.

Description. Laminar shape long-elliptic, with a quilted texture; leaf length > 7.6 cm; width > 2.7 cm (n = 1). Margin unlobed and entire. Costa thick. Primary veins in at least 12 pairs, spacing regular, separated by ca. 6-7 thin interprimary veins oriented subparallel to the primaries, course deflected by and merging with the reticulum of higher-order veins. Primary veins, interprimary veins, and reticulum terminate in the fimbrial vein. Base and apex not preserved. Perforations irregular-ellipsoidal, dimensions somewhat distorted preservationally, long-axis length 2.4-3.5 mm, short-axis length 1.0-1.9 mm; aligned in two rows, one on either side of, near to, and parallel to the costa; each oriented roughly parallel to or interrupting the primary veins; frequency one per primary vein.

Remarks. Much of the basal preserved portion of the single specimen (as seen in Fig. 16C) is folded down vertically into the sediment on one side, distorting the margin and leaf shape, although this area best preserves the venation and the quilted texture. The quilted texture is also observed in the unfolded apical portion of the fossil, confirming this feature. The overall architecture of the specimen is typical of some Araceae genera, including the combination of a narrow-elliptic blade with a thick costa, numerous parallel primary and interprimary veins, and a fimbrial vein (Mayo, Bogner & Boyce, 1997).
Several of the ca. 28 genera of Araceae in the Brunei flora (Coode et al., 1996) have species with features very similar to the fossil, among them Homalomena Schott, Rhaphidophora, Schismatoglottis Zoll. & Moritzi, and Scindapsus Schott (Mayo, Bogner & Boyce, 1997; Boyce, 2001; Boyce & Yeng, 2016). The presence of numerous elliptical fenestrae on the blade (Figs. 16A-16C) is typical of some araceous genera (Mayo, Bogner & Boyce, 1997) and adds confidence that the fossil does not belong to a different family with somewhat similar leaves that lack this morphology, such as Zingiberaceae. We considered whether these features could represent insect damage, but they lack reaction rims and are too regular in size and placement to represent hole feeding. The only possibility related to insect activity would be bud feeding, but this would require the unlikely scenario, for this family, of the leaf being scrolled longitudinally in the bud rather than laterally, and of two feeding events having occurred in parallel. Accordingly, clear examples of bud feeding that we found in herbarium material of Rhaphidophora are oriented perpendicular to the costa (not parallel, as seen here) and show substantial (ca. 3-6×) progressive size increases related to instar stages (not the more or less constant size seen here). Araceous genera with commonly well-fenestrated leaves, the majority in Tribe Monstereae, mainly occur in the Neotropics and Africa and have very different leaf shapes (e.g., sagittate, pinnatifid) from the fossil (Mayo, Bogner & Boyce, 1997). However, there are two monsteroid genera in the Brunei and regional flora with well-perforated leaves in some species: Amydrium Schott, which has an entirely different leaf form from the fossil, and Rhaphidophora, which, as just discussed, has very similar leaf architecture. Like the fossil, Rhaphidophora has several species with numerous ellipsoid perforations very close to or intersecting the costa, as well as a quilted blade appearance caused by the thick costa and primary veins (e.g., R. foraminifera (Engl.) Engl. and others; see Mayo, Bogner & Boyce, 1997: plates 14A, 109B; Boyce, 2001: figs. 6, 7; Boyce et al., 2010: plate 6A, B). Thus, by simple elimination, the fossil can be provisionally assigned to Rhaphidophora, a widespread West African and South and East Asian genus with about 100 species, mostly herbaceous root-climbers, including 16 species in Borneo (Boyce, 2001; Boyce et al., 2010).

Distinguishing features. Morphotype BR23 is a blade fragment preserving at least ten parallel veins and irregular, oblique, sinuous cross veins.

Description. Morphotype BR23 is represented by a single leaf fragment, ca. 3.7 by 2.0 cm, with no base or margins visible. At least ten parallel veins are preserved, along with oblique, variably angled, curved to sinuous cross veins.

Remarks. The irregular, oblique, sinuous cross veins connecting parallel veins are characteristic of palms (Arecaceae) but not diagnostic without other information (e.g., Gómez-Navarro et al., 2009). The most similar fossils in the collection are the fruit wings of Shorea sp. BR04, the largest of which is about half the width of the BR23 fragment, with much closer vein spacing and comparatively straight, not sinuous, cross veins.

Distinguishing features. BR24 is strap-shaped and thick-textured, with a strong midvein and thickened margin; numerous but indistinct parallel veins cross the blade longitudinally.
Remarks. Morphotype BR24's strap-like shape, single midvein, and parallel venation indicate a monocot, but no diagnostic features of lower taxa are preserved.

Distinguishing features and description. BR25 is a probable fruit or seed of rounded shape, with a diameter of about 4.0 cm.

Remarks. Morphotype BR25, though indistinct, is the only probable non-dipterocarp fruit or seed fossil in the collection.

DISCUSSION

This article reports the first fossil compression floras from Brunei and the first paleobotanical collections of significant size from the Cenozoic of the Malay Archipelago in a century or more. The macroflora and co-occurring palynoflora provide valuable new information about late Cenozoic coastal rainforests in Borneo. The data show that the ancient floristic composition and ecosystem structure were very similar to the modern, including the characteristic dipterocarp dominance, thus heightening the historical conservation significance of the living analog forests. All floristic and physiognomic indicators (e.g., only one species toothed, some leaves very large) strongly support an everwet, megathermal climate, consistent with palynological data that show the persistence of aseasonal equatorial rainforests in Borneo and equatorial Malesia through the Pliocene and Pleistocene (Morley & Morley, 2013; de Bruyn et al., 2014). In a broad sense, this work affirms Wallace's (1869) idea that the fossil leaves he observed in the region were nearly identical to modern taxa, although we do not know which fossil deposits Wallace observed. The nearly entirely different compositional and abundance signals from pollen vs. leaves demonstrate their complementary value and the importance of compression fossils for clarifying the region's Cenozoic vegetation history, which had been known almost entirely from palynological data and some fossil woods (see Introduction). Our study shows that a great deal of information can be extracted from the overlooked compression floras of the Asian wet tropics through extensive collecting and careful preparation, despite the general likelihood of poor preservation. For instance, less than a fourth of the collected slabs had identifiable material at Berakas Beach, but that fraction contributed significantly to the floristic information recovered (Table 1).

General reconstructions

We reconstruct the Berakas Beach assemblage as the remains of a (most likely) early Pliocene, coastal, fern-dominated swamp with mostly still, at times brackish water and restricted marine influence. Near the depocenter, diverse semi-aquatic and terrestrial ferns with a wide variety of presumed life habits were present, along with several types of lycophytes and enormous numbers of fungi. Nearby mangroves included Nypa, Avicennia, and Rhizophora. Dipterocarps, including Dipterocarpus, Dryobalanops, and Shorea, dominated the adjacent lowland coastal rainforest; other families present included Ctenolophonaceae, Lecythidaceae (Barringtonia), Rubiaceae, Rutaceae, and Sapotaceae. Probable non-arborescent (understory, parasitic, or climbing) angiosperms included Loranthaceae, Melastomataceae, Merremia, Ziziphus, Calamus, and Rhaphidophora. Podocarpus and Ilex pollen occurrences potentially indicate more remote hill forests or additional components of the lowland communities.
The Plio-Pleistocene Kampong Lugu site had a low-energy, mangrove-swamp depocenter with abundant rotting, submerged wood, large numbers of fungi, and diverse mangrove taxa, including Nypa, Sonneratia, Avicennia, Rhizophora, and the mangrove fern Acrostichum. Other ferns were varied and abundant, with several likely life habits from semi-aquatic to ground cover, tree ferns, climbers, and epiphytes. The bordering lowland rainforest was dominated by dipterocarps, especially Dryobalanops, and a large-leaved form similar to extant Dipterocarpus confertus. Additional rainforest elements included Combretaceae, Euphorbiaceae, Lecythidaceae (Barringtonia), Malvaceae, Melastomataceae, Myristicaceae, Rubiaceae, and Sapotaceae. Potential hill-forest contributions included Podocarpus, Myrica, and Ilex.

Dipterocarp dominance and fossil history

Dipterocarp macrofossils are clearly dominant at both fossil sites, even though the total sample size was somewhat limited by preservation (Table 1). Dipterocarps comprised 79% of all identifiable specimens (Table 1), providing the first localized evidence of ancient dipterocarp-dominated forests in Malesia and establishing new regional macrofossil records for Dipterocarpus, Dryobalanops, and Shorea. This result complements earlier observations that Neogene woods in the region, mainly from Java, are often dipterocarpaceous (van Slooten, 1932; Schweitzer, 1958; Mandang & Kagemori, 2004; see Introduction), as well as the ubiquity of dipterocarp-sourced amber in Neogene Brunei sediments. Moreover, it is likely that many of the unclassified morphotypes (e.g., morphotypes BR13 and BR18) and unplaced leaves also represent dipterocarps, especially those that are elliptical with regular, straight secondary veins and opposite percurrent tertiary venation. This report is the only direct comparison from Malesia of dipterocarp macrofossil and pollen relative abundances using unbiased counts from the same strata. In sharp contrast to the macrofossils, the palynological assemblages yielded only rare specimens of dipterocarp pollen, always <1% abundance in all four samples from both sites (Appendix 1). This result is notable in light of the central role of dipterocarp pollen in the historical understanding of the biogeography and assembly of Southeast Asian lowland rainforests (e.g., Morley, 2000; Morley, 2018; Ashton et al., 2021). The pollen of dipterocarps is a simple but distinctive tricolpate type, and its rarity in the Brunei assemblages could be explained by a combination of very little pollen reaching the depositional sites and further dilution by the dominant, locally sourced fern spores and fungal bodies. Among other authors (see Introduction), Hamilton et al. (2019) inferred that dipterocarp pollen production, pollination strategy, and pollen preservation lead to the group's species being commonly underrepresented in the fossil record. Bera (1990) performed one of the only relevant actualistic studies, on Sal (Shorea robusta) in Madhya Pradesh (India), finding that the trees produce enormous amounts of pollen that are nevertheless poorly dispersed and progressively degraded in the air column until the grains are nearly all filtered out at soil level. From all these observations, dipterocarp pollen abundance, although valuable data, might have little general relationship to the actual standing biomass of dipterocarps through time, a hypothesis that should be further tested using other macrofossil deposits.
The absence of dipterocarp pollen probably means very little about whether or not dipterocarps were present at a site, but the preservation of any amount of dipterocarp pollen may well be linked to the dominance of the family, as seen here. For now, our work helps to make sense of the frequent observation that dipterocarp pollen is absent or very rare in many Southeast Asian pollen assemblages where it might be expected to be very abundant (see Introduction), including in Brunei (Anderson & Muller, 1975; Roslim et al., 2021). Several of the dipterocarp macrofossil occurrences are significant, in addition to the general novelty of the collection and its location. The Dryobalanops leaves (Fig. 7), the most abundant fossils in the whole sample (Table 1), complement Neogene wood records of Dryobalanoxylon from Sumatra, Indonesian Borneo, and Java (where the genus is extinct; e.g., Schweitzer, 1958; Srivastava & Kagemori, 2001; Mandang & Kagemori, 2004). Outside Indonesia, Neogene Dryobalanoxylon woods are reported from southern India, Thailand, and Vietnam (summarized by Bande & Prakash, 1986; Biswas, Khan & Bera, 2019). Thus, other than an interesting anecdotal report of a Dryobalanops-like leaf from the Neogene of West Java (van Slooten, 1932), the new fossils appear to represent the only non-wood macrofossil record of this ecologically significant genus of large tropical trees. The Dipterocarpus fossils at both sites (Figs. 5, 6) represent at least two species, including one with notably large leaf size and architecture comparable, in the living Brunei flora, with the emergent tree D. confertus. To our knowledge, these are the only recently reported non-wood macrofossils of the genus from the Malay Archipelago. However, several fossil leaves and fruits from the region were attributed to Dipterocarpus in historical literature in need of revision (see Introduction; summarized in Khan et al., 2020), and some illustrations show the characteristic plicated foliage (e.g., Miocene of Sumatra: Kräusel, 1929a: plate 6.1). Palynological data support the presence of the genus in northern Borneo since the Oligocene (Muller, 1981; Ashton et al., 2021). Outside Malesia, Dipterocarpus fruits are known from middle Miocene strata of southeastern China (Fujian Province; Shi & Li, 2010; Shi, Jacques & Li, 2014; Chen et al., 2021). Woods and pollen potentially related to Dipterocarpus are found far beyond the current range, including pollen from the Late Cretaceous (Maastrichtian) of Sudan (Morley, 2018; Ashton et al., 2021; Chen et al., 2021). Many leaf fossils attributed to Dipterocarpus are reported from the Neogene Siwalik sequence of India, Nepal, and Bhutan (summarized in Khan et al., 2020). Dipterocarpus leaf fossils are most convincing when the typical architecture of straight, regular, robust secondaries and opposite percurrent tertiary veins (features found in several plant groups) is combined with the taxonomically restricted feature of visible plications, as seen in our fossils and several previous examples (Kräusel, 1929a; Srivastava et al., 2017). Khan et al. (2020) recently assigned leaf fossils to Dipterocarpus from Deccan sediments close to the Cretaceous-Paleogene boundary in Madhya Pradesh, central India, and considered these specimens as evidence for the popular out-of-India model for the introduction of Dipterocarpoideae into Asia (e.g., Dutta et al., 2011).
However, those fossils lack plications and have comparatively thin and irregular secondary venation, unlike most living Dipterocarpus; thus, they could belong to many other taxa despite some similarities in the cuticle. A few comments on dipterocarp origins are warranted in light of recent reviews of this fascinating topic (Kooyman et al., 2019; Ashton et al., 2021; Cvetković et al., 2022). The dipterocarps are widely held to have originated on the supercontinent of Gondwana and to have arrived in Asia via the post-Gondwanan movements of India or Africa (e.g., Ashton et al., 2021 and references therein). However, the idea of Gondwanan origins so far lacks any direct support from paleobotany. The Gondwana hypothesis, by definition, requires that dipterocarps were present on the Gondwana supercontinent by ca. 110 Ma (latest Early Cretaceous), before India-plus-Madagascar and Africa began to separate from the remaining landmass (e.g., Jokat et al., 2021; Scotese, 2021). However, the literature regarding out-of-Gondwana origins for dipterocarps seems to omit that Early Cretaceous deposits have been sampled across Gondwana for decades, and no fossil dipterocarps or likely relatives have been reported in hundreds of publications (among many others, Archangelsky, 1963; Banerji, 2000; Mohr & Friis, 2000; McLoughlin, Pott & Elliott, 2010; Nagalingum & Cantrill, 2015; Monje-Dussán et al., 2016). Importantly, angiosperms in Gondwanan Early Cretaceous floras are rare, show early stages in the evolution of leaf organization and other characters, and are not allied with derived eudicot families (Mohr & Friis, 2000; Cúneo & Gandolfo, 2005; Nagalingum & Cantrill, 2015; Coiro et al., 2020; Pessoa, Ribeiro & Jud, 2021). All confirmed reports of fossil dipterocarps and related taxa in Africa and India (e.g., Ashton et al., 2021) are from much younger, post-Gondwanan strata, and even the Maastrichtian Dipterocarpus-type pollen from Sudan (Morley, 2018) is ca. 40 million years younger than Africa's separation from Gondwana. The current lack of Gondwanan dipterocarp fossils does not rule out the often-conflated idea that dipterocarps evolved in post-Gondwanan, pre-collision India or in Africa (i.e., ca. 110-50 Ma). The primary evidence for the presence of the family in India and Africa during that interval is palynological (Morley, 2018; Prasad et al., 2018; Ashton et al., 2021; Bansal et al., 2022; Mishra et al., 2022) and quite compelling, appearing to eliminate the rival idea of Asian dipterocarp origins (e.g., Shukla, Mehrotra & Guleria, 2013; Srivastava et al., 2014). However, nearly all potentially supporting macrofossil and geochemical evidence has been contested (Shukla, Mehrotra & Guleria, 2013; Kooyman et al., 2019). A recent, comprehensive study of fossil woods from the Deccan Traps found no dipterocarp specimens among 47 anatomically preserved species (Wheeler et al., 2017), and the family has not appeared in any of a large number of recent studies of silicified reproductive material from the Deccans that used advanced three-dimensional imaging techniques (e.g., Manchester et al., 2019; Matsunaga et al., 2019). The richness of the debate and the varied evidence presented seem sure to lead to many years of discoveries on the topic of dipterocarp origins.

Other significant occurrences

The Melastomataceae specimens (Fig. 10) may be the only reliable Asian fossil record of this diverse family of ca. 5,000 species, of which ca. 3,500 are Neotropical (Carvalho et al., 2021).
The melastomes have an indistinct pollen type and, despite the striking, perfect-acrodromous leaf architecture seen in many species, a notably poor leaf-fossil record globally, primarily concentrated in North America and Europe (Renner, Clausing & Meyer, 2001; Morley & Dick, 2003; Carvalho et al., 2021). However, Carvalho et al. (2021) recently reported the oldest record of the family, from the Paleocene (ca. 60-58 Ma) Bogotá Formation of Colombia, which was then in Gondwanan South America. In Asia, the only prior records of the family are leaves of Melastomaceophyllum sp. from the Miocene of Labuan Island (Geyler, 1887) and M. geyleri from the late Miocene of Sumatra (Kräusel, 1929a; see also van Gorsel, 2020). The first of these is published only as a line drawing of a small leaf fragment that, pending examination of the corresponding type, does not resemble Melastomataceae. The second, M. geyleri, is published both as a line drawing and a small photograph. Although the line drawing resembles Melastomataceae, Kräusel's photograph clearly shows thickened lateral veins that are not typical of the family; the overall venation pattern more closely resembles some Rhamnaceae such as Ziziphus (discussed next).

The new Ziziphus leaves from Brunei (Figs. 12A-12D) appear to be the first reliable fossil record of the family Rhamnaceae in Malesia and contribute to the biogeographic understanding of the ziziphoid Rhamnaceae (Correa et al., 2010; Hauenschild et al., 2018). There are no previous reports of fossil flowers, fruits, pollen, or wood of Rhamnaceae from the Malay Archipelago (e.g., Jud et al., 2017). Prior leaf records are limited to historical reports of Neogene "Rhamnus" and "Ceanothus" from Java, in need of revision (Göppert, 1864; Crié, 1888; see van Konijnenburg-van Cittert, van Waveren & Jonkers, 2004). Rhamnaceae fossils are widely distributed in mainland South and East Asia and are primarily attributed to other extinct or extant genera, such as Berhamniphyllum and Paliurus; Ziziphus records include Eocene fruits from Gujarat (India), Pliocene woods from Rajasthan (India), and a Pleistocene fruit from Thailand (for summaries, see Burge & Manchester, 2008; Jud et al., 2017; Zhou et al., 2020). As discussed earlier, Ziziphus-like isolated leaf fossils are well known from the Paleogene of western North America and elsewhere, but their affinities to the genus are uncertain (e.g., Burge & Manchester, 2008; Correa et al., 2010; Manchester, 2014).

The presence of the significant understory and climbing family Araceae and the monsteroid genus Rhaphidophora in the fossil flora also appears to be novel for Malesia. Zuluaga, Llano & Cameron (2019) recently reviewed the biogeography and scarce fossil history of monsteroids worldwide. Some of the more reliable records related to living monsteroid genera are seed fossils of Epipremnum from the Oligocene and Neogene of Europe (see Madison & Tiffney, 1976), and the global pollen record suggests a deeper history for some genera (Hesse & Zetter, 2007).

CONCLUSIONS

We report two new late Cenozoic compression assemblages from Brunei Darussalam, a nation with extraordinarily biodiverse and intact tropical rainforests. The new plant fossils are the first from that country and the first Cenozoic compression floras from the Malay Archipelago in the modern era. We also report co-occurring palynofloras, and both the macro- and microfossils are unbiased collections.
Our results, most broadly, show that the principal features of northern Borneo's coastal vegetation (e.g., Thia-Eng, Loke Ming & Sadorra, 1987; Wong & Kamariah, 1999) have changed little for at least 4-5 million years. Dipterocarps overwhelmingly dominate both macrofossil assemblages, showing for the first time from compression floras, which record localized paleoecological information, that the dipterocarp-dominated rainforests that define lowland forest structure throughout Malesia are ancient. At least three genera (Dipterocarpus, Dryobalanops, and Shorea) and four species of dipterocarps are present, and dipterocarps represent 79% of all identifiable macrofossils. All other elements identified are also present in the living Brunei flora and include the first reliable macrofossil occurrences for the region of Melastomataceae, Rhamnaceae (Ziziphus), and Araceae (Rhaphidophora). Rich palynofloras from the same strata as the leaves detail fern- and mangrove-swamp depositional environments with input from adjacent tropical rainforests and diverse, well-structured communities. The pollen data provide a large number of taxon occurrences that complement the macrofloras, with few overlaps. Dipterocarp pollen is notably rare, at less than 1% abundance. Thus, our work directly tests and supports the idea that the low representation of dipterocarp pollen in many regional assemblages results from significant taphonomic bias, providing a caveat for palynological studies. Macrofossils offer an outstanding opportunity to assess patterns of dipterocarp diversity, abundance, and dominance through time and, more broadly, the evolution of the modern vegetation structure and dominance patterns of Southeast Asia. Our discovery of dipterocarp-dominated coastal rainforests in Borneo from 4-5 million years ago raises the conservation significance of their highly threatened yet still strikingly diverse and ecologically foundational living analogs.

Grant Disclosures

The following grant information was disclosed by the authors:

Table caption: UBDH and corresponding field numbers apply to individual slabs as collected in the field, each of which contains one or more fossils as indicated. Unidentified (Un) fragments are noted but not counted separately. See Table 1 for the morphotype list with "BR" codes, affinities, and totals. Localities PW1501 and PW1502 are at Berakas Beach, and locality PW1503 is at Kampong Lugu (Figs. 1-3). DT, damage type (Labandeira et al., 2007). UBDH, Herbarium of Universiti Brunei Darussalam.
Adaptive and Blind Wideband Spectrum Sensing Scheme Using Singular Value Decomposition

The Modulated Wideband Converter (MWC) can provide sub-Nyquist sampling for continuous analog signals and reconstruct their spectral support. However, the existing reconstruction algorithms need a priori information about the sparsity order, are not self-adaptive to the SNR, and are not fault tolerant enough. These problems affect the reconstruction performance in practical sensing scenarios. In this paper, an Adaptive and Blind Reduced MMV (Multiple Measurement Vectors) Boost (ABRMB) scheme based on singular value decomposition (SVD) for wideband spectrum sensing is proposed. Firstly, the characteristics of the singular values of the signal are used to estimate the noise intensity and sparsity order, so that an adaptive decision threshold can be determined. Secondly, an optimal neighborhood selection strategy is employed to improve the fault tolerance of the ABRMB solver. The experimental results demonstrate that, compared with ReMBo (Reduce MMV and Boost) and RPMB (Randomly Projecting MMV and Boost), ABRMB can significantly improve the success rate of reconstruction without needing to know the noise intensity and sparsity order, and can achieve a high probability of reconstruction with fewer sampling channels, a lower minimum sampling rate, and a lower approximation error of the potential of the spectral support.

Introduction

Spectrum resources have become increasingly scarce with emerging wireless services. Nevertheless, the radio spectrum assigned to authorized users is mostly underutilized. Cognitive radio (CR) technology can improve spectrum utilization by detecting and accessing frequency ranges that are not occupied by authorized users (primary users). Therefore, one of the crucial tasks in a cognitive radio system is to constantly monitor the potential spectrum bands and detect the activities of primary users [1, 2]. The methods of signal detection on narrow bands mainly include energy detection [3], coherent detection [4], and feature detection [5]. Since wideband spectrum sensing can provide more spectrum access opportunities for cognitive users (secondary users), multiband-based wideband spectrum sensing has gained much research attention in recent years [6, 7].

At the receiving side, the traditional method of acquiring RF (Radio Frequency) information is demodulation under the condition of a known carrier. However, in practical scenarios, it is often required to directly perform blind sensing of high-frequency wideband analog signals, which brings great challenges to spectrum detection. Firstly, the extremely high sampling rate exceeds the limits of physical ADC (Analog-to-Digital Converter) capability. Meanwhile, the storage and transmission of the sampled data can bring huge overhead. In addition, the carrier frequencies of the received multiband RF signal are usually unknown. To address these challenges, compressed sensing theory has been applied to wideband spectrum sensing based on compressed sampling and signal reconstruction [8, 9]. In order to perform sub-Nyquist sampling of continuous spectrum signals, Mishali et al. proposed the MWC (Modulated Wideband Converter) scheme, which uses multichannel compressed sampling and reconstruction with a parallel structure based on compressive sensing and time-frequency transform theory [10, 11]. In the MWC system, a multiband analog sparse signal can be collected with sub-Nyquist sampling, and the spectral support of the signal can be reconstructed by a CTF (continuous-to-finite) block.
From [10], it is known that the time-domain reconstruction model of MWC can be attributed to the Multiple Measurement Vectors (MMV) problem. MWC has been considered an effective approach for compressed sampling and signal reconstruction, with applications in radar, broadband communication, and cognitive radio spectrum sensing [12-14].

The accurate reconstruction of the spectral support of the signal is the key to realizing spectrum sensing. At present, there is still much room for improvement in the success rate of sensing, the required minimum number of channels, and the maximum number of subbands that can be reconstructed. The solvability of the MMV problem is important for the MWC reconstruction ability. To enhance the solvability of the MMV problem, the Reduce MMV and Boost (ReMBo) algorithm, which can transform the MMV problem into a single measurement vector (SMV) problem, was proposed in [15]. In ReMBo, the sampled value matrix Y is projected onto a random vector b whose entries are independently and identically distributed in the interval [−1, 1], yielding a column vector v = Yb. Then, ReMBo tries to obtain the estimated spectral support of the signal by iteratively solving the SMV problem v = Φu many times. Although ReMBo is better than most of the existing reconstruction algorithms in terms of the success rate of reconstruction and computational overhead, there is still a big gap between the maximum number of reconstructed frequency subbands and the theoretical upper limit [16]. The total sampling rate and the minimum number of required channels of the ReMBo algorithm are also still much higher than the theoretical lower bounds [10, 17]. To improve the performance of the MWC system, a dimension-adjustable Randomly Projecting MMV and Boost (RPMB) framework, which transforms the initial MMV problem into a series of low-dimensional MMV problems with the same sparsity order, was proposed in [18]. In the solving process, the solution is tested several times until a satisfactory solution is obtained. Compared with ReMBo, RPMB can reduce the number of required hardware channels and improve the maximum number of reconstructed frequency subbands under the condition of precise reconstruction. In spite of this, the RPMB framework still faces difficult problems: (1) the reconstruction performance of RPMB is greatly reduced under low SNR, because the influence of the noise intensity is not considered in the RPMB solver; (2) RPMB is not flexible enough, since its decision threshold is a fixed value, which has no self-adaptive ability against the uncertainty of the noise; (3) RPMB needs to know the sparsity order of the signal in advance, which is difficult in some practical applications; (4) although the RPMB solver improves the algorithm's fault tolerance by finding more potential support bands, the effect of the fault tolerance remains to be improved.
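Since this reduce-and-boost idea underlies everything that follows, a minimal sketch of one ReMBo-style iteration may help. This is our illustrative reconstruction, not the authors' implementation: the plain OMP routine stands in for whatever SMV solver is actually used, and all function names are ours.

```python
import numpy as np

def omp_smv(Phi, v, k):
    """Generic orthogonal matching pursuit for the SMV problem v ~= Phi @ u.
    Illustrative stand-in for the SMV solver used inside ReMBo; returns the
    recovered support set."""
    residual = v.astype(complex)
    support = []
    for _ in range(k):
        # Pick the column of Phi most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        if j not in support:
            support.append(j)
        sub = Phi[:, support]
        # Least-squares fit on the chosen columns, then update the residual.
        coef, *_ = np.linalg.lstsq(sub, v, rcond=None)
        residual = v - sub @ coef
    return sorted(support)

def rembo_iteration(Phi, Y, k, rng):
    """One ReMBo-style reduction step: project the MMV data Y onto a random
    vector b with entries i.i.d. uniform on [-1, 1], collapsing the MMV
    problem Y = Phi Z into the SMV problem v = Phi u, and solve it."""
    b = rng.uniform(-1.0, 1.0, size=Y.shape[1])
    v = Y @ b
    return omp_smv(Phi, v, k)
```

In ReMBo proper, this step is repeated with fresh random vectors, and each candidate support is validated against the full MMV data before being accepted.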
To address the above-mentioned problems, this paper proposes a Self-Adaptive and Blind Reduced MMV and Boost (ABRMB) scheme for MWC based on SVD [19]. Our main contributions are summarized as follows.

(1) ABRMB first reveals the intrinsic properties of the signal using SVD and performs a linear fitting of the noise singular values, using the tail singular values as noise data. It can then estimate the noise intensity, and the threshold for deciding support elements can be set adaptively.

(2) Our proposed scheme calculates the contribution of the estimated noise singular values to the singular values of the signal and figures out the sparsity order using gradient and difference operations. Hence, the number of signal subbands can be estimated.

(3) In the solver of ABRMB, the fault tolerance is improved by the optimal neighborhood selection method. Compared with ReMBo and RPMB, when the noise intensity and sparsity order are unknown, our proposed scheme can use fewer channels to achieve high-probability reconstruction of the spectral support in the low-SNR range and increase the maximum number of subbands of the reconstructed signal.

The rest of this paper is organized as follows. In Section 2, the related work is introduced. In Section 3, the multiband sparse signal model is introduced, and the sampling principles of MWC are described, as well as some of the present major issues. Section 4 introduces the key ideas of the noise intensity estimation, sparsity order estimation, and optimal neighborhood selection. Section 5 specifically describes the ABRMB scheme, including its solver, and gives its convergence proof. Section 6 performs validation and analysis of ABRMB using four important performance evaluation indexes. Conclusions and discussion are in Section 7. The main notations used in this paper are listed in Notations to make them clearer and easier to read.

Related Work

Current research on the performance of spectrum sensing can mainly be classified into two categories. The first category, especially in the design of cognitive radio networks (CRNs), focuses on investigating the optimal trade-off between energy consumption and throughput for secondary users (SUs). The second category, especially for wideband spectrum sensing, mainly focuses on studying how to achieve a good balance between noise interference and sensing accuracy.

As for the first category, some earlier research focused on the trade-off between interference and throughput. Specifically, for unslotted CRNs, Yang et al. [20] designed a novel interference-constrained simultaneous sensing and transmission scheme. This scheme exploited the statistical information of the activities of primary users (PUs), and the transmission duration is adaptively adjusted to avoid interference. In the same period, energy-efficient techniques in cooperative spectrum sensing (CSS) also gained a lot of attention [6]. In [21], considering the energy consumption of each sensing node in a cooperative CR sensor network, a user selection scheme was proposed to minimize the energy consumed by each CR node, based on the binary knapsack problem and its dynamic-programming solution. After that, Ejaz et al. [22] showed that optimal sensing, reporting, and transmission durations can be found while satisfying given throughput and reliability constraints, obtaining the best trade-off between energy consumption and throughput for SUs. As for the second category, in order to overcome the limitations of hardware technology and achieve wideband spectrum sensing, the research work has mainly concentrated on how to get a good trade-off between noise interference and sensing accuracy [4]. Firstly, based on compressed sensing theory, a parallel MWC architecture for wideband spectrum sensing was designed by Mishali et al.
[10, 11], which can achieve sub-Nyquist sampling of the wideband signal. Then, an MWC system with the SwSOMP algorithm was proposed in [23]; given noise interference, that algorithm can obtain higher reconstruction precision of the spectral support. In [24], in order to reduce the high hardware complexity of MWC, a compressive wideband spectrum sensing scheme based on a single channel was designed.

The work of this paper belongs to the second category. All of the aforementioned works only consider an ideal noise interference model (NIM) and assume that the signal sparsity is known. However, in reality, the NIM is always nonideal, as the intensity of the noise varies with changes in the communication environment. Moreover, the signal sparsity is always unknown in practice. In addition, we also hope to adopt a better reconstruction strategy and use fewer parallel channels to obtain higher reconstruction accuracy. Therefore, these problems need to be further investigated.

Problem Statement of MWC Model and Our Proposed Problem Solution

3.1. MWC System Model. The reconstruction principles of the spectral support of the MWC system are shown in Figure 1. Figure 1(a) shows the sparse multiband analog signal model. Sparse multiband analog signals are common in cognitive radio [25]. Suppose the received signal x(t) is a sparse bandpass analog signal. Its spectrum is distributed in the frequency interval [−f_nyq/2, f_nyq/2], where f_nyq is the Nyquist rate, which usually reaches the GSPS order of magnitude. Suppose the spectrum of x(t) only contains N subbands, whose bandwidths B_i satisfy B_i ≤ B (B ≥ B_i > 0), and that there is no overlap between subbands; B is the maximal bandwidth among the subbands, and the center carrier frequency of each subband is unknown. The effective spectral occupancy F_N is defined in (1) as the union of all subbands whose amplitude is not zero, that is, the effective frequency components of x(t).

As shown in Figure 1(a), the whole band is equally divided into L consecutive narrowband channels, each of whose bandwidth is no less than B. The spectrum of x(t) then has at most 2N parts that carry energy over the whole frequency band. If the channels are numbered [1, ..., L], the set of channel numbers corresponding to the subbands is the spectral support of x(t), defined as Λ = supp(z(f)). |Λ| is the number of nonzero elements in Λ, that is, the potential of Λ. The frequency bands corresponding to these numbers are called support bands. Since 2N is much smaller than L, x(t) can be considered a sparse multiband signal. Figure 1(b) shows the reconstruction process of the support of x(t). Assume that the number of bands is 4, f_s = f_p, and f_p ≥ B. We divide the wideband spectrum into L spectrum slices, where L = 2L_0 + 1. In order to ensure that the discrete Fourier transform of the sampling sequence contains all the components of the original signal spectrum X(f), L_0 must satisfy L_0 = ⌊(f_nyq + f_s)/(2f_p)⌋ − 1. After mixing and low-pass filtering, the spectrum information of the original signal appears in the sampling interval [−f_s/2, f_s/2], and the mixing coefficient of each spectrum slice is c_il, where l is the index of the spectrum slice. According to the theory of compressed sensing, we can then obtain the spectral support of the multiband signal.

In conclusion, the support bands of x(t) must meet two conditions: (1) they are distributed over an extremely wide frequency range; (2) they exist only in a few discrete parts of the frequency spectrum.
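As a concrete illustration of this signal model, the sketch below generates a sparse multiband signal from sinc-shaped bands at unknown carriers. Every numeric value here (Nyquist rate, band count, carriers, offsets) is an assumption chosen for the example, not taken from the paper.

```python
import numpy as np

def multiband_signal(t, energies, bandwidths, carriers, offsets):
    """Sum of sinc-shaped bands of width <= B placed at the given carrier
    frequencies inside [-f_nyq/2, f_nyq/2]; a sketch of the sparse
    multiband model."""
    x = np.zeros_like(t)
    for E, B, fc, tau in zip(energies, bandwidths, carriers, offsets):
        x += np.sqrt(E * B) * np.sinc(B * (t - tau)) * np.cos(2 * np.pi * fc * (t - tau))
    return x

f_nyq = 10e9                        # assumed Nyquist rate (GSPS order)
t = np.arange(0, 2e-6, 1 / f_nyq)   # a short observation window
x = multiband_signal(t,
                     energies=[1, 2, 3],
                     bandwidths=[50e6] * 3,
                     carriers=[1.1e9, 2.3e9, 3.7e9],
                     offsets=[0.4e-6, 0.7e-6, 0.2e-6])
```

Each real-valued band occupies two symmetric spectral slices, which is why N bands give at most 2N occupied narrow channels.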
Sampling in MWC System. MWC contains m parallel sampling channels, and each channel has the same structure: a mixer, a low-pass filter, and an ADC [10]. The received signal x(t) is input to all channels at the same time, and in each channel it is multiplied by a periodic mixing signal p_i(t) with a different pattern, which moves the spectrum of x(t) to baseband. The p_i(t) of different channels are uncorrelated. The period of p_i(t) is T_p = 1/f_p, and M denotes the number of random ±1 switches in a cycle; f_p is defined as the switching frequency of the mixing signals. The mixed signals pass through a low-pass filter whose cut-off frequency is f_s/2. At last, each branch passes through an ADC whose sampling rate is f_s = 1/T_s, yielding m groups of low-speed digital sampling sequences y_i[n].

For the analysis of the i-th channel, the Fourier series expansion of the random mixing function is given in (2), where d_0 = 1/L when l = 0, and d_l = (1 − e^(−j2πl/L))/(j2πl) when l ≠ 0. In (2), p_i(t) denotes a pseudorandom sequence of ±1, which is used as the mixing signal of the i-th sampling channel; l is the index of the spectrum slice; and c_il is the coefficient of the Fourier series expansion.

Next, after passing through the low-pass filter with frequency characteristic H(f) = 1 for |f| ≤ f_s/2 and H(f) = 0 for |f| > f_s/2, the relationship between the DTFT (Discrete Time Fourier Transform) of y_i[n] and the Fourier transform X(f) of x(t) is given by (3): Y_i(e^(j2πfT_s)) = Σ_{l=−L_0}^{L_0} c_il X(f − l·f_p), where L_0 is the smallest integer that satisfies L = 2L_0 + 1 ≥ f_nyq/f_p. Formula (3) shows that the spectrum of the output sequence y_i[n] is equivalent to a shift-weighted sum of the original signal spectrum X(f) with f_p as its step, cut into spectral fragments of width f_s by the low-pass filter. If Y_i(e^(j2πfT_s)) is considered the i-th component of an m-dimensional column vector y(f), and X(f − l·f_p) is considered the l-th component of a (2L_0 + 1)-dimensional column vector z(f), then (3) can be expressed as y(f) = Φz(f). (4)

In (4), Φ is an m × L measurement matrix. If the IDTFT (Inverse Discrete Time Fourier Transform) is performed on both ends of (4), we get the corresponding relationship (5) between the sequence z[n] = [z_1[n], z_2[n], ..., z_L[n]]^T and the sampling data y_i[n], 1 ≤ i ≤ m, with m < L. For any frequency f ∈ [−f_s/2, f_s/2], (5) is a typical compressed sensing problem: when the sampled value matrix and the observation matrix Φ are known, a sparse vector can be recovered. Since MWC poses an MMV problem, the one-dimensional measured value vector turns into a two-dimensional matrix. As a result, paper [15] ingeniously obtains a one-dimensional vector after projection and then reconstructs the signal support bands using compressed sensing techniques. Furthermore, reconstruction by building a CTF (continuous-to-finite) module is described in [10].
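To make the construction of Φ tangible, here is a sketch that computes the Fourier coefficients c_il of piecewise-constant ±1 mixing patterns and stacks them into the m × L measurement matrix. It follows the standard MWC derivation under the assumption that each period contains M equal sign intervals; the sizes chosen at the bottom are illustrative only.

```python
import numpy as np

def mwc_measurement_matrix(signs, L0):
    """Build the m x L (L = 2*L0 + 1) MWC measurement matrix from ±1 mixing
    patterns: c_il = d_l * sum_k signs[i, k] * theta**(l*k), with
    theta = exp(-j*2*pi/M), where M is the number of sign intervals per
    period (an assumption of this sketch).
    signs: (m, M) array of ±1 values, one row per channel."""
    m, M = signs.shape
    l = np.arange(-L0, L0 + 1)
    theta = np.exp(-2j * np.pi / M)
    # d_l: Fourier coefficient of the indicator of one of the M sub-intervals.
    d = np.where(l == 0, 1.0 / M,
                 (1 - theta ** l) / (2j * np.pi * np.where(l == 0, 1, l)))
    F = theta ** np.outer(l, np.arange(M))   # (L, M) twiddle factors
    return (signs @ F.T) * d                 # (m, L) matrix of c_il

# Illustrative sizes only: 20 channels, 95 sign flips per period, L = 95 slices.
rng = np.random.default_rng(0)
Phi = mwc_measurement_matrix(rng.choice([-1.0, 1.0], size=(20, 95)), L0=47)
```

Each row of Φ thus encodes how one channel's mixing pattern weights the L spectral slices that alias into the baseband interval [−f_s/2, f_s/2].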
Problem Statement. The principle of reconstructing the spectral support for MWC is shown in Figure 1, where Figure 1(b) gives the specific reconstruction process. As can be seen in Figure 1, if the number of support bands of x(t) is N, the maximum number of reconstructed subbands after passing through the MWC system is 2N; that is, the maximum possible sparsity order is 2N. In MOMPMMV (Modified Orthogonal Matching Pursuit for MMV), the solver mentioned in paper [18], the initial solution Z is obtained via OMPMMV, which requires knowing the sparsity order beforehand. In addition, in the condition judgment of RPMB, the algorithm uses |Λ̂| ≤ 2N to determine whether the potential of the spectral support meets the requirements. However, in cognitive radio applications, it is difficult to know the sparsity order of the signal. Therefore, aiming at practical applications, an effective method to estimate the sparsity order of the signal is needed.

Secondly, in [18], the condition for deciding the spectral support is ‖Ẑ_{i→}‖_2 ≥ ε, and the condition for determining whether the spectral support meets the requirements in the iterative process of RPMB is ‖Ŷ − ΦẐ‖_F ≤ ε. As mentioned in Section 1, in the case of no noise or high SNR, ε can be fixed at a small value. If the SNR is low or fluctuates greatly, the value of ε has to change adaptively according to the intensity of the noise. In paper [18], ε is a fixed value chosen without considering the SNR. Hence, the RPMB algorithm is neither suitable nor flexible under low or fluctuating SNR. Thus, an effective method is needed to estimate the intensity of the noise and adaptively change the decision threshold ε.

Finally, in MOMPMMV, the fault tolerance is improved by taking the m rows of Ẑ with the largest ℓ2-norms ‖Ẑ_{i→}‖_2, where m is the number of sampling channels, as the initial spectral support. This method is effective, but it is not the best approach. For example, as shown in Figure 1(a), the (N/2)-th subband of x(t) lies in the 14th and 15th narrow channels, and the spectrum energy in the 15th narrowband channel is quite small; as a result, if the fault tolerance method in [18] is used under low SNR, channel 15 will not be merged into the initial support Λ. So it is necessary to design a better fault tolerance strategy.

Brief Description of Our Proposed Problem Solution. Our proposed problem solution is shown in Figure 2, where the multiband analog signal x(t) passes through the MWC system to obtain the compressed sampling data Y. Then, in order to reduce complexity, we reduce the dimension of Y by means of random projection. Subsequently, the signal sparsity and noise intensity are estimated in the preprocessing, yielding K̂ and Ê_Noise. Meanwhile, to improve the fault tolerance, we adopt the optimal neighborhood selection strategy, which is described in detail in Section 4.3. Finally, by using this prior knowledge and the compressed sensing reconstruction algorithm, we obtain the estimated spectral support Λ̂. It is worth noting that, in the reconstruction algorithm, we can adaptively adjust the decision threshold by using the estimated noise intensity Ê_Noise.

Preprocessing of ABRMB Scheme

4.1. Estimation of Noise Intensity. Singular value decomposition is one of the most basic and important tools in modern numerical analysis. The SVD is performed on the measured values Y with noise using (6): Y = UΣV^T, with Y ∈ R^(m×r), m ≥ K > 0, r ≥ K > 0, and m < r, where r is the sampling length. In (6), the unitary matrix U ∈ R^(m×m) holds the left singular vectors of Y, and the unitary matrix V ∈ R^(r×r) holds the right singular vectors. Σ ∈ R^(m×r) is a diagonal matrix whose main diagonal elements are the singular values in descending order. Since rank(Y) = K, (6) can be simplified as (7). Let Σ_K = diag(σ_1, σ_2, ..., σ_K) and Σ_{m−K} = diag(σ_{K+1}, σ_{K+2}, ..., σ_m), where σ_i is the i-th singular value of Y and m ≥ i ≥ 1. After SVD, an important characteristic is that most of the energy of the signal is concentrated in the first K singular values, while the energy of the noise is distributed across all of the singular values; the noise, however, is clearly reflected in the tail singular values. Assume that the singular values of Y can be decomposed into the singular values Σ_s of the original signal and the singular values Σ_n of the noise, with Σ_s = diag(σ_s1, σ_s2, ..., σ_sm) and Σ_n = diag(σ_n1, σ_n2, ..., σ_nm).
Figure 3 describes the contributions of Σ_s and Σ_n to Σ under different SNR, represented by R_s and R_n, respectively, and defined in (8). The contribution of the original signal to the tail of the singular values is small: the energy of the signal is concentrated in the first K singular values, and the tail of the singular values is mainly determined by the noise. Figure 3 shows that the best data source for noise intensity estimation is the bottom 50% of the singular values. Because the number of available singular values also decreases with the number of sampling channels, in order to ensure the accuracy of the estimation, the bottom 30% of the singular values is adopted to estimate the noise intensity in this paper.

Figure 4 illustrates the variation of the singular values of Y under different noise intensities and different numbers of sampling channels. As shown in Figure 4, the singular values are larger when the SNR is lower or when there are more sampling channels. It should be noted that, since the initial part of the singular values is mainly determined by the energy of the signal, this part changes little under different SNR and channel numbers. The tail of the singular values, however, changes a lot under different SNR, because it is influenced significantly by the noise intensity: the contribution of the real signal to those values is much smaller than that of the noise. This fact is utilized in this paper to estimate the noise intensity.

Next, the distribution of the noise singular values is investigated. Since the noise is uniformly distributed white Gaussian noise (WGN), as can be seen in Figure 5, the singular values of the noise largely fall on a straight line under different SNR and different numbers of sampling channels. Using this fact, the bottom 30% of the singular values of Y are treated as noise singular values and used to perform a linear fitting. The fitted line is used to estimate the noise singular values, and the noise intensity is estimated by (9); the fitting result is shown in Figure 6. Since there is little energy of the real signal in the tail of the singular values, there is a certain deviation between the fitted singular values and the original singular values of the noise. However, under low SNR, the deviation is diluted by the relatively large noise intensity. In addition, in order to reduce the influence of the deviation, this paper uses the optimal neighborhood selection strategy of Section 4.3 for fault tolerance processing. Suppose Noise ∈ R^(m×r) is uniformly distributed WGN; then the intensity E_Noise of the noise can be calculated according to (9), where σ_n = {σ_n(1), σ_n(2), ..., σ_n(i), ..., σ_n(m)} and σ_n(i) denotes the i-th noise singular value.
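The tail-fitting step just described translates directly into a few lines of linear algebra. The sketch below assumes the bottom 30% of the singular values are noise-dominated, as the paper argues, and deliberately leaves the exact intensity formula (9) abstract.

```python
import numpy as np

def fit_noise_singular_values(Y, tail_frac=0.3):
    """Estimate noise singular values by fitting a straight line to the
    bottom `tail_frac` of Y's singular values (noise-dominated, per the
    discussion above) and extrapolating that line over all indices.
    The paper's intensity formula (9) would then be applied to the
    returned noise singular values."""
    s = np.linalg.svd(Y, compute_uv=False)      # singular values, descending
    m = len(s)
    tail = np.arange(int(np.ceil(m * (1 - tail_frac))), m)
    slope, intercept = np.polyfit(tail, s[tail], 1)
    s_noise = np.maximum(slope * np.arange(m) + intercept, 0.0)
    return s, s_noise
```

The clipping at zero guards against the fitted line dipping below zero for the leading indices, where the signal rather than the noise dominates.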
4.2. Estimation of Sparsity Order. According to the previous analysis, we can get the estimated noise singular values Σ̂_n = diag(σ̂_n1, σ̂_n2, ..., σ̂_nm). The contribution of the noise singular values to the singular values of Y is calculated by (10). If the noise intensity is strong, a gradient operation is first performed on R̂_n, and then a difference operation is performed on the result; the outcomes are sorted in ascending order, and the position of the minimum value, plus 1, is the estimated sparsity order K̂. If the number of sampling channels is close to the theoretical lower limit, K̂ needs an additional adjustable parameter, whose empirical value is 1. The calculation methods of the gradient and difference sequences are shown in (11). Figure 7 is a sketch of the estimation of the sparsity order under different SNR.

When the noise is low, the signal energy is dominant, and the sparsity order can be estimated directly from the singular values of Y. First, all singular values are shifted left by one position, and the last empty position is filled with σ_m, represented as Σ_a = diag(σ_2, σ_3, ..., σ_m, σ_m). Then, a ratio sequence is calculated by (12) and sorted in descending order; the index of its maximum value is the estimated sparsity order K̂. Although K̂ has some deviation, the deviation has little impact on the success rate of reconstruction, thanks to the joint sparse reconstruction and the fault tolerance mechanism.

4.3. Optimal Neighborhood Selection Strategy. From the signal model shown in Figure 1(a), we know that the energy of a subband can only be located in two adjacent narrow channels. If a support element Λ_i1 is obtained, the paired support element Λ_i2 can only be one of its neighborhood channels; that is, Λ_i2 ∈ nei(Λ_i1). However, Λ_i1 usually has two neighborhood channels. Obviously, the channel with the largest ℓ2-norm value is chosen as the other support channel, as given in (13), where the function pos(·) obtains the optimal neighborhood support of Λ_i1 and ‖Ẑ_{nei(Λ_i1)→}‖_2 denotes the signal energy in the two neighborhood channels of Λ_i1. Obviously, this strategy is very helpful for obtaining the approximate spectral support.

Proposed ABRMB Optimization Scheme

5.1. Structure of ABRMB Scheme. It was pointed out in [18, 26] that random projection can be used for joint reconstruction to improve the reconstruction performance. In particular, it can be learned from [18] that the sensing performance tends to stabilize once enough measurement vectors are preserved after the projection operation. ABRMB inherits this idea and combines it with the pretreatment methods of Section 4 to improve the integrated system performance. The structure of ABRMB is shown in Figure 8.

Pseudocode of ABRMB Scheme. Pseudocode 1 describes the ABRMB scheme. The projection matrix is a uniformly distributed random matrix whose values are continuously drawn from [−1, 1]. As can be seen in Pseudocode 1, ABRMB allows a certain degree of approximation deviation for the spectral support with the aid of the estimated noise intensity and sparsity order. In addition, if the loop iterations end and the optimal spectral support has still not been found, ABRMB selects the spectral support with minimum potential, from all of the stored spectral supports, as the optimal support set. If more than one has the minimum potential, the one with the smallest residual is chosen as the best spectral support. Because of the guarantee of the fault tolerance mechanism, experiments show that this method is very effective. The solver of the ABRMB scheme is described in Pseudocode 2.
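The two preprocessing heuristics above (Sections 4.2 and 4.3) are easy to prototype. In this sketch, the gradient/difference rule and the neighbor-energy rule are our interpretations of (10)-(13), so the exact forms should be checked against the paper's equations.

```python
import numpy as np

def estimate_sparsity_order(s, s_noise):
    """Strong-noise branch of the sparsity estimate: compute the noise
    contribution ratio, apply a gradient operation and then a difference
    operation, and take the index of the minimum plus one (our reading of
    (10)-(11))."""
    r_n = s_noise / s                 # noise contribution to each singular value
    d = np.diff(np.gradient(r_n))     # gradient followed by difference
    return int(np.argmin(d)) + 1

def optimal_neighbor(idx, row_energy, n_channels):
    """Optimal neighborhood selection in the spirit of (13): a subband
    occupies two adjacent narrow channels, so pair a found support index
    with whichever valid neighbor carries the larger l2 row energy."""
    candidates = [j for j in (idx - 1, idx + 1) if 0 <= j < n_channels]
    return max(candidates, key=lambda j: row_energy[j])
```

Here `row_energy[j]` stands for ‖Ẑ_{j→}‖_2, the row norm that the solver already computes when thresholding support candidates.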
The Convergence of ABRMB Scheme.

Theorem 1. Assume that x(t) is a multiband signal as described in Figure 1(a), sampled with the MWC structure shown in Figure 8. If the following conditions are established, then for any f ∈ [−f_s/2, f_s/2], z(f) is the only N-sparse solution of (4): (1) f_s ≥ f_p ≥ B; (2) m ≥ 2N; (3) the number of ±1 symbols in a periodic sequence satisfies M ≥ L = 2L_0 + 1; (4) any 2N columns of Φ are linearly independent. The proof of Theorem 1 is given in [10].

Next, we prove that the spectral support of the original MMV problem, Λ = supp(Z), can be obtained by solving the new MMV problem obtained after projection.

Lemma 2. Suppose u ∈ R^L is a known nonzero vector, and a ∈ R^L is a random vector drawn from a continuous probability distribution. Then, the probability of the event "the projection of u onto a is not 0" is 1.

In addition, MWC can achieve sub-Nyquist sampling. Without affecting the success rate of reconstruction, a higher degree of compression is more favorable for the subsequent processing of the system. The total sampling rate of MWC is f_Σ = m·f_s. The theoretical minimum sampling rate for a multiband signal, that is, the Landau rate [15], is defined in (14) as L(F_N) = Σ_i (b_i − a_i), where L(F_N) is the Landau rate, that is, the sum of all subband frequency widths; F_N denotes the union of effective frequency components in the multiband signal; and (b_i − a_i) is the frequency width of the i-th subband. Since the number of channels m and the total sampling rate f_Σ are directly related, the cost of the system and the corresponding sampling rate are obviously lower for a smaller m.

The Maximum Number of Subbands That Can Be Reconstructed in the Signal. From [16], the upper bound of the reconstruction capability of the MMV problem is given in (15), in which N(Z) is the joint sparsity order of Z, Kr(Φ) is the Kruskal rank of Φ, and Rank(V) is the rank of V. As can be learned from (15), the smaller the rank, the lower the sparsity order of the signal that can be reconstructed, which is also the reason why the upper bound of the sparsity order in [15] is particularly small. The number of subbands is N = N(Z)/2. Therefore, the performance of the system is better when the number of subbands that can be reconstructed is larger.

Approximate Error of the Support Potential. For the original signal, the potential of the spectral support is |Λ|, that is, the length of the spectral support; obviously, |Λ| = ‖Λ‖_0. If |Λ̂| = |Λ|, the approximation error is 0; if |Λ̂| > |Λ|, an approximation error exists. In this paper, the approximation error and its upper bound are defined in (16) and (17). In (16), |Λ̂| is the potential of the estimated support, (|Λ̂| − |Λ|) represents the difference between the estimated and actual support potentials, and "s.t." is the abbreviation of "subject to". In (17), |Λ|_max is the potential of the maximum spectral support for multiband sparse signals, L is the number of narrow bands, and N is the number of support bands of the original signal. Apparently, a smaller approximation error is better. Since L ≫ |Λ|, in the application of wideband spectrum sensing in cognitive radio, as long as the approximation error stays below this upper bound, the impact on the secondary users is negligible.

The Successful Probability of the Reconstruction. In the analysis of the recovered spectral support, we refer to the successful recovery criteria in [10]; that is, when the estimated support Λ̂ and the actual support Λ satisfy (18), where Λ̂ ⊇ Λ and Φ↓Λ̂ has full column rank, the reconstruction is considered successful. The test signals are generated according to (19), in which E_i, B_i, f_i, and t_i represent the energy coefficient, bandwidth, carrier frequency, and time offset of the i-th band, respectively; N is the number of subbands in the signal; and n(t) is white Gaussian noise. The following procedure is repeated 500 times to calculate the probability of success: in each trial, a new sinc signal is generated according to the drawn parameters, and the spectral support is estimated using ReMBo, RPMB, and ABRMB, respectively, to determine whether it is successfully recovered.
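Criterion (18) as described — support coverage plus full column rank of the selected columns — can be checked mechanically. The helper below is a sketch of that check, with all names ours.

```python
import numpy as np

def recovery_success(Phi, support_true, support_est):
    """Check the success criterion in the spirit of (18): the estimate must
    cover the true support, and the selected columns of Phi must have full
    column rank so the occupied slices remain separable."""
    if not set(support_true) <= set(support_est):
        return False
    sub = Phi[:, sorted(support_est)]
    return np.linalg.matrix_rank(sub) == sub.shape[1]
```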
Firstly, under the same conditions, the change of the success rate of reconstruction with the number of channels using ABRMB, RPMB, and ReMBo is studied under different SNR. As can be seen in Figure 9, when the SNR equals 10, 20, and 30 dB, respectively, the success rate of reconstructing the spectral support using ABRMB is better than that using ReMBo or RPMB. In particular, when m = 20 and SNR = {10, 20, 30} dB, the improvements in the success probability of ABRMB compared with RPMB and ReMBo are shown in Table 1. Therefore, the ABRMB scheme is more effective and has better adaptability when the SNR is low or fluctuating, due to the reasonable estimation of the noise intensity and sparsity order and the optimal neighborhood selection strategy.

Secondly, in the same circumstances, the minimum number of channels and the minimum sampling rate needed for high-probability reconstruction are studied. As shown in Figure 9 and Table 2, under different SNR, the number of hardware channels and the sampling rate that ABRMB needs for high-probability reconstruction are smaller than those of ReMBo and RPMB. Thus, our proposed scheme can use fewer hardware channels and a lower sampling rate to achieve a high success rate of reconstruction. Obviously, the cost of the system can be reduced, as the number of hardware channels needed by the scheme is smaller.

Next, under the same conditions, the change of the success rate of reconstruction with the number of bands is studied using the three algorithms. The number of subbands is chosen in the range [2, 12] with an advancing step of 2. The parameters are SNR = 20 dB, m = 20, E_i ∈ {1, 2, 3, 4, 5, 6}, and t_i ∈ {0.4, 0.7, 0.2, 0.9, 1.2, 1.5} s. As shown in Figure 10, since the number of bands is directly related to the sparsity order of the signal, the signal becomes less sparse as N increases, and the success rate of reconstruction decreases for all three algorithms. Nevertheless, the reconstruction performance of ABRMB is obviously better than that of RPMB and ReMBo for N = 4, 6, and 8.

At last, we compare the average approximation error of the potential of the estimated spectral support. We set N = 6 and SNR = 20 dB, and m is in the range [15, 40] with a step of 1. As can be seen in Figure 11, since the potential of the support reconstructed by RPMB is related to the number of channels, when m = 24 the average approximation error of the spectral support of RPMB has already reached the upper limit. From Figure 11, the average approximation error of ABRMB is the smallest of the three algorithms, and the potential of the spectral support reconstructed by ABRMB is the closest to the actual number of frequency channels. In this way, ABRMB can provide more spectrum access opportunities for the secondary users in cognitive radio networks.

From the analysis of the above four performance metrics, we can see that the ABRMB scheme can achieve low-speed sampling by utilizing the sparse characteristics of wideband signals in the frequency domain. The sampling rate can be reduced to 14.9% of the Nyquist sampling rate when SNR = 10 dB. Meanwhile, against noise uncertainty, the detection threshold of the ABRMB scheme can be adaptively changed as the noise power changes. Based on the neighborhood selection strategy and sparsity estimation, ABRMB can obtain a higher probability of successful reconstruction. Then, the projection reduction operation can reduce the computational complexity without affecting the convergence of the scheme. Finally, the ABRMB scheme can find a better trade-off between noise interference and sensing accuracy.
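For completeness, the 500-trial success-probability experiment follows the usual Monte Carlo pattern. This harness is a generic sketch: `run_trial` is a hypothetical hook that would generate a signal, sample it, reconstruct the support, and report success.

```python
import numpy as np

def success_rate(run_trial, n_trials=500, seed=0):
    """Monte Carlo estimate of the reconstruction success probability, after
    the 500-repetition procedure described above. `run_trial(rng)` is a
    hypothetical callable returning True on a successful recovery."""
    rng = np.random.default_rng(seed)
    return sum(bool(run_trial(rng)) for _ in range(n_trials)) / n_trials
```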
Conclusions

By using SVD theory, this paper proposes a self-adaptive and blind wideband spectrum sensing MWC scheme, which leads to a flexible and high-performance solver. The SVD is performed before signal reconstruction in the preprocessing block. The noise intensity and sparsity order of the signal are estimated in this block, and the number of subbands of the signal can then be obtained. The ABRMB scheme can use the estimated noise intensity and sparsity order to process the multiband signal.

Figure and table captions:
Figure 1: The reconstruction principles of the spectral support of the MWC system.
Figure 2: The flowchart of our proposed problem solution.
Figure 3: The contribution of the singular values of the signal and noise to the total singular values.
Figure 4: The comparison of the singular values of Y under different SNR.
Figure 6: The linear fitting of the noise singular values.
Figure 8: The support reconstruction scheme based on ABRMB.
Figure 9: The comparison of the recovery success rate of the support under different SNR.
Figure 10: The effects of the number of subbands on the support recovery.
Figure 11: The comparison of the approximation error of the support potential.
Table 1: Comparison of the maximum promoting rate of the reconstruction success probability.
Table 2: The comparison of the minimum number of channels and the minimum sampling rate needed.