added (string) | created (string) | id (string) | metadata (dict) | source (string) | text (string) | version (string)
---|---|---|---|---|---|---
2016-05-12T22:15:10.714Z
|
2015-01-01T00:00:00.000
|
12443341
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "6b978716073c036e65177dbbaa63d64317d16512",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43017",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6b978716073c036e65177dbbaa63d64317d16512",
"year": 2015
}
|
pes2o/s2orc
|
Medical Hypothesis, Discovery & Innovation
More than 100 different mucosal types of human papillomavirus (HPV) have been identified. The existence of different HPV types at different sites of the human body was recognized in the late 1960s. Human papillomavirus infection is considered the most common sexually transmitted disease and can infect the ocular surface, as well (1-3). The mode of transmission of HPV infection to the conjunctiva in adults is considered autoinoculation from contaminated fingers in the majority of cases.
Human papillomaviruses have been implicated in the pathogenesis and recurrence of conjunctival neoplasia, including conjunctival papillomas, conjunctival intraepithelial neoplasia (CIN) and even squamous cell carcinoma of the conjunctiva (SCCC). Human papillomavirus may coexist in SCCC lesions with other oncogenic viruses, such as the human immunodeficiency virus (HIV). According to their oncogenic potential, HPVs are divided into low- and high-risk types. The oncogenic properties of HPVs are attributed mainly to the viral oncoproteins E6 and E7. The involvement of HPV in the pathogenesis of pterygium remains controversial, although suggested by several studies using polymerase chain reaction (PCR) and immunohistochemical techniques.
Human papillomaviruses are DNA viruses that have a marked tropism for squamous epithelium, explaining the association of HPV infection with squamous cell papilloma of the conjunctiva. On the other hand, the role of HPV infection in the etiology of SCCC remains unclear (1-4).
Human papillomavirus types 6 and 11 are the most frequently found in conjunctival papillomas. Low-risk (LR) HPV 6 and HPV 11 are found in the majority of conjunctival papillomas, along with dysplasia in several cases. In spite of such dysplasia, carcinoma rarely develops in conjunctival papillomas. Other types found are HPV 33, HPV 45, and HPV 13. In addition, 6a and 45, two new subtypes, have been reported to be associated with conjunctival papilloma. On the other hand, high-risk (HR) HPV 16 and HPV 18 have also been found in conjunctival papillomas; these types are strongly associated with the occurrence of high-grade uterine cervical intraepithelial neoplasia progressing to cervical cancer. However, according to the 2007 International Agency for Research on Cancer, available evidence on conjunctival carcinogenicity of HPV in humans is limited (4).
Squamous cell carcinoma of the conjunctiva is a rare tumor that has been strongly linked with UV radiation and immunosuppression (particularly in HIV patients). Conjunctival intraepithelial neoplasia (CIN) is a precursor of SCCC, but the role of HPV infection in the etiology of SCCC remains unclear. The DNA and mRNA of HPV 16 and HPV 18 corresponding to the E6 region have been detected in CIN. A few other studies have identified the presence of HPV 16 and HPV 18 in severe dysplastic lesions and carcinomas of the conjunctiva. HPV 6 and HPV 11 have also been found in severe dysplasias and carcinomas of the conjunctiva. In addition, mainly in HIV-positive patients, cutaneous HPV types (commonly HPV 5 and HPV 8) have been found in SCCC lesions. In contrast, a strong relationship between HPV and SCCC was not found in multiple other studies (4).
The prevalence of conjunctival papillomas depends on geographical area but is generally higher than that of conjunctival carcinomas. In some African countries with high HIV prevalence (i.e., Uganda, Tanzania), a possible role of both HPV and HIV as co-factors in SCCC pathogenesis has been suspected, although this remains controversial. It is noteworthy that, even though there are no cross-sectional epidemiological studies, evidence suggests that people without an explicit clinical presentation may host the virus, and HPV DNA can be identified even in an asymptomatic conjunctiva. The worldwide dissemination of HPV infection therefore deserves our attention for the successful management of ocular morbidity (5).
We have to bear in mind that, even though conjunctival papillomas are not life-threatening, they may be large enough to be displeasing or cosmetically unacceptable and may affect vision. Furthermore, the recurrence rate for infectious papillomas is high; limbal papillomas have a recurrence rate of 40%. Therefore, accurate diagnosis is an indispensable step in preventing recurrences. The major clinical findings indicating conjunctival papilloma are papillomatous lesions (of exophytic growth pattern, sessile or pedunculated) characterized by repeating fibrovascular cores with a geometrically arranged set of red dots. Squamous cell papilloma with an infectious viral etiology has a tendency to recur after medical and surgical treatment. Most papillomas are benign, but rarely they can undergo malignant transformation, which may manifest as inflammation, keratinization, or symblepharon formation (6).
The differential diagnosis of conjunctival papillomas includes a variety of tumors:
2) Malignant lesions of the surface epithelium, such as CIN and SCCC.
There is a diffuse variant of conjunctival squamous cell neoplasia that can mimic chronic conjunctivitis, and the differential diagnosis is difficult in cases of tumor thickening. Therefore, a conjunctival biopsy should be considered in cases of conjunctivitis lasting more than three months. The diagnosis of squamous conjunctival neoplasia is typically made by biopsy, and invasion into the substantia propria beneath the epithelium defines these lesions as carcinomas (2,6).
Tumor confined within the conjunctival epithelium does not have access to the lymphatic system (no metastatic potential). This tumor can extend onto the cornea (avascular and opaque in appearance) and around the limbus, but rarely inside the eye or orbit. Squamous conjunctival neoplasia commonly contains characteristic corkscrew-shaped blood vessels (6).
The medical history, slit-lamp examination, and the specific clinical and histopathological features of each tumor are used for accurate diagnosis. In the presence of a papillomatous growth pattern along with koilocytosis (nuclear pyknosis and cytoplasmic clearing), the morphological hallmark of HPV infection, and mild epithelial dysplastic changes, further laboratory investigation must be performed to confirm the clinical diagnosis of HPV infection. Immunohistochemical staining, in situ hybridization and PCR are the appropriate laboratory procedures for the detection of HPV, and a combination of these methods increases the diagnostic reliability (2,5).
Immunohistochemical staining concerns the detection of HPV and the p16 protein. p16INK4a (p16) is a cyclin-dependent kinase inhibitor and shows marked overexpression in cancerous and precancerous cervical lesions caused by persistent infections with HR HPV types. Immunohistochemical staining of biopsy specimens is a crucial complement to conventional histopathological findings in HPV infections. It is possible to directly detect HPV DNA in a biopsy specimen with in situ hybridization (ISH). This method, however, needs a large quantity of purified DNA, and its sensitivity is relatively limited, especially when cells are obtained from the ocular surface via non-invasive methodologies (for example, exfoliation cytology techniques). In cases where the biopsy specimen is small, with a limited quantity of HPV DNA, nucleic acid amplification assays can be used to increase the sensitivity and specificity of the test. Therefore, material provided by invasive methods is preferred for laboratory examination of conjunctival lesions with suspected HPV infection (5).
Hybrid Capture II (HC-II) is a non-radioactive signal amplification technique that is accurate for mucosal lesions but not appropriate for genotyping; it is useful in distinguishing HR from LR HPV types. Conversely, due to its high sensitivity, PCR is frequently associated with a high frequency of false-positive results. Southern blot, dot blot, reverse dot blot, digestion with restriction endonucleases, or direct sequence analysis performed after DNA amplification can help increase the sensitivity and specificity of the test. More specifically, real-time or quantitative PCR (qPCR) permits rapid detection and quantification of the target during the various cycles of the PCR process (in real time) and is considered the first-choice assay for the detection of viral gene expression (5,7).
However, the technique of sample collection, which affects the quantity of HPV DNA in the isolated sample, and the use of various HPV DNA detection techniques with different sensitivities and specificities are factors that may determine the detection rates of HPV infections. A combination of the described methods increases the rates of HPV detection. Nevertheless, careful excision of the lesion and appropriate fixation of the specimen are preconditions for the success of the diagnostic procedure. An excisional biopsy is preferred to an incisional biopsy whenever possible, and consultation with a general pathologist or, ideally, an ophthalmic pathologist is compulsory. Performing an excisional biopsy is recommended to exclude premalignancy in adults (6).
Regarding squamous cell neoplasia, the lesion is removed surgically. Cryotherapy is then applied to the adjacent conjunctiva and appears to be an effective technique, especially for squamous cell papillomas. The carbon dioxide (CO2) laser has also been used, which allows precise tissue excision with minimal trauma and blood loss. Rapid healing occurs without significant scarring, edema, or symblepharon formation. Recurrence is low, resulting from the destruction of viral particles and papillomatous epithelial cells. Mitomycin-C (MMC) is an antineoplastic agent applied at a dose of 0.2 or 0.3 mg/mL via a cellulose sponge to the involved area(s) after surgical excision. The sponge is held in place for 3 minutes, followed by meticulous irrigation. It is an adjuvant to surgical removal but is also indicated for recalcitrant conjunctival papillomas or those refractory to multiple previous treatments; it is sometimes even administered to prevent recurrences of CIN and SCCC. In addition, amniotic membrane transplantation is used to restore extended conjunctival defects (2). In spite of MMC's effectiveness and its establishment as a therapeutic option, one should always keep in mind the potential for rare complications such as symblepharon, corneal edema, corneal perforation, iritis, cataract, and glaucoma. Therefore, close follow-up is recommended.
Besides the surgical approach, additional treatments include (6): 1) Cimetidine (an H2-receptor antagonist), indicated for recalcitrant and quite large conjunctival papillomas; cimetidine has also been found to enhance the immune system by inhibiting suppressor T-cell function and augmenting delayed-type hypersensitivity responses.
2) Interferon is an adjunct therapy to surgical excision of nonrecurring or recurrent multiple lesions. Alpha interferon is given intramuscularly for several months. Because of its antiviral and antiproliferative properties, this form of therapy is designed to suppress tumor cells. Additionally, topical interferon alpha-2b has been shown to be an effective adjunct therapy for small-to-medium size lesions, but not for large lesions, without surgical excision. Topical interferon alpha-2b can be utilized as an adjunctive therapy for recurring conjunctival papilloma and is also successful in treating CIN lesions.
3) Dinitrochlorobenzene (DNCB), an immune modulator, may induce a delayed hypersensitivity reaction causing the tumor to regress; the mechanism is not known. DNCB is applied directly to the papilloma once the patient has been sensitized to DNCB. This treatment modality is reserved for cases in which surgical excision, cryoablation, and other treatment modalities have failed (8).
Concerning squamous cell papillomas, the prognosis is generally good. However, recurrences of viral papilloma are not uncommon. On the other hand, recurrences of completely excised squamous cell papillomas are uncommon.
Surgical excision alone of CIN and SCCC lesions has been associated with frequent recurrences. This is because the tumor's edges and deep margins (often clear and avascular) are generally difficult to determine, misleading the operator into believing the tumor is smaller than it is. Local freezing of the tumor nest (sclera and adjacent conjunctiva) has improved local management and decreased the incidence of tumor recurrence (2).
According to another approach, radiation therapy is administered to decrease the tumor recurrence rate. In addition, topical chemotherapy, or "chemotherapy eye drops", has been found effective in several clinical trials. Therefore, a large clinical trial is needed to compare the effectiveness of topical chemotherapy with excision, cryotherapy and their combinations (6).
Intraocular penetration of the tumor is extremely rare in developed societies; it is typically treated by enucleation of the eye or eye-wall resection. Orbital invasion brings the risk of spread into the sinuses and brain, the most common cause of death related to this tumor. When squamous conjunctival neoplasia metastasizes outside the eye and orbit, it can affect regional lymph nodes (preauricular, submandibular, and cervical) and/or the lungs and bone. In general, early detection allows for removal of these tumors with excellent local cure rates. Outdoor occupation, living close to the equator, a tendency to sunburn and a history of actinic skin lesions are risk factors. Theoretically, decreasing sun exposure may prevent squamous cell lesions (6).
As already mentioned, the role of HPV infection in human eye disease is controversial, but it is likely that HPV 6/11 plays a role in the pathogenesis of conjunctival papilloma (2,7). However, some cases illustrate the possible role of HPV in SCCC and the potentially devastating effects of this disease. The development of two vaccines for prophylaxis against HPV infection, covering the types most commonly associated with anogenital cancers, has triggered controversy with regard to the real benefit of a national immunization program to prevent cervical cancer. It seems that the greatest benefit in eye disease would be achieved by administering the quadrivalent vaccine. The impact of the quadrivalent prophylactic vaccine against HPV types 6, 11, 16, and 18 on all HPV-associated genital disease was investigated in a population of sexually inexperienced (HPV-unexposed) women. Prophylactic vaccination was 95%-100% effective in reducing HPV 16- and 18-related high-grade cervical, vulvar, and vaginal lesions and 97% effective in reducing HPV 6- and 11-related genital warts (9).
Thus, in ophthalmology, the quadrivalent vaccine is expected to decrease the incidence of conjunctival papillomas due to HPV infection. In contrast, HPV vaccination is not expected to prevent SCCC, because HPV is not the main oncogenic agent in SCCC. Nevertheless, it will take many years before the benefits of a vaccination program become apparent, because even though papillomas occur in relatively young patients, conjunctival carcinoma is usually a disease of the elderly (9).
In addition, despite controversies in the medical literature concerning HPV involvement in pterygium development, the results of most studies agree that HPV is detected in at least a subgroup of pterygia. In these cases HPV infection may affect both the pathogenesis and the clinical behavior (including recurrence) of pterygium (10). Therefore, it would be interesting to explore the possibility of antiviral medications or even vaccination, which may represent novel options in the therapy of selected HPV-infected pterygia. However, clinicians should be aware of possible bilateral uveitis and papillitis following HPV vaccination (11). Multiple transitory white dots of the retina, as a manifestation of multiple evanescent white dot syndrome (MEWDS), have also been reported in the early period after HPV vaccination.
DISCLOSURE
The authors report no conflicts of interest in this work.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2007-02-01T00:00:00.000
|
20554317
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3201/eid1302.060808",
"pdf_hash": "6568bb89ffcc87a6019523bf3404eee3d671256c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43018",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6568bb89ffcc87a6019523bf3404eee3d671256c",
"year": 2007
}
|
pes2o/s2orc
|
African Tickbite Fever in Travelers, Swaziland
To the Editor: African tickbite fever (ATBF), which is caused by Rickettsia africae, is well documented in travelers to southern Africa (1–3) and transmitted by ungulate ticks of the genus Amblyomma. Positive serologic results were reported in 9% of patients (1) and 11% of travelers (4) from southern Africa. We report an outbreak of ATBF with an attack rate of 100% among 12 Dutch travelers to Swaziland.
The 12 travelers (9 male and 3 female) visited Mkhaya Game Reserve in Swaziland in May 2003 for several days. Upon returning to the Netherlands, they consulted our clinic for assessment of fever, malaise, and skin eruptions. Epidemiologic and clinical data were obtained after the patients provided informed consent. All symptomatic patients were treated before serum samples were collected.
Acute-phase and convalescent-phase serum samples were obtained from 8 patients at 3 and 9 weeks, respectively, after symptoms were reported. Only convalescent-phase serum samples were obtained from the other 4 patients. Serologic assays were conducted for screening and confirmation in Rotterdam, the Netherlands (Department of Virology, Erasmus University Hospital) and Marseille, France (Unité des Rickettsies, Faculté de Médecine, Université de la Méditerranée), respectively.
In Rotterdam, immunofluorescence assays for immunoglobulin G (IgG) and IgM against R. conorii, R. typhi, and R. rickettsii were performed with multiwell slides on which antigens were fixed (Panbio Inc., Columbia, MD, USA). Serum samples with fluorescent rickettsiae at dilutions >1:32 were considered positive.
In Marseille, a microimmunofluorescence assay for IgG and IgM against R. africae, other members of the spotted fever group, and R. typhi of the typhus biogroup was used. Western blotting for R. africae and R. conorii was performed with reactive serum samples and repeated after cross-adsorption that removed only antibodies to R. conorii (5). Serologic evidence for infection with R. africae was defined as 1) seroconversion; 2) IgG titers >64, IgM titers >32, or both, with IgG and IgM titers >2 dilutions higher than any of the other tested spotted fever group rickettsial antigens; 3) a Western blot profile that showed R. africae-specific antibodies; and 4) cross-adsorption assays that showed homologous antibodies against R. africae (1).
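For illustration only, the four serologic criteria above can be read as a simple checklist. The sketch below encodes them under the assumptions that any single criterion suffices and that "2 dilutions" in a two-fold series corresponds to a factor of 4; the data structure, field names and function are hypothetical and are not part of the original study's tooling.

```python
# Hypothetical sketch of the serologic criteria quoted above; not the authors' code.
from dataclasses import dataclass

@dataclass
class Serology:
    seroconversion: bool               # criterion 1
    igg_africae: int                   # reciprocal IgG titer against R. africae
    igm_africae: int                   # reciprocal IgM titer against R. africae
    max_igg_other_sfg: int             # highest IgG titer against other spotted fever group antigens
    max_igm_other_sfg: int             # highest IgM titer against other spotted fever group antigens
    wb_africae_specific: bool          # criterion 3: R. africae-specific Western blot profile
    cross_adsorption_homologous: bool  # criterion 4

def evidence_of_r_africae(s: Serology) -> bool:
    # Criterion 2: IgG > 64 or IgM > 32, with both titers more than 2 dilutions
    # (interpreted here as more than 4-fold) above any other tested SFG antigen.
    titer_criterion = (
        (s.igg_africae > 64 or s.igm_africae > 32)
        and s.igg_africae > 4 * s.max_igg_other_sfg
        and s.igm_africae > 4 * s.max_igm_other_sfg
    )
    return (s.seroconversion or titer_criterion
            or s.wb_africae_specific or s.cross_adsorption_homologous)
```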
All 12 travelers had a diagnosis of ATBF. Epidemiologic, clinical, and serologic results are shown in the Table. Two patients had a history of a tickbite. Lymphadenopathy in the groin was the only clinical sign observed in 2 other patients. For all 10 patients with symptoms, the symptoms abated within a few days after treatment with doxycycline, 100 mg orally twice a day (5 patients) for 7 days, or ciprofloxacin, 500 mg orally twice a day (5 patients) for 7 days. No relapses or complications were noted 1 year later.
Assays in both locations showed serologic reactivity against R. conorii and R. rickettsii. Specific antibodies against R. africae were detected by Western blot in 8 patients (Table). All 12 travelers were infected with R. africae. In 3 other patients, immunofluorescence assays demonstrated seroconversion for specific antibodies. One patient with no clinical symptoms had low IgG (32) and IgM (16) titers against rickettsiae by immunofluorescence and IgG by Western blot.
Tick vectors of R. africae attack humans throughout the year. The proportion of patients having multiple eschars, which indicates the aggressive behavior of the tick, varies from 21% (6) to 54% (2). The 100% attack rate observed in this study emphasizes the risk for ATBF in sub-Saharan travelers. In our study group, only 2 persons had multiple eschars, but serologic analysis showed that all patients were infected with R. africae. Most cases of ATBF have a benign and self-limiting course with fever, headache, myalgia, and a skin rash. However, patients who are not treated show prolonged fever, reactive arthritis, and subacute neuropathy (7).
The long-term sequelae of ATBF remain to be established. Early treatment would not likely have prevented these complications. Jensenius et al. reported that travel from November through April was a risk factor for ATBF (1). The travelers in our study visited Swaziland in May. We speculate that tick bites were likely caused by larvae or nymphs, which are often unrecognized stages. Many affected travelers may not seek medical attention or may have received a wrong diagnosis. Therefore, surveillance based only on reported cases is likely to underestimate the true incidence of travel-associated R. africae infection.
Catheter-related Bacteremia and Multidrug-resistant Acinetobacter lwoffii
To the Editor: Acinetobacter species are ubiquitous in the environment. In recent years, some species, particularly A. baumannii, have emerged as important nosocomial pathogens because of their persistence in the hospital environment and broad antimicrobial drug resistance patterns (1,2). They are often associated with clinical illness including bacteremia, pneumonia, meningitis, peritonitis, endocarditis, and infections of the urinary tract and skin (3). These conditions are more frequently found in immunocompromised patients, in those admitted to intensive care units, in those who have intravenous catheters, and in those who are receiving mechanical ventilation (4,5).
The role of A. baumannii in nosocomial infections has been documented (2), but the clinical effect of other Acinetobacter species has not been investigated. A. lwoffii (formerly A. calcoaceticus var. lwoffii) is a commensal organism of human skin, oropharynx, and perineum that shows tropism for urinary tract mucosa (6). Few cases of A. lwoffii bacteremia have been reported (3,5-7). We report a 4-year (2002-2005) retrospective study of 10 patients with A. lwoffii bacteremia admitted to a 600-bed teaching hospital in central Italy.
All 10 patients were immunocompromised; 8 had used an intravascular catheter (peripheral or central) and 2 had used a urinary catheter. Blood cultures of the patients were analyzed with the BacT/ALERT 3D system (bioMérieux, Marcy l'Etoile, France). Isolates were identified as A. lwoffii by using the Vitek 2 system and the API 20NE system (both from bioMérieux).
Macrorestriction analysis of the A. lwoffii isolates identified 8 distinct PFGE types. Two MDR strains (strains 2 and 3 in the Table), which
|
v3-fos-license
|
2022-03-02T16:26:19.352Z
|
2022-02-26T00:00:00.000
|
247182959
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-3417/12/5/2463/pdf?version=1646117274",
"pdf_hash": "de48bd535e215e0e5265e858a53131c2c4b06b0d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43022",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "bdfdca443e39b377c1aab61722d1d00cfb5cb714",
"year": 2022
}
|
pes2o/s2orc
|
Limiting Wind-Induced Resuspension of Radioactively Contaminated Particles to Enhance First Responder, Early Phase Worker and Public Safety—Part 1
An accidental radiological release or the operation of a radiological dispersal device (RDD) may lead to the contamination of a large area. Such scenarios may lead to health and safety risks associated with the resuspension of contaminated particles due to aeolian (wind-induced) soil erosion and tracking activities. Stabilization technologies limiting resuspension are therefore needed to avoid spreading contamination and to reduce exposures to first responders and decontamination workers. Resuspension testing was performed on soils from two sites of the Negev Desert following treatment with three different stabilization materials: calcium chloride, magnesium chloride, and saltwater from the Dead Sea in Israel. Two and six weeks post-treatment, resuspension was examined by inducing wind-driven resuspension and quantitatively measuring particle emission from the soils using a boundary-layer wind tunnel system. Experiments were conducted under typical wind velocities of this region. Treating the soils reduced resuspension fluxes of particulate matter < 10 μm (PM10) and saltating (sand-sized) particles to around background levels. Resuspension suppression efficiencies from the treated soils were a minimum of 94% for all three stabilizers, and the Dead Sea salt solution yielded 100% efficiency over all wind velocities tested. The impact of the salt solutions (brine) was directly related to the salt treatment rather than the wetting of the soils. Stabilization was still observed six weeks post-treatment, supporting that this technique can effectively limit resuspension for a prolonged duration, allowing sufficient time for decision making and management of further actions.
Introduction
An accidental radiological release or the operation of a Radiological Dispersal Device (RDD) may lead to the contamination of a large area with radioactive materials. During the immediate emergency phase of a response, life-saving operations and securing of critical infrastructure must be conducted for the safety of the public and first responders [1,2]. During the operations, emergency responders, as well as decontamination workers assisting with the response, may be further exposed due to inhalation of resuspended particles and direct contact, owing to the tracking of contamination from the contaminated areas, i.e., roads, other construction materials and soils. Containment of the contaminated area to prevent resuspension could reduce the overall exposure for emergency responders and decontamination workers and also reduce the spread of contamination. Hence, stabilization technologies and methodologies to minimize this exposure are needed [1,2].
Aeolian (wind-induced) soil erosion, and the following process of dust emission, results in the resuspension of soil-derived particles to the atmosphere and air pollution [3-5]. Stabilization technologies are designed to prevent the spread of particles (such as by resuspension) and are routinely used in industries, such as road construction and mining sites, for dust control [2]. The application of rapidly available and easily applied stabilization technologies has the potential for accomplishing multiple goals following the release of radioactive particles from a radiological contamination event. Primarily, the application of a stabilization material may reduce exposures to first responders and decontamination workers assisting with the response due to tracking. In addition, such technologies would limit the wind-induced spread of contamination to other non-contaminated, less-contaminated, or recently decontaminated areas, subsequently reducing the time and resources needed for additional decontamination operations [2].
The United States Environmental Protection Agency (EPA) previously conducted work on stabilization technologies [1,2,6,7]. From these studies, the list below presents some options recommended by stakeholders and experts that may be suitable for stabilization:
• Soil2O® dust control wetting agent (available in the US);
• Capping with locally available gravel, mulch, sand or clay;
• Misting with water or saltwater (brine), with the possible addition of additives;
• Application of a polymer coating/gel.
There is a lack of fundamental research examining the applicability of stabilization materials required in an event leading to the contamination of a large area with radioactive materials. Stabilization materials suitable for large areas of contaminated soils are expected to be cheap, easily applied and highly effective in limiting wind-induced contamination dispersal. Previous works showed the potential of specific brines to reduce dust emission from unpaved roads of different soils [3,8,9], with low environmental salinization risk [10]. The current study aimed to test the effectiveness of different brines to stabilize arid soils that may be subjected to soil contamination and are already associated with natural dust emission.
Soil Sampling and Physicochemical Characterization
Soils were sampled from two sites that are undisturbed and associated with dust emission in the field: the Ze'elim sandy area (31.16°N/34.53°E) at the western Negev Desert [11], and the Yamin plateau (31.04°N/35.08°E) at the northeastern Negev Desert in Israel [12]. The soil samples were analyzed for elemental composition by X-ray fluorescence (XRF) using an Axios spectrometer (PANanalytical, Malvern, UK). Mineralogical phase identification was performed by X-ray powder diffraction (XRPD) using an Empyrean Philips 1050/70 diffractometer (PANanalytical, Malvern, UK). Particle size distribution (PSD) was determined by laser diffraction using an Analysette 22 MicroTec Plus (Fritsch International, Idar-Oberstein, Germany). XRF, XRPD and PSD analyses were performed at the Ben-Gurion University of the Negev in Beer-Sheva. pH was measured using a Metrohm pH meter (Metrohm, Herisau, Switzerland). Water content in soils was measured gravimetrically. Total organic content (TOC) was determined by titration of the dissolved organics with ammonium iron sulfate using an 848 Titrino plus (Metrohm, Herisau, Switzerland) at the Geological Survey of Israel.
Application of Stabilizers
Soils were placed in trays customized to fit the wind tunnel dimensions (surface area of 0.5 m × 1.0 m and height of 0.02 m) (Figure 1). Brine solutions were applied to the soils by spraying the soil using a sprayer at equal volume to surface area ratios (1.5 L m−2). As controls, soils were either untreated or sprayed with tap water (clean drinking water). After applying the solutions and prior to the wind-tunnel experiments, the trays were left in the laboratory in order to avoid any environmental effect on the soils (e.g., wind-induced resuspension). Table 2 summarizes the stabilization experimental matrix.
Boundary-Layer Wind Tunnel Experiments: Resuspension Testing and Calculations
Resuspension testing was performed at the Aeolian Simulation Laboratory, Ben-Gurion University of the Negev, using a boundary-layer wind tunnel [13]. Untreated and treated soils were tested either 2 or 6 weeks from the day of treatment.
The different times were chosen to represent different periods of aging following an incident. Experiments were conducted under four wind velocities, 5.3, 6.8, 8.1, and 9.6 m s−1, representing typical natural winds associated with dust emission in this region. PM10 dust concentrations were recorded by light-scattering laser photometers DustTrak DRX 8534 (TSI Inc., Shoreview, MN, USA) placed 25 cm above the tunnel bed. Before placing the soil trays in the wind tunnel, PM10 background levels of up to 20 μg m−3 were recorded. Background levels were subtracted from the PM10 measurements, which were taken at different wind velocities. Each sample was measured for a duration of 30 s, at 1 s intervals. This short duration is enough to determine the dust emission patterns in controlled experiments [3,5]. Mass flux values of PM10 resuspended from the ground (g m−2 s−1), expressed as F_PM10, were calculated according to the following [13], where C_PM10 is the recorded PM10 concentration (μg m−3), Vt is the air volume in the wind tunnel (3.43 m3), Ap is the area of the experimental plot (0.25 m2) and t is time (in seconds).
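The flux relation itself is not reproduced above; based on the variable definitions just given (and with the implied μg-to-g unit conversion), it presumably takes the form below. This is a reconstruction from the stated definitions, not the authors' verbatim equation.

\[
F_{\mathrm{PM_{10}}} = \frac{C_{\mathrm{PM_{10}}} \times V_t}{A_p \times t}
\]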
Mean mass flux values of PM10 (F̄_PM10) were calculated by averaging all F_PM10 results per sample, i.e., 30 calculated flux values obtained over 30 s per wind velocity.
Saltating particles, associated with the initiation of the dust emission process from soils [4,5], were collected by traps placed 2.5 to 10.5 cm above the tunnel bed and along the wind direction. Collected particles were weighed at the end of each experiment. Mean mass flux values of saltating particles (g m−2 s−1), expressed as F̄_saltation, were calculated according to the following, where m_saltation is the measured weight of the saltating particles (g), At is the cross-sectional area of the traps (0.02 m2) and t is time (in seconds).
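Again, the relation itself did not survive in the text; from the definitions given, the mean saltation flux is presumably computed as follows (a reconstruction, not the verbatim equation).

\[
\bar{F}_{\mathrm{saltation}} = \frac{m_{\mathrm{saltation}}}{A_t \times t}
\]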
Suppression efficiencies (SE) of PM10 or saltating particles (in percent) were calculated for each stabilizer and soil type at each wind velocity by comparing the mean mass flux F̄ of PM10 or saltating particles from the treated soil (see above) with the mean flux of the untreated control sample for the same wind velocity and soil type.
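The comparison just described presumably corresponds to the standard efficiency formula below, where F̄_treated is the mean flux from a treated soil and F̄_NSC the mean flux from the untreated (non-stabilized) control; this notation is introduced here for clarity and is not taken from the original.

\[
SE = \left(1 - \frac{\bar{F}_{\mathrm{treated}}}{\bar{F}_{\mathrm{NSC}}}\right) \times 100\%
\]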
Physicochemical Characteristics of the Soils
Soils were collected from two sites. The first sampling site was the Yamin plateau at the northern Negev Desert in Israel, and the second site was the Ze'elim sandy area at the western Negev Desert in Israel. Both soils are mainly composed of quartz (SiO2), silicate minerals (anorthite (CaAl2Si2O8), sanidine (KAlSi3O8)), carbonate minerals (dolomite (CaMg(CO3)2) and calcite (CaCO3)) and clay-sized minerals (hematite (Fe2O3)), as characterized by XRF and XRD analyses (Table 3, Figure 2). Additional analysis showed the soils were alkaline and contained low water and organic matter contents (Table 4), which are typical characteristics of desert soils.
PSD analysis showed different grain-size characteristics: whereas the Ze'elim soil was classified as sand, the Yamin soil was classified as silt loam (Figure 3). The Ze'elim soil demonstrated a higher mean grain size (170 μm vs. 50 μm) and a lower PM10 content (3% vs. 28%) than the Yamin soil (Table 5). It was found that the Ze'elim soil is mainly composed of fine and medium sand fractions, while silt and fine sand are the main fractions in the Yamin soil.
Effectiveness of Brine Stabilizers on Resuspension Suppression from the Ze'elim Soil
To test the impact of the brine stabilizers on resuspension from the soils, soils were treated with different stabilizers, left to dry for two weeks, and then tested for wind-induced dust emission. Untreated soils served as non-stabilized controls (NSCs). Soils were treated with one of the following stabilizers: CaCl2, MgCl2 or saltwater from the Dead Sea in Israel. Soils were also treated with tap water in order to control for the impact of wetting (Table 2).
PM 10 concentrations recorded during the wind tunnel experiment, representing wind-induced dust emissions from the Ze'elim soil, are presented in Figure 4. Higher wind velocities resulted in higher PM 10 resuspension levels from the untreated soil (control).
Resuspension was slightly reduced from soils sprayed with tap water (followed by drying) at all wind velocities tested, with a significant reduction at the lowest wind velocity. Extremely low resuspension levels were detected in brine-treated soils, demonstrating that the soils were effectively stabilized following the treatments. The most effective dust suppressor was the Dead Sea salt treatment, yielding average PM10 concentrations similar to background levels (~20 μg/m3).
Based on the PM10 concentrations recorded during the wind tunnel experiment and the mass measurements of the collected saltating particles, mean PM10 fluxes and mean saltation fluxes were calculated, respectively. Figure 5 shows the mean PM10 fluxes and mean saltation fluxes from the Ze'elim soil under different treatment conditions, tested under four wind velocities.
From these results, it was evident that the resuspension fluxes of saltating particles were significantly lower (by at least an order of magnitude) than those of dust particles, supporting that PM10 particles are the major resuspension contributors under natural conditions.
To quantitatively evaluate the impact of the treatments on the resuspension of PM10 and saltating particles, suppression efficiencies were calculated (Tables 6 and 7). Treating the Ze'elim soil with brine solutions resulted in effective stabilization, as shown by significantly reduced fluxes compared to the control and high resuspension suppression efficiencies of >97% (Figure 5, Tables 6 and 7) for all experimental conditions. The impact of the brine solutions was directly related to the salt treatment, as slightly reduced PM10 fluxes and unchanged saltation fluxes were observed in soils misted with tap water only.
While the efficiencies of all salt solutions may be operationally relevant, interestingly, the most effective suppression of overall resuspension was achieved by the Dead Sea salt treatment, yielding 100% suppression efficiency over all wind velocities tested. For the prepared calcium and magnesium salt solutions, the efficiencies were lower at lower wind speeds.
To evaluate the durability of the stabilization technique, re-testing was performed four weeks after the wind tunnel experiments described above (six weeks from the day of treatment). These time points were chosen because, while operations may start immediately, they may continue over several weeks, so it is necessary to study the longer-term effectiveness. Re-tested PM10 concentrations from the Ze'elim soil are presented in Figure 6. Treatment with all three stabilizers resulted in low average PM10 concentrations similar to background levels (~20 μg/m3). Resuspension levels of saltating particles were undetected (no particles were collected). These results demonstrated that treating the Ze'elim soil with brine solutions resulted in effective stabilization six weeks post-treatment.
Effectiveness of Brine Stabilizers on Resuspension Suppression from the Yamin Soil
The Yamin soil was subjected to treatments and resuspension testing similar to those performed on the Ze'elim soil. Soils were treated with different stabilizers, left to dry for two weeks, and then tested for particle emission in the wind tunnel. PM10 concentrations recorded during the experiment, representing wind-induced dust emission from the Yamin soil following different treatments, are presented in Figure 7.
As shown for the Ze'elim soil, higher wind velocities resulted in higher PM 10 resuspension levels from the untreated Yamin soil (control).In contrast, significantly lower (by at least an order of magnitude) resuspension levels were observed from this soil when compared with the Ze'elim soil, as demonstrated by lower PM 10 concentrations recorded under identical conditions (Figures 4 and 7, control).
As shown in Figure 7, extremely low resuspension levels were detected in brine-treated soils, demonstrating that the soils were effectively stabilized following the treatments. The Dead Sea salt treatment was the most effective dust suppressor for the Yamin soil, similar to the results obtained for the Ze'elim soil.
Figure 8 shows the mean PM10 fluxes and the mean saltation fluxes from the Yamin soil under different treatments, tested under four wind velocities. As shown for the Ze'elim soil, the resuspension fluxes of saltating particles from the Yamin soil were significantly lower (by at least an order of magnitude) than those of dust particles, supporting that PM10 particles are the major resuspension contributors under untreated conditions.
Tables 8 and 9 present the calculated suppression efficiencies of PM10 and saltating particle resuspension from the Yamin soils. Suppression efficiencies could not be calculated under the lowest wind velocity because PM10 measurements were low (around background levels), and no saltating particles could be collected and measured (noted NA). Treating the soil with brine solutions resulted in effective stabilization, as shown by significantly reduced fluxes compared to the control, along with high resuspension suppression efficiencies (>94%). The most effective suppression of overall resuspension was achieved by the MgCl2 and Dead Sea salt treatments, yielding 100% suppression efficiency over all wind velocities tested.
Analogous to the Ze'elim soil, the durability of the stabilization technique was evaluated on the Yamin soil by re-testing resuspension from the treated trays after four additional weeks. Figure 9 presents the PM10 concentrations recorded during the wind tunnel experiment for the Yamin soil. Treatment with all three stabilizers resulted in average PM10 concentrations similar to background levels (~20 μg/m3). Resuspension levels of saltating particles were undetected (no particles were collected). These results demonstrated that treating the Yamin soil with brine solutions resulted in effective stabilization even six weeks post-treatment.
Discussion
Treating the two soils with salt/brine solutions resulted in reduced particle resuspension, as shown by extremely low PM10 fluxes (equivalent to background levels) and high resuspension suppression efficiencies (>94%). The impact of the brine solutions was directly related to the salt treatment rather than the wetting of the soils, since similar particle resuspension fluxes were obtained from untreated soils and soils sprayed with tap water only. Brine solutions are, therefore, effective stabilizers, leading to reduced resuspension of soil particles. These results are consistent with previous work performed by Katra et al. [3], which tested the impact of diverse dust control products of synthetic and organic polymers (lignin, resin, bitumen, PVA, brine) on unpaved roads. The authors showed that some products significantly reduced dust emission from quarry roads, especially when using magnesium chloride (brine).
All three salt/brine solutions tested in this study function primarily by helping cement small particles into larger ones that are more difficult to resuspend [2]. Their capability to enhance the cohesion of smaller particles is expected to vary with the composition of the salt solution, as well as the specific particles involved. Aiding in this cohesion is the fact that salts such as CaCl2 and MgCl2 are hygroscopic, so when they dry out after being applied (usually by spraying an aqueous solution), some water may be present, which helps enhance cohesion [2]. The effectiveness of the stabilizers is expected to occur immediately after the applied solutions dry, which in desert climates is expected not to take long, as shown in this work. While all salt solutions have operational relevance, the most effective stabilizer was the Dead Sea salt solution, yielding 100% resuspension suppression efficiency of PM10 and saltating particles over all wind velocities tested. The motivation to test the Dead Sea salt solution as a stabilizer was that it is an easily available, natural source of salts. Saltwater from the Dead Sea can be derived directly from the sea or procured locally. MgCl2 and CaCl2 were also highly effective but slightly less effective than the Dead Sea salt in limiting PM10 resuspension from the Ze'elim soil (>97%). CaCl2 was also slightly less effective in limiting the resuspension of saltating particles from the Yamin plateau (>94%).
The Dead Sea solution is expected to contain other substances, such as specific ions and humic substances that help retain hydration, which may enhance the cohesion of small particles.
Significantly lower resuspension levels were observed from the Yamin soil when compared with the Ze'elim soil (>10-fold difference), indicated by lower PM10 concentrations recorded under identical conditions (Figures 6 and 9). This may result from differences in the cohesiveness of the soil particles between the two soil types, rather than the content of PM10 in the soil (Table 5), which is significantly higher in the Yamin than the Ze'elim soil (28 wt% and 7 wt%, respectively). It demonstrates the role of sand transport in dust-PM10 emission from sandy soils [14].
Resuspension fluxes of saltating particles from the two soils were >10-fold lower than those of dust particles, demonstrating that PM10 particles are the major resuspension contributors under natural conditions. This result confirms that dust emission is expected to cause the major spread of the contamination in the case of an emergency event in the Negev desert, highlighting the importance of limiting resuspension of contaminated dust. Treating the soils with brine solutions resulted in effective stabilization six weeks post-treatment, supporting that this technique can effectively limit resuspension of contaminated soil after an emergency event for a prolonged duration, allowing sufficient time for decision making and management of further actions. This is particularly important in desert environments where continued drying could otherwise lead to increased resuspension.
Our results highlight the importance of considering the soil properties at a specific site when considering the impacts and mitigation of resuspension. The two soils in this study have characteristics that contribute to their ability to be resuspended, e.g., small organic content and low moisture content. Therefore, they may be considered "worst cases", such that the results may also be applicable to many other types of soils for which resuspension is inherently less favored.
While the salt solutions appear to increase the cohesiveness of small particles and thus reduce wind-induced resuspension, complex mechanisms appear to govern the disintegration of the cohesive/cemented particles and their subsequent resuspension. Therefore, to validate the applicability of stabilization techniques, it is essential to test the impact of stabilizers in specific situations which induce different types of physical stresses other than wind. Two operationally relevant cases are the movement of vehicles and foot traffic. EPA investigated simulated vehicle and foot traffic in controlled laboratory studies [15]. Together, the results of the present study, along with the EPA study, suggest the relevance and urgency of testing stabilization techniques on a larger scale under natural environmental conditions.
Funding:
This research was funded by The Nuclear Research Centre Negev, Beer-Sheva, Israel.
Figure 6.
Wind-driven PM10 emissions from the Ze'elim soil six weeks following treatment with stabilizers.
Figure 9.
Wind-driven PM10 emissions from the Yamin soil six weeks following treatment with stabilizers.
Figure 1.
Samples of Dead Sea salt solutions collected in 3 L containers (left side) and trays of Ze'elim soil treated with different brines (surface area of 0.5 m × 1.0 m and height of 0.02 m).
Figure 3.
Particle size distribution of the Ze'elim and the Yamin soils in Israel.
Figure 4.
Wind-driven PM10 emissions from the Ze'elim soil two weeks following treatment with stabilizers. Note the differences in the values of the Y-axis between Control and Water versus the brines.
Figure 5.
Wind-driven PM10 and saltation fluxes from the Ze'elim soil treated with stabilizers.
Figure 7.
Wind-driven PM10 emissions from the Yamin soil two weeks following treatment with stabilizers. Note the differences in the values of the Y-axis between Control and Water versus the brines.
Figure 8.
Wind-driven PM10 and saltation fluxes from the Yamin soil treated with stabilizers.
Table 1 .
Chemical composition of the stabilization solutions.
Table 3 .
X-ray fluorescence (XRF) measurements of soils from the Ze'elim area and the Yamin Plateau in Israel.
Table 4 .
Soil properties of the Ze'elim and the Yamin soils in Israel.
Table 5 .
Particle size fractions of the Ze'elim and the Yamin soils in Israel.
Table 6 .
Suppression efficiencies of wind-driven PM 10 emission from the Ze'elim soil treated with stabilizers.
Table 7 .
Suppression efficiencies of wind-driven saltating particle emission from the Ze'elim soil treated with stabilizers.
Table 8 .
Suppression efficiencies of wind-driven PM10 emission from the Yamin soil treated with stabilizers. Not available (NA) means values could not be calculated because the mean mass flux of the control sample was zero.
Table 9 .
Suppression efficiencies of wind-driven saltating particle emission from the Yamin soil treated with stabilizers. Not available (NA) means values could not be calculated because the mean mass flux of the control sample was zero.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2011-10-28T00:00:00.000
|
6736575
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcsportsscimedrehabil.biomedcentral.com/track/pdf/10.1186/1758-2555-3-25",
"pdf_hash": "239f43f4ac7a41b54cd9adbf9a7acbe68cea222c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43023",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "239f43f4ac7a41b54cd9adbf9a7acbe68cea222c",
"year": 2011
}
|
pes2o/s2orc
|
Clinical examination of the knee: know your tools for diagnosis of knee injuries
The clinical evaluation of the knee is a fundamental tool to correctly address diagnosis and treatment, and should never be replaced by the findings retrieved from the imaging studies carried out on the patient. Every surgeon has his own series of exams with which he is more confident and on which he relies for diagnosis. Usually, three sets of series are used: one for patello-femoral/extensor mechanism pathologies; one for meniscal and chondral (articular) lesions; and one for instability evaluation. This review analyses the most commonly used tests and signs for knee examination, outlining the correct way to perform the test, the correct interpretation of a positive test and the best management for evaluating an injured knee in both the acute and delayed settings.
Introduction
The introduction of highly effective imaging tools like Computed Tomography and Magnetic Resonance into clinical practice in Orthopaedics and Traumatology has taken away the central role of clinical evaluation, so that nowadays there is a common feeling, among patients but also among surgeons, that the diagnosis of a torn meniscus or a ruptured ACL can be made only on the basis of an imaging study.
But the efficacy and affordability of a correct clinical examination must not be forgotten: this paper presents an overview of the best-known tests and signs for knee examination, grouped into the three aspects of knee injuries: 1) patello-femoral joint/extensor mechanism; 2) articular (meniscal and chondral) lesions; and 3) knee instability.
Patient Interview
In all cases, the clinical evaluation should be introduced by a careful interview of the patient, in order to direct the subsequent examination to the affected area of the knee and to choose the correct series of tests and signs.
The beginning of the interview should be conducted in order to 1) localize the pain/dysfunction in one aspect of the knee (extensor mechanism; articular: medial vs lateral vs patellofemoral; ligaments of the medial vs lateral compartment vs central pivot); 2) define the timing of onset of the injury/dysfunction: acute vs previous injury; chronic disease; overuse; 3) collect the actual symptoms felt by the patient: pain vs discomfort vs disability.
Extensor mechanism pathology is often related to chronic, repetitive trauma. Nevertheless, recent injuries should be inquired about, as anterior knee pain can be associated with a recent patellar subluxation or dislocation, or with a ruptured patellar or quadriceps tendon, particularly in older patients.
Anterior pain during activity and at rest is mostly associated with chondral lesions, while pain associated with prolonged flexion can be caused by slight instability or malalignment.
Meniscal lesions are almost always the consequence of a single trauma, but chronic lesions and degenerative tears of the menisci, as well as chondral defects secondary to overuse, should not be forgotten. The interview should be focused on the mechanism of injury (direct trauma, sprain, complex trauma) and on the pre-existing condition of the knee (e.g. previous injuries, history of overuse). Most patients do not report a real trauma, but rather an acute pain occurring after a weight-bearing twist on the knee or a knee flexion.
Locking of the knee is usually associated with bucket handle tears of the meniscus, and must be carefully inquired about. A haemorrhage around the posterior capsule and medial collateral ligament, with subsequent hamstring spasm, can mimic locking. Snaps, clicks, catches or jerks can be reported by the patients, and the examiner should try to reproduce them with the manipulative manoeuvres.
Painful giving way of the knee is a common symptom, and is often reported as being caused by rotatory movements and often associated with a feeling of "the joint jumping out of place". This symptom is nonspecific and is also reported in cases of loose bodies, patellar chondromalacia, instability, and quadriceps weakness.
In cases of instability, the onset of the lesion can most of the time be related to a single injury, and the patient usually remembers it. However, it is often difficult to recall the exact mechanism of injury, and the patient should be encouraged to try to reproduce the "twist" or impact sustained by the knee at the time of the injury: this can strongly help in estimating the anatomic structure(s) involved in the lesion. Additionally, the rupture of a ligament such as the ACL or PCL often produces an audible "snapping" or "cracking" sound: the patient should be asked whether he/she heard such a sound.
Clinical Examination
The clinical examination of a knee is aimed at evaluating three aspects: 1) patello-femoral joint/extensor mechanism; 2) articular (meniscal and chondral) lesions; and 3) knee instability.
The series of the best-known exams, signs and tests used for each of the three aspects will be discussed here.
1 - Patello-Femoral Joint
Q Angle
The Q-angle is the angle formed at the intersection of a line drawn from the anterior superior iliac spine to the center of the patella and a line drawn from the center of the tibial tubercle to the center of the patella. The angle can be measured in full knee extension, but the patella is more stable when centered in the trochlear groove; therefore it is recommended to measure the Q-angle at 30° of knee flexion to move the patella into the proximal portion of the trochlea. The range of normality is usually considered 10° to 20°. An increased Q-angle may indicate a tendency to lateral tilt or glide. However, the clinical usefulness of the Q-angle is debated, and no correspondence has been reported between Q-angle and PF pain measurements and patients' clinical symptoms [1]. A sitting Q-angle (tubercle sulcus angle) has been advocated to better represent the relationship between the patellar and quadriceps tendon vectors. An increased Q-angle (15°-20°) is associated with lateral patellar subluxation [2]. It has been noted that a static measure should not be used to assess a dynamic condition such as PF maltracking. It is still debated whether the Q-angle correlates with PF pain syndrome [3] or not [4], and as long as this topic remains unclear, the Q-angle alone should never be used as a diagnostic tool for PF joint pathology.
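As a minimal illustration of the geometry described above (not part of the original review), the Q-angle can be computed from frontal-plane landmark coordinates as the angle between the quadriceps line (ASIS to patella center) and the patellar tendon line (patella center to tibial tubercle); the function and example coordinates below are hypothetical.

```python
import math

def q_angle(asis, patella, tubercle):
    """Q-angle in degrees from 2-D frontal-plane landmarks: the angle between
    the ASIS->patella line and the patella->tibial tubercle line."""
    ux, uy = patella[0] - asis[0], patella[1] - asis[1]          # quadriceps line direction
    vx, vy = tubercle[0] - patella[0], tubercle[1] - patella[1]  # patellar tendon direction
    cos_a = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# Made-up coordinates in cm (x = lateral, y = proximal); this arrangement yields
# a value within the commonly quoted 10-20 degree range.
print(round(q_angle(asis=(5.0, 45.0), patella=(0.0, 0.0), tubercle=(1.5, -7.0)), 1))
```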
Patellar Tilt and Glide
Patellar tilt and glide are often cited together and are sometimes treated as synonyms. In fact, the patellar tilt test assesses tightness of the lateral restraints; it is performed with the patient supine and the knee in full extension. If the lateral side of the patella cannot be elevated above the horizontal, the test is positive.
The glide test is performed with the knee flexed at 30°: if the patella glides laterally over more than 75% of its width, laxity of the medial restraints is diagnosed, while if it glides less than 25%, tightness of the lateral restraints is predicted [5] (Figure 1).
The main restraint to lateral dislocation is the medial patellofemoral ligament (MPFL). The MPFL can be evaluated with the knee in full extension and the patella subluxated medially with the thumb, as in the glide test. This manoeuvre tightens the MPFL; if an area of tenderness is palpated, it usually identifies the location of the tear. A lateral glide greater than 75% of the patellar width is abnormal and indicates MPFL insufficiency (Figure 2).
Patella tracking
A careful observation of patella tracking is mandatory, to rule out any muscular/ligamentous deficiency.
The quadriceps muscle is composed of the rectus femoris and vastus intermedius muscles, which apply an axial load to the patella, and the vastus lateralis and vastus medialis, which have oblique insertions and pull the patella in either direction.
The medial retinaculum and the lateral retinaculum act as static constraint to patella tracking.
The tracking of the patella from full extension into flexion should be assessed visually and should be smooth, without abrupt or sudden movements. During knee flexion the patella moves more centrally and the facets increase their contact with the femoral condyles. The iliotibial band has an expansion to the lateral retinaculum and, during knee flexion, produces a lateralization of the patella. Lateralization of the patella during flexion can be caused by weakness of the medial muscles and retinaculum, or by tightness of the lateral structures.
Additionally, the patella engages the femoral condyles (the trochlea) at about 20 to 30 degrees of knee flexion: in case of condylar hypoplasia the facets do not engage in the trochlea and the patella can glide easily.
Besides maltracking, any articular pain or crepitus can be elicited by applying pressure on the patella during flexion and extension: this does not always indicate PF chondromalacia, and other causes of pain must be considered, such as neuroma, patellar tendonitis, plica, referred pain, meniscal derangement, synovitis, and osteochondritis dissecans [6].
J sign
The patella is subluxated laterally in extension and suddenly shifts medially when engaging the femoral trochlea, following a J-shaped path. This sign indicates excessive lateral patellar shift in terminal extension [7]; external rotation facilitates its identification [8].
Meniscal and chondral lesions
A meniscal tear can be difficult to diagnose, as symptoms are mostly non-specific and other injuries can mask the meniscal lesion. A meniscal lesion should be suspected whenever pain occurs after a weight-bearing sprain of the knee, after prolonged squatting, or after a true trauma.
Chondral lesions are more often related to chronic degeneration, but acute lesions can also occur.
As the menisci have no direct innervation, pain is related to synovitis in the adjacent capsular and synovial tissues, as is the case for chondral lesions. Thus, discriminating between meniscal and chondral lesions can sometimes be difficult.
Crepitation during flexion and extension against resistance may indicate cartilage pathology. In case of a chondral lesion at that level, the patient may walk with an externally rotated gait to avoid contact of the medial femoral condyle with the medial tibial spine [9].
All tests for meniscal and chondral lesions are a combination of knee flexion, tibial rotation and a stress on the joint line: this is the position where the posterior condyles roll back and the joint space becomes narrow, tightly engaging the menisci.
Meniscal Palpation Tests
In the McMurray test the knee is flexed while the leg is externally rotated and the joint line is palpated with a finger. Then, the knee is slowly extended. The test for the lateral meniscus is carried out by internally rotating the leg. Pain or a crackling sensation is felt when the condyle engages the meniscal lesion (Figure 3).
In Bragard's test, external tibial rotation and knee extension bring the meniscus more anterior: if tenderness is felt on joint line palpation, an articular surface irregularity (i.e. chondral lesion) or a meniscal tear is suspected.
In Steinmann's second test joint line tenderness migrates posteriorly with knee flexion and anteriorly with knee extension, following the movements of the meniscus.
In the figure of four meniscal stress manoeuvre, the knee is held in a "figure of 4" (Cabot's) position, then the knee is swung rapidly from a varus to a valgus stress while a finger is pressed into the joint line. This brings the meniscus toward the periphery of the joint while the finger pushes it toward the centre of the joint: the combination of these two opposite forces stresses the meniscus, eliciting sharp pain in case of a meniscal tear (Figure 4).
Meniscal Rotation Tests
Apley's (grinding) test is carried out with the patient prone and the knee flexed to 90°. The leg is then twisted while being first distracted (pulled) and then compressed (pushed). If pain is felt only during compression, a meniscal lesion is diagnosed, while if no difference between distraction and compression is detected, a chondral lesion is more likely (Figure 5). In Bohler's test a varus stress and a valgus stress are applied to the knee: pain is elicited by compression of the meniscal tear.
The squat test, duck walking test and Thessaly test consist of several repetitions of full weight-bearing knee flexions in different positions (squatting, walking in full flexion, and at 5° and 20° of flexion, respectively) [11].
Merke's test is similar to the Thessaly test and is performed with the patient in a weight-bearing position: internal rotation of the body produces external rotation of the tibia and elicits medial joint line pain when the medial meniscus is torn. The opposite occurs when the lateral meniscus is torn.
In Helfet's test the knee is locked and cannot rotate externally while extending, and the Q-angle cannot return to normal with extension.
In Peyr's test the patient is asked to sit cross-legged (Turkish position), thus stressing the medial joint line: if the position elicits pain, the test is positive for a medial meniscal lesion.
In Steinmann's first test the knee is held flexed at 90° and forced into external and then internal rotation: the test is positive for a medial meniscal tear if pain is elicited on external rotation, and positive for a lateral meniscal tear in case of pain during internal rotation.
Knee Instability
Instability is usually defined by a direction (anterior, posterior, medial, lateral, rotatory), which is the position the proximal tibia can abnormally reach with respect to the distal femur. The direction of instability depends on the single, or multiple, structures involved: the main structures involved in knee (in)stability are the ACL, PCL, MCL, LCL, posterolateral corner and posteromedial corner. Many manoeuvres are available to determine the type of instability and to test the knee structures involved. All tests can be divided into 4 groups: stress tests, slide tests, pivot shift (jerk) tests and rotational tests [6,9,10,12].
Stress Tests
The standard stress tests include valgus (abduction) and varus (adduction) tests; additionally, Cabot manoeuvre is a commonly used stress test.
The valgus (abduction) stress test and the varus (adduction) stress test are among the best known and most used knee tests.
The key point is not to perform these tests carelessly. The test should be carried out at 30° of flexion rather than in full knee extension: by flexing the knee, all tendinous structures and the posterior capsule are relaxed, allowing the MCL and LCL to be tested in isolation. Palpating the joint line with one finger can be useful to determine the amount of opening. According to the American Medical Association (AMA), the amount of opening is graded as: grade I = 0 to 5 mm of opening, with a hard endpoint; grade II = 5 to 10 mm, with a hard endpoint; grade III = over 10 mm of opening, with a soft endpoint. Positivity of the test should not be judged on pain but on the degree of joint opening; in fact, pain can be suggestive of a partial rupture of the MCL, while a completely ruptured ligament is not stressed by the test and therefore only mild pain is evoked (Figure 6).
Cabot's manoeuvre is another stress test, which evaluates the LCL. The knee is held in a 'figure of four' position while a varus stress is applied to the joint: the LCL, when intact, can be distinctly palpated as a tight cord stretched between the fibular head and the lateral epicondyle.
While keeping the patient's knee in this position, the figure of four meniscal stress manoeuvre can also be performed (for details, see the figure of four meniscal stress manoeuvre described above).
Cabot's manoeuvre and the figure of four manoeuvre can cause severe pain and are difficult to perform in an acute setting (Figure 7).
Slide Tests
With these tests the examiner slides the tibia, trying to subluxate it from the distal femur.
Anterior and Posterior Drawer Tests: these are the most commonly used tests for ACL and PCL evaluation; they are easy to perform but require some attention to avoid mistakes and to be correctly interpreted. The tests have to be carried out in three different tibial rotational positions: neutral, and at 30° of internal and external rotation. Internal rotation tightens the PCL and the posterolateral corner, so that the anterior drawer can become negative in this position. The anterior and posterior drawer tests are performed simultaneously, and the examiner has to take care to determine the amount of anterior and posterior tibial translation. Indeed, in some cases, when a PCL-deficient knee has a posteriorized starting position, the reduction to a neutral position can mimic a positive anterior drawer test: careful evaluation is required to avoid this mistake. In order to determine the correct starting point, palpation can be useful: in the neutral position the tibial plateau and the medial condyle face one another, with a slight anterior step-off of the tibia (approximately 0.5-1 cm); this has to be taken as the "zero point" for anterior and posterior drawer evaluation.
In an acutely swollen knee the test can be done keeping the knee in a less flexed position, at 60° to 80°, thus avoiding excessive pain due to haemarthrosis.
The menisci can mimic a hard stop, giving a false negative to the test, when they engage in the joint space under the femoral condyles during the anterior translation of the tibia. This 'doorstop' effect is more often produced by the lateral meniscus than by the medial meniscus (Figure 8).
The Lachman test is the ACL evaluation test that is easiest to perform in all settings: it can be particularly useful when the knee is examined in the first days after injury, while it is swollen and highly painful. The test is performed with the knee held between full extension and 30° of flexion, slightly externally rotated. As in the drawer test, besides the amount of anterior translation, the quality of the endpoint is important: a soft stop is highly predictive of ACL rupture, while a hard stop can indicate an intact ACL, even in case of a considerable amount of tibial translation (Figure 9).
The mechanical quantification of tibial translation with measurement instruments such as the KT-1000® is useful for follow-up but not for diagnostic purposes.
The posterior Lachman test evaluates the PCL, with less efficacy than the anterior Lachman test.
Many tests evaluate the tibial "sag", or subluxation, that can be encountered in PCL-deficient knees: with the knee flexed, the tibia falls into a posteriorly subluxated position; contraction of the extensor apparatus reduces this subluxation (anteriorly). The tibia can be held in three positions: with the patient supine, hip flexed at 90° and knee flexed at 90° (Figure 10); in the drawer position; or with the knee slightly flexed as in Lachman's test. In these positions, a tibial drop-back is noted (particularly in the first position: Passive Tibial Sag sign). Then the tibia is actively reduced by contraction of the quadriceps muscle.
In the Quadriceps Active Test the patient is asked to contract the muscle while maintaining the knee in a flexed position: this pulls the tibia upward, obliterating the sag (Figure 11).
In the Lachman-type position, the patient is asked to lift his leg against resistance (Active Resisted Extension Test).
In the drawer-type position, the patient is asked to lift the leg against resistance (Active Resisted Extension Test II), or contraction of the quadriceps muscle is obtained by evoking the patellar reflex (Patellar Reflex Reduction Test).
Pivot Shift (jerk) Tests
These tests evaluate the rotatory instability that affects ACL-deficient patients: this instability causes discomfort or frank pain with a shift or jerk of the knee joint, usually felt when squatting or changing direction.
An isolated ACL rupture produces a slight shift, often highly uncomfortable for the patient, while a posterolateral corner lesion is required to produce a large, visible and sometimes audible jerk.
These tests are painful, and most of the time the test is no longer reproducible after the first attempt, but they are the most effective in detecting an ACL rupture. In the hours after the injury, as the knee starts to swell, the test becomes more and more difficult to perform, and more painful, so it has to be carried out either in the very acute setting or in the chronic one.
McIntosh first described the test as the 'pivot shift' test (McIntosh's Pivot Shift (Jerk) Test), quoting a hockey player with an unstable, ACL-deficient knee who reported: "when I pivot, my knee shifts".
The mechanism of these tests is that, in the first degrees of flexion, the tibial plateau forced by a valgus and internal rotation stress subluxes anteriorly; then, at about 30° of flexion, it suddenly reduces posteriorly as the iliotibial band passes posterior to the center of rotation, pushing the tibial plateau backwards inside the joint line (Figure 12).
In Noyes' Glide Pivot Shift Test the tibial subluxation is achieved not by internally rotating the leg, but rather by compressing the tibia axially towards the femur and lifting it anteriorly. The examiner tries to dislocate the whole tibial plateau (antero-posterior instability), not only its lateral aspect (rotatory instability), so a 'glide', rather than a clear clunk, is evoked.
Hughston's Jerk Test produces the subluxation by extending the knee from the flexed position, applying the same valgus and internal rotation stress as in Noyes' test.
The Slocum's Anterolateral Rotary Instability (ALRI) Test is performed with the patient in a semilateral position, resting on the unaffected limb, with the affected knee extended and the limb supported only by the heel resting on the examining table. In this position the foot and tibia rotate internally, translating the lateral tibial plateau anteriorly. A vertical (valgus) stress is applied to the knee, and the knee is then progressively flexed. In the first 20 degrees of flexion the tibia subluxes, while at approximately 40° it reduces with a sudden reduction shift (or clunk); a finger placed at the joint line can help detect the reduction. The position of the pelvis, held on the side and slightly posteriorly, avoids the rotational bias of the hip. This test is reported to be more effective than other pivot shift tests, and less painful for the patient (Figure 13).
The Reverse Pivot Shift Sign evokes the same shift as in pivot shift signs, but for PCL deficient knees: in these cases the lateral tibial plateau subluxes posteriorly when the tibia is stressed in external rotation and valgus, and reduces in extension.
The test can also be performed in the reverse direction, from the extended reduced position to the flexed subluxed one.
External Rotation Tests
These tests evaluate the posterolateral corner (PLC): a PLC-deficient knee presents an external rotatory instability. PLC lesions are often associated with ACL or PCL tears, so it is not uncommon to underestimate or misdiagnose a PLC lesion; the following tests are intended to evaluate the posterolateral corner selectively [13,14].
The Tibial External Rotation (Dial) Test evaluates the amount of increased passive external rotation of the tibia in different positions of the knee. The supine position is more comfortable for the patient, but in the prone position the hip is held in place by the patient's weight, thus eliminating the rotational effect of the hip. The knee should be tested at 30° and 90° of flexion, because in full extension the lateral gastrocnemius tendons are tightened and reduce the external rotation drive. The test is positive when the affected knee rotates externally 10° more than the unaffected knee. In the flexed position the proximal tibia not only rotates externally but also subluxes posteriorly: reducing the tibia on the lateral femoral condyle with a finger further increases the amount of external rotation (Figure 14).
The External Rotation Recurvatum Test evaluates the PLC and the posterior capsule. The knee is progressively extended from 10° of flexion to maximum extension while being externally rotated. Positivity is given by the combination of increased external rotation and hyperextension (recurvatum). Alternatively, the test can be performed by lifting the extended lower limbs by the toes, thus applying a force combining varus, external rotation and recurvatum (Figure 15).
A large amount of recurvatum suggests an associated PCL rupture.
The Posterolateral External Rotation (Drawer) Test is a combination of the posterior drawer and external rotation tests: with the knee flexed at 30° and then at 90°, the tibia is forced posteriorly and into external rotation, subluxating it. If subluxation occurs at 30° but not at 90°, an isolated PLC injury is presumed, while if subluxation also occurs at 90°, a combined PCL and PLC injury is suspected.
Injury Patterns
Different injury patterns show different patterns of test positivity. The injury patterns and the corresponding tests are listed in Table 1, in order of sensitivity [15].
Conclusions
This paper reviews the most widely known and used tests and signs for knee examination, outlining the importance of performing the tests in the correct way, in the right setting and at the right time, and of giving them the correct interpretation. With a thorough knowledge of these tests, the surgeon has a powerful tool for the diagnosis and follow-up of pathologies involving the patellofemoral compartment, meniscal and chondral lesions, and knee instability.
Joint Power and Subchannel Allocation for Distributed Storage in Cellular-D2D Underlays
Wireless distributed storage is beneficial in the provision of reliable content storage and offloading of cellular traffic. In this paper, we consider a cellular device-to-device (D2D) underlay-based wireless distributed storage system, in which the minimum storage regenerating (MSR) coding combined with the partial downloading scheme is employed. To alleviate burdens on insufficient cellular resources and improve spectral efficiency in densely deployed networks, multiple storage devices can simultaneously use the same uplink cellular subchannel under the non-orthogonal multiple access (NOMA) protocol. Our objective is to minimize the total transmission power for content reconstruction, while guaranteeing the signal-to-interference-plus-noise ratio (SINR) constraints for cellular users, by jointly optimizing power and subchannel allocation. To tackle the non-convex combinational program, we decouple the original problem into two subproblems and propose two low-complexity algorithms to efficiently solve them, followed by a joint optimization implemented by alternately updating the solutions to each subproblem. The numerical results illustrate that our proposed algorithms approach the performance of an exhaustive search with lower computational complexity, and that the NOMA-enhanced scheme provides more transmission opportunities for neighboring storage devices, thus significantly reducing the total power consumption.
Introduction
The explosively growing mobile data traffic has dramatically burdened current wireless networks and posed a great challenge to the future 6G communications. To alleviate the limited wireless bottlenecks, distributed storage over wireless links has been introduced as a promising technique for offloading the ever-increasing cellular traffic [1][2][3]. For a distributed storage system, the popular content files can be pre-stored across multiple distributed storage devices (called content helpers). Users requiring the stored content (content requesters) can directly download them from neighboring content helpers (CHs) instead of from the serving BS, resulting in lower power consumption and content delivery delay [4][5][6].
In practical communication scenarios, storage devices may be individually unreliable when some storage device fails or leaves the network, and thus loses its stored content. To maintain high reliability, there has been a large body of related work [7][8][9][10] researching storage coding schemes to facilitate a reconstruction of the original content, as well as repairing the lost data. Among them, the minimum storage regenerating (MSR) codes invoked in [7] could achieve the optimal tradeoff between repair bandwidth and storage efficiency. However, in most applications of MSR codes, the content requesters (CRs) tend to download all the data stored in specific CHs to reconstruct its desired content [11][12][13]. Considering the limited bandwidth of wireless links between CHs and CRs, it could be more advantageous to allow CRs to download only a small part of the stored symbols from any CH. As proved in [14], a partial downloading scheme could provide more freedom in terms of downloading choices, and consequently consume less power for content reconstruction. Inspired by these ideas, the MSR coding scheme, combined with the partial downloading scheme, will be employed in our proposed wireless distributed storage system.
For more efficient content delivery without additional infrastructure costs, deviceto-device (D2D) communications have emerged as a potential candidate for direct transmission between CHs and CRs. For example, our previous work [15] investigated a D2D-assisted wireless distributed storage system to provide power-efficient content delivery while meeting the reliability requirements. Work [16] addressed the repair problem when a D2D device storing data failed and derived the analytical expression for power consumption of data repair, which was verified to be significantly lower compared with the traditional, cellular-only communications. On the other hand, to avoid the incompatibility issues with unlicensed spectrum, reusing the licensed spectrum (i.e., cellular resources) for D2D transmission provides much better spectral efficiency through careful interference coordination. Towards this end, leveraging the cellular-D2D underlay mode for distributed storage systems has attracted increasing interest in the recent literature [17][18][19][20]. Based on graph theory, the optimization of spectrum resource allocation among CHs and cellular users (CUs) has been analyzed in [17,18], targeting the minimization of content reconstruction costs. Considering the mobility and different interests of CHs, the authors in [19,20] focused on socially enabled D2D communications over cellular links. To search for and assign qualified D2D links for content reconstruction, they evaluated the success rate for content delivery based on the statistic social interaction information, as well as the D2D transmission effects on cellular communications. Unfortunately, all the aforementioned work adopted the full downloading scheme to reconstruct the desired content and assumed that different D2D links are orthogonal with each other for mathematical tractability, in which the disadvantages stem from the scarcity of spectrum resources, limiting the number of feasible CHs, and thus may not be applicable in cellular-D2D underlays with densely deployed storage devices.
Differing from the orthogonal multiple access (OMA) technique, non-orthogonal multiple access (NOMA) is able to address both the massive connectivity and spectral efficiency enhancement issue by allowing multiple users to share the same resources simultaneously. Recently, several approaches have been proposed to apply the NOMA technique in D2D-enabled cellular networks for an enhanced system performance. For instance, the work [21,22] considered the non-orthogonal resource-sharing between cellular users and D2D pairs, for which the fractional frequency reuse technique and a cell sectorization method were proposed to mitigate the uplink interference, and both the overall throughput and spectral efficiency were demonstrated to be greatly improved. By delicately designing algorithms for resource allocation under the NOMA protocol, the system sum-rate achieved in [23][24][25][26] greatly outperformed the conventional OMA scheme. The potential benefits of NOMA technology motivated us to reconsider the pattern of spectrum utilization in wireless distributed storage systems, especially when we employ the partial downloading scheme and the available cellular resources are not affordable when solely occupied by each CH. However, to the best of the authors' knowledge, none of the existing work has been devoted to problems regarding NOMA-enhanced distributed storage in cellular-D2D underlays.
Against this background, we will consider the setting of a wireless distributed storage system in cellular-D2D underlays, where multiple storage devices are allowed to simultaneously reuse the same uplink cellular resources in this paper. To mitigate the uplink interference, a joint optimization on power and subchannel allocation is formulated to min-imize the total transmission power while guaranteeing both the SINR constraints at CUs and successful content reconstruction at the CR. Specifically, the original combinational optimization is proposed to be solved by taking the alternative minimization approach, for which a low-complexity greedy-heuristic algorithm and a matching-based algorithm are employed to efficiently deal with each subproblem. In summary, the contributions in this paper are as follows: • A practical framework for distributed storage in cellular-D2D underlays with the NOMA protocol is proposed, where the MSR coding and partial downloading scheme are combined for more power-efficient choices. The joint optimization of power and subchannel allocation is formulated, which aims to minimize the total transmission power for content reconstruction while guaranteeing the SINR requirements for CUs. • Given fixed subchannel allocation, a low-complexity power allocation algorithm modified from the greedy-heuristic approach is developed. In particular, a new sorting coefficient is introduced, which considers the interference effects from the CHs to CUs. The simulation results show that our proposed algorithm will closely approach the performance of the exhaustive method, and the newly introduced coefficient will bring a higher transmission rate from CHs rather than from the serving BS, which contributes to the lower power consumption. • Based on the fixed power allocation, the matching game with externalities is applied to model resource pairing between CHs and CUs, and a low-complexity subchannel allocation algorithm is proposed. Then, the joint optimization can be performed by alternatively updating power and subchannel allocation. Simulation results verify the convergence and near-optimal property of the proposed algorithm, and demonstrate that the NOMA-enhanced transmission scheme and partial downloading can significantly improve the performance gain over the conventional OMA and full downloading scheme.
The remainder of this paper is organized as follows. Section 2 presents the system model for distributed storage in cellular-D2D underlay, and formulates the problem of joint power and subchannel allocation. In Sections 3 and 4, the original problem is decoupled into two subproblems and then solved. Simulation results are reported in Section 5 to evaluate the performance of our proposed algorithms and investigate the superiority of the NOMA technique as well as the partial downloading scheme. Finally, conclusions are given in Section 6.
System Description
To offload traffic from the cellular network and avoid relying on unreliable individual storage devices, we investigate a wireless distributed storage mechanism in cellular-D2D underlays, where a specific content requester (CR) can directly reconstruct its desired content files from multiple adjacent content helpers (CHs) that have pre-stored the content, instead of downloading it from the serving BS, as shown in Figure 1. In more detail, we assume that there exist N cellular users (CUs), denoted as CU = {CU_1, CU_2, ..., CU_N}, communicating with the BS over traditional cellular links, and M CHs, denoted as CH = {CH_1, CH_2, ..., CH_M}, attempting to communicate with the CR via D2D links by reusing the uplink cellular subchannels (SCs) occupied by the CUs. By further assuming a fully loaded cellular network with available cellular resources denoted by SC = {SC_1, SC_2, ..., SC_N}, each CU_j ∈ CU for j ∈ N = {1, 2, ..., N} is allocated to SC_j ∈ SC and all SCs are orthogonal. Note that when uplink cellular resources are not sufficient for exclusive assignment, i.e., N ≤ M, more than one CH may share the same SC to communicate with the CR based on non-orthogonal multiple access (NOMA) protocols. Let Q_j and P_i be the transmission power of CU_j and CH_i, and let h_j and h_j^{(CR)} denote the signal channel gain from CU_j to the BS and the interference gain from CU_j to the CR, respectively; similarly, g_i and g_i^{(CR)} denote the gains from CH_i to the BS and to the CR. In addition, let β_{i,j} represent the resource reuse indicator for CH_i ∈ CH and SC_j ∈ SC, where β_{i,j} = 1 when CH_i reuses the resource of CU_j; otherwise, β_{i,j} = 0. It is assumed that perfect CSI is available at the serving BS and the considered CR. Then, the received signal-to-interference-plus-noise ratio (SINR) at the BS corresponding to CU_j can be expressed as
$$\Gamma_j = \frac{Q_j |h_j|^2}{\sigma^2 + \sum_{i=1}^{M} \beta_{i,j} P_i |g_i|^2}, \qquad (1)$$
where σ^2 is the noise variance and the sum in the denominator is the interference from the CHs sharing the subchannel SC_j with CU_j. In this paper, we assume that all involved CUs have fixed transmission power, and our objective is to minimize the total transmission power of the CHs, i.e., Σ_{i=1}^{M} P_i, while guaranteeing successful content reconstruction at the CR as well as acceptable communication rates for the CUs.
Inspired by the principle of NOMA, when multiple CHs reuse the same SC to transmit content simultaneously, the technique of successive interference cancellation (SIC) can be employed at the CR to mitigate the inter-user interference. Based on SIC, the messages from stronger communication links are successively decoded, while the messages from the remaining co-channel interferers are treated as noise. Without loss of generality, we assume that CUs are geographically closer to the BS and generate less interference to the CR than CHs. Let M_j = {∀i ∈ M | β_{i,j} = 1} denote the index set of CHs using SC_j, with size t_j = |M_j|, and denote by π_j(t), with t ∈ T_j = {1, 2, ..., t_j}, a sort function indicating the decreasing order of channel coefficients in M_j, i.e., |g^{(CR)}_{π_j(1)}|^2 ≥ |g^{(CR)}_{π_j(2)}|^2 ≥ ... ≥ |g^{(CR)}_{π_j(t_j)}|^2. Then, the received SINR at the CR from each CH can be obtained as
$$\gamma_{\pi_j(t)} = \frac{P_{\pi_j(t)}\, |g^{(CR)}_{\pi_j(t)}|^2}{\sigma^2 + Q_j |h^{(CR)}_j|^2 + \sum_{t'=t+1}^{t_j} P_{\pi_j(t')}\, |g^{(CR)}_{\pi_j(t')}|^2}, \qquad (2)$$
where Q_j |h^{(CR)}_j|^2 is the interference from the CU_j occupying the subchannel resource SC_j. By further assuming that the minimum transmission unit from each CH to the CR is a symbol containing B bits, and setting each subchannel with bandwidth W and duration T, the number of symbols that can be downloaded from CH_{π_j(t)} is
$$\mu_{\pi_j(t)} = \frac{WT}{B} \log_2\!\left(1 + \gamma_{\pi_j(t)}\right). \qquad (3)$$
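As a concrete illustration of the SIC bookkeeping in (2)-(3), the short Python sketch below computes the per-CH SINRs and the corresponding numbers of downloadable symbols over a single shared subchannel. The helper names and all numerical values are hypothetical and only meant to show the decoding-order logic.

```python
import numpy as np

def cr_sinrs_per_subchannel(p_ch, g_ch_cr, q_cu, h_cu_cr, sigma2):
    """SINR seen at the CR for each CH sharing one subchannel under SIC:
    CHs are decoded in decreasing order of |g^(CR)|^2, so only the not-yet-decoded
    (weaker) CHs and the co-channel CU remain as interference, cf. (2)."""
    order = np.argsort(g_ch_cr)[::-1]                     # pi_j(1), pi_j(2), ...
    sinrs = np.zeros(len(p_ch))
    for idx, t in enumerate(order):
        weaker = order[idx + 1:]                          # CHs decoded after CH t
        interference = q_cu * h_cu_cr + np.sum(p_ch[weaker] * g_ch_cr[weaker])
        sinrs[t] = p_ch[t] * g_ch_cr[t] / (sigma2 + interference)
    return sinrs

def downloadable_symbols(sinrs, kappa):
    """Symbols per subchannel use, mu = (1/kappa) * log2(1 + SINR), kappa = B/(W*T), cf. (3)."""
    return np.floor(np.log2(1.0 + sinrs) / kappa).astype(int)

# Hypothetical example: two CHs reuse the subchannel of one CU
p_ch = np.array([2.0, 1.2])                               # CH transmit powers
g_ch_cr = np.array([1.5, 0.6])                            # |g^(CR)|^2 of the two CHs
sinrs = cr_sinrs_per_subchannel(p_ch, g_ch_cr, q_cu=3.0, h_cu_cr=0.2, sigma2=0.5)
print(downloadable_symbols(sinrs, kappa=0.5))             # e.g. [2 1]
```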
Partial Downloading Scheme
For reliability and efficiency, the desired content of CR is supposed to be encoded and stored in CHs using the minimum storage regenerating (MSR) coding scheme in this paper. Note that, for analytical simplicity, we only focus on the storage and downloading process for each specific content and assume the requested content can always be found in CHs; the content popularity distribution is beyond our scope.
By referring to the MSR coding scheme invoked in [7], the desired content file consisting of L symbols, denoted as s = [s_1, s_2, ..., s_L]^T, will be stored across the M distributed CHs, with each CH storing α encoded symbols. If the desired content has already been encoded and stored in the M CHs following the standard MSR procedure, then, according to the conventional full downloading scheme [12], the CR can reconstruct the content by downloading all α stored symbols from any K (K ≤ M) CHs with Kα = L. However, due to the channel fading and bandwidth constraints of wireless links in practical scenarios, the CR may not be able to download all the stored symbols from each CH. Considering the exponential growth of the transmission power as a function of the number of downloaded symbols, we propose using the power-efficient partial downloading scheme introduced in [14] for distributed storage in cellular-D2D underlays, in which the CR downloads only a portion of the stored symbols, possibly from more than K CHs, provided that the downloaded symbols suffice for reconstruction, i.e.,
$$\sum_{j \in \mathcal{N}} \sum_{i \in \mathcal{M}} \beta_{i,j}\, \mu_{i,j} \geq L, \quad \text{with } \mu_{i,j} \leq \alpha,$$
where μ_{i,j} is the number of symbols to be downloaded from CH_i over SC_j. After determining the numbers of downloaded symbols μ_{i,j}, the CR can further decide which specific symbols to download by using the symbol selection scheme proposed in [14]. Simulation results in Section 5 will verify the superiority of the partial downloading scheme over the conventional full downloading scheme in reducing power consumption.
Problem Formulation
In this paper, we will investigate the joint optimization of power and subchannel assignment for distributed storage devices in cellular-D2D underlays, which aims to minimize the total transmission power for content reconstruction at the CR while guaranteeing the SINR constraints at CUs. For the acquisition of CSI information, suppose each CU and CH will first send some pilots to the BS and the CR for channel estimation before downloading the desired content. Then, after estimating the channel gains, the CR and the BS will perform the joint optimization and coordinate with each other to ensure the SINR constraints. Finally, the corresponding solutions will be fed back to the CHs to determine the transmission power and subchannel.
Specifically, let μ_{π_j(t)} denote the number of symbols to be downloaded from CH_{π_j(t)} over SC_j. Then, from (2) and (3), we obtain
$$2^{\kappa \mu_{\pi_j(t)}} - 1 = \frac{P_{\pi_j(t)}\, |g^{(CR)}_{\pi_j(t)}|^2}{\sigma^2 + Q_j |h^{(CR)}_j|^2 + \sum_{t'=t+1}^{t_j} P_{\pi_j(t')}\, |g^{(CR)}_{\pi_j(t')}|^2}, \qquad (6)$$
where κ = B/(WT). Solving (6) for the requested transmission powers P_{π_j(t_j)}, P_{π_j(t_j−1)}, ..., P_{π_j(1)} one by one, starting from the CH decoded last, the transmission power for transmitting μ_{π_j(t)} symbols from CH_{π_j(t)} can generally be expressed as
$$P_{\pi_j(t)} = \frac{\left(2^{\kappa \mu_{\pi_j(t)}} - 1\right)\left(\sigma^2 + Q_j |h^{(CR)}_j|^2\right)}{|g^{(CR)}_{\pi_j(t)}|^2}\; 2^{\kappa \sum_{t'=t+1}^{t_j} \mu_{\pi_j(t')}}. \qquad (7)$$
Then, the total transmission power over SC_j is given by
$$P_j = \sum_{t \in \mathcal{T}_j} P_{\pi_j(t)}. \qquad (8)$$
Note that the power allocation for the CHs should also be designed carefully, without causing severe interference to the CUs, which means that the content downloading should be performed with the minimum SINR requirements of the CUs guaranteed. Therefore, the joint resource allocation problem can be formulated as follows:
$$\min_{\{\beta_{i,j},\, \mu_{i,j},\, P_i\}} \; \sum_{i=1}^{M} P_i \qquad (9)$$
$$\text{s.t.} \;\; \Gamma_j \geq \Gamma_{min}, \;\; \forall j \in \mathcal{N}, \qquad (10)$$
$$\sum_{j \in \mathcal{N}} \beta_{i,j} \leq 1, \;\; \beta_{i,j} \in \{0,1\}, \;\; \forall i \in \mathcal{M}, \qquad (11)$$
$$\sum_{i \in \mathcal{M}} \beta_{i,j} \leq q_{max}, \;\; \forall j \in \mathcal{N}, \qquad (12)$$
$$0 \leq \mu_{i,j} \leq \alpha, \;\; \forall i \in \mathcal{M},\, j \in \mathcal{N}, \qquad (13)$$
$$\sum_{j \in \mathcal{N}} \sum_{i \in \mathcal{M}} \beta_{i,j}\, \mu_{i,j} \geq L, \qquad (14)$$
where Γ_min denotes the SINR threshold for the CUs. Constraint (10) restricts the interference received at the cellular links from the D2D links. Constraints (11) and (12) dictate that at most one SC can be allocated to each CH and that at most q_max CHs can share the same SC. Constraints (13) and (14) guarantee successful content reconstruction at the CR. The above formulation is an integer program. According to (7), the variables β_{i,j} and μ_{i,j} are coupled with each other in P_i, which leads to the non-convexity of constraint (10).
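Reading (6)-(8) in the reverse direction, the power each CH needs in order to deliver a target number of symbols can be computed bottom-up, starting from the CH decoded last. The sketch below does exactly that; the function name and the numerical inputs are illustrative assumptions.

```python
import numpy as np

def powers_for_symbols(mu, g_ch_cr, q_cu, h_cu_cr, sigma2, kappa):
    """Per-CH transmit powers on one subchannel needed to carry mu[t] symbols each,
    solved from (6) one CH at a time, starting with the CH decoded last (weakest
    D2D link) and moving up the SIC order, cf. (7)."""
    order = np.argsort(g_ch_cr)[::-1]            # decoding order pi_j(1..t_j)
    p = np.zeros(len(mu))
    interference = sigma2 + q_cu * h_cu_cr       # noise + co-channel CU interference
    for t in order[::-1]:                        # weakest-decoded CH first
        p[t] = (2.0 ** (kappa * mu[t]) - 1.0) * interference / g_ch_cr[t]
        interference += p[t] * g_ch_cr[t]        # this CH stays as noise for stronger ones
    return p

# Hypothetical example: the stronger CH carries 3 symbols, the weaker one 1 symbol
p = powers_for_symbols(mu=np.array([3, 1]), g_ch_cr=np.array([1.5, 0.6]),
                       q_cu=3.0, h_cu_cr=0.2, sigma2=0.5, kappa=1.0)
print(p, p.sum())                                # per-CH powers and the SC total P_j of (8)
```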
To deal with this combinational problem, in the following we decouple the original problem into two subproblems and provide solutions for (1) power allocation among all CHs and (2) subchannel allocation over all available SCs. After dealing with each subproblem, a joint algorithm is implemented, in which the subchannel and power allocation are performed alternately until an acceptable suboptimal solution is obtained.
Power Allocation for Content Reconstruction
In this section, supposing that the subchannel allocation is settled, we solve the subproblem of power allocation among all CHs, such that the total transmission power for content reconstruction is minimized. Due to the non-linear constraints in problem (9), the decoupled subproblem of power allocation is still intricate. Therefore, we first drop some of the constraints that complicate the power allocation and provide a greedy-heuristic approach, which is proven to achieve the optimal solution to the relaxed problem. After modifying some of the selection steps, we further propose a suboptimal algorithm with lower computational complexity, which recovers the originally dropped constraints. Simulation results will show that our proposed power allocation algorithm closely approaches the performance of the exhaustive method.
Optimal Power Allocation for the Relaxed Problem
By fixing the channel allocation variables {β_{i,j}}_{i∈M, j∈N} and dropping the constraints (10) and (13), the relaxed subproblem of power allocation can be reformulated as
$$\min_{\{\mu_{i,j}\}} \; \sum_{j \in \mathcal{N}} P_j \quad \text{s.t.} \;\; \sum_{j \in \mathcal{N}} \sum_{i \in \mathcal{M}} \beta_{i,j}\, \mu_{i,j} \geq L, \;\; \mu_{i,j} \geq 0. \qquad (15)$$
Assume that the numbers of downloaded symbols from the CHs reusing the cellular channel SC_j are given by [μ_{π_j(1)}, μ_{π_j(2)}, ..., μ_{π_j(t_j)}]; the total transmission power over SC_j is then
$$P_j[\mu_{\pi_j(1)}, \ldots, \mu_{\pi_j(t_j)}] = \sum_{t \in \mathcal{T}_j} P_{\pi_j(t)}, \qquad (18)$$
where the specific value of P_{π_j(t)} is given in (7). Before presenting the optimal solution to the relaxed problem (15), we state the following theorem about the optimal choice over each SC_j.
Theorem 1. The minimum sum power among the CHs over each SC is achieved by downloading all the scheduled symbols from the CH with the strongest D2D link coefficient, i.e., by using the allocation [Σ_{t∈T_j} μ_{π_j(t)}, 0, ..., 0] over SC_j.
Proof. Based on (18), comparing the total power of an arbitrary allocation with that of the allocation that concentrates the same number of symbols on CH_{π_j(1)} shows that the latter is never larger, since |g^{(CR)}_{π_j(1)}|^2 ≥ |g^{(CR)}_{π_j(t)}|^2 for all t. Hence, Theorem 1 is proven.
According to Theorem 1, the total power across all available SCs can be minimized by downloading symbols only from the CH with the strongest D2D link coefficient over each SC. Subsequently, we present the following result regarding the optimal solution to problem (15). Theorem 2. The greedy-heuristic approach shown in Algorithm 1 provides an optimal power allocation for the relaxed problem (15).
Proof. According to Theorem 1, since the symbol allocation [μ_{π_j(1)}, μ_{π_j(2)}, ..., μ_{π_j(t_j)}] can always be written in the form [μ_{π_j(1)}, 0, ..., 0] to minimize the total power over SC_j, we simplify the notation of the transmit power P_j[μ_{π_j(1)}, 0, ..., 0] to P_j(μ). Let ΔP_j^{(μ)} = P_j(μ + 1) − P_j(μ) denote the power increment for CH_{π_j(1)} when transmitting μ + 1 symbols instead of μ symbols. Assuming that the final number of symbols transmitted over SC_j is μ_j, the total power across all SCs is given by
$$P_{total} = \sum_{j \in \mathcal{N}} P_j(\mu_j) = \sum_{j \in \mathcal{N}} \sum_{\mu = 0}^{\mu_j - 1} \Delta P_j^{(\mu)},$$
so that P_total is the sum of L power-increment components (since Σ_{j∈N} μ_j = L). Based on these definitions, the total power can be minimized by finding the L smallest power increments among all the candidates {ΔP_j^{(μ)}}_{j∈N, 0≤μ<L}. In the following, we prove by induction that the greedy-heuristic approach selects the L smallest power increments. Specifically, by computing ΔP_j^{(μ+1)} − ΔP_j^{(μ)} > 0, we verify that the power increment ΔP_j^{(μ)} is an increasing function of μ, based on which the smallest remaining power increment at each step is min_{j∈N} ΔP_j^{(μ_j)}, which is exactly the selection that Algorithm 1 makes. This process continues until the L smallest power-increment components have been selected and, as a consequence, we obtain the optimal solution to problem (15).
To illustrate the convergence of Algorithm 1, we observe that the iterations terminate when Σ_{j∈N} μ_{π_j(1)} = L. Since each greedy-heuristic search increases Σ_{j∈N} μ_{π_j(1)} by 1, the termination condition is satisfied after exactly L iterations. Besides, given that the desired content size L is finite, Algorithm 1 converges to the optimal solution after L iterations.
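A minimal Python sketch of the greedy-heuristic idea behind Algorithm 1 is given below, assuming (per Theorem 1) that only the strongest CH on each SC transmits; the helper names and the numerical inputs are hypothetical.

```python
import numpy as np

def greedy_relaxed_allocation(L, g_best, q_cu, h_cu_cr, sigma2, kappa):
    """Relaxed greedy power allocation: at every step one more symbol goes to the SC
    whose strongest CH incurs the smallest power increment (Algorithm 1 idea)."""
    N = len(g_best)
    base = sigma2 + q_cu * h_cu_cr                       # per-SC noise + CU interference
    def power(j, m):                                     # P_j(m) for m symbols on SC j, cf. (7)
        return (2.0 ** (kappa * m) - 1.0) * base[j] / g_best[j]
    mu = np.zeros(N, dtype=int)
    for _ in range(L):                                   # place the L symbols one by one
        increments = [power(j, mu[j] + 1) - power(j, mu[j]) for j in range(N)]
        j_star = int(np.argmin(increments))
        mu[j_star] += 1
    total = sum(power(j, mu[j]) for j in range(N))
    return mu, total

# Hypothetical: 4 SCs, strongest-CH D2D gains per SC, fixed CU powers
mu, ptot = greedy_relaxed_allocation(L=12,
                                     g_best=np.array([1.5, 1.1, 0.9, 0.7]),
                                     q_cu=np.array([3.0, 3.0, 3.0, 3.0]),
                                     h_cu_cr=np.array([0.2, 0.3, 0.1, 0.25]),
                                     sigma2=0.5, kappa=1.0)
print(mu, round(ptot, 2))
```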
Suboptimal Algorithm for Power Allocation
In the previous subsection, we presented Algorithm 1 to solve the relaxed power allocation problem (15), which drops the original constraints (10) and (13). To obtain an algorithm that considers both the MSR coding constraints and the SINR constraints for the CUs, we need to make some adjustments to Algorithm 1.
Constraints for MSR Coding Scheme
As stated in Section 2.2, by employing the MSR coding scheme for distributed storage, each CH only stores α symbols obtained from linear combinations of the desired content, which leads to a constraint on the maximum number of symbols downloaded from each CH, i.e., μ_{i,j} ≤ α for i ∈ M and j ∈ N. However, the outputs of Algorithm 1 tend to violate this constraint, due to the assumption that only the CH with the strongest link coefficient transmits symbols. Therefore, at each iteration, we need to check whether the updated allocation satisfies the MSR constraint. To be specific, let U_j = {μ_{i,j}}_{i∈M_j} denote the current allocation over SC_j; the eligible set under the MSR constraint is
$$\mathcal{C}_{MSR} = \{ U_j : \mu_{i,j} \leq \alpha, \;\; \forall i \in \mathcal{M}_j \}.$$
Constraints for SINR Requirements
In addition to the MSR constraint, before gradually increasing the number of downloaded symbols, we should also check whether the SINR requirements of all CUs are fulfilled. According to (1), the feasible set of symbol allocations meeting the SINR thresholds can be expressed as
$$\mathcal{C}_{SINR} = \left\{ \{U_j\}_{j \in \mathcal{N}} : \Gamma_j \geq \Gamma_{min}, \;\; \forall j \in \mathcal{N} \right\}.$$
In addition, for the case violating the SINR constraint, we make the following remark: Remark 1. If the SINR conditions cannot be met, regardless of how the power allocation is assigned, the desired content of the CR will be downloaded from the serving BS.
Since transmitting symbols from the BS may cause more power consumption as well as waste the available cellular links, to maintain the stability of the cellular-D2D underlay and minimize the total transmission power for content reconstruction, we should try to reduce the interference from CHs to CUs as much as possible.
Low-Complexity Power Allocation Algorithm
In this subsection, we propose a low-complexity power allocation algorithm that incorporates both the MSR and SINR constraints while reaching near-optimal solutions. To achieve a better trade-off between the power consumption for content reconstruction and the interference caused by the CHs to the CUs, we introduce a new sort function ω_j(·), which specifies the priority of CH selection over SC_j according to a relative channel coefficient η_i that weighs the D2D link gain |g^{(CR)}_i|^2 against the interference gain |g_i|^2 towards the BS. Simulations in Section 5 will demonstrate the rationality and superiority of selecting according to ω_j(·) instead of π_j(·). We denote k = [k_1, k_2, ..., k_N] as an index set indicating the current selection order of CH_{ω_j(k_j)} over SC_j. Given these definitions, our proposed low-complexity power allocation approach is shown in Algorithm 2.
Algorithm 2 Low-Complexity Power Allocation
2: while Σ_{j∈N} Σ_{t∈T_j} μ_{ω_j(t)} < L do
3:   Set ΔP_j = Inf for j ∈ N
4:   for j ∈ N do
5:     if μ_{ω_j(k_j)} = α then
6:       k_j = k_j + 1
7:     end if
8:     Get U'_j by modifying μ_{ω_j(k_j)} ∈ U_j to μ_{ω_j(k_j)} + 1
9:     if U'_j ∈ C_SINR and U'_j ∈ C_MSR then
10:      ...
11:    end if
12:  end for
13:  if min_{j∈N} ΔP_j > 10^6 then
14:    Ind ← 0 and P_total ← P_BS
...
In Algorithm 2, we extend the idea of Algorithm 1, gradually increasing the number of symbols downloaded from the last selected CH until the total number of symbols meets the demand for reconstructing the desired content, i.e., Σ_{j∈N} |U_j| = L. However, unlike Algorithm 1, which only takes the minimized power increment as the criterion for the optimal choice at each iteration, Algorithm 2 also checks whether the potential variation, denoted by U'_j, satisfies the constraints C_MSR and C_SINR. If no feasible solution exists, i.e., Ind = 0, the CR will download its desired content from the serving BS.
Moreover, to guarantee superior performance in terms of sum power minimization while avoiding the high computational complexity of an exhaustive search, which traverses all the feasible CHs in each iteration, we introduce the index set k and limit each selection of a CH over SC_j to CH_{ω_j(k_j)}. In this way, we only need to compute one power increment for each SC_j and find the smallest power increment among SC_j ∈ SC, rather than computing the power increments of all potential CHs. In this case, the following remark is clear: Remark 2. By specifying the selection order and limiting the selection set, Algorithm 2 significantly reduces the computational complexity, especially when M ≫ N.
Proof. Note that the total number of iterations of Algorithm 2 is related to the content size L, so the complexity is mainly determined by the calculation of the power increments. For the conventional greedy search, this calculation is performed M times, for all candidate CHs, at each iteration. For Algorithm 2, since the CHs are allocated/grouped into N SCs in advance, and only the power increments of {CH_{ω_j(k_j)}}_{j∈N} need to be calculated at each iteration, the proposed Algorithm 2 only incurs a linear complexity of O(LN).
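The feasibility checks that distinguish Algorithm 2 from the relaxed greedy search can be sketched as follows; the BS fallback corresponds to Remark 1, and the eta-based ordering is written under the assumption (stated above) that eta weighs the D2D gain against the interference gain toward the BS. Function names and values are illustrative.

```python
import numpy as np

def eta_order(g_ch_cr, g_ch_bs):
    """Per-SC selection order by the relative coefficient eta (assumed here to weigh
    the D2D gain |g^(CR)|^2 against the interference gain |g|^2 toward the BS)."""
    return np.argsort(g_ch_cr / g_ch_bs)[::-1]

def msr_ok(mu, alpha):
    """C_MSR: no CH may contribute more than its alpha stored symbols."""
    return np.all(mu <= alpha)

def sinr_ok(p_ch, beta, g_ch_bs, q_cu, h_cu_bs, sigma2, gamma_min):
    """C_SINR: every CU must still meet its SINR threshold at the BS, cf. (1)."""
    for j in range(beta.shape[1]):
        interference = np.sum(beta[:, j] * p_ch * g_ch_bs)
        if q_cu[j] * h_cu_bs[j] / (sigma2 + interference) < gamma_min:
            return False
    return True

# Hypothetical check: two CHs share SC 0, SC 1 carries no CH
beta = np.array([[1, 0], [1, 0]])
feasible = (msr_ok(np.array([3, 1]), alpha=3) and
            sinr_ok(np.array([1.8, 10.3]), beta,
                    g_ch_bs=np.array([0.05, 0.08]),
                    q_cu=np.array([3.0, 3.0]), h_cu_bs=np.array([0.7, 0.6]),
                    sigma2=0.5, gamma_min=0.5))
print(feasible)   # if False, Remark 1 applies and the BS serves the content
```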
Subchannel Allocation Based on Matching Theory
In this section, assuming that the power allocation of each CH has been determined, we formulate the subproblem of subchannel allocation for minimizing the total transmission power as follows:
$$\min_{\{\beta_{i,j}\}} \; \sum_{i=1}^{M} P_i \qquad (26)$$
$$\text{s.t.} \;\; \Gamma_j \geq \Gamma_{min}, \;\; \forall j \in \mathcal{N}, \qquad (27)$$
$$\beta_{i,j} \in \{0, 1\}, \;\; \forall i \in \mathcal{M},\, j \in \mathcal{N}, \qquad (28)$$
$$\sum_{j \in \mathcal{N}} \beta_{i,j} \leq 1, \;\; \sum_{i \in \mathcal{M}} \beta_{i,j} \leq q_{max}, \;\; \forall i \in \mathcal{M},\, j \in \mathcal{N}. \qquad (29)$$
Constraint (27) guarantees the SINR requirements of the CUs. Constraint (28) states that the value of β_{i,j} must be either 0 or 1. Constraint (29) indicates that each CH can be assigned at most one SC, while each SC can be allocated to no more than q_max CHs. The formulated subproblem is still combinational, and the complexity of the exhaustive method increases exponentially with the number of CHs and SCs. Therefore, we employ the many-to-one two-sided matching theory [27] to solve the above problem efficiently. In the following, we first introduce some definitions and notations for the proposed matching model and then develop a low-complexity algorithm to obtain solutions to the subchannel allocation problem.
Many-to-One Matching Model and Notations
We first define the proposed matching model between the two disjoint sets CH and SC. Specifically, if SC_j is allocated to CH_i, we say that SC_j and CH_i are matched with each other and form a matching pair (CH_i, SC_j). Then, a complete matching is defined as the set of all the matching pairs between SCs and CHs, and is formally presented as follows: Definition 1. Given two disjoint sets SC and CH, a many-to-one matching Ψ is a function from the set SC ∪ CH ∪ ∅ into the set of all subsets of SC ∪ CH ∪ ∅ such that, for every CH_i ∈ CH and SC_j ∈ SC: (1) Ψ(CH_i) ∈ SC ∪ {∅} with |Ψ(CH_i)| ≤ 1; (2) Ψ(SC_j) ⊆ CH with |Ψ(SC_j)| ≤ q_max; and (3) SC_j = Ψ(CH_i) if and only if CH_i ∈ Ψ(SC_j). Based on Definition 1, our objective is to determine the optimal matching function that minimizes the total transmission power among all CHs, as shown in (26). Thus, the decision on each matching pair should depend on the resulting power consumption. By observing the relationship between the value of P_i and the value of the variable β_{i,j}, we find that the power consumption of each CH depends not only on its matched SC, but also on the set of other CHs sharing the same SC, which leads to the following remark: Remark 3. The matching model formulated between CH and SC is a many-to-one matching game with externalities, also known as peer effects [28].
Influenced by peer effects, the transmission power P_i for i ∈ M relies on the current choice of the matching function, termed the matching status, and the outcome may change according to the co-channel peers under different matching statuses. To deal with such peer effects, we introduce the concept of swap matching to adjust the matching status, as shown below: Definition 2. Given a matching function Ψ including the pairs (CH_i, SC_j) and (CH_p, SC_n), a swap matching is defined as
$$\Psi_{i,p}^{j,n} = \left(\Psi \setminus \{(CH_i, SC_j), (CH_p, SC_n)\}\right) \cup \{(CH_i, SC_n), (CH_p, SC_j)\}.$$
Based on Definition 2, a swap matching Ψ_{i,p}^{j,n} is directly generated by exchanging the two allocated SCs of Ψ while keeping all the other matching pairs the same. Note that one of the CHs involved in the swap can be a "hole" (denoted by CH_p = O), thus allowing a single CH_i to be matched with SC_n when |Ψ(SC_n)| < q_max and leaving an open spot in SC_j = Ψ(CH_i). Similarly, one of the SCs involved in the swap can also be a "hole" when Ψ(CH_i) = ∅, thus allowing an unmatched CH_i to become active.
However, not all swap operations are beneficial compared with the original matching status. To indicate whether a specific swap operation is necessary and approved, we further introduce the concept of a swap-blocking pair as follows: Definition 3. Given a matching function Ψ with (CH_i, SC_j) and (CH_p, SC_n), a pair (CH_i, CH_p) is defined as a swap-blocking pair if, and only if, it satisfies
(1) ∀x ∈ {CH_i, CH_p, SC_j, SC_n}, U_x(Ψ_{i,p}^{j,n}) ≥ U_x(Ψ); and
(2) ∃x ∈ {CH_i, CH_p, SC_j, SC_n} such that U_x(Ψ_{i,p}^{j,n}) > U_x(Ψ),
where U_k(Ψ) represents the utility of a CH or an SC under matching Ψ and, in our paper, has the following definitions:
1. For each CH_i ∈ CH, the utility is defined as the negative power consumption of CH_i when it occupies SC_j with SC_j = Ψ(CH_i), which can be expressed as U_{CH_i}(Ψ) = −P_i.
2. For each SC_j ∈ SC, the utility is defined as the negative sum power of all the CHs sharing SC_j, given by U_{SC_j}(Ψ) = −Σ_{CH_i ∈ Ψ(SC_j)} P_i.
As proved in [29], a two-sided exchange-stable (2ES) matching always exists in the proposed matching model with peer effects. To reach such a 2ES matching, swap operations between swap-blocking pairs should keep being approved until no swap-blocking pair remains. Through multiple swap matchings, the peer effects can be handled and a final stable matching status is obtained.
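The two utilities can be written down directly; the sketch below assumes the per-CH powers have already been recomputed for the matching at hand (to capture the peer effects), and the dictionary layout is an illustrative choice.

```python
def utility_ch(i, matching, power):
    """U_{CH_i}(Psi): negative power CH i spends under the current matching
    (power[i] must be recomputed whenever the matching changes, due to peer effects)."""
    return -power[i] if matching.get(i) is not None else 0.0

def utility_sc(j, matching, power):
    """U_{SC_j}(Psi): negative sum power of all CHs currently sharing SC j."""
    return -sum(power[i] for i, sc in matching.items() if sc == j)

# Hypothetical matching: CH0 and CH2 share 'SC0', CH1 uses 'SC1'
matching = {0: "SC0", 1: "SC1", 2: "SC0"}
power = {0: 1.8, 1: 2.5, 2: 0.9}
print(utility_ch(0, matching, power), utility_sc("SC0", matching, power))   # -1.8 -2.7
```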
Low-Complexity Subchannel Allocation Algorithm
In this subsection, we propose a low-complexity algorithm to efficiently solve the subchannel allocation, shown in Algorithm 3, which is equivalent to the process of finding a 2ES matching Ψ* between the two disjoint sets CH and SC.
Algorithm 3 Low-Complexity Subchannel Allocation
1: Initialize a random matching function Ψ = Ψ_0 between SC and CH, and set flag = 1.
2: while flag = 1 do
3:   Set flag ← 0
4:   for ∀SC_j ∈ SC and ∀SC_n ∈ SC \ SC_j do
5:     for ∀CH_i ∈ Ψ(SC_j) ∪ O and ∀CH_p ∈ Ψ(SC_n) ∪ O, with O representing the open spot, do
6:       if (CH_i, CH_p) is a swap-blocking pair and (27) is satisfied then
7:         Perform the swap matching between (CH_i, CH_p) and update Ψ ← Ψ_{i,p}^{j,n}
...
The key idea of Algorithm 3 is to keep executing swap operations until no swap-blocking pair is recorded by the indicator flag. Moreover, to satisfy the SINR constraints in (27), each time before approving a swap matching, we should also check whether the SINR conditions of all CUs would be violated. Through Algorithm 3, we finally obtain a 2ES matching Ψ*, which constitutes a suboptimal solution to problem (26). Simulation results in Section 5 will show that the subchannel allocation solutions obtained from Algorithm 3 approach the optimal solutions obtained by an exhaustive search.
Note that the output Ψ* of Algorithm 3 is not guaranteed to be the globally optimal matching. For example, given a matching Ψ with Ψ(CH_i) = SC_j and Ψ(CH_p) = SC_n, if a swap matching Ψ' satisfies U_{CH_i}(Ψ') < U_{CH_i}(Ψ), U_{SC_j}(Ψ') > U_{SC_j}(Ψ) and U_{SC_n}(Ψ') > U_{SC_n}(Ψ), the sum power would be further reduced by the swap operation, but this swap matching is not approved in Algorithm 3 according to Definition 3. In this case, forcing the swap operation to happen may lead to a one-sided exchange matching with weaker stability.
To illustrate the convergence of Algorithm 3, note that there are at most on the order of N(N − 1)q_max^2 swap-blocking pairs to be checked during each iteration, and that each swap matching operation further reduces the sum power. In this way, given the finite number N and since the sum power has a lower bound, the proposed Algorithm 3 converges to a stable matching status after a limited number of iterations. Moreover, supposing that the total number of iterations is I, the computational complexity of Algorithm 3 is O(IN(N − 1)q_max^2).
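A compact Python sketch of the swap-matching search behind Algorithm 3 is shown below. The two callbacks are assumptions of this sketch: `evaluate` must return the utilities of every CH and SC for a given assignment (recomputing the powers so that peer effects are captured), and `sinr_feasible` must encode constraint (27); activating completely unmatched CHs via an SC-side "hole" is omitted for brevity.

```python
import itertools, random

def swap_matching(ch_ids, sc_ids, q_max, evaluate, sinr_feasible):
    """Swap-matching search for a two-sided exchange-stable CH-to-SC assignment.
    evaluate(assign) -> dict of utilities keyed by every CH id and SC id;
    sinr_feasible(assign) -> bool for constraint (27). Both are problem-specific."""

    def violates_quota(assign):
        return any(sum(1 for i in ch_ids if assign[i] == j) > q_max for j in sc_ids)

    def find_approved_swap(assign):
        for j, n in itertools.permutations(sc_ids, 2):
            side_j = [i for i in ch_ids if assign[i] == j] + [None]   # None = open spot
            side_n = [i for i in ch_ids if assign[i] == n] + [None]
            for i, p in itertools.product(side_j, side_n):
                cand = dict(assign)
                if i is not None:
                    cand[i] = n
                if p is not None:
                    cand[p] = j
                if cand == assign or violates_quota(cand) or not sinr_feasible(cand):
                    continue
                old, new = evaluate(assign), evaluate(cand)
                involved = [x for x in (i, p, j, n) if x is not None]
                # swap-blocking pair: nobody involved is worse off, someone strictly gains
                if (all(new[x] >= old[x] for x in involved)
                        and any(new[x] > old[x] for x in involved)):
                    return cand
        return None

    # random initial matching, trimmed so that no SC exceeds its quota q_max
    assign = {i: random.choice(list(sc_ids)) for i in ch_ids}
    for j in sc_ids:
        for i in [i for i in ch_ids if assign[i] == j][q_max:]:
            assign[i] = None
    while True:
        nxt = find_approved_swap(assign)
        if nxt is None:              # no swap-blocking pair left: 2ES matching reached
            return assign
        assign = nxt
```

Keeping the utility computation behind a callback makes the externalities explicit: every candidate swap triggers a re-evaluation of the co-channel powers before the swap-blocking conditions of Definition 3 are tested.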
Joint Power and Subchannel Allocation Algorithm
Based on the previously proposed Algorithms 2 and 3, the joint power and subchannel allocation algorithm can be presented as Algorithm 4. In the initialization phase, a random subchannel matching is given. Then, the power allocation and subchannel allocation are performed alternately under the constraint of a maximum number of iterations l_max.
Algorithm 4 Joint Power and Subchannel Allocation
...
3: while l < l_max do
4:   Update the power allocation P_i and U_j for i ∈ M, j ∈ N with fixed Ψ^(l−1) using Algorithm 2.
5:   Update the subchannel matching function Ψ^(l) under the current power status using Algorithm 3.
...
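The alternation in Algorithm 4 can be summarized in a few lines of Python; `power_step` and `channel_step` stand in for Algorithms 2 and 3, and their return signatures are assumptions made for this sketch.

```python
def joint_allocation(init_matching, power_step, channel_step, l_max=10, tol=1e-3):
    """Alternating optimization: refine the power allocation for a fixed matching,
    then refine the matching for the resulting powers, until the total power stops
    improving or l_max iterations are reached."""
    matching, best = init_matching, float("inf")
    for _ in range(l_max):
        powers, _ = power_step(matching)                    # Algorithm 2 with the matching fixed
        matching, p_total = channel_step(powers, matching)  # Algorithm 3 with the powers fixed
        if best - p_total < tol:                            # no meaningful improvement left
            break
        best = p_total
    return matching, best
```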
Numerical Results
We consider a wireless distributed storage system in cellular-D2D underlays, in which a specific CR intends to download and reconstruct the desired content from M = 8 CHs underlaid with N = 4 CUs. Each CH has a storage capacity of α = 3 and the original content size is set to L = 12. Assume that the distance between the CR and any CH is d1 = 0.5, the distance between the serving BS and any CH is d2 = 1.5, the distance between the CR and any CU is d3 = 1, and the distance between the BS and any CU is d4 = 1.2. The channel gains are then modeled as complex Gaussian random variables CN(0, d^{-2}). We further assume that the transmit power of each CU is fixed at Q_j = 3 for j ∈ N, and the minimum SINR threshold for CUs is set to 0.5. In addition, the transmit power of the BS is set to 100, and the system coefficient and the noise power are set to κ = 1 and σ^2 = 0.5, respectively.
In this section, we will first evaluate the performance of our proposed algorithms through simulations, i.e., Algorithm 2 for the subproblem of power allocation and Algorithm 3 for the subproblem of channel allocation. Then, by using Algorithm 4 to jointly perform the power and subchannel allocation, we will further investigate the superiority of the partial downloading scheme as well as the NOMA-enhanced transmission scheme in our proposed cellular-D2D underlay.
Property of the Proposed Algorithms
By randomly fixing a subchannel allocation status, we first demonstrate the near-optimal performance of Algorithm 2 for power allocation. Figure 2 plots the total transmission power obtained from the proposed Algorithm 2 for 100 channel realizations. The exhaustive search is also provided as a benchmark for comparison. It can be observed that most of the solutions of Algorithm 2 nearly attain the performance upper bound, i.e., the optimal solutions for power allocation. Figure 3 further shows the statistical histogram of the power gap between Algorithm 2 and the exhaustive search for 1000 channel realizations. As can be observed, around ninety-eight percent of the results obtained from Algorithm 2 are close to the optimal results. Then, we verify the rationality and effectiveness of introducing the newly defined relative coefficient (denoted by "η"), which weighs the D2D link gain |g^{(CR)}_i|^2 against the interference gain |g_i|^2, to specify the selection order at each iteration in Algorithm 2, instead of using the channel coefficient |g^{(CR)}_i|^2 (denoted by "g"), which is usually chosen in the D2D case, as in the work [15]. Figure 4 illustrates the proportion of all 10,000 desired contents that need to be downloaded from the serving BS due to SINR constraints (denoted by "BS Serving Proportion"). From Figure 4, we find that using the relative coefficient η guarantees that more content files are downloaded from neighboring CHs rather than from the BS, especially with larger κ, i.e., when the available bandwidth W is limited. Since the BS transmission power is usually much higher, the reduced BS serving proportion implies a reduced total power consumption for content reconstruction, as verified in Figure 5. This makes sense, since the interference effects from CHs to CUs are considered when we select CHs following the order indicated by η, while the original coefficient g ignores the co-channel interference from CHs to CUs, and may thus lead to violations of the SINR constraints.
Next, given a fixed power allocation among all CHs, we evaluate the convergence and optimality of Algorithm 3 for subchannel allocation, in which we assume that no more than q_max = 3 CHs are allowed to share the same SC. Figure 6 shows the cumulative distribution function (CDF) of the required number of swap operations for Algorithm 3 to converge. We observe that Algorithm 3 always converges within a small number of iterations, and that convergence becomes faster as the number of CHs decreases. Figure 7 plots the total transmission power obtained from Algorithm 3 for 100 channel realizations. The performance of the exhaustive search and of random pairing between CHs and SCs (denoted by "random matching") is also plotted. It can be seen that the proposed Algorithm 3 brings a considerable performance gain over random matching. Meanwhile, Algorithm 3 is shown to approximately reach the optimal results obtained by the exhaustive search in most cases, while requiring much lower computational complexity.
Superiority of the Proposed Transmission Schemes
Before demonstrating the superiority of the partial downloading scheme and the NOMA-enhanced transmission scheme for the considered distributed storage systems, we first verify the convergence of the joint power and subchannel allocation optimization, i.e., Algorithm 4, which alternately implements Algorithms 2 and 3 with the maximum number of iterations l_max = 10. Figure 8 describes the convergence behavior of Algorithm 4 as the iterative procedure executes, from which we can see that Algorithm 4 converges after a small number of iterations.
Then, we explore the potential benefits of the partial downloading scheme over the conventional full downloading scheme [16]. Figure 9 compares the total transmission power for content reconstruction using the partial downloading scheme with that of the full downloading scheme, where the full downloading scheme is realized by exhaustively searching for the optimal L/α CHs and downloading all of their stored symbols. As can be observed in Figure 9, the proposed partial downloading scheme significantly reduces the total transmission power, especially for a larger number of stored symbols α and a more restrictive channel condition κ, in which case the partial downloading scheme provides more freedom in the downloading choices and consequently alleviates the exponential increase of transmission power with the number of downloaded symbols.
Finally, Figure 10 shows the total power consumption of the NOMA-enhanced transmission scheme versus the conventional OMA-based transmission scheme for distributed storage in cellular-D2D underlays. It can be seen that the NOMA transmission scheme outperforms the OMA scheme for all considered κ values. This is reasonable because, unlike the OMA scheme, in which each SC is allocated to only one CH, the NOMA protocol allows a subchannel to be shared by multiple CHs and thus improves resource utilization.
Conclusions
In this paper, we studied the joint optimization of power and subchannel allocation for wireless distributed storage in cellular-D2D underlays, where MSR coding and the power-saving partial downloading scheme are employed for content reconstruction. Since the formulated problem is a non-convex combinatorial optimization, we decoupled it into two subproblems, i.e., the power allocation and subchannel allocation problems. Given a fixed subchannel allocation, a low-complexity greedy-heuristic algorithm was proposed to solve the power allocation problem. Based on the power allocation results, a matching model with externalities was introduced and a corresponding swap-matching algorithm was proposed to deal with the subchannel allocation problem. We then alternately performed power and subchannel allocation to obtain the joint optimization. The simulation results verified the convergence as well as the near-optimal property of our proposed algorithms. In addition, it was shown that the partial-downloading approach outperforms the conventional full-downloading approach, and that the NOMA-enhanced distributed storage achieves a larger performance gain than its OMA-based counterpart.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2019-01-02T23:55:07.685Z
|
2015-02-26T00:00:00.000
|
86283301
|
{
"extfieldsofstudy": [
"Economics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://academicjournals.org/journal/AJAR/article-full-text-pdf/3520FD151065",
"pdf_hash": "5e577e38301807a37c6dbed07cbfa5b4711fe34f",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43027",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Economics"
],
"sha1": "5e577e38301807a37c6dbed07cbfa5b4711fe34f",
"year": 2015
}
|
pes2o/s2orc
|
Causality relationship between agricultural exports and agriculture's share of gross domestic product in South Africa: A case of avocado, apple, mango and orange from 1994 to 2011
The study analysed causality between agricultural exports and agriculture's share of gross domestic product in South Africa from 1994 to 2011. Apple, avocado, mango and orange exports in tonnes were used to Granger-analyse agricultural exports versus the agricultural contribution to GDP. The results of the Granger causality test showed a unidirectional causality running from exports to GDP. Policies and programmes intended to help farmers meet employee wages so that they can enter export markets are ineffectual. Policies aimed at redress, such as the Employment Equity Act, which are size-dependent, may discourage growth and export participation.
INTRODUCTION
According to Ukpolo (1998), the notable relationship between exports and growth in developing countries has attracted considerable interest because of its policy implications. The reform of the agricultural sector through the Marketing of Agricultural Products Act of 1996 has placed South Africa among the world's exporters of agro-food products, including wine, fresh fruit and sugar. South Africa is also an important trader of agricultural exports in Africa and competes in international markets with exports destined for EU and US markets. The debate on the relationship between agricultural exports and agricultural Gross Domestic Product (AgGDP) has attracted considerable interest in the field of development economics because of the nature of the agricultural sector's contribution. Several empirical studies have been conducted to assess, from various aspects, the role of exports in the economic growth of developing countries. While the true measure of these nations' development needs to be expressed through improvements in the standard of living, their economic growth plays a significant part in this process by providing increased per capita income and increased revenue for government-sponsored social services, leading to export-led growth.
Many researchers (Haleem et al., 2005; Stiglitz, 2007; Shirazi and Manap, 2004; Raza et al., 2012; Jatuporn et al., 2011) believe that agriculture can salvage a declining economy under unstable global economic conditions. Avocado, apple, mango and orange production in South Africa has shown good growth trends over the past decade. Empirical studies have recently been carried out in South Africa (Dlamini and Fraser, 2010; Rangasamy, 2009; Pearson et al., 2010) to appraise the impact of export growth on economic growth, yet comparatively little has been done to analyse the impact of a single produce or a few produce within the agricultural sector. How South Africa's agricultural sector can contribute more to economic growth is one of the fundamental economic questions that needs proper consideration. The export-led growth hypothesis, which states that agricultural exports and other exports in general are key to promoting economic growth, provides one of the answers to this fundamental question. According to Abou-Stait (2005), an export-led growth strategy aims to provide producers with incentives to export their produce through various governmental policies. Chambers (1984) showed that restricting the openness of the economy depresses the agricultural sector, which in turn affects its trade, agricultural prices relative to non-agricultural prices, and income. The South African agricultural industry has become less dependent on state support and more competitive internationally, although many sectors within the industry experienced a difficult period of adjustment and distress relating to segmented levels of farming groups. The country's key and rising agricultural exports generally face relatively low levels of border protection, in part due to bilateral and general tariff concessions to South Africa following the Marketing of Agricultural Products Act of 1996. However, these preferences do not exempt the country from the seasonal elevation of tariff barriers, export quotas and the implicit constraints of the entry prices built into the European Union (EU) regime for fresh fruits. This needs utmost attention, since the seasonal elevation of tariffs affects South Africa's ability to export fruits from provinces whose harvesting seasons are similar to those in Europe and in competing countries.
On average, South African avocado, apple, mango and orange production, in both the commercial and subsistence sectors, has increased on a yearly basis. This growth in production results in surplus quantities in the market. Drawing on a neoclassical economic notion, the direct link between agricultural exports and agriculture's share of GDP can contribute to export-led economic growth. This export-led growth can create profit, allowing the agricultural economy to balance its finances and overcome the debts and low returns that are challenges in South Africa's agricultural economy. Increased agricultural export growth can trigger more avocado, apple, mango and orange production, which would create more export opportunities. Farmers producing avocado, apple, mango and orange for export can receive export tariff subsidies and better access to local and international markets. Exports of avocado, apple, mango and orange from South Africa to the African continent declined during the past three years, moving from 866 tons in 2007 to 396 tons in 2009. Meanwhile, avocado, apple, mango and orange exports to the Americas have been consistent over the last decade, remaining below 100 tons for most of the decade and only peaking at 160 tons in 2001 (DAFF, 2011).
Avocado, apple, mango and orange exports were chosen because their production has a high value-adding processing potential and is scattered around the republic. These agricultural products should be clustered based on their comparative advantage and export potential. The argument concerning the role of the exportation of these fruits as one of the main determinants of economic growth is not new. Haleem et al. (2005) investigated the export supply response of citrus and mangoes in Pakistan, reviewing the performance of citrus and mango exports for the years 1975-2004. The fluctuating performance of citrus and mango exports can be attributed to highly fluctuating domestic production, inconsistent export policies, currency devaluation, export duties, non-competitiveness of exports and the uncertain situation in international markets (Ghafoor et al., 2010).
Agricultural exports can play a significant role in analysing the contribution of agriculture's share of GDP in South Africa. A change in the quantity of produce exported to overseas markets can therefore influence economic decisions within the local market for those products. Over the years, both world agricultural exports and South African agricultural exports have grown annually. This is due to the export-oriented agricultural sector and rising demand for agricultural produce driven by climate change and stronger competition, which has improved the quality produced. The contribution of agriculture's share of GDP in South Africa has been declining while aggregate agricultural exports have been increasing.
The figures show the exports of the agricultural products considered in this study. Comparing the figures, tonnes of mango and avocado exports lagged behind those of apple and orange exports. The tonnes of avocados and mangoes remained below 100,000 tonnes and fluctuated throughout, compared with the tonnes of apples and oranges in Figure 1, a factor which economists debate in terms of the fair competition that these products face in the global market. The favourable climatic conditions for both apple and orange production in the country, and the value chain that helps process them, contribute to this higher volume of exports. Throughout Figure 1, the tonnes of oranges exported were far higher than those of apples and of the other produce in Figure 2, which shows that other produce may be improved if given the necessary support.
The study analysed the causality between agricultural exports and its share of Gross Domestic Product in South Africa. Apple, avocado, mango and orange exports were used to Granger analyse agricultural exports and agricultural GDP contribution percentages.
Study area and sampling procedure
The study covers the entire South Africa and used secondary time series data that was obtained from National Department of Agriculture, Fishery and Forestry Statistical Directorate. The study covered a sample size of 17 years (1994-2011) of avocado, apple, mango and orange exports in South Africa and the agriculture's share of GDP for the same period.
Analytical technique
The Granger causality test was used for empirical analysis. The export-led hypothesis was specified by a bivariate linear model. The model is described below:
Granger causality test
According to Konya (2004), the concept of Granger causality is centred on the idea that a cause comes before its effect. In the case of two variables, X and Y, X is said to Granger-cause Y if the current value of Y (yt) is conditional on the past values of X (xt-1, xt-2, ..., x0), and thus the history of X is likely to help predict Y. The Granger causality test is a better approach than a correlation analysis, as it is more informative than other methods such as Johansen cointegration analysis. Unlike Johansen co-integration analysis, which estimates whether a long-run equilibrium exists between two variables, the Granger causality test helps determine the direction of causation. The test, however, does not imply causation between correlated variables in any deep sense, as the name might suggest.
Furthermore, the Granger test seeks to find out whether the current value of a variable y (yt) can be explained by past values of another variable x. In that way, the variable y is said to be "Granger-caused" by x if x helps predict y, which is determined by an F-test (Gilmore and McManus, 2002; Granger, 1969). The most common way to test the causal relationship between two variables is the Granger causality test proposed by Granger (1969). The test involves estimating the following simple Vector Auto Regressions (VAR):

y_t = a_0 + Σ_{i=1}^{m} a_i y_{t−i} + Σ_{j=1}^{m} b_j x_{t−j} + u_t        (1)

x_t = c_0 + Σ_{i=1}^{m} c_i x_{t−i} + Σ_{j=1}^{m} d_j y_{t−j} + v_t        (2)

where it is assumed that the disturbances u_t and v_t are uncorrelated. Equation (1) states that the variable y_t is determined by lagged values of y and x; Equation (2) does the same except that its dependent variable is x_t instead of y_t. It should be noted, though, that the term Granger causality is somewhat of a misnomer, since finding "causality" does not mean that movements in one variable cause movements in the other; rather, causality implies a chronological ordering of movements of the series (Brooks, 2002).
Agricultural exports equation:

AGEXP_t = a_0 + Σ_{i=1}^{m} a_i AGEXP_{t−i} + Σ_{j=1}^{m} b_j AGGDP_{t−j} + u_t

Agriculture's share of GDP equation:

AGGDP_t = c_0 + Σ_{i=1}^{m} c_i AGGDP_{t−i} + Σ_{j=1}^{m} d_j AGEXP_{t−j} + v_t

where AGEXP represents avocado, apple, mango and orange exports and AGGDP represents agriculture's share of GDP.
Unit root tests agricultural exports and agriculture's share of GDP
The constant and the coefficient of EXPORTS are significant; the t-ratios are less than 2 in absolute value and the P-values are less than the t-ratios. Here the P-value gives the probability that the hypothesis (a unit root in EXPORTS) is not true. It is conventional to reject the hypothesis if the P-value is less than 0.05. The ADF statistic value is 1.468174 and the associated one-sided probability value is 0.1642. The constant and the coefficient of GDP are significant; the t-ratios are less than 2 in absolute value and the P-values are less than the t-ratios. Here the P-value gives the probability that the hypothesis (a unit root in GDP) is not true. It is conventional to reject the hypothesis if the P-value is less than 0.05. The more negative the ADF test statistic, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. The ADF statistic value was negative at 1.361641, with an associated one-sided probability value of 0.1948. This implies that a 1% increase in agricultural exports would result in a 19.4% contribution to the share of GDP. The ADF test was lagged at 3 to minimise bias and avoid weakening the power of the model, which happens when the lag length is too small or too large, respectively (Tables 1 to 3).
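For reference, tests of this kind can be reproduced with standard time-series tooling; the sketch below uses Python's statsmodels with a hypothetical file name and column labels, and is not the software actually used in the study.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

# Hypothetical annual data for 1994-2011 with columns AGEXP and AGGDP.
df = pd.read_csv("sa_agri_exports_gdp.csv", index_col="year")

for col in ["AGEXP", "AGGDP"]:
    stat, pvalue, *_ = adfuller(df[col], maxlag=3)
    print(f"ADF {col}: statistic = {stat:.3f}, p-value = {pvalue:.3f}")

# In grangercausalitytests, the second column is tested as a Granger
# cause of the first column.
grangercausalitytests(df[["AGGDP", "AGEXP"]], maxlag=3)  # exports -> GDP share
grangercausalitytests(df[["AGEXP", "AGGDP"]], maxlag=3)  # GDP share -> exports
```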
Pairwise Granger Causality Test
From the results, we reject the null hypothesis that agricultural exports do not Granger-cause agriculture's share of GDP, because the corresponding probability, at about 3%, is below the 5% significance level. We accept the null hypothesis that agriculture's share of GDP does not Granger-cause agricultural exports, as its probability value is higher, at 20.82%. Therefore, since hypothesis H1_0 is not rejected but hypothesis H2_0 is rejected, linear causality runs unidirectionally from Y1 to X1.
Conclusion
The study attempted to analyse empirically the causality between agricultural exports and their share of GDP over the period 1994 to 2011. The results derived from the Granger causality test play an important role in understanding the contribution of agricultural exports in South Africa. In conclusion, the study found a unidirectional causality running from agricultural exports to agriculture's share of GDP. Agricultural exports therefore matter for the direction of the agricultural sector's GDP in the republic. Thus, an increase in agricultural exports is expected to yield an increase in agriculture's share of GDP.
There are three direct policy implications arising from this export potential. Firstly, policies and programmes planned to help farmers meet employee wages so that they can enter the export market are ineffectual, since these farms are struggling to finance production in South Africa, which in the long run affects agricultural exports. Secondly, creating more exporters requires creating a larger pool of potential exporters of the requisite size. This means supporting the entry of emerging farmers but also encouraging the expansion of existing exporting farmers. Encouraging new investment (particularly foreign investment) requires competitive returns and guarantees of the security of this investment. These competitive returns should result from, for example, particular market characteristics (access to the Southern African region), competitive labour costs, or tax breaks. Encouraging existing farms to grow requires addressing issues that farmers cite as constraints, such as policy uncertainty, labour regulations, infrastructure investment and anticompetitive behaviour. Thirdly, policies aimed at redress, such as the Employment Equity Act, which are size-dependent, may discourage growth, increase costs and discourage export participation.
|
v3-fos-license
|
2022-01-22T16:03:21.481Z
|
2022-01-18T00:00:00.000
|
246110821
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/ra/d1ra08619g",
"pdf_hash": "a6a1c156e0b4f968f95eef94a0dead1e69e29f13",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43028",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"sha1": "495bf94e0364f73a296010b5f638ad30e397eed5",
"year": 2022
}
|
pes2o/s2orc
|
Direct band gap and anisotropic transport of ZnSb monolayers tuned by hydrogenation and strain
Using first-principles density-functional theory simulations, we explore the effects of hydrogenation and strain on the mechanical, electronic and transport properties of two-dimensional ZnSb monolayers. We find that the fully hydrogenated ZnSb monolayer exhibits large mechanical anisotropy between armchair and zigzag directions and the biaxial tensile strain reduces the anisotropy. In addition, we find that the hydrogenation can induce a metal-to-semiconductor transition with a direct band gap of 1.12 (1.92) eV using the PBE (HSE) functional. With biaxial strains, the band gaps decrease monotonically and remain direct for strains smaller than 5%. Moreover, large transport anisotropy is demonstrated by computing the effective masses of charge carriers along the asymmetric armchair and zigzag directions. We further reveal that strain can significantly tune the effective masses and a 3% strain can even switch the effective transport direction for holes. Our simulations suggest that the hydrogenated ZnSb monolayer is a promising candidate for electronic and opto-electronic applications with controllable modification via strain engineering.
Introduction
The discovery of graphene in 2004 has triggered rapidly growing interest in exploring novel two-dimensional (2D) ultra-thin materials 1,2 such as hexagonal boron-nitride (h-BN), 3,4 transition metal dichalcogenides (TMDCs) 5,6 and MXenes 7,8 with diverse physical properties. These emerging 2D materials are mainly synthesized via chemical vapor deposition or exfoliated from their mother compounds with inherent 2D layered structures. [9][10][11] Recently, ultra-thin zinc antimonide (ZnSb), a new member of the 2D material family, has been made available by transforming the sp3-hybridized three-dimensional (3D) crystal ZnSb into a structure configuration with dominating sp2 bonding via lithiation. 12 The intrinsic ZnSb monolayer is found to be metallic, while a tunable direct band gap is desirable for a wide range of technological applications. 13 Therefore, it would be of great significance to identify effective routes of tuning its properties to explore the potential and expand the application of this novel 2D material.
In two-dimensional systems, the surface exposure of a large portion of the constituent atoms, or even all of them, makes surface passivation an effective means of modifying the properties of a material. [14][15][16][17] In particular, hydrogenation has been demonstrated to have remarkable and diverse impacts. For example, hydrogenating graphene can open up a large gap, which was first studied computationally and then verified experimentally. 18,19 For TMDCs, hydrogenation can lead to a structural phase transition in the MoTe2 monolayer and can saturate the sulfur vacancies in the MoS2 monolayer to achieve tunable doping. 20,21 Crisotomo et al. 22 predict that the hydrogenated TlBi film is topologically non-trivial with a large band gap of 855 meV and could therefore be used in room-temperature applications. Xu et al. 23 show that hydrogenation can stabilize borophene to obtain borophane, which possesses a perfect linear band dispersion and Fermi velocities higher than those of graphene. Moreover, nanostructured materials can endure much larger strain compared with their bulk counterparts. [24][25][26] Strain engineering has also been widely utilized to tune electronic and optical properties, band gaps and charge transport properties. [27][28][29][30][31][32][33] Therefore, it is also critical to characterize and understand the strain response of the emerging 2D ZnSb in order to realize its potential applications.
In this paper, by performing density-functional theory calculations, we study the effect of hydrogenation and biaxial strain on the electronic structures to obtain a tunable direct band gap, and the underlying mechanism is understood based on the changes in the energy states near the Fermi level. In addition, we report the anisotropic mechanical and transport properties of fully hydrogenated ZnSb monolayers characterized by the Young's modulus and effective masses of charge carriers, respectively. This paper is organized as follows: in Section 1, we review the computational approaches employed in studying the structural energetics and electronic structures. In Section 2, we present and discuss the computational results. First, we study the atomic structures and the mechanical properties. Next, we investigate the electronic structures, density of states and near-gap states as well as their strain response. Lastly, we demonstrate the transport anisotropy from the aspects of the band anisotropy and effective masses along different orientations.
Methods
We carry out the density-functional theory calculations using the Vienna ab initio simulation package (VASP), with the exchange-correlation functional described by the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation and the interaction between core and valence electrons by the frozen-core projector-augmented wave method. [34][35][36] The cutoff energy used in the plane-wave basis expansion is set to 500 eV and a vacuum space of 15 Å along the direction normal to the ZnSb sheet is employed to eliminate the interaction between artificial periodic layers. For Brillouin zone sampling, we use the Monkhorst-Pack method with a 15 × 15 × 1 k grid. All atoms are allowed to relax until the forces acting on each atom are less than 0.001 eV Å⁻¹. We also consider the dipole correction oriented perpendicularly to the 2D surface. It is known that the PBE functional can lead to a band gap underestimation due to the self-interaction error. 37 Hence, we also perform the electronic structure calculations using the Heyd-Scuseria-Ernzerhof (HSE) hybrid functional. 38,39 The differential hydrogen absorption energy ΔE_H is used to describe the energetics of the hydrogenated sheets and is defined as

ΔE_H = E_ZnSb+nH − E_ZnSb+(n−1)H − (1/2)E_H2,

where E_ZnSb+nH and E_ZnSb+(n−1)H are the total energies of the 2D ZnSb system with n and n − 1 hydrogen atoms absorbed, respectively, and E_H2 is the energy of a gas-phase hydrogen molecule. To study the strain effect, we apply biaxial strains defined as ε = (L − L₀)/L₀ to the lattice vectors and relax the atomic coordinates for each strain, where L₀ and L are the lattice constants before and after applying the tensile strain, respectively.
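As a small numerical illustration of these two definitions, the snippet below evaluates ΔE_H and a strained lattice from placeholder total energies; the numbers are invented for the example and are not results from the calculations reported here.

```python
# Placeholder total energies in eV (not values from this work).
E_sheet_nH = -124.10     # ZnSb sheet with n adsorbed H atoms
E_sheet_n1H = -120.10    # ZnSb sheet with n-1 adsorbed H atoms
E_H2 = -6.77             # isolated H2 molecule

# Differential hydrogen absorption energy per added H atom.
dE_H = E_sheet_nH - E_sheet_n1H - 0.5 * E_H2

# Biaxial tensile strain: scale both in-plane lattice constants.
a0, b0 = 4.60, 7.55      # fully hydrogenated lattice constants (Angstrom)
eps = 0.03
a, b = a0 * (1.0 + eps), b0 * (1.0 + eps)

print(f"dE_H = {dE_H:.2f} eV; strained a = {a:.3f} A, b = {b:.3f} A")
```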
For the smooth plotting of the iso-energy contours to show the band anisotropy, a finer k-grid is needed, and we construct a first-principles tight-binding Hamiltonian to reduce the computational cost. We first use Quantum ESPRESSO to carry out the electronic structure calculation and output the wave functions on a coarse k grid (10 × 6 × 1), and then perform the Wannier interpolation technique with the Wannier90 code to obtain the eigenenergies on a k grid of 200 × 200 × 1. 40,41

Results and discussion
Structure models
We first study the structural properties of the ZnSb monolayer. As shown in Fig. 1, the unit cell is rectangular and the lattice constants of a pristine ZnSb monolayer sheet are calculated to be a = 4.52 Å and b = 7.45 Å, respectively. Along the direction normal to the sheet plane, we can observe a buckling of Δz = 0.64 Å, defined as the distance between the two outermost layers consisting of Zn or Sb atoms. There are two Sb atoms in a unit cell. The situations with one or two Sb atoms passivated by hydrogen atoms are referred to as half or full hydrogenation, which induce different changes in the lattice constants. The choice of the location of H on the ZnSb sheet is based on the difference in electronegativities between Zn and Sb atoms, which are 1.65 (Zn) and 2.05 (Sb), respectively. This indicates that there would be a charge transfer from Zn to Sb when they form bonds. The Bader charge analysis shows a charge transfer of 0.23 e⁻, leading to positively charged Zn and negatively charged Sb. H, which tends to be positively charged, is expected to bond with Sb. We further confirm the choice of the absorption position of H by performing the structural relaxation with H initially located close to Zn. The relaxation process drives H to move toward Sb and we still end up with the configuration shown in Fig. 1. With half hydrogenation, a reduces to 3.86 Å while b slightly increases to 7.48 Å. On the contrary, full hydrogenation increases a to 4.60 Å and b to 7.55 Å. The ratio a/b is very close between the pristine (0.61) and fully hydrogenated (0.61) monolayers, while the half-hydrogenated sheet exhibits the smallest value (0.52). The bucklings are increased to 1.39 Å for half hydrogenation and to 1.21 Å for full hydrogenation. In a pristine sheet, there are two nonequivalent Zn-Sb bonds with bond lengths d1 = 2.57 Å and d2 = 2.59 Å. Half hydrogenation leads to longer bonds (d1 = 2.64 Å and d2 = 2.66 Å), which are nearly the same as the values (d1 = 2.65 Å and d2 = 2.66 Å) in the fully hydrogenated sheet. The thermodynamic stability of the hydrogenated ZnSb sheets is indicated by the negative differential hydrogen absorption energies (−0.65 eV and −0.25 eV). The structural parameters discussed above are listed in Table 1. The effect of hydrogenation on the structure can be understood as follows: the Young's moduli in the zigzag and armchair directions are 31.37 N m⁻¹ and 1.55 N m⁻¹, respectively. When half hydrogenation is introduced and leads to more sp3-like bonding for the attached Sb atoms, a change in the lattice constant along the zigzag direction can effectively relax the structure, giving a smaller lattice constant a and a larger buckling (1.39 Å). Full hydrogenation leads to competition between the opposite pulling of the two sides by the hydrogenation and between this pulling interaction and the bonding interaction of the Zn and Sb atoms, making the lattice constants close to the original values of the pristine ZnSb sheet, as shown in Table 1.
Young's modulus
To further investigate the mechanical stability, we perform the stiffness matrix analysis for the fully hydrogenated ZnSb monolayer, which involves four independent elastic constants: C11, C22, C12 and C66. It is shown that the Born-Huang stability criteria are met: 42,43

C11 > 0, C22 > 0, C66 > 0, C11C22 − C12² > 0.

In addition to serving as an indicator of the mechanical stability, these elastic constants are the key parameters that determine the orientation dependence of the Young's modulus Y_2D(θ) via the formula

Y_2D(θ) = (C11C22 − C12²) / (C11A⁴ + C22B⁴ + ((C11C22 − C12²)/C66 − 2C12)A²B²),

in which θ denotes the polar angle relative to the a axis (zigzag) in Fig. 1, with A and B defined as sin θ and cos θ, respectively. We present the angle-resolved Young's modulus Y_2D(θ) for the unstrained ZnSb monolayer in Fig. 2(a). The Young's moduli in the zigzag and armchair directions are 31.37 N m⁻¹ and 1.55 N m⁻¹, respectively, indicating a strong mechanical anisotropy. This can be attributed to the structural anisotropy intrinsic to the zigzag-shaped buckling. In contrast to other well-studied 2D materials such as graphene and h-BN, whose Young's moduli are 340 N m⁻¹ and 271 N m⁻¹, respectively, the ZnSb monolayer is more flexible and its stiffness is lower. 44,45 Moreover, the stiffness and mechanical anisotropy can be largely modified by applying biaxial strain, as demonstrated in Fig. 2(b), which depicts the angle-resolved Young's modulus for the ZnSb monolayer under 8% strain. The Young's moduli in the zigzag and armchair directions become 15.39 N m⁻¹ and 19.41 N m⁻¹, respectively. In addition, for the ZnSb monolayers under 3% and 5.5% strains we present their Y_2D(θ) in Fig. S1† to demonstrate the evolution of Y_2D(θ) as a function of strain.

Table 1. Structural parameters of two-dimensional ZnSb monolayers with no, half and full hydrogenation. a, b: lattice constants; Δz: buckling, given by the distance between the two outermost layers consisting of Zn or Sb atoms; d1, d2: bond lengths of Zn-Sb; d_H-Sb: bond length of H-Sb; ΔE_H: differential hydrogen absorption energy.

To further quantify the magnitude of the mechanical anisotropy, we compute the anisotropy ratio of Y_2D, defined as Y_2D(θ)_max/Y_2D(θ)_min. Starting at 20.24 for 0% strain, the anisotropy ratio decreases monotonically with tensile strain, and a nearly isotropic state is observed for ZnSb monolayers at strains of about 5% and larger.
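A short sketch of evaluating this orientation dependence is given below; the elastic constants used are illustrative placeholders rather than the fitted values for ZnSb.

```python
import numpy as np

def young_2d(theta, C11, C22, C12, C66):
    """Angle-resolved 2D Young's modulus for an orthotropic sheet;
    theta is measured from the a (zigzag) axis, A = sin(theta), B = cos(theta)."""
    A, B = np.sin(theta), np.cos(theta)
    num = C11 * C22 - C12 ** 2
    den = C11 * A ** 4 + C22 * B ** 4 + (num / C66 - 2.0 * C12) * A ** 2 * B ** 2
    return num / den

# Illustrative elastic constants in N/m (not the values computed in the paper).
theta = np.linspace(0.0, 2.0 * np.pi, 721)
Y = young_2d(theta, C11=32.0, C22=2.0, C12=1.0, C66=0.8)
print(f"anisotropy ratio = {Y.max() / Y.min():.1f}")
```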
Electronic structures
Next, we investigate the electronic properties of the ZnSb monolayers and present the electronic band structure and density of states (DOS) of the pristine sheet in Fig. 3(a) and (b), respectively. The intense DOS peaks around the Fermi level are dominated by the Sb-p orbitals with marginal contributions from the Zn-p orbitals. In Fig. 3(b) and (d) we observe a non-zero density of states projected onto the Zn p orbital, although the valence configuration of a Zn atom is 3d¹⁰4s² with the 3p orbitals as an inner shell. When Zn forms bonds with Sb, the surrounding environment of the Zn atom is changed, leading to charge redistribution and orbital polarization of the valence electrons. This polarization is manifested as the nonzero projection of orbitals onto the p orbital. The contribution from the Sb-s orbitals is mainly located in the range of −9.5 eV to −8.5 eV, while the Zn-d orbitals form the flat bands lying from −7.0 eV to −6.0 eV. The fact that the Sb-p electrons are very active suggests that passivating those orbitals would have a significant impact on the electronic structures of ZnSb monolayers. When one of the two Sb atoms in a unit cell is bonded to hydrogen as shown in Fig. 1(c), the sheet is still metallic, although the intensity of the DOS around the Fermi level is greatly reduced, as indicated in Fig. S2,† which depicts the electronic structure and density of states. With both Sb atoms passivated by hydrogen as shown in Fig. 1(d), we observe a metal-to-semiconductor transition with a direct band gap of 1.12 eV, with both the valence band maximum (VBM) and conduction band minimum (CBM) at the Γ point, as shown in Fig. 3(c). The electronic structure calculation using the HSE functional reveals a larger direct band gap of 1.92 eV at the Γ point (Fig. S4†). Since the metallicity of the pristine monolayer ZnSb is a major obstacle for its semiconducting applications, other methods have been studied to open up a band gap, which turns out to be either indirect or of small value. Bafekry et al. show that ZnSb bilayers can become semiconducting, but exhibit an indirect gap. 46 The direct band gaps of fluorinated and chlorinated ZnSb are calculated to be 0.06 (1.0) eV and 0.5 (1.4) eV using the PBE (HSE) method, respectively.
Having determined that the fully hydrogenated ZnSb monolayer has a band gap, we then investigate the strain effect on its electronic properties by applying biaxial strains ranging from 0% to 8%. In Fig. 4(a) we plot the variation of the band gap E_g as a function of the biaxial tensile strain ε. Overall, the band gap exhibits a monotonic reduction from 1.12 eV at 0% strain to 0.14 eV at 8% strain. A closer examination reveals that the band gap variation can be partitioned into three regions based on the variation rate and the nature of the gap: in region I (0 < ε < 3%), the band gap is direct with the VBM and CBM at the Γ point and the variation rate is −8.27 eV; in region II (3% < ε < 5%), the band gap is still direct at Γ, but the variation rate is −23.15 eV; in region III (5% < ε < 8%), the band gap is indirect, with the VBM at a point away from Γ towards X and the CBM still at Γ, and the variation rate is −8.96 eV. We also investigate the electronic structures under biaxial strain using the HSE functional (Fig. S4†). The PBE and HSE functionals give qualitatively and even quantitatively similar band gap variations, so the HSE gaps can be regarded as a nearly constant shift of the PBE values, in agreement with previous reports. 47,48 The computed dependence of the band gap on strain can be understood from the perspective of the bonding nature of the VBM and CBM, whose energy difference determines the band gap. Two neighboring orbitals can interact to form bonding and anti-bonding energy states that respond to strain in opposite manners: tensile strain leads to an increase in energy for a bonding state and a reduction in energy for an anti-bonding state. 49 We first focus on the strain-free ZnSb monolayer and plot the partial charge densities for one conduction band state labeled C1 in Fig. 3(e) and two valence band states labeled V1 and V2 in Fig. 3(f) and (g). We find that the C1 state is of anti-bonding character and is mainly contributed by the s orbitals of Zn and Sb. V1 is an anti-bonding state mainly composed of Sb-p_x orbitals, while V2 is a bonding state formed between the p_y orbitals of Sb and Zn. Because the band gap variation demonstrated in Fig. 4(a) results from the relative positions of the energy states V1, V2 and C1 of different bonding natures, in Fig. 4(b) we show how these energy states respond to the biaxial tensile strain. To find how the VBM and CBM values vary as a function of biaxial strain, a common reference energy for the different configurations is needed. For this purpose, we compute the x-y plane-averaged electrostatic potential energy. The zero energy is set to the electrostatic potential energy in the vacuum region, located farthest from the ZnSb sheet. In region I, the energy of the V1 state decreases while the energy of the V2 state increases. As a result, they gradually move towards each other and become degenerate at a strain of 3%. With further increasing strain this trend continues until the VBM moves away from the Γ point and the band gap becomes indirect at 5% strain. For the state C1, its energy continuously decreases in a linear manner as a function of strain.
Transport anisotropy
The structural and mechanical anisotropy suggests an anisotropy in the transport properties, which can be qualitatively shown by the energy dispersions of the near-gap energy states. For the fully hydrogenated ZnSb sheet, in Fig. 5(a) and (b) we plot the energy contours for the VBM and CBM, respectively. For the VBM, the iso-energy curves show that the energy values decrease more rapidly along the k_x direction than along the k_y direction. Since the band dispersion determines the curvature, the effective masses of the charge carriers are expected to exhibit orientation anisotropy. For the CBM, we find a similar trend along the k_x and k_y directions. This is consistent with the electronic structure plotted in Fig. 3(c).

Fig. 4 (a) The energy gap E_g (PBE) as a function of biaxial strain, partitioned into three regions: in region I, the gap is direct at the Γ point; in region II, the gap is still direct, but decreases faster; in region III, the gap is indirect. (b) The energies of the near-gap energy states C1, V1 and V2 as a function of strain. The band energy values are computed with respect to the x-y plane-averaged electrostatic potential energy in the vacuum region.

Now we quantitatively examine the transport properties related to the band dispersion by calculating the effective masses of the charge carriers. The effective mass m* is defined as m* = ħ²(d²E(k)/dk²)⁻¹, where ħ is the reduced Planck constant, E(k) is the energy band dispersion and k is the magnitude of the wave-vector in momentum space. Therefore, the effective masses of the electrons m*_e and holes m*_h can be computed via parabolic fittings of the energy bands near the band extrema at the Γ point (around the Γ point for indirect band gaps). Because of the structural anisotropy, transport along the armchair and zigzag directions is considered. We present the computed effective masses in Fig. 6. For the fully hydrogenated ZnSb monolayer in the strain-free state, the effective masses of the electron m*_e are calculated to be 0.08 m_e and 0.28 m_e in the zigzag and armchair directions, respectively. The effective masses of the hole m*_h are predicted to be 0.07 m_e and 0.63 m_e in the zigzag and armchair directions, respectively. The relative magnitudes indicate that the charge carriers prefer to transport along the zigzag direction.
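The parabolic-fit procedure for extracting m* can be sketched as follows; the sampled dispersion here is synthetic, constructed only to check that the fit recovers the input mass, and is not data from this work.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J
ang = 1e-10              # m

def effective_mass(k_inv_ang, E_eV):
    """Fit E(k) = E0 + hbar^2 k^2 / (2 m*) near a band extremum and
    return m* in units of the free-electron mass."""
    c2 = np.polyfit(k_inv_ang, E_eV, 2)[0]   # curvature in eV * Angstrom^2
    c2_SI = c2 * eV * ang ** 2               # convert to J * m^2
    return hbar ** 2 / (2.0 * c2_SI) / m_e

# Synthetic parabolic band with m* = 0.08 m_e sampled around Gamma.
k = np.linspace(-0.05, 0.05, 11)                          # 1/Angstrom
E = hbar ** 2 * (k / ang) ** 2 / (2.0 * 0.08 * m_e) / eV  # eV
print(f"fitted m* = {effective_mass(k, E):.3f} m_e")      # ~0.080
```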
When the biaxial strain is applied, m*_e and m*_h exhibit very different responses. m*_e along the armchair direction is slightly reduced, while m*_e along the zigzag direction is largely modified and decreases to 0.05 m_e. Thus, the biaxial strain reduces the transport anisotropy for electrons. For m*_h, the strain response is quite different. For small strains of less than 3%, m*_h is not sensitive to the strain and the holes continue to transport preferentially along the zigzag direction. With further increase of the strain, m*_h experiences a jump/drop at about 3% strain, leading to a switch of the preferred transport path to the armchair direction. Later, at 5% strain, m*_h along the zigzag direction drops again but still remains at a level much higher than m*_h along the armchair direction. Peng et al. report an all-electrical conformal five-contact (C5C) method to reveal the in-plane crystal orientation by determining the anisotropic resistivity in exfoliated black phosphorus. 50 Similarly, the transport anisotropy and its strain tuning in ZnSb sheets indicate that the C5C transport measurement could be utilized to detect the in-plane crystal orientation and to check for the existence of strain.
Summary
In summary, we investigate the physical properties of hydrogenated two-dimensional ZnSb monolayers with density-functional theory simulations, and demonstrate that strain engineering is an effective route for tuning the mechanical, electronic and transport properties of fully hydrogenated ZnSb monolayer sheets. We observe a large mechanical anisotropy between the armchair and zigzag directions and an effective tuning by applying biaxial tensile strains. We find that, with full hydrogenation, the two-dimensional ZnSb monolayer is semiconducting with a direct band gap. Under strain engineering, the band gap displays a decreasing trend with increasing strain and a direct-to-indirect transition at larger strains. We understand the band gap dependence on strain by analyzing the bonding nature of the near-gap states and their strain response. The transport properties exhibit a strong orientation anisotropy and strain tunability, demonstrated by the effective masses of the charge carriers along the armchair and zigzag directions. Our simulations suggest that hydrogenation and strain engineering could be used to effectively tune the physical properties of novel two-dimensional ZnSb materials and enrich the applications of these newly discovered two-dimensional materials in electronics and optoelectronics.

Fig. 6 The effective masses of the electron (a) and hole (b) as a function of strain. The switching of the preferred transport direction occurring at about 3% strain corresponds to the band crossing demonstrated in Fig. 4(b) and S3.†
|
v3-fos-license
|
2017-08-01T03:14:51.868Z
|
2017-07-27T00:00:00.000
|
28480156
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://pilotfeasibilitystudies.biomedcentral.com/track/pdf/10.1186/s40814-017-0168-1",
"pdf_hash": "081adbfde37dc7d1fdec78809d09d01975154095",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43029",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "60ccca8679a19de61b4a681e7801d0f841a99484",
"year": 2017
}
|
pes2o/s2orc
|
Balance Right in Multiple Sclerosis (BRiMS): a guided self-management programme to reduce falls and improve quality of life, balance and mobility in people with secondary progressive multiple sclerosis: a protocol for a feasibility randomised controlled trial
Background: Impaired mobility is a cardinal feature of multiple sclerosis (MS) and is rated by people with MS as their highest priority. By the secondary progressive phase, balance, mobility and physical activity levels are significantly compromised; an estimated 70% of people with secondary progressive MS fall regularly. Our ongoing research has systematically developed 'Balance Right in MS' (BRiMS), an innovative, manualised 13-week guided self-management programme tailored to the needs of people with MS, designed to improve safe mobility and minimise falls. Our eventual aim is to assess the clinical and cost effectiveness of BRiMS in people with secondary progressive MS by undertaking an appropriately statistically powered, multi-centre, assessor-blinded definitive, randomised controlled trial. This feasibility study will assess the acceptability of the intervention and test the achievability of running such a definitive trial.

Methods/design: This is a pragmatic multi-centre feasibility randomised controlled trial with blinded outcome assessment. Sixty ambulant people with secondary progressive MS who self-report two or more falls in the previous 6 months will be randomly allocated (1:1) to either the BRiMS programme plus usual care or to usual care alone. All participants will be assessed at baseline and followed up at 15 weeks and 27 weeks post-randomisation. The outcomes of this feasibility trial include: feasibility outcomes, including trial recruitment, retention and completion; assessment of the proposed outcome measures for the anticipated definitive trial (including measures of walking, quality of life, falls, balance and activity level); measures of adherence to the BRiMS programme; data to inform the economic evaluation in a future trial; and a process evaluation (assessment of treatment fidelity and qualitative evaluation of participant and treating therapist experience).

Discussion: The BRiMS intervention aims to address a key concern for MS service users and providers. However, there are several uncertainties which need to be addressed prior to progressing to a full-scale trial, including acceptability of the BRiMS intervention and practicality of the trial procedures. This feasibility trial will provide important insights to resolve these uncertainties and will enable a protocol to be finalised for use in the definitive trial.

Trial registration: ISRCTN13587999.
Keywords: Secondary progressive multiple sclerosis, Exercise, Self-management, Mobility, Accidental falls, Balance, Quality of life, Feasibility randomised controlled trial

Background

Multiple sclerosis (MS) affects approximately 100,000 people in the UK [1], with an estimated cost of £1.4 billion/annum to the National Health Service (NHS) and wider society [2]. Although most people start with a relapsing-remitting (RR) disease course, approximately two thirds move to a secondary progressive phase within 8 years [3]. At this point, medical interventions are limited and progression is inevitable [4].
Surveys of people with MS (pwMS) consistently rank mobility as their highest priority and most important yet most challenging daily function [5]. Evaluation of treatments to improve mobility has also been highlighted as one of the top 10 MS research priorities by the James Lind Alliance [6]. Impaired balance and falls are common issues for people with secondary progressive MS (SPMS) and are an important contributory factor to mobility impairment [7,8]. Approximately 70% of pwMS fall regularly [9,10], at an average rate of >26 falls/person/year in SPMS [11]. More than 10% of these falls lead to injuries [12] and pwMS are three times more likely to sustain a fracture than the general population [13].
Falling and fear of falling have a profound impact on individuals, leading to activity curtailment, social isolation and a downward spiral of immobility, deconditioning and disability accumulation [14]. There are also substantial economic and social costs related to increasing immobility, impaired balance and falls in pwMS [15]. Costs of health and social care have been shown to increase steeply with increasing disease severity/immobility, underlining the importance of optimising safe mobility for as long as possible [16]. This is particularly relevant given evidence that pwMS are living longer, leading to a rising population living with the disease [17]. This has important implications for resource provision, as highlighted in a national audit of neurological services [18].
The importance of mobility and falls is further emphasised by their consistent prominence in policy documents for long-term neurological conditions [19]. Work suggests that falls may be an early marker of mobility deterioration associated with disease progression [9,10]. Rehabilitation interventions which improve balance and physical activity and decrease the risk of falls may slow this deterioration, providing a persuasive argument to prioritise provision of effective physical management strategies. However, there is currently minimal evidence-based guidance to inform optimal mobility management and none to inform falls management in people with progressive MS. Whilst evidence is available for older people and those with other neurological conditions, research suggests that translating existing interventions to pwMS is likely to be ineffective [20,21]. Small, limited duration studies have evaluated single elements of MS balance and falls interventions, individually demonstrating short-term improvements in mobility, balance or falls awareness [22][23][24], but these elements have not yet been implemented or evaluated collectively. Moreover, no studies have been confined to people with SPMS. This feasibility trial begins to address all these issues.
Healthcare policy prioritises the need to empower and support patients to self-manage [25]. 'Balance Right in MS' (BRiMS) is an innovative evidence-based, user-focused, self-management exercise and education programme, designed to improve safe mobility and reduce falls for pwSPMS. It is critical to assess the delivery of this programme and proposed evaluation methods prior to undertaking a definitive trial to assess its effectiveness and cost effectiveness.
Following Medical Research Council (MRC) Guidelines [26], this feasibility trial will aid the planning of an anticipated definitive, multi-centre randomised controlled trial (RCT) which will compare BRiMS plus usual care with usual care alone in improving mobility and quality of life (QoL), and reducing falls in people with SPMS. This feasibility trial will provide the necessary data and operational experience to inform the conduct and finalise the design of a definitive trial so that it can be successfully delivered with confidence. Ultimately, this will add significantly to the evidence by reporting results of a robust RCT of a manualised, complex intervention.
Trial design
This is a pragmatic, multi-centre, feasibility RCT with blinded outcome assessment. Figure 1 shows the planned participant pathway.
Trial settings
Four healthcare sites will be involved in this multicentre RCT, which is based in two geographical regions of the UK: South West Peninsula (England) and Ayrshire (Scotland). A full list of study sites is available via www.brims.org.uk.
Sample size
As this is a feasibility trial, the more usual sample size calculation, based on considerations of power for detecting a between-group clinically meaningful difference in a primary clinical outcome, is not appropriate [27]. Instead, the aim is to provide robust estimates of the likely rates of recruitment and follow-up, as well as provide estimates of the variability of the proposed primary and secondary outcomes to inform sample size calculations for the planned definitive trial. Therefore, we aim to recruit a total of 60 participants across the two regions (40 in the South West and 20 in Ayrshire) over 6 months. An estimated 240 people will need to be screened to achieve this sample size. From other studies in similar settings, we anticipate that retention rates will be approximately 80% [28,29]. With our intended sample size of 60 participants, we will be able to estimate the overall retention rate with precision of at least ±13%, and if the 6-month follow-up rate is around 80%, this estimate will have precision of around ±10%. Assuming a nondifferential 6-month follow-up rate of 80%, this should provide follow-up outcome data on a minimum of 24 participants in each of the allocated trial arms.
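These precision figures can be checked with a simple normal-approximation calculation; the sketch below is illustrative only and assumes a 95% confidence level, which the protocol does not state explicitly.

```python
import math

def ci_half_width(n, p, z=1.96):
    """Approximate 95% confidence-interval half-width for a proportion p
    estimated from n participants (normal approximation)."""
    return z * math.sqrt(p * (1.0 - p) / n)

print(ci_half_width(60, 0.5))   # worst case, ~0.13 (precision of at least +/-13%)
print(ci_half_width(60, 0.8))   # 80% follow-up rate, ~0.10 (around +/-10%)
```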
Inclusion criteria
The trial population will comprise individuals with a confirmed diagnosis of MS as has been determined by a neurologist according to revised McDonald's criteria [30], and who are in the secondary progressive phase.
Participants will: be aged ≥18 years; be willing and able to understand/comply with all trial activities; score ≥4.0 and ≤7.0 on the Expanded Disability Status Scale, i.e. people who have some mobility impairment but who are ambulant for at least a proportion of the time; self-report two or more falls in the past 6 months; be willing and able to travel to local sites for blinded outcome assessments and BRiMS programme sessions; and have access to a computer or tablet and to the internet.
Exclusion criteria
Potential participants will be excluded if they: have relapsed/received steroid treatment within the last month (a patient-reported relapse is defined as 'the appearance of new symptoms, or the return of old symptoms, for a period of 24 h or more, in the absence of a change in core body temperature or infection') [31]; have had any recent changes in disease-modifying therapies (specifically, if they have ever had previous treatment with alemtuzumab, are within 6 months of ceasing natalizumab, or are within 3 months of ceasing any other MS disease-modifying drug); have participated in a falls management programme within the past 6 months; report co-morbidities which may influence their ability to participate safely in the programme or are likely to impact on the trial (e.g. uncontrolled epilepsy); or are participating in a concurrent interventional study.

Identification and recruitment of participants will be via several routes, including identification by healthcare professionals, screening MS databases and promotion via MS support groups and newsletters. This will be supported by National Institute for Health Research (NIHR) Clinical Research Network staff at each site.
Randomisation and allocation concealment
The inclusion of group-based elements as part of the intervention necessitates the confirmed participation of a sufficient number of participants within a recruiting site before randomisation occurs. There are four sites where the intervention will be delivered. Once recruited, participants will ideally be randomised in blocks of 10, but the process can accommodate some flexibility within the limits 8-12 participants in each block. Randomisation will be undertaken when a sufficient number of individuals from a recruiting site have consented, indicated that they are able to attend the same BRiMS group (location, timing, should they be randomised to receive it), and complete baseline data have been collected. The decision to declare a block complete will be made by the research therapist in collaboration with the Trial Coordinator, Chief Investigator and local UK Clinical Research Collaboration registered Clinical Trials Unit (CTU) (Registration number 31). Randomisation will be undertaken a minimum of 3 and a maximum of 7 days prior to the commencement of the BRiMS programme delivery (for each block).
When the block size from a recruiting site consists of 8-12 participants, the participants will be randomised to the intervention or control group, using block simultaneous randomisation. The randomisation will be 1:1 when the block consists of an even number of participants, and when the block consists of an odd number of participants, the allocation ratio will be in favour of the intervention group in order to maximise recruitment potential and learning opportunities in this feasibility trial. Participants in a block will be numbered in the order in which they were first entered onto the trial website. The randomisation process will follow a strict and auditable protocol. Randomisation will take place after completion of all baseline assessments by the CTU Trial Manager via a secure web-based system. The randomised allocations will be computer-generated by the CTU in conjunction with an independent statistician, in accordance with the CTU's standard operating procedure. The randomisation list and the programme that generated it will be stored in a secure network location within the CTU, accessible only to those responsible for provision of the randomisation system.
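For illustration only, the allocation rule described above could be expressed as in the sketch below; in the trial itself the sequence is computer-generated by the CTU via its secure web-based system, so this is not the actual randomisation code.

```python
import random

def randomise_block(participant_ids, seed=None):
    """Allocate one recruitment block (8-12 participants) to trial arms:
    1:1 when the block size is even, with the extra place going to the
    intervention arm when the block size is odd."""
    rng = random.Random(seed)
    n = len(participant_ids)
    n_intervention = (n + 1) // 2
    arms = (["BRiMS plus usual care"] * n_intervention
            + ["usual care alone"] * (n - n_intervention))
    rng.shuffle(arms)
    return dict(zip(participant_ids, arms))

allocation = randomise_block([f"P{i:02d}" for i in range(1, 10)], seed=42)
```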
After randomisation has taken place, an automatic email will be sent by the CTU to the NHS Therapist leading the BRiMS programme locally and to the relevant Principal Investigator to notify them of each participant's allocated group. Notification that randomisation has taken place (but no details regarding individual participant's allocated group) will also be sent to the relevant research therapist and to the CI.
Access to the randomisation code and allocation list will be confined to the CTU data programmer; no one else in the trial team will be aware of allocated trial arms until formal randomisation is completed, hence maintaining effective concealment. Following randomisation, only appropriate members of the trial team will be aware of participants' allocations to intervention or control group; the blinded research therapists will not have access to treatment allocation.
Blinding
Due to the nature of the intervention, trial participants and treating physiotherapists are unable to be blinded. However, the assessors who are undertaking the outcome assessments will be blinded to participant allocations. The initial baseline assessment will be undertaken, following written informed consent obtained by the research therapist, prior to randomisation. Every effort will be made to ensure the two follow-up assessments (at 15 and 27 weeks post-randomisation) remain blinded. At each assessment time point, the assessor will be asked to record if they were un-blinded to group allocation, and if so, the reasons for this.
Interventions
The BRiMS programme is delivered as a 13-week therapy-led personalised education and exercise intervention. It is structured to maximise the development of self-efficacy and support participant engagement. BRiMS aims to address modifiable fall risk factors such as poor balance and mobility and enable self-management by the use of individualised mobility, safety and falls risk management strategies.
The development of BRiMS has been informed by the MRC framework for the development and evaluation of complex interventions [26] through a comprehensive programme of research [9,11,[32][33][34] and input from internationally recognised experts [23,35].
BRiMS includes a strong focus on home-based activities, supported by online resources and three group sessions interspersed over the duration of the programme. The programme also includes two 'one-to-one' sessions (in weeks 1 and 2) to enable individualised assessment, goal planning and development of exercise plans. A home-based online work package overarches the programme, supporting both educational and exercise components and enabling participants to personalise the programme and apply the activities in their daily lives from the outset (www.brims.org.uk). Developing and supporting motivation is addressed throughout by using innovative functional imagery techniques [36] to supplement established motivational techniques.
Whilst the BRiMS programme is manualised, it is structured to enable tailoring of the components to meet the individual needs of participants (Fig. 2).
The BRiMS education component aims to improve exercise self-efficacy and enhance the individual's knowledge and skills about falls prevention and management [37]. This is delivered through a mix of home and group activities embedded throughout the course of the programme. It utilises group brainstorming, problem solving and action planning [38] and applies the principles of cognitive behavioural therapy (CBT). During group sessions, peer modelling, vicarious learning, social persuasion and guided mastery are used to boost self-efficacy [39]. Activities which encourage the setting and imagery of short-term exercise goals are employed to boost desire to achieve them, and images of failure are 'rescripted' [40].
The BRiMS exercise component supports the participant to undertake at least 120 min of gait, balance and functional training per week. It has been designed to be predominantly home-based, with exercise planning and progression undertaken in partnership between the participant and programme leader. The group sessions include exercise activities to encourage peer support and problem solving. Additionally, BRiMS integrates an online exercise prescription resource [41] to support and guide home-based practice (www.webbasedphysio.com).
Usual care
This feasibility trial will use a usual clinical care control group. Whilst usual care varies across the country [42], in those with SPMS, physiotherapy input is generally provided when an event has caused a significant deterioration in the person's ability to function (e.g. a respiratory infection or an injurious fall). The standard physiotherapy care pathway usually comprises short intermittent episodes of face-to-face intervention. Typically, presenting problems are managed (e.g. providing mobility aids, a written home exercise programme and advice) rather than focusing on the promotion of long-term self-management strategies. In this trial, the usual care received (including health/social care interventions and medications) will be recorded within the health economic assessment of resource utilisation. In this feasibility study, the therapists providing the intervention are not involved in providing the control group with 'usual care', thereby avoiding potential contamination.
Data collection and outcome measurements
Participant data will be collected during face-to-face visits and through the return of postal diaries (see Table 1 for details). All participants will attend for three trial assessment visits undertaken by the blinded research therapists at baseline, 15 weeks (±1 week) and 27 weeks (±1 week) after randomisation. These dates allow the delivery of the pre-scheduled BRiMS programme to be completed prior to the first follow-up assessment. Deviations from this schedule will be monitored and reported on a protocol deviation form.
The following data will be collected during the trial:
A. Feasibility outcomes
Data from screening, recruitment and follow-up logs will be used to generate realistic estimates of eligibility, recruitment, consent and follow-up rates.
B. Clinical outcome measures
Standardised clinician-rated and patient self-reported clinical outcomes, which have demonstrated good reliability and validity in people with MS, will be measured. Further, in the main, these measures have been widely used in MS interventional studies, which will enable comparison between studies.
1. Possible primary outcomes for the definitive trial
Health-related quality of life: the EuroQoL (EQ-5D-5L) [44] and the 29-item Multiple Sclerosis Impact Scale (MSIS-29) Version 2.0 [45], which have been specified for use in health economic analyses in MS studies [46].
2. Possible secondary outcomes for the definitive trial
Falls frequency and injury rates: falls will be defined as "an unexpected event in which you come to rest on the floor or ground or lower level" [47]. In line with best practice guidance, the number of falls, injurious falls and associated use of medical services will be recorded prospectively using a patient-completed daily diary returned to the CTU in a FREEPOST envelope on a fortnightly basis [48].
Activity level using an activity monitor (activPAL™, PAL Technologies Ltd, Glasgow) [49].
Walking capacity using the two-minute walk test (fastest speed) (2MWT). This has been recommended as the standard objective walking test to be used in MS interventional studies [50].
Fear of falling using the 16-item self-report Falls Efficacy Scale (International) (FES-I) [55]. This has been recommended as the standard objective measure of fear of falling by the Prevention of Falls Network Europe [47].
Community integration using the self-report Community Participation Indicators (CPI) [56]. This has been recommended as an objective measure of participation for use in falls prevention studies by the International MS Falls Prevention Research Network [57].
C. Measures of adherence
Attendance at the five face-to-face sessions will be recorded; adherence will be calculated as a percentage. Engagement in the home-based programme (BRiMS online exercise package and educational packages) will be recorded based on the participants' web-based activity and participant-reported information. This information, alongside the data obtained from qualitative interviews with participants (see E below), will be used to evaluate levels of adherence and to determine whether any amendments to the programme are required to improve engagement.
D. Economic evaluation
Methods for the collection of resource use, cost, and outcome data will be developed and tested in preparation for an economic evaluation alongside a full trial. Data on resource use associated with the setup and delivery of the BRiMS intervention will be collected via within-trial reporting, including participant-level contact and non-contact staffing time for delivery, equipment and consumable costs, training and supervision. Data on health and social care resource use will be collected at participant level using a Participant Resource Use (RU) questionnaire developed for this trial [58]. The EQ-5D-5L will be used to estimate quality-adjusted life-years (QALYs), and is the expected primary economic endpoint (cost per QALY) in any future evaluation. The MSIS-8D [59, 60], an MS-specific preference-based (QALY) measure, will also be used, as this is expected to be of value in future sensitivity analyses.
E. Process evaluation
Process evaluation is a key part of the intervention development process and will be guided by the MRC Process Evaluation of Complex Interventions Guidelines [61] and the National Institute of Health Behaviour Change Consortium framework [62].
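To make the QALY estimation described under the economic evaluation above concrete, the sketch below applies the standard area-under-the-curve approach to EQ-5D-5L utility values at the three assessment points. It is an illustration only, written in Python with invented utility values; the trial's health-economic analysis plan governs the actual calculation.

```python
import numpy as np

def qalys_auc(weeks, utilities):
    """Area-under-the-curve QALYs from utility values at assessment visits.

    `weeks` are assessment time points (e.g. 0, 15 and 27 weeks
    post-randomisation) and `utilities` the corresponding EQ-5D-5L index
    values. Time is converted to years and the trapezium rule applied.
    """
    years = np.asarray(weeks, dtype=float) / 52.0
    u = np.asarray(utilities, dtype=float)
    return float(np.sum(np.diff(years) * (u[1:] + u[:-1]) / 2.0))

# Hypothetical participant assessed at baseline, 15 and 27 weeks
print(round(qalys_auc([0, 15, 27], [0.62, 0.68, 0.71]), 3))
```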
Standardisation and fidelity of the intervention
Two research therapists (employed specifically for the trial) will undertake the blinded assessments using standardised written protocols. Treating therapists from each site will perform the interventions as part of their NHS physiotherapy role. For this feasibility trial, all treating therapists will undertake a training session as a group. They will receive a therapist manual that provides a clear structure to each session, details the session content and provides sample scripts. All treating physiotherapists have access to the programme website (containing comprehensive reference materials and a closed therapist discussion forum) to optimise fidelity to the intervention content and approach. Treatment fidelity of a random sample of a minimum of 25% of the delivered sessions will be assessed using audio recordings of the session. This sample will include at least two recordings of each session type (1:1 assessment, home visit and group sessions) and at least one session from each treating therapist. The assessment of fidelity will be undertaken by two members of the research team who are independent from the intervention delivery, using a checklist which has been informed by the Dreyfus System for Assessing Skill Acquisition [63] and an adaptation of the Motivational Interviewing Treatment Integrity scale [64]. Both reviewers will initially meet to discuss the fidelity assessment process and their expectations for each element of the checklist. They will then independently rate the same recording and compare and moderate their assessments prior to undertaking further reviews. Any uncertainties in further reviews will be resolved through discussion.
Safety monitoring
Throughout the trial, all possible precautions will be taken to ensure participant safety and wellbeing. Participants will be monitored for adverse events and serious adverse events (defined according to the Medicines for Human Use (Clinical Trials) Regulations, 2004) [65] via completion of their daily diaries and during follow-up assessments. Participants will be asked to report all adverse events in their diaries, whether they are thought to be related to the intervention or not. Diaries will be reviewed on receipt for reports of adverse events and responded to according to pre-defined adverse event and serious adverse event reporting procedures.
Retention rates and withdrawals
Each participant has the right to voluntarily withdraw from the trial at any time, without repercussions. This is distinct from participants in the intervention group terminating their involvement in the BRiMS programme.
(a) Discontinuation of the intervention
Participants in the intervention group may choose to discontinue the BRiMS programme, or may do so on the recommendation of a health professional, for example following an adverse event. Where appropriate, such participants will be asked to continue to attend blinded assessments as per protocol if this is feasible.
(b) Withdrawal from the trial
Any participant may at any time after they have consented decide that they no longer wish to be part of the trial. This may be through personal choice (i.e. they withdraw their consent) or in consultation with a health professional, for example where it becomes impossible to provide outcome data or comply with any other trial procedures for whatever reason. In addition, a participant may be withdrawn following a significant protocol deviation, such as being randomised in error. In this event, the decision as to whether they should be removed from the trial completely or retained on an intention to treat (ITT) basis will be made through an independent adjudication by the Trial Steering Committee (TSC) who are blinded to group allocation [68].
Qualitative evaluation
The qualitative evaluation aims to:
- Assess the acceptability of the trial methods (both trial arms)
- Evaluate the acceptability of the intervention and identify possible adaptations
- Identify the components of the intervention perceived to be effective.
One-to-one telephone interviews with trial participants and a telephone focus group [66] with treating therapists will be undertaken by the regional BRiMS trial coordinators at the completion of the programme. A purposive sample of 10 participants will include people from different regions, different BRiMS intervention groups and a sample of control arm participants. Participants will be contacted and a mutually convenient time agreed to undertake a telephone interview within 2 weeks of the completion of their final trial visit. All treating therapists will be invited to participate in the telephone focus group which will be convened within 1 month of the completion of the final BRiMS programme delivery. All interviews will be digitally recorded and transcribed verbatim. The researchers will employ a reflexive approach throughout, utilising research diaries, field notes and critical reflection [67].
Data analysis
In keeping with the aims of a feasibility study, a detailed statistical analysis plan will be developed and approved by the Trial Steering Committee, prior to final database lock and analyses. For the final analysis, the trial statistician will be presented with a database by the CTU containing a group code for each participant but not identifying which group is which; only after the primary analyses will the two groups be identified.
The analyses of the quantitative data will be in two stages, with data summarised according to participants' allocated trial arm. All analyses will be undertaken and reported according to the recently published CONSORT guidelines for pilot and feasibility trials [68].
Stage 1 will summarise the feasibility outcomes: data from screening, recruitment and follow-up logs will be used to generate realistic estimates of eligibility, recruitment, consent and follow-up rates and presented in a CONSORT flowchart. In addition, adherence data (e.g. session attendance and exercise adherence) will be used to contribute to the evaluation of the acceptability and concordance to the BRiMS programme. Completion rates will be estimated for each of the outcome measures at each time point. All estimates will be accompanied by appropriate confidence intervals, to allow assumptions to be made in the planning of the definitive trial. The baseline characteristics of individuals lost to follow-up will be compared to those who complete the feasibility trial to identify any potential bias.
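As an illustration of how the feasibility estimates above might be reported with confidence intervals, the following Python sketch computes a Wilson score interval for a rate such as consent or follow-up. The counts are invented and the choice of the Wilson interval is an assumption; the statistical analysis plan will specify the exact method.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# e.g. 50 of 60 randomised participants retained at 27 weeks (hypothetical)
low, high = wilson_ci(50, 60)
print(f"follow-up rate 83.3% (95% CI {low:.1%} to {high:.1%})")
```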
Stage 2 will summarise the clinical outcomes data at each time point. As it is inappropriate to use feasibility trial data to formally test for between-group treatment effects, the analyses will primarily be of a descriptive nature [27,69]. The CONSORT extension for reporting of pilot and feasibility studies [68] and the CONSORT extension for reporting of patient-reported outcomes [70] will be followed. Descriptive statistics of the clinical outcomes data will be produced for each trial arm. Interval estimates of the potential intervention effects, relative to usual care only, will be produced in the form of a 95% confidence interval, to ensure that the effect size subsequently chosen for powering the definitive trial is plausible, but no formal hypothesis testing will be undertaken [27].
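In the same descriptive spirit, a between-group difference at follow-up would be summarised as a point estimate with a 95% confidence interval rather than a p value. The sketch below, using simulated outcome scores and a Welch-type interval, is one way this could be done; it is not the trial's specified analysis.

```python
import numpy as np
from scipy import stats

def difference_ci(intervention, control, alpha=0.05):
    """Point estimate and 95% CI for the difference in group means."""
    a = np.asarray(intervention, dtype=float)
    b = np.asarray(control, dtype=float)
    diff = a.mean() - b.mean()
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    se = np.sqrt(va + vb)
    df = se**4 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))  # Welch-Satterthwaite
    t = stats.t.ppf(1 - alpha / 2, df)
    return diff, (diff - t * se, diff + t * se)

rng = np.random.default_rng(1)                     # simulated MSIS-29 style scores
est, ci = difference_ci(rng.normal(45, 15, 28), rng.normal(50, 15, 27))
print(f"difference {est:.1f} points (95% CI {ci[0]:.1f} to {ci[1]:.1f})")
```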
Qualitative analysis
The qualitative analysis will employ a constructivist paradigm, described as an approach which allows the co-creation of understandings by respondent and researcher [71]. The qualitative data will include transcripts from one-to-one participant interviews, and the health professional telephone focus group.
Anonymised transcribed data will be entered into NVIVO software (QSR International, Southport, UK). A pragmatic process of data immersion, coding and generation of initial themes will then be undertaken [72]. Subsequently, these themes will be refined in discussion with research team members to maximise credibility of the process [73]. The rigour of the qualitative analysis will be maximised through use of a range of techniques, including exploration of contradictory evidence, respondent validation, and constant comparison [74]. In addition, interview and focus group participants will be invited to review an initial draft to ensure the analysis represents an accurate overview of participants' views, experiences and recommendations. Once this has been verified, the data will be used to (where necessary) revise the BRiMS Operational Manual and Trial Procedures.
Determining progression to the full trial
We shall progress to a full trial application if minimum success criteria are achieved in key feasibility aims and objectives, or if we can identify solutions to overcome any identified issue. These criteria will be finalised in discussion with the Trial Steering Committee, but are likely to include:
- A minimum of 80% recruitment of the intended 60 participants within the 6-month recruitment window
- A minimum of 80% completion rate of key outcome measures (including follow-up)
Data management, audit and monitoring
The CTU will be responsible for data management for the study. Data will be recorded on study-specific data collection forms by the blinded assessors, and on selfcompletion forms by study participants. Completed forms will be passed to the CTU and entered onto a secure web-based database. All data will be double entered and compared for discrepancies. Discrepant data will be verified using the original paper data sheets.
Data will be collected and stored in accordance with the Data Protection Act 1998 and will be accessible for the purposes of monitoring, auditing, or at the request of the regulatory agency.
Trial oversight
There are three committees involved in the setup, management and oversight of this trial: the Trial Management Group (TMG), the Trial Steering Committee (TSC) and the Data Monitoring Committee (DMC).
The TMG comprises those individuals involved in the development of the protocol and the day-to-day running of the study. The responsibility of this group is to ensure all practical details of the trial are progressing, and everyone within the trial understands them. This includes monitoring adverse events, recruitment and attrition rates, the project timeline and finances. It will also include responsibility for the release of the trial results and publications. The TMG will meet approximately monthly.
The TSC is responsible for overseeing the conduct of the trial and comprises a group of experienced trialists with majority independent representation. The TSC will meet before the start of the trial and subsequently at least annually. In addition, the TSC and DMC will receive a quarterly report of adverse events, and a telephone conference/additional face-to-face meeting will be instigated by the chair of either group, or the chief investigator (CI) should any issues need to be discussed.
The DMC comprises an independent statistician and two experienced clinical trialists, one of whom will be the chair. This committee will be independent of the study organisers and the TSC; the DMC will maintain the interests of trial participants, with particular reference to safety, and will report to the chair of the TSC. It is anticipated that the members will meet once to agree terms of reference and subsequently at a schedule to be agreed with the TSC.
Ethics
The trial will be conducted in accordance with the Declaration of Helsinki, 1996 [75]; the principles of Good Clinical Practice, and the Department of Health Research Governance Framework for Health and Social Care, 2005 [76]. All ethical approvals will be in place prior to the commencement of trial recruitment activities (see declarations section).
Dissemination plan
The results of this feasibility trial will inform the design of the anticipated definitive trial, rather than directly inform clinical decision making, since clinical and cost effectiveness cannot be determined at this level. Hence, dissemination, regardless of outcome of this feasibility trial, will focus on publication of the feasibility outcomes, and related methodological issues, in open access peer-reviewed journals.
On completion, the full study report will be accessible on the study website (www.brims.org.uk) and via the funding body website, as will the full protocol. This protocol (Version 3.0, dated 07 Dec 2016) has been written in line with SPIRIT Guidelines [77]. Similarly, the Consolidated Standards of Reporting Trials (CONSORT) [68,78] and the Template for Intervention Description and Replication (TIDieR) Guidelines [79] will be reviewed prior to submitting future publications of the trial results. Authorship of articles will be by the study team; professional writers will not be used.
Results of this feasibility trial will be presented at national and international conferences to engender enthusiasm for the potential future trial. Summaries will be posted on to the websites/newsletters of the organisations who were involved in the recruitment process. In addition, all participants will be offered a lay summary of results and a clinically oriented summary will be provided to recruiting centres. A key output will be an application for funding for a definitive trial, if the results of the feasibility trial meet the criteria for progression.
Discussion
The importance of developing interventions to support pwMS to maintain their mobility and manage falls has been highlighted by service users and providers, and in practice guidelines [1,6,19]. The BRiMS intervention has been developed with the aim of addressing this important issue; however, a full evaluation of its effectiveness is essential to inform evidence-based clinical decision making. Best practice guidance emphasises the need to thoroughly test the feasibility and acceptability of both interventions and trial evaluation procedures prior to undertaking a full-scale assessment of effectiveness [26]. This feasibility trial will provide important insights into the practicality of running a full-scale trial to evaluate BRiMS, including providing estimates of: recruitment, attrition, adherence, baseline scores, standard deviations and completion rates of the measures. It will also enable us to assess the acceptability of the intervention and of participating in the trial from the participant and health professional perspective, and the process of delivering BRiMS, to finalise a protocol for use in the definitive trial. Whilst the trial has been developed according to best practice guidance, the methodology is not without potential limitations. For example, the use of active and attention-matched control groups has been debated in the literature [80,81]; in this trial, the lack of evidence to inform the selection of an active comparator, along with the cost implications of including a third attention-matched group, led to the pragmatic decision to utilise a usual care control group.
One potential scheduling issue that we aim to test is the feasibility of the relatively short (minimum 3 days) timescale between randomisation and commencement of the BRiMS programme for those allocated to the intervention group, necessitated by the group elements within the BRiMS programme. Whilst considerable thought has been given to minimising this challenge, for example by pre-scheduling the BRiMS programme dates, qualitative feedback from participants and treating therapists will be important to finalise this aspect of a future full-scale trial.
This trial specifically targets people with SPMS, whose MS type and level of impaired mobility often makes them ineligible to participate in clinical trials, and for whom medical intervention is limited. Whilst we have estimates of potential recruitment and retention rates from other studies of similar interventions, these trials included participants with a range of MS sub-types. Therefore, it will be important to assess whether these estimates are appropriate for people with SPMS who may have more health issues which could impact on their participation in trials.
Trial status
Recruitment commenced mid-January 2017.
|
BMC Musculoskeletal Disorders
Association between Knee Alignment and Knee Pain in Patients Surgically Treated for Medial Knee Osteoarthritis by High Tibial Osteotomy: A One-Year Follow-up Study
Background: The association between knee alignment and knee pain in knee osteoarthritis (OA) is unclear. High tibial osteotomy, a treatment option in knee OA, alters load from the affected to the unaffected compartment of the knee by correcting malalignment. This surgical procedure thus offers the possibility to study the cross-sectional and longitudinal association of alignment to pain. The aims were to study 1) the preoperative association of knee alignment to preoperative knee pain and 2) the association of change in knee alignment with surgery to change in knee pain over time in patients operated on for knee OA by high tibial osteotomy.
Background
Varus and valgus malalignment are associated with medial and lateral knee osteoarthritis (OA), respectively. In natural history cohorts of knee OA, severity of malalignment has been shown to be associated with pain severity [1,2]. Additionally, frequent knee symptoms (i.e. pain, aching or stiffness on most days of the past month) were found to increase with increasing varus malalignment over 15 months [3]. In other studies malalignment was not associated with pain [4-6]. The relation of knee alignment and knee pain is thus still unclear and, to our knowledge, the association of alignment and pain has not previously been assessed in patients undergoing an intervention changing malalignment.
High tibial osteotomy (HTO) is a disease modifying intervention that reduces the tibiofemoral load in the damaged compartment of the knee joint. The purpose of HTO is to decrease malalignment, reduce pain, enhance function as well as delay or avoid the need of knee arthroplasty in younger and/or physically active patients with uni-compartmental knee OA. HTO offers the possibility to study the cross-sectional and longitudinal relation of knee alignment to knee pain.
Our aims were to study 1) the preoperative association of knee alignment determined as the Hip-Knee-Ankle (HKA) angle to preoperative knee pain and 2) the association of change in knee alignment with surgery to change in knee pain preoperatively compared to at one year postoperatively in patients operated on for knee OA by high tibial osteotomy using the hemicallotasis technique (HCO).
Methods
Patients
182 patients (68% men; mean age 53 years, range 34-69) scheduled for high tibial osteotomy (HTO) for medial knee OA were consecutively included. The indication for surgery by HCO is a consideration based on several aspects, such as the presence of radiographic unicompartmental knee OA, knee alignment, pain, disability and level of activity both in working life and leisure time. When the orthopedic surgeon (in the present study one surgeon, STL, assessed all subjects) found an indication for HCO, the patient was given written and verbal information in a special outpatient clinic for patients treated by external fixation, and the final decision on surgery was taken.
Of the 182 patients, 156 patients (86%) were available at the one-year follow-up. Fourteen patients did not return the questionnaire, two patients were revised to a total knee replacement, two patients had other surgeries, one patient had surgery in the contralateral knee at the time of follow-up and one patient had died.
Radiographic assessment and classification of OA
Standing anteroposterior images of the knee were obtained in 15 degrees of flexion using a fluoroscopically positioned x-ray beam. Axial view of the patellofemoral joint was acquired with vertical beam and the subject standing with the knee in 50 degrees of flexion [7].
The preoperative knee alignment was assessed by the HKA angle. The HKA angle was obtained with the patient standing in a weight-bearing position when radiographic anteroposterior and lateral views of the lower limb (hip, knee and foot) were taken. By drawing a line from the center of the femoral head to the midpoint of the tibial eminential spine and another line from this midpoint to the center of the talus surface of the ankle joint, the mechanical axis of the limb can be calculated [10]. The medial angle between the lines is the HKA angle (varus < 180°) (Figure 1). The accuracy and reproducibility of measurement of the HKA angle has been shown to be within 2 degrees [11]. In non-OA knees the mean HKA angle is 0.9-1.6 degrees in varus [12-14]. The HKA angle was measured preoperatively as a part of the indication for surgery and postoperatively during the correction period to determine the progress of the correction and to confirm that the desired alignment was obtained. The goal of correction is 4° of valgus for the varus knee. Taking the reproducibility of HKA-angle measurement into account, ±2 degrees is accepted as optimal correction. All patients were radiographically examined at the same radiographic department, the radiographs were taken by experienced technicians, and the HKA angle was determined by radiologists with expertise in musculoskeletal radiology.
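For readers who want to see the geometry, the following Python sketch computes the angle at the knee between the femoral-head-to-tibial-spine and tibial-spine-to-talus lines from 2-D landmark coordinates. The coordinates, and the assumption that the marked knee point lies lateral to the hip-ankle line (giving a varus value below 180°), are hypothetical; they are not taken from the study's radiographs.

```python
import numpy as np

def hka_angle(hip, knee, ankle):
    """Angle (degrees) at the knee between the two mechanical-axis segments
    defined by the femoral head centre, tibial spine midpoint and talus centre.
    With the paper's convention, values below 180 indicate varus alignment."""
    hip, knee, ankle = (np.asarray(p, dtype=float) for p in (hip, knee, ankle))
    v1, v2 = hip - knee, ankle - knee
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Hypothetical landmarks (image coordinates, arbitrary units) for a varus knee
print(round(hka_angle(hip=(0.0, 80.0), knee=(3.0, 40.0), ankle=(0.0, 0.0)), 1))
```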
Pain
Pain was measured by the pain subscale of the Knee injury and Osteoarthritis Outcome Score (KOOS) preoperatively and at the 1-year follow-up [15]. KOOS is a 42-item self-administered knee-specific questionnaire based on the WOMAC index [16]. KOOS was developed to be used for short-term and long-term follow-up studies of knee injury and knee OA. The KOOS comprises five subscales: pain, symptoms, activities of daily living function (ADL), sport and recreation function (Sport/Rec) and knee-related quality of life (QOL). Standardized answer options are given (5 Likert boxes), and each response is scored from 0 to 4. A percentage score from 0 to 100 is calculated for each subscale, 100 representing the best possible result. A difference of 8-10 points on the KOOS is considered clinically relevant [17]. The KOOS has previously been used in HTO [18].
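As a worked illustration of the KOOS scoring described above, the sketch below converts a set of item responses into the 0-100 subscale score (100 = best). The transformation shown is the published KOOS rule; the example responses are invented.

```python
def koos_subscale_score(item_responses):
    """KOOS subscale score on a 0-100 scale, 100 = no problems.

    Each item is answered on a 0-4 Likert scale (0 = none ... 4 = extreme);
    the subscale score is 100 minus the mean item score rescaled to 0-100.
    """
    mean_item = sum(item_responses) / len(item_responses)
    return round(100 - mean_item * 100 / 4, 1)

# The pain subscale has nine items; a hypothetical moderately painful knee:
print(koos_subscale_score([2, 3, 2, 2, 3, 2, 2, 3, 2]))  # ~41.7
```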
Tibial osteotomy by the hemicallotasis technique (HCO)
HCO is an open wedge osteotomy based on successive correction of the malalignment using external fixation [18,19] (Figure 2).
Statistics
The association between preoperative knee alignment (HKA angle) and preoperative knee pain (KOOS subscale pain), and between change in knee alignment with surgery (the difference between preoperative and postoperative HKA angle) and change in knee pain over time, was assessed by simple regression analyses. Multiple regression analyses were used to control for potential confounding variables on preoperative KOOS pain (sex, age, body mass index (BMI, kg/m2), severity of knee OA (Ahlbäck grade 1-5) and preoperative knee alignment (HKA angle)) and on change in KOOS pain from preoperatively to the one-year follow-up (sex, age, BMI, complications [septic arthritis, infection of the incision, DVT, replacement of pins, loss of correction and delayed healing], preoperative KOOS pain and change in knee alignment). Ahlbäck grade 1 was used as the reference and compared with Ahlbäck grade 2 and Ahlbäck grade ≥3 respectively (the category Ahlbäck grade ≥3 includes 13 patients with Ahlbäck grade 4 and one with Ahlbäck grade 5).
Figure 1: Radiographic measurement of the Hip-Knee-Ankle angle (HKA angle).
Figure 2: Radiograph of high tibial osteotomy using the hemicallotasis technique.
The results were presented with 95% confidence intervals (95% CI). A P value < 0.05 was considered statistically significant.
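To illustrate the crude and adjusted regression models described in this section, the Python sketch below fits ordinary least squares models with statsmodels on simulated data. The variable names, simulated values and coding of Ahlbäck grade are assumptions for illustration; they do not reproduce the study dataset or its results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 182  # same size as the baseline cohort; the values themselves are simulated
df = pd.DataFrame({
    "hka_angle": rng.normal(170, 4, n),
    "age": rng.normal(53, 7, n).round(),
    "bmi": rng.normal(29, 4, n),
    "female": rng.integers(0, 2, n),
    "ahlback": rng.choice(["1", "2", "3plus"], n),
    "koos_pain": rng.normal(42, 15, n).clip(0, 100),
})

# Crude association: preoperative pain regressed on preoperative alignment
crude = smf.ols("koos_pain ~ hka_angle", data=df).fit()

# Adjusted model mirroring the listed confounders (Ahlbäck grade 1 as reference)
adjusted = smf.ols(
    "koos_pain ~ hka_angle + female + age + bmi + C(ahlback, Treatment('1'))",
    data=df,
).fit()
print(adjusted.params.round(2))
print(adjusted.conf_int().round(2))  # 95% CIs, as reported in the paper
```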
The study was approved by the Ethics Committee at the Medical Faculty, Lund University (LU-565-1) and was performed in accordance with the Declaration of Helsinki.
Results
Patient characteristics for the 182 consecutive patients (mean age 52.8 years, 68% men) available at baseline and the 156 patients available at the one-year follow-up are given in Table 1.
Preoperative cross-sectional analysis
Preoperatively, the mean HKA angle was 170 degrees, i.e. on average the patients had 10° of varus alignment, and the preoperative KOOS pain score was 42 (Table 1). There was no association between preoperative varus alignment and preoperative KOOS pain, either crude or adjusted (Table 2).
Longitudinal analysis
156 patients (86%) were available at the one-year follow-up (Table 1). The preferred correction (4 degrees valgus ± 2 degrees) was obtained in 178/182 patients. The mean postoperative alignment was 184 degrees (range 171-185). The mean change in knee HKA angle was 13 degrees (range 0-30). The mean change in KOOS pain was 32 points (range -16 to 83). There was no association between change in knee alignment with surgery and change in knee pain preoperatively to one year postoperatively, either crude or adjusted (Table 2).
Preoperatively, higher BMI and female gender were associated with more pain.
More preoperative pain predicted less improvement in pain postoperatively, and patients with Ahlbäck OA grade 2 tended to have less improvement in KOOS pain over time than patients with Ahlbäck OA grades 1 and 3 (Table 2).
Increasing OA grade was associated with more varus alignment. There was a statistically significant difference in preoperative HKA angle between the Ahlbäck categories of knee OA severity (Figure 3a). However, there was no association between Ahlbäck categories of knee OA severity and pain (Figure 3b).
Discussion
We found no association between knee alignment and knee pain, either preoperatively or in the change from preoperatively to one year postoperatively, in patients operated on for medial knee OA by high tibial osteotomy using the hemicallotasis technique.
To our knowledge the association of alignment and pain has not previously been assessed in patients undergoing an intervention improving malalignment. The rationale for analysing this association is the belief that a higher degree of preoperative malalignment (HKA angle) may be related to less improvement in pain. However, our results indicate that patients with more severe varus alignment experience similar pain relief from high tibial osteotomy by the hemicallotasis technique as patients with less varus alignment.
A strength of our study is the wide range of HKA angles and KOOS pain scores, both preoperatively and over time. If there were any associations between preoperative HKA angle and preoperative pain, or between change in pain and change in HKA angle, the study would have been able to detect them.
We used the Ahlbäck classification [8] to determine OA severity. The Ahlbäck classification, used especially in orthopedics and in northern Europe, primarily focuses on reduction of the joint space as an indirect sign of cartilage loss, while the more commonly used classification according to Kellgren & Lawrence takes osteophytes, joint space narrowing or both into account [9]. The Ahlbäck system differentiates between more severe grades of OA than the classification of Kellgren & Lawrence, which is useful in orthopedics and decisions relating to surgical treatment. The agreement between K&L grades 2-3 and Ahlbäck grade 1, as well as K&L grades 3-4 versus Ahlbäck grades 1-2, has been shown to be good (κ 0.76 and 0.78) [20].
Our results differ from previous reported results on the relation of knee alignment and pain measuring alignment from long limb radiographs [1][2][3]. However our results are in line with results from studies measuring alignment from anteroposterior (AP) radiographs of the knee joint [4,5]. Reasons for the difference in results between studies may include the different populations, different methodologies for assessment of alignment and pain and interpretation of data.
Different populations
In our study, subjects about to have surgery for advanced OA were included, in contrast to subjects recruited from the community with less advanced OA or at risk of knee OA [1-5]. However, different study populations alone may not explain the difference, as different methods were also used.
Assessment and interpretation of alignment
Different methods, as well as different axes, are used to determine the degree of deformity of the lower extremity. The mechanical axis from full-limb radiographic measures, the HKA angle, is used in association with surgical interventions such as high tibial osteotomy and knee replacement. Knee alignment is sometimes determined from anteroposterior (AP) radiographs of the knee joint. This measure is, however, uncertain because the shorter images include limited parts of the femur and tibia and make it impossible to determine either the mechanical or the anatomical axis of the lower extremity.
Figure 3: Boxplots of preoperative HKA angle (a) and preoperative pain (b) for each Ahlbäck grade of knee OA (median with quartiles). Any observation lying more than 1.5 IQR below the first quartile or above the third quartile is considered an outlier and marked as a dot; the whiskers connect the smallest/largest non-outlier values to the box.
Measurement of different angles (using AP and long-leg radiographs respectively), the error in the measurement, and different definitions of normal, varus and valgus alignment may explain the contradictory results. Studies analysing the association of knee alignment to knee pain have not reported or discussed the possible error in the measurement of either the anatomical or the mechanical axis [2,4,5,21-23]. The technique, experience and accuracy of the performance of the radiographic examination are important in minimizing the methodological error; these are all aspects that make the measurement of lower-limb alignment uncertain.
Assessment and interpretation of pain
The mean KOOS pain score of 42 in this study is comparable to a preoperative score of 38 seen in patients having total knee replacement [24], indicating that patients undergoing high tibial osteotomy have severe pain preoperatively. The mean improvement from high tibial osteotomy was 32 points at one year, compared to 45 at one year after total knee replacement [24], indicating that the effect of high tibial osteotomy is nearly as large as that of total knee replacement.
In previous studies the WOMAC [4,5] and the Visual Analogue Scale (VAS) [2] have been used as pain measures. Different pain instruments may be of minor importance as long as valid instruments are used and instrument-specific clinically relevant differences are considered. Sharma et al (2001), for example, showed differences of 3.5-16 mm in pain assessed by the VAS between three different categories of varus alignment and an average VAS increase of 10 mm on a 0-100 mm scale in knee pain with each 5° of increased malalignment [2]. Clinically meaningful differences in the Visual Analogue Scale (VAS) have been suggested to be 13-28 mm on a 100 mm scale depending on the initial VAS score [25].
In our study patients reported on average 1.5 KOOS points more pain on a 0-100 point scale per 5 degrees of varus alignment (Table 2). For the KOOS, an 8-10 point difference is considered a clinically relevant difference [17]. None of these studies showed clinically relevant differences per 5 degrees of increasing malalignment, but the results were interpreted in opposite directions. Conclusions based on statistically significant results on the association of alignment to pain should be interpreted with caution if they are not clinically relevant.
In the cross-sectional analysis preoperative pain was associated with increasing BMI, while in the longitudinal analysis there was no association. In the cross-sectional analysis the change in preoperative pain per unit change of BMI was, however, negligible despite being significant. Patients with Ahlbäck grade 2 experienced clinically significantly less improvement in pain over time compared to patients with Ahlbäck grade 1, but there was not a similar association for patients with Ahlbäck grade ≥3. This may reflect the well-known discordance between radiographic knee OA and symptoms [26].
Conclusion
We found no association between knee alignment and knee pain in patients with knee OA indicating that alignment and pain are separate entities, and that the degree of preoperative malalignment is not a predictor of knee pain after surgery.
Sex-Related Differences in Growth, Herbivory, and Defense of Two Salix Species
Sex-related differences in sex ratio, growth, and herbivory are widely documented in many dioecious plants. The common pattern is for males to grow faster than females and to be less well-defended against herbivores, but Salix is an exception. To study sex-related differences in the patterns of resource allocation for growth and defense in willows, we conducted a large-scale field experiment to investigate the flowering sex ratio, mortality, growth traits, insect herbivory and content of defensive substances in three Salix populations comprising two species. Results demonstrate that the two Salix suchowensis Cheng populations have a female bias in the sex ratio, whereas no bias is found in the S. triandra L. population. Male individuals in the S. suchowensis populations have significantly higher mortality rates than females. However, the mortality rate of the S. triandra population shows no gender difference. This finding may be one of the explanations for the difference in sex ratio between the two species. The females are larger in height, ground diameter, and biomass, and have a higher nutritional quality (N concentration) than males in both species. Nevertheless, slow-growing males have a higher concentration of the defense chemical (total phenol) and lower degrees of insect herbivory than females. Additionally, biomass is positively correlated with herbivory and negatively correlated with defense in the two willow species. It is concluded that the degree of herbivory has a great influence on resource allocation for growth and defense. Meanwhile, it also provides important implications for understanding the evolution of dioecy.
Introduction
Dioecy is found in 175 flowering plant families and in 7% of flowering plant genera, accounting for approximately 5% of all plants [1]. These species are present in a wide variety of habitat types, and many of them are of economic importance. They often show sexual differences in reproductive traits, such as nectar production and flower longevity, and in vegetative characteristics, such as morphology, secondary chemistry, and phenology [2]. Dioecious plants often exhibit gender bias and have varying gender ratios and nutrition distributions. The principle of resource allocation [3] states that the allocation of plant resources among three main functions, i.e., growth, reproduction, and defense, is unequal. In other words, when more resources are allocated to a specific function, the allocation for other functions is correspondingly diminished [4]. Many studies have examined gender differences in vegetative traits [5], reproductive costs [6,7], and demographic characteristics [8]. In particular, in dioecious species, the production of the reproductive structures of female plants (i.e., fruits and seeds) usually requires higher resource input than the male reproductive structures, indicating a trade-off between reproduction, growth, and plant defense [9].
Plant Material and Study Design
In 2012, two full-sib families were established for S. suchowensis by crossing NF2 and XY12 separately with LS7. NF2 and XY12 are female stands collected from Nanjing and Xinyi in Jiangsu province of China, respectively, whereas LS7 was a male stand collected from Linshu in the Shandong province of China. For S. triandra, a full-sib pedigree was established by DB447 × DB134; both parents were collected from Miaoer mountain in the Heilongjiang province of China. The cutting orchard for these pedigrees was maintained at Sihong Forest Farm in Jiangsu province of China. In spring 2018, cuttings were collected from 319 progeny of NF2 × LS7, 334 progeny of XY12 × LS7, and 134 progeny of DB447 × DB134. The three willow populations were named S. suchowensis (NF), S. suchowensis (XY), and S. triandra (DB), respectively. Nine cuttings, each 20 cm in length, were prepared for each individual.
The field trial was conducted at Baima Forest Farm in Nanjing of Jiangsu province, China (N 31°60′, E 119°17′), where the average annual temperature is 15.4 °C and the average annual precipitation is 1009.7 mm. Rainfall is mainly concentrated in the growing season of willows, from March to October. The study area was 0.45 ha in total. The soil on this site was mainly yellow-brown loam. Soil pH and chemical properties are listed in Table 2.
Notes: Values are expressed as the mean ± standard error (SE) in the table.
For field establishment, stem cuttings were stored at 4 °C for several days and then soaked in water for 48 h. The experimental land was deep-plowed and leveled before planting, and black film was laid on each ridge for weed control. The cuttings were planted at a spacing of 0.5 m × 0.5 m. Three cuttings of the same genotype were planted in one plot, and each plot was replicated in three blocks according to a completely randomized block design. The cuttings were planted vertically by pushing them manually into the ground until approximately 3 cm of the cutting protruded from the ground. Only one strong sprout was retained 30 days after planting.
Growth Trait
The mortality of the cuttings was investigated, and the number of lateral branches was determined after leaf fall. Meanwhile, the height and ground diameter of each plant were measured with a tower ruler and a Vernier caliper at a precision of 0.1 mm. In January 2019, the harvested shoots were sent to Nanjing Senke Wood Drying Company, where they were dried with GYB-D electric heating wood drying equipment (Senke, Nanjing, China) at 105 °C for 3 days. After being dried to a constant weight, the dry weight of each shoot was measured using an electronic balance with a precision of 0.1 g. We used a leaf area meter (YMJ-B) to measure the leaf areas of fresh leaves from the bottom, middle, and top of the canopies at the end of September 2018.
Gender Assessment
All individuals reached sexual maturity one year after the willow cuttings were planted on the field. The flowers of S. suchowensis and S. triandra bloomed in early spring before the leaves appeared, and the male and female flowers were arranged in morphologically different catkins. In the middle of March 2019, the gender of each individual in the three willow populations was determined according to the distinct features of male and female flowers.
Herbivory
We observed that the larvae of Lepidoptera and Coleoptera caused the greatest harm to the two willow species in the field. Newly hatched larvae preferred to eat young shoots and young leaves, and only the central veins of the leaves were severely gnawed. A scale from 0 to 5 with a step size of 1 was used in quantifying larval attacks. This scale considers the proportion of attacked on top shoots and leaves as follows: A score of 0 denotes plants that were not attacked by insects. A score of 1 indicates that 20% of top shoots and leaves are affected by larvae. A score of 2 indicates 40% of top shoots and leaves are affected by larvae. A score of 3 indicates 60% of top shoots and leaves are affected by larvae. A score of 4 indicates 80% of top shoots and leaves are affected by worms. A score of 5 denotes that insects eat all the top buds and young leaves. We evaluated insect damage in each tree at the end of June 2018.
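For illustration only, the short Python sketch below shows one way the 0-5 damage score described above could be assigned from an estimated proportion of damaged top shoots and leaves; the rounding of intermediate proportions is our assumption, not a rule stated in the text.

```python
def herbivory_score(prop_damaged):
    """Map an estimated proportion of damaged top shoots/leaves (0-1) onto
    the 0-5 ordinal scale (0 = untouched, 5 = all top buds and leaves eaten).
    Thresholds follow the ~20% steps described in the text."""
    if prop_damaged <= 0:
        return 0
    if prop_damaged >= 1:
        return 5
    return min(4, max(1, round(prop_damaged / 0.2)))

print([herbivory_score(p) for p in (0.0, 0.15, 0.40, 0.65, 1.0)])  # [0, 1, 2, 3, 5]
```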
Leaf N Content
To quantify N content in leaves as a measure of nutritional quality, we measured the SPAD value, representing absorbance by chlorophyll, which is closely correlated with N concentration in Salix leaves [30,31], using a chlorophyll meter TYS-B (Zhejiang Top, Hangzhou, China). Three fresh leaves of each plant were selected for measurement at the end of June 2018.
Defense
Thirty individuals (fifteen males and fifteen females) were selected from each willow population, and the young leaves of willow trees were selected as materials at the end of September 2018. The leaves were oven-dried at 50 °C and weighed to the nearest 0.01 mg. The samples were then ground in a ball mill and analyzed colorimetrically for phenolic compounds and tannins.
Condensed tannins were measured using standard methods [32]. Briefly, 10 mg of leaf powder was weighed and washed with 500 µL of ether, then centrifuged at 3700 r.p.m for 4 min. Tannins were subsequently extracted four times with 200 µL of the solution containing acetone and water at 70:30 volume ratio and 1 mM ascorbate. The acetone in the final supernatant was removed by evaporation with Savant Speed-Vac. Distilled water was added until the final volume of 500 µL was obtained. The samples were analyzed using the n-butanol assay for proanthocyanidins [33]. Condensed tannin concentration (mg g −1 dry leaf mass) was then calculated.
Total phenol content was measured by the Folin and Ciocateu method, which detects all compounds containing phenate ions [34]. Briefly, 15 mg of leaf powder was weighed into a 2 mL microfuge vial. Cold methanol (200 µL) was added to each vial, sonicated in a cold water bath for 12 min, and then centrifuged at 3200 r.p.m for 5 min. Catechin was used as a standard. Absorbance was measured with a spectrophotometer at a wavelength of 765 nm, and the extraction solvent was used as a control. Total phenol concentration (mg g −1 dry leaf mass) was calculated.
Statistical Analyses
Experimental data were analyzed using SPSS 19.0 (SPSS Inc., Chicago, IL, USA). Sex ratios and mortalities were compared to a 1:1 ratio by chi-square analysis. When more than one leaf was measured (e.g., leaf area, N concentration), the average was taken for statistical tests. Comparisons between females and males within each variable were performed using the pairwise t-test. Two-way ANOVA models were used to analyze the factors population and gender and their interaction for all the variables. Scatter plots, boxplots, and line charts were made using the ggplot2 package in the R software (version 3.6.0). A principal component analysis (PCA) was performed to detect the dependent variables most affected by gender and species and to determine any correlations between the dependent variables. PCA was carried out using the FactoMineR and factoextra packages in the R software. The results are presented as mean ± SE.
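Although the analyses above were run in SPSS and R, the two core tests can be sketched in Python for readers who want a self-contained illustration: a chi-square test of an observed sex ratio against 1:1, and a two-way ANOVA of gender, population and their interaction. All counts and trait values below are simulated, not the study's data.

```python
import numpy as np
import pandas as pd
from scipy.stats import chisquare
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Chi-square test of a sex ratio against 1:1 (hypothetical counts)
females, males = 180, 130
chi2, p = chisquare([females, males], f_exp=[(females + males) / 2] * 2)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

# Two-way ANOVA: gender x population on a growth trait (simulated data)
rng = np.random.default_rng(7)
n = 90
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], n),
    "population": rng.choice(["NF", "XY", "DB"], n),
})
df["height"] = rng.normal(150, 20, n) + (df["gender"] == "F") * 10
model = smf.ols("height ~ C(gender) * C(population)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```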
Sex Ratio and Mortality
According to floral features, the two S. suchowensis populations studied were female-biased (Table 3). A chi-square test revealed that the sex ratio of the two S. suchowensis populations significantly departed from a 1:1 segregation ratio (χ²_NF = 14.07, p < 0.001; χ²_XY = 3.88, p < 0.05). The sex ratios of the two S. suchowensis populations sharing the paternal parent were not significantly different. However, the other population, of S. triandra, did not differ from a 1:1 sex ratio (χ²_DB = 0.27, p > 0.05). The mortality survey revealed that the mortality of males in the two S. suchowensis populations was significantly greater than that of females (χ²_NF = 4.97, p < 0.05; χ²_XY = 9.38, p < 0.01). However, no difference in mortality was observed between the males and females in the S. triandra population (χ²_DB = 0.036, p > 0.05).
Gender Effects on Growth Traits
Growth traits, including tree height, ground diameter, number of lateral branches, leaf area, and dry weight, were measured (Table 4). The results of the two-way ANOVA indicated a significant effect of gender on the heights, ground diameters, and dry weights of the willows. The female trees were taller and had larger ground diameters and biomasses than the male trees (gender: F_H = 24.9, p < 0.0001; F_GD = 9.6, p < 0.05; F_DW = 12.2, p < 0.01). The results of the pairwise t-test revealed that the plant height, ground diameter, and dry weight of the female individuals of S. suchowensis (NF) were significantly greater than those of the male individuals. The plant height and dry weight of the female individuals in the S. suchowensis (XY) population were significantly higher than those of the male individuals, but no significant difference was observed in ground diameter. Significant differences were found between the sexes in terms of the growth traits of S. triandra (DB) included in the study (ground diameter, leaf area, and dry weight). Significant differences in growth traits were found between the populations (population: F_H = 33, p < 0.001; F_GD = 21.2, p < 0.001; F_NB = 162.51, p < 0.001; F_LA = 85.6, p < 0.001; F_DW = 68.1, p < 0.001). Overall, these results showed that the plant height, ground diameter, number of lateral branches, leaf area, and dry weight of S. triandra (DB) were significantly greater than those of S. suchowensis (NF, XY). By contrast, the interaction between gender and population demonstrated no significant effect on the growth traits.
Gender Effects on Herbivores
As shown in Figure 1, the degree of insect damage varied between males and females in all three willow populations, with females more heavily damaged than males (gender: F = 21.1, p < 0.001; Table 5). In the XY population, approximately 36.8% of the top shoots of male plants were gnawed by insects, compared with 47.8% of female top shoots, a significantly higher proportion. A similar pattern was found in the NF population, although the overall level of damage there was higher than in XY. Females were also more severely damaged in the S. triandra (DB) population: approximately 26.1% of female top shoots were eaten, compared with 13.5% in males. The degree of damage differed markedly between the two willow species, with S. triandra damaged much less than S. suchowensis (population: F = 56.8, p < 0.001). (Table 5 notes: comparisons between females and males within each variable used pairwise t-tests; two-way ANOVA tested population, gender, and their interaction for herbivory, N content, and chemical defense substances; ** p < 0.01; *** p < 0.001.)
Chemical Analyses between Genders
As shown in Figure 2 and Table 5, the defensive chemistry of the leaves differed between the two willow species. The condensed tannin content in S. suchowensis leaves was 38.9% higher than in S. triandra leaves (population: F_CT = 95.5, p < 0.001), and condensed tannins accounted for only 5.7%–10% of the total phenolic content. No difference in total phenol content was found among the three willow populations. We also determined leaf N content: the N content of S. suchowensis (XY) (1.2%) was significantly higher than that of S. triandra (0.8%) (population: F_N = 227.1, p < 0.001). Leaf N concentration also differed significantly between females and males and among the three populations (gender: F_N = 40.3, p < 0.001), with female leaves containing 0.52%–6.1% more N than male leaves (Figure 3). A significant gender difference was also found for total phenol content (gender: F_TP = 7.7, p < 0.01), with male leaves containing 14.5%–22.6% more total phenols than female leaves. By contrast, condensed tannin content did not differ between females and males in any of the three populations (Table 5).
Relationship between Growth, Defense, and Herbivores
The first three principal components explained 73.3% of the total variance. The first PCA axis accounted for 33.1% of the total variation and was strongly positively correlated with herbivory, plant height, ground diameter, leaf area, and dry weight, and negatively correlated with total phenol content (Figure 4a). The second PCA axis explained 29.2% of the variance; its dominant trend was a positive relationship among herbivory, N content, and condensed tannins (Figure 4a). Herbivory was negatively associated with total phenol content. A high cos2 value indicates that a variable contributes strongly to a principal component; by this measure, leaf area and insect herbivory were the two most important variables separating the two Salix species. The score plot of the three willow populations is shown in Figure 4b. The PCA separated the two willow species clearly on the basis of their growth, herbivory, and defense values, but did not separate the genders.
Sex Ratios
Sex ratios differ considerably among species, even among closely related ones. Kaul and Kaul [16] found that Salix amygdaloides has equal numbers of male and female individuals, and a 1.7:1.0 male-biased sex ratio was observed in Salix exigua [35]. However, sex ratios in the genus Salix are more often skewed toward females than males [36][37][38][39]. Our findings are similar: the two S. suchowensis populations show clearly female-biased sex ratios, whereas the S. triandra population shows no gender bias. Female-biased sex ratios tend to increase over time because males have lower viability than females in natural populations [27] and male plants generally suffer more herbivory than females [14]. The S. suchowensis population was established by our laboratory in 2012; one year after establishment, Hou et al. [40] found its sex ratio to be 1:1, yet the population has since developed a female-biased sex ratio. Gender differences in mortality may generate female bias, which is exacerbated over the life cycle to produce skewed sex ratios [41]. We found that male mortality was significantly higher than female mortality. Therefore, survivorship may be a highly important factor in determining sex ratios.
Sex ratios in a given population are strongly modulated by environmental conditions [42], and the reproductive habits of willows may be an important factor affecting sex ratio bias. Some studies have concluded that male and female willows respond differently to environmental factors and that females generally perform better than males under stress [26,27,38]. We demonstrated that female-biased ratios are related to the presence of sex chromosomes, suggesting that genetic factors also play a role in shaping sex ratios. Notably, the segregation of sex in the pedigree analysis indicates that sex in willow is determined by a single locus, consistent with the findings of earlier studies [29,43,44]. Therefore, willows provide a unique system for exploring sex effects.
Biomass
Our data showed that gender significantly affected growth characteristics: female plants in the three willow populations had greater biomass than males. This is consistent with previous reports on other Salix species suggesting that female plants grow faster than males [22,40], although it contradicts other reports on Salix [17,24]. In addition, Maldonado-López et al. [18] found that the physiological responses of female and male Spondias purpurea differed, with females showing higher water use efficiency and photosynthetic rates than males, indicating greater resource input into growth. Such increased input may be one reason why females grow faster than males.
Herbivory
Herbivory may play an important role in the evolution of plant reproductive systems. Male-biased herbivory appears to be the most common pattern [9,13], but some studies have documented female-biased herbivory [15], suggesting that male bias is not universal. The latter is consistent with our finding that female plants suffered more insect herbivory than male trees. The effect of plant gender on herbivory rates may depend on environmental conditions (soil moisture and nutrition) and on the population. Damage to S. suchowensis was much more severe than damage to S. triandra. Herbivores may be affected by plant growth characteristics such as biomass, number of branches, and leaf arrangement [45]; increases in above-ground biomass, in the number of leaves and stems, and in leaf size may increase susceptibility to insect herbivory [17]. We found a positive correlation between plant biomass and insect herbivory in S. suchowensis and S. triandra. Individuals with large biomass or more vigorous growth may be more favorable for insect oviposition, foraging, and survival.
Defense
Many studies provide evidence that female plants have better chemical defenses than male plants. Palo et al. [46] found that male willows generally have lower concentrations of phenolic glycosides in their leaves, and Danell et al. [47] suggested that female willows contain more tannin in the bark than males. High concentrations of quercetin-glucoside-related compounds have been found in females of Salix myrsinifolia [48]. However, Moritz et al. [31] found no difference in total phenolic acid or lignan content between the genders in Salix viminalis. By contrast, we found substantial intersexual differences in defensive compounds among the three willow populations: the leaves of male willows had higher total phenol contents than those of females, consistent with phenols acting as a major deterrent of herbivores, as shown by Boeckler et al. [49]. Environmental variation may strongly influence the content of defensive compounds in male and female willows; Jiang et al. [22] documented that females produce more condensed tannins than males under nutrient-poor conditions and fewer under nutrient-rich conditions. Therefore, our study does not support the idea that female plants have better chemical defenses than males in dioecious species [18]. Low nutritional quality is a potentially effective defense against herbivorous insects [50]. We found that female leaves contain higher N concentrations in both S. suchowensis and S. triandra, and we therefore conclude that nutritional value limits the feeding decisions of insects. We acknowledge that other nutritional differences exist between male and female plants; for example, phosphorus and potassium concentrations differ between male and female Salix lasiolepis [17]. Other aspects, such as leaf toughness [51] and diets maximizing soluble sugars [52], should also be considered. Further study of these variables in relation to browsing and secondary metabolite profiles can broaden our understanding of generalist herbivore feeding decisions.
Resource Allocation Principle
In this study, we found that gender significantly affects the biomass, herbivory, and defense of S. suchowensis and S. triandra: females generally had significantly greater biomass, lower contents of defensive substances, and higher rates of insect herbivory than males. Our findings are consistent with the results of a previous study [9]. However, our observations are inconsistent with the hypothesis of Myers-Smith and Hik [38], who proposed that female willows invest more resources in sexual reproduction than males and therefore grow more slowly. Our data showed that biomass is positively correlated with insect damage and negatively correlated with defensive compound content. When females allocate more resources to reproduction, the main reduction in allocation may occur in defense, while the resources allocated to vegetative growth do not necessarily decrease; in that case, growth would not be adversely affected. Maldonado-López et al. [18] found that the gender with the greater demand for reproductive resources generally tends to acquire more resources. The relative difference in reproductive costs between females and males can be reduced through mechanisms such as increased photosynthetic rates, larger canopy area, increased mineral nutrient uptake, increased root branching, and enhanced mycorrhizal associations [53]. Our results show that female willows have more branches, more leaves, higher leaf N content, and probably higher photosynthetic rates, reflecting greater investment in growth. Furthermore, browsing inhibits apical dominance and activates axillary and adventitious buds to produce new vegetative shoots, reducing reproductive growth [54].
Conclusions
The sex ratios of the two S. suchowensis populations are clearly female-biased, whereas the S. triandra population shows no sex ratio bias. Female-biased sex ratios tend to increase over time because males have lower viability than females in natural populations and generally suffer more herbivory. The mortality of male S. suchowensis individuals was significantly higher than that of females, so survivorship may be a highly important factor determining the sex ratios of willow populations. In this study, females generally had greater biomass, lower contents of defensive substances, and higher rates of insect herbivory than males, and female leaves contained higher N concentrations in both willow species; nutritional value may therefore limit the feeding decisions of insects. The PCA shows that biomass is positively correlated with insect damage and negatively correlated with defensive compound content. When females allocate more resources to reproduction, the main reduction in allocation is observed in defense, and the resources allocated to vegetative growth do not necessarily decrease; in this case, there would be no adverse effect on growth. Future work should focus on genetic data to analyze sex-biased gene expression patterns.
Author Contributions: G.Y. and Q.X. contributed equally to the work. Writing-original draft preparation, G.Y. and Q.X.; writing-review and editing, G.Y., T.Y. and X.L.; visualization, G.Y., W.L.; investigation, Q.X., G.Y., W.L. and J.L.; supervision and critical revision of the manuscript, T.Y. and X.L.; and funding acquisition, T.Y. All authors have read and agreed to the published version of the manuscript.
Funding: This work is supported by the Natural Science Foundation of China (31570662).
|
v3-fos-license
|
2023-01-20T14:16:26.892Z
|
2020-07-11T00:00:00.000
|
256018032
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://mobilednajournal.biomedcentral.com/track/pdf/10.1186/s13100-020-00222-y",
"pdf_hash": "7eeb6949afbf62a12b08dae616eb485de33e1ba0",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43036",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "7eeb6949afbf62a12b08dae616eb485de33e1ba0",
"year": 2020
}
|
pes2o/s2orc
|
LINE-1 ORF1p does not determine substrate preference for human/orangutan SVA and gibbon LAVA
Non-autonomous VNTR (Variable Number of Tandem Repeats) composite retrotransposons – SVA (SINE-R-VNTR-Alu) and LAVA (L1-Alu-VNTR-Alu) – are specific to hominoid primates. SVA expanded in great apes, LAVA in gibbon. Both SVA and LAVA have been shown to be mobilized by the autonomous LINE-1 (L1)-encoded protein machinery in a cell-based assay in trans. The efficiency of human SVA retrotransposition in vitro has, however, been considerably lower than would be expected based on recent pedigree-based in vivo estimates. The VNTR composite elements across hominoids – gibbon LAVA, orangutan SVA_A descendants and hominine SVA_D descendants – display characteristic structures of the 5′ Alu-like domain and the VNTR. Different partner L1 subfamilies are currently active in each of the lineages. The possibility that the lineage-specific types of VNTR composites evolved in response to evolutionary changes in their autonomous partners, particularly in the nucleic acid binding L1 ORF1-encoded protein, has not been addressed. Here I report the identification and functional characterization of a highly active human SVA element using an improved mneo retrotransposition reporter cassette. The modified cassette (mneoM) minimizes splicing between the VNTR of human SVAs and the neomycin phosphotransferase stop codon. SVA deletion analysis provides evidence that key elements determining its mobilization efficiency reside in the VNTR and 5′ hexameric repeats. Simultaneous removal of the 5′ hexameric repeats and part of the VNTR has an additive negative effect on mobilization rates. Taking advantage of the modified reporter cassette that facilitates robust cross-species comparison of SVA/LAVA retrotransposition, I show that the ORF1-encoded proteins of the L1 subfamilies currently active in gibbon, orangutan and human do not display substrate preference for gibbon LAVA versus orangutan SVA versus human SVA. Finally, I demonstrate that an orangutan-derived ORF1p supports only limited retrotransposition of SVA/LAVA in trans, despite being fully functional in L1 mobilization in cis. Overall, the analysis confirms SVA as a highly active human retrotransposon and preferred substrate of the L1-encoded protein machinery. Based on the results obtained in human cells coevolution of L1 ORF1p and VNTR composites does not appear very likely. The changes in orangutan L1 ORF1p that markedly reduce its mobilization capacity in trans might explain the different SVA insertion rates in the orangutan and hominine lineages, respectively.
Background
The mobile element landscape of hominoid primates (gibbon, orangutan, gorilla, chimpanzee and human) is characterized by the expansion of non-autonomous composite non-LTR (non-long terminal repeat) retrotransposons (SVA, SINE-R-VNTR-Alu [1,2]; LAVA, L1-Alu-VNTR-Alu [3]) that are absent in Old World monkeys. SVA elements amplified in the hominids (orangutan, gorilla, chimpanzee and human); LAVA expanded in gibbon only. Figure 1a shows the structural organization of the elements: 5′ hexameric repeats (TCTCCC)n, a domain composed of two partial antisense Alu copies (Alu-like) and a region comprising a variable number of 36-50 bp tandem repeats (VNTR) are shared by SVA and LAVA. The 3′ end of SVAs (SINE-R, a retrovirus-derived SINE) is derived from the endogenous retrovirus HERV-K; the LAVA 3′ end contains Alu and L1 fragments separated by simple repeats (Fig. 1b). Both SVA and LAVA evolve as hierarchical subfamilies [2,5] displaying subfamily-specific nucleotide exchanges and small indels. However, by contrast to other non-LTR retrotransposons, evolution of these composite elements occurs not only at the nucleotide level but also at the level of the structural organization of the VNTR domain [4] (Fig. 1b).
The VNTR of gibbon LAVA elements is characterized by conserved subunit arrangements at both the 5′ and 3′ end of the domain. Orangutan SVAs are direct derivatives of the evolutionary oldest subfamily SVA_A. The VNTR of the evolutionary youngest orangutan subfamilies is composed of a fixed 5′ end (TR, tandem repeat) followed by arrays of Q and C subunits ((QCAC4)(QC3)(QCAC2)(QC3)(QC)(QCACAC3)(QC5)) and a fixed 3′ end. The phylogenetically most recent SVA elements in the hominines (SVA_D in gorilla and chimpanzee and SVA_D, SVA_E and SVA_F in human) display short deletions in both the Alu-like and SINE-R regions when compared to the ancestral SVA_A. In the VNTR, a fixed 5′ part (TR) is followed by [(K1-4GC')n] (SVA_D; SVA_F) or [(K1-4GC'/C″)n(LL'GC'/C″)n] (SVA_E) variable-length arrays. Overall, the hominine SVA VNTR is dominated by 49 bp G-rich K-type subunits, whereas orangutan SVA VNTRs are enriched for short, 37 bp long C-type subunits [4].

Fig. 1 General structure of VNTR composite retrotransposons and SVA/LAVA subfamilies in hominoid primates. (a) Structure of SVA/LAVA. The elements are composed of (from 5′) hexameric repeats (TCTCCC)n, an Alu-like region, a variable number of 36-49 bp tandem repeats (VNTR) and either a retrovirus-derived SINE (SINE-R in SVA) or a 3′ domain containing L1 and Alu fragments (LA in LAVA). They terminate with a poly A tail (AAA)n. (b) Currently active LAVA and SVA subfamilies in hominoid primates. Blue and yellow bars indicate short deletions relative to the ancestral SVA_A sequence. Tildes represent the apparently unstructured central part of gibbon LAVA. The VNTR subunit code is that described in Lupan et al. [4]. TR represents the invariable tandem repeats at the VNTR 5′ end; note that the type and sequence of subunits in this part are not identical among subfamilies (for details see [4]). The overall structure of SVA_D elements in gorilla and chimpanzee corresponds to that shown for humans. LAVA_F, OU3, OU4, H19_27 and H8_43 denote the LAVA/SVA elements used in the study; their position indicates their subfamily affiliation. (c) Non-canonical SVAs in human and chimpanzee. In SVA_F1 and pt_SVA_D6 the hexameric repeat and the larger part of the Alu-like region are replaced by the first exons of MAST2 and STK40, respectively.

In addition to the canonical SVAs depicted in Fig. 1b, chimpanzee and human harbour non-canonical composite elements in which the 5′ hexameric repeats and the larger part of the Alu-like region are replaced by the first exon of MAST2 (SVA_F1 in human [6][7][8]) and of STK40 (pt_SVA_D6 in chimpanzee [9]), respectively (Fig. 1c). Copy numbers of the composite non-LTR retrotransposons range from 1800 in gibbons (LAVA in Nomascus leucogenys [5]) and 1800 in orangutan (SVA [10]) to 2800 in human (SVA [2]).
As non-autonomous elements, VNTR-composite retrotransposons are dependent on the proteins encoded by the autonomous LINE-1 (L1) element for their mobilization [11][12][13][14]. Across hominoids, SVA/LAVA "pair" with L1 partners belonging to different subfamilies: LAVA with L1PA4 in gibbons, SVA_pa with L1PA3 in orangutan and SVA_hs with L1PA1 in human. Given the requirement for L1-encoded proteins for VNTR-composite mobilization, it can be hypothesized that LAVA and orangutan/human SVA evolved their specific structural features in response to the characteristics of the L1 subfamily active in the respective lineage. The primary interaction of RNAs to be retrotransposed by the L1 protein machinery occurs with the nucleic acid binding protein encoded by L1 ORF1 [15]. Mobilization of both SVA and LAVA is dependent on L1 ORF1p [12,13]. Taken together, these two facts suggest that the determinants for substrate preference of L1 subfamilies for LAVA versus orangutan SVA versus human SVA might reside in L1 ORF1p.
To date, three different human SVA elements and two LAVA elements have been characterized with regard to their capacity to be mobilized by L1-encoded proteins in trans in a cell-based assay [11][12][13][14][16]. The retrotransposition rates reported for the human SVAs differ by three orders of magnitude from those observed for L1 in cis (4-5 × 10⁻⁵ [11,12] versus 1.3 × 10⁻² [17]). Recently published estimates for in vivo mobilization rates, however, show human SVA on par with human L1 (one in 63 births [18]). In addition, the relatively high number of disease-causing SVA insertions (16 [19][20][21][22], compared to 30 for L1 and 76 for Alu [19], which have much higher copy numbers in the genome) points to a considerable activity of SVA elements in vivo. Taken together, the elements tested so far might not represent the currently active fraction of SVAs in the human genome. Orangutan SVAs have not been investigated in the cell-based assay. As a prerequisite for addressing the hypothesis of LAVA/SVA-L1 coevolution, I report here the identification and functional characterization of a human SVA element considerably more active than those described previously. I also demonstrate that orangutan SVAs can be efficiently mobilized by human L1 in human cells. Finally, using codon-optimized L1 ORF1 chimeras, I show that L1 ORF1p derived from the three species under study does not determine substrate preference for gibbon LAVA versus orangutan SVA versus human SVA.
Results
Identification and isolation of potentially active human and orangutan SVA elements
Retrotransposition-competent SVA elements can be expected to lack potentially inactivating mutations/structural modifications (substitutions or indels (Alu-like region and SINE-R)/changes in the VNTR substructure relative to the subfamily consensus). There is a high probability for such elements to be found among source elements of recently integrated copies still displaying presence/absence polymorphism and among these polymorphic elements themselves.
In the case of human, polymorphic elements of the evolutionary youngest subfamilies SVA_E and SVA_F were extracted using dbRIP [23]. Detailed analysis of all full-length elements in the dataset identified a small group of SVA_E elements carrying a specific 6 bp insertion in the SINE-R region (SVA_E1; Additional file 1). The entire group comprises nine 5′ full-length elements, six of which are polymorphic according to dbRIP. Based on analysis of the VNTR structure (Additional file 1) and on similarity to the group consensus sequence, two elements (chr7:1,185,116-1,187,654 and chr8:43,033,761-43,036,378; hg19) were selected for amplification. One of them (chr8) displays an 11 bp deletion in the 3′ part of the SINE-R. The chr7 element was absent in all three human genomic DNAs tested. The chr8 element (H8_43) was amplified, sub-cloned and sequenced. The amplified sequence is provided in Additional file 2: Figure S1.
In orangutan, the search was based on a previous analysis [4]. Unfortunately, the quality of the genome build (ponAbe2) available at that time permitted the identification of only very few 5′ full-length elements belonging to the evolutionary younger subfamilies SVA_PA_7-11. The full-length elements were genotyped in silico on available short read archives and most of them were found to be polymorphic. Three elements (all belonging to subfamily SVA_PA_7) were then amplified from genomic DNA of eight individuals (7x Pongo abelii; 1x Pongo pygmaeus). As expected, all of them were found to be polymorphic among the individuals tested (Additional file 1). Finally, the SVA-containing alleles were sub-cloned and sequenced. The amplified sequences are provided in Additional file 2: Figures S2 and S3.
A modified reporter cassette permits robust comparison of SVA mobilization rates across species
The human (H8_43) and two of the orangutan (OU3, chr19:59,431,118-59,434,697 and OU4, chr1:218,026,414-218,030,602; ponAbe2) elements were subsequently tested in a cell-based retrotransposition assay using the mneoI reporter cassette [24] (in pCEPneo [12]) and L1RP (pJM101/L1RPΔneo [17]) as driver in Hela HA cells. Figure 2a shows the principle of the assay. A previously characterized human SVA_E element (H19_27 in pAD3/SVA_E [12]) was also included in the experiments. As shown in Fig. 2b, the two orangutan elements were found to be 10-15x more active than H19_27. The newly identified human H8_43 was mobilized seven times more efficiently than H19_27. The high retrotransposition rates observed for the orangutan SVAs were surprising against the background that they contain the "ancestral" SVA_A-type Alu-like region also present in gibbon PVA (PTGR-VNTR-Alu) and FVA (FRAM-VNTR-Alu) elements. Their Alu-like domains had been shown to dramatically decrease the mobilization rate when fused to the VNTR and SINE-R of the human H19_27 SVA_E element [13]. Northern blot analysis (Fig. 2c) revealed that the H8_43_mneoI transcript is extensively spliced; the correctly spliced variant (γ-globin intron only) is barely detectable. In the case of the two orangutan elements, only the mneoI-single spliced transcripts are detected. Considering the obvious differences in the processing of mneoI-tagged human and orangutan SVAs, I concluded that a robust cross-species comparison of SVA mobilization rates is not possible using the established mneoI reporter cassette.

Fig. 2 Human SVAs are spliced in the context of the mneoI reporter cassette. (a) Schematic representation of the cell-based retrotransposition assay. The element of interest is tagged with a reporter cassette containing a neomycin phosphotransferase (neo) coding region driven by the SV40 promoter and polyadenylated at an HSV TK poly A site in antisense. The neo open reading frame is interrupted by an intron in sense direction. Following transcription of the VNTR composite from the 5′ CMV promoter, the intron is spliced out and the RNA is polyadenylated at the downstream SV40 pA site. Mediated by the L1 proteins encoded on a co-transfected vector, the RNA is then reverse transcribed and the cDNA copy inserted into the genome. A functional neomycin phosphotransferase can now be generated from the uninterrupted coding region, giving rise to G418-resistant (G418R) cells once retrotransposition has occurred. SD, splice donor; SA, splice acceptor; G418S, G418 sensitive. (b) Retrotransposition assay of mneoI-tagged human (H19_27, H8_43) and orangutan (OU3, OU4) SVA elements. Retrotransposition rates +/− SEM are shown relative to H19_27 (100%). Average colony counts are given on top of each column. n ≥ 3. (c) Northern blot analysis of mneoI-tagged SVA transcripts. In the case of the human SVA (H8_43), splicing between the VNTR and the mneoI cassette generates additional mature RNAs, schematically depicted on the right. Lengths are given in the order of loading on the gel. (d) Structure of the H8_43 VNTR-neo splice variants as determined by RT-PCR. Nucleotides important for splicing are bold and underlined; intron sequence is in lowercase.
RT-PCR of the human SVA-mneoI splice variants established that the polypyrimidine tract and branchpoint at the acceptor site are provided by the mneoI HSV TK pA region (Fig. 2d; [13]). I, therefore, decided to replace this part of the cassette by a minimal functional polyadenylation signal [25]. To prevent premature polyadenylation upstream of the reporter cassette, the antisense polyA signal in the fragment was modified (Fig. 3a; for details on functional validation see Additional file 2: Figure S4). Subsequently, all available SVA sequences (H19_27/SVA_E, H8_43/SVA_E, OU3 and OU4) as well as the previously characterized gibbon LAVA_F element [13] were combined with the modified reporter cassette, named mneoM (modified mneo).
Northern blot analysis following transfection into Hela HA cells (Fig. 3b) revealed a considerable reduction in the amount of double-spliced (VNTR-neo stop and mneoM-intron) human SVA transcripts (arrow). Although splicing to the neoR stop codon could not be completely abolished (only one of the three donor sites appears to be used according to RT-PCR analysis), the majority of the transcripts can now contribute to the emergence of G418-resistant colonies in the cell-based retrotransposition assay.
Subsequent co-transfection of the constructs with pJM101/L1RPΔneo yielded retrotransposition rates > 1.9 × 10⁻³ for the human H8_43/SVA_E element. Integration sites determined for three G418-resistant colonies show the hallmarks of L1-mediated retrotransposition: they are flanked by target site duplications (14-16 bp) and terminate with polyA tails of variable length (Additional file 2: Figure S5). The previously characterized human H19_27/SVA_E and LAVA_F elements were both mobilized at about 30% of the rate of H8_43. This is in contrast to published data obtained with the mneoI cassette, which demonstrated a twofold higher mobilization rate for the LAVA element compared to H19_27 [13]. The two orangutan elements retrotransposed at 50-70% of the rate observed for H8_43/SVA_E (Fig. 3c). Overall, the results clearly show that splicing of human SVAs in the context of the established mneoI cassette confounds the results obtained in the cell-based retrotransposition assay.
The VNTR and 5′ hexameric repeats determine mobilization capacity of human SVA
A previous study identified the 5′ hexameric repeat/Alu-like region as the "minimal active human SVA retrotransposon" [16]. The importance of this domain has also been supported by other reports employing deletion analysis [12] or domain swaps [13]. Deletion of the 5′ hexameric repeats alone has been shown to reduce retrotransposition rates by 75% [16]. Results obtained with regard to the function of the VNTR have been contradictory: larger deletions led to a decrease in mobilization, whereas a shorter deletion resulted in an increase in the retrotransposition rate [16]. Here, "VNTR-slippage mutants" generated in the course of re-amplification of the SVA elements by the thermostable polymerase offered the unique opportunity to study the effect of removal of parts of the VNTR in a setting comparable to the situation in vivo, where slippage of the replication polymerase is the most likely mechanism producing changes in VNTR length and structure [4].
One of the deletion mutants tested (ΔVNTR1) lacks the two central {KnGC} arrays; in the other (ΔVNTR2) the 3′ part of the fixed TR region and the entire variable part have been lost through slippage (Fig. 4a). In the cell-based retrotransposition assay, ΔVNTR1 is mobilized at around 30% of the level of the full-length element (similar to the level of H19_27, which has a comparable VNTR length; cf. Figure 3c); ΔVNTR2 reaches only about 5%. As evidenced by Northern blotting, the reduction in mobilization rates cannot be attributed to a decrease in the steady-state level of the RNAs (Fig. 4b). In the case of one of the orangutan elements (OU3), deletion of the VNTR (fusion of the 5′ and 3′ terminal repeat subunits) completely abolished retrotransposition (not shown).
A further set of experiments was designed to establish the function of the 5′ hexameric repeats in the context of the newly identified active SVA_E element and their possible interplay with the VNTR. As shown in Fig. 4c, deletion of the hexamers led to a 60% decrease in the mobilization rate. Combining the hexamer and VNTR1 deletions reduced retrotransposition rates by 80%. In neither case was the RNA steady-state level affected. Taken together, these results suggest that the two domains might act cooperatively to define mobilization capacity.
L1 ORF1p does not determine substrate preference for gibbon LAVA versus orangutan SVA versus human SVA
Ideally, substrate preference of species-specific L1 should be tested using multiple elements derived from that species. A pilot study using genomic copies of gibbon and orangutan L1 elements, however, failed.
Mobilization of both SVA and LAVA is dependent on L1 ORF1p [12,13]. To address a possible intra-species preference of L1 subfamily ORF1-encoded proteins for SVA/LAVA, I generated chimeras containing codon-optimized ORF1 sequences corresponding to the currently active subgroups (consensus sequences) of L1PA4 (gibbon) and L1PA3 (orangutan) and an established inter-ORF and codon-optimized ORF2 available in pBS-L1PA1-CH-mneo [26]. Codon optimization of mouse and human L1 elements has been shown to result in improved transcription, increased protein expression and higher mobilization rates in cell-based retrotransposition assays [26][27][28]. The protein sequences of the ORF1-encoded proteins are shown in Fig. 5, and the general organization of the constructs used in Fig. 6a. The chimeras were first tested for retrotransposition in cis; as shown in Fig. 6b, no major differences were observed. The codon-optimized L1PA1 and chimeric elements lacking the mneoI reporter cassette were then transferred into the episomal pCEP4 vector to assess their capacity to mobilize VNTR-composite elements in trans (Fig. 6c). For this assay the 11 bp deletion in the SINE-R region of the human SVA_E H8_43 was corrected to obtain an element corresponding to the subgroup consensus. This modification did not significantly affect mobilization rates when L1RP was used as the autonomous partner (not shown).
If there is L1 ORF1p-mediated substrate preference then the human element should be mobilized most efficiently by the human L1PA1; orangutan SVA by the L1PA3-PA1 chimera and gibbon LAVA by the L1PA4-PA1 chimera. This, however, was not found to be the case: the human SVA_E element is the most efficiently mobilized with all three ORF1-encoded proteins, followed by orangutan SVA and gibbon LAVA. The finding that the L1PA4-PA1 chimera shows only about 50% of the activity of L1PA1 is not really surprising given the phylogenetic distance between the two L1 subfamilies. However, the very low retrotransposition activity of the L1PA3-PA1 chimera in trans was completely unexpected given that the construct showed only slightly diminished mobilization capacity in cis when compared to L1PA1.
Outside the coiled-coil domain mediating trimerization [29], two of the amino acid exchanges specific to orangutan ORF1p reside in the N-terminal region (T35) and the central RRM (RNA recognition motif) domain (N172), respectively. Both domains have been characterized in human ORF1p with regard to their role in L1 mobilization in cis [30,31]; however, no specific function has been assigned to either of the residues in question (amino acids 35 and 172). In an attempt to identify amino acid exchanges that might be responsible for the reduced mobilization capacity of orangutan (PA3) ORF1p for SVA/LAVA in trans, I mutated the two residues to obtain the sequence present in human (PA1) and gibbon (PA4) ORF1p (T35M, N172T). Although an increase in human SVA H8_43 retrotransposition rates could be observed for both mutants, mobilization levels did not reach those obtained for the human ORF1p (Fig. 6d). Mobilization in cis was not affected by the two mutations (not shown).
Discussion
After Alu and L1, SVA/LAVA are the third largest group of non-LTR retrotransposons in hominoid primates [2]. They can act as insertional mutagens (for review see [32]) and can co-mobilize sequences at both their 5′ [6,7] and 3′ [33] ends. SVAs have also been shown to function as exon traps [7] and to be co-opted as regulatory sequences [34]. Despite this obvious impact on genome evolution and gene expression, their mechanism of mobilization and their amplification dynamics in evolution are not well understood.

Fig. 4 The VNTR and 5′ hexameric repeats determine mobilization capacity of human SVA. (a) VNTR structure of the H8_43 deletion mutants. VNTR subunits are encoded as in Lupan et al. [4]. Subunit arrays are bracketed. The VNTR subunit structure of H19_27 is given for comparison. TR, tandem repeat, the fixed 5′ part of the domain; VNTR, the variable-length central part of the domain. (b) "In-frame" deletions in the VNTR reduce SVA mobilization rates by up to 90%. (c) Deletion of both the central part of the VNTR and the 5′ hexameric repeats has an additive effect. Retrotransposition rates +/− SEM are shown relative to the full-length element (100%). n = 3 for each independent set of experiments.
Estimates based on a phylogenetic study (one in 916 births) pointed at a relatively low in vivo mobilization rate when compared to Alu, the other non-autonomous non-LTR retrotransposon in hominoid genomes (one in 21 births [35]). Results obtained in vitro in a cell-based retrotransposition assay appeared in agreement with these estimates: Hancks and colleagues reported an approximately 30-fold higher mobilization rate for Alu when compared to a (canonical) SVA element [11]. Against this background it has been disputed that SVA is indeed a preferred substrate of the L1-encoded proteins mediating its mobilization.
A recent pedigree-based analysis, however, resulted in a much higher estimate of the SVA in vivo retrotransposition rate (one in 63 births), comparable to that found for L1 (one in 63 births [18]) and in obvious contrast to the low rates observed in vitro. The results presented here now clearly show that SVA can be mobilized with high efficiency in cell culture. The elements previously characterized for their mobilization potential in vitro were identified based on (i) the ability to generate human-specific offspring (H2D [16,33]), (ii) sequence similarity to the SVA_D consensus sequence (H11D [16]) and (iii) sequence identity to a reported disease-causing SVA insertion (SVA_E H19_27 [12,36]), respectively. The results presented here suggest that affiliation to a subgroup containing both polymorphic and fixed elements, taken together with low divergence from the subgroup consensus (Alu-like region and SINE-R) and a VNTR structure corresponding to the subgroup "consensus", could be a suitable basis for the identification of potentially active elements. The results also show that the comparatively low in vitro mobilization rates reported previously can, to a large extent, be attributed to an experimental artefact: splicing of the SVA VNTR to the reporter cassette results in mature transcripts that cannot contribute to the fraction of G418-resistant cells following retrotransposition because they lack the stop codon and polyadenylation signal of the neomycin phosphotransferase. Possibly, the large amounts of double-spliced RNA produced also reduce the overall detectable retrotransposition rate by acting as a "dominant negative": the 5′ hexameric repeat/Alu-like region that constitutes the "minimal active human SVA retrotransposon" [16] and presumably mediates the preferred interaction of SVAs with the L1-encoded proteins is present in the double-spliced RNA.
With regard to SVA functional domains the results obtained provide further support for the importance of the 5′ hexameric repeats in L1-mediated mobilization. Deletion of the domain leads to a decrease of 60% in the retrotransposition rate. Hancks et al. reported a 75% reduction in the context of SVA element H2D [16]. However, the hexameric repeat region of human SVAs is heterogeneous in both sequence and length. In SVA_E elements the TCTCCC repeats are frequently interspersed with Gs at regular intervals (e.g. in the previously described SVA H19_27). Preliminary results suggest that indeed there may be differences between elements with regard to the contribution of the 5′ hexameric repeats to overall mobilization capacity.
Previous studies of the role of the central VNTR yielded conflicting results: whereas complete deletion negatively affected mobilization, partial deletion led to a more than 50% increase [16]. However, the deletion mutants investigated were generated using restriction enzyme digestion, which (i) does not accurately remove arrays of VNTR subunits and (ii) leaves subunits at the 5′ end of the domain and deletes the 5′-most part of the SINE-R as well. Thus, the constructs do not precisely reflect VNTR shortening as it most likely occurs through polymerase slippage in vivo [4]. Experiments performed here with "VNTR-slippage mutants" now provide clear evidence that the VNTR is a major determinant of efficient mobilization of SVA elements in both human and orangutan. For LAVA, the VNTR-composite family expanding in gibbons, it has been shown that either the length or a particular, as yet undefined, VNTR structure mediates efficient mobilization [4]. Thus, the central repetitive domain appears to play a key role in the amplification process across VNTR-composite families in hominoid primates. For a robust conclusion, however, analysis of additional SVA and LAVA elements will be required.
From an evolutionary point of view, VNTR shortening by polymerase slippage could be considered an inbuilt inactivation mechanism. An interesting point to address in the future would be how fast this process occurs compared to the random mutation that leads to loss of activity in Alu elements.
Based on the finding that only a small number of L1 subfamilies were amplified intensively during the burst of Alu and processed pseudogene formation 40-50 million years (myrs) ago, Ohshima et al. hypothesized that "proteins encoded by members of particular L1 subfamilies acquired an enhanced ability to recognize cytosolic RNAs in trans" [37]. A later experimental study, however, could not find any evidence for coevolution between Alu and L1 [38]. Whereas Alu subfamilies differ by nucleotide exchanges and small indels only, VNTR composite retrotransposons display more pronounced differences across hominoid primates: LAVA is the dominant family in gibbon; orangutan SVAs are direct descendants of the ancestral SVA_A as far as the Alu-like domain and SINE-R are concerned; and currently active elements in the hominines derive from SVA_D with its specific deletions in the Alu-like region and SINE-R [2]. In addition, there are marked differences in the subunit structures of the VNTR between LAVA, orangutan SVA and hominine SVAs [4]. Thus, by contrast to Alu, coevolution of VNTR composites and L1 at the lineage/species level appeared to be possible. Given the dependence of VNTR-composite retrotransposition on L1 ORF1p [12,13], changes mediating preferential mobilization of one or the other type (LAVA, orangutan SVA or human SVA) by a particular L1 subfamily could be expected to reside in this protein. The results obtained for the SVA/LAVA elements tested here, however, do not support this hypothesis. Irrespective of the ORF1p encoded in the constructs, the human SVA is the most efficiently mobilized element. A preferred interaction of the human element with host factors involved in retrotransposition in the human cell environment might explain this observation. It will be interesting to see whether the preference of ORF1p for a particular VNTR-composite family changes with the cellular context, e.g. in orangutan or gibbon cells. In addition, it would be desirable to corroborate the results obtained with the analysis of more SVA/LAVA elements, also against the background that the now available orangutan genome build (ponAbe3) permits the generation of more reliable "consensus" VNTR substructures (Additional file 4) and, consequently, a more specific selection of potentially active SVAs from a wider range of sequenced and correctly assembled 5′ full-length elements.
In the absence of coevolution with its autonomous partner L1, SVA/LAVA could also have evolved to evade host repression. Turelli et al. [39] noticed that the human-specific subfamilies SVA_E and SVA_F are "less frequently associated with TRIM28 (a KRAB-zinc finger protein (ZFP) cofactor involved in transcriptional repression) than their older counterparts" and reasoned that "this could be because not enough time elapsed since they invaded the genome for KRAB-ZFPs or other TRIM28-tethering proteins recognizing their sequence to have been selected".
Given the failure to detect Alu-L1 coevolution [38], the finding that L1 ORF1p does not confer substrate preference in human cells did not really come as a surprise. The greatly reduced trans-mobilization activity of the PA3-PA1 chimera, however, did, in particular against the background that the ORF1p encoded appears to be fully functional in L1 retrotransposition in cis. The multiple alignment of the ORF1p sequences reveals five amino acid exchanges specific to the orangutan protein outside the coiled-coil domain required for trimerization (Fig. 5). Substitution of two of these residues (T35 and N172) did not rescue orangutan ORF1p mobilization capacity in trans (compared to human PA1). It remains to be seen whether exchange of the C-terminal divergent amino acids or a combination of mutations (possibly including the orangutan-specific residues in the coiled-coil domain) "restores" activity. From another point of view, the greatly reduced capacity of the orangutan protein to mediate mobilization in trans might explain the lower insertion rate of SVA in the orangutan lineage. Based on a number of 1800 SVA elements in the genome of P. abelii (all lineage-specific), the lineage-specific insertion rate per myr would be ca. 120 (split time from hominines 14-16 myrs ago [10]). By contrast, the human genome harbours 1395 species-specific SVAs [9], resulting in a lineage-specific insertion rate of ca. 280 per myr (split time from chimpanzee 4-6 myrs ago). However, a direct comparison of these numbers might be misleading: to date there is no information available about the SVA expansion dynamics in orangutan over the last 14-16 myrs. An approximately constant rate over the entire period and bursts of amplification are equally possible. In addition, the lineage-specific evolution of SVA's autonomous partner, L1, in the orangutan lineage will have to be taken into account.
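A worked version of this back-of-the-envelope comparison is given below; taking the midpoints of the quoted split-time ranges is my assumption, not a value from the paper.

```r
# lineage-specific SVA insertion rates per million years (myr)
orang_sva <- 1800; orang_split <- 15   # copies; midpoint of 14-16 myrs
human_sva <- 1395; human_split <- 5    # copies; midpoint of 4-6 myrs
orang_sva / orang_split                # ~120 insertions per myr
human_sva / human_split                # ~280 insertions per myr
```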
Conclusions
SVAs can be mobilized with high efficiency in tissue culture; they are indeed a preferred substrate of the L1-encoded proteins. Modification of the retrotransposition reporter cassette to minimize splicing of human SVA facilitates robust comparison of VNTR-composite mobilization across species and provides an essential tool for the analysis of these elements. Results obtained on SVA functional domains confirm earlier data on the role of the 5′ hexameric repeats [16] and assign a critical function to the VNTR, in accordance with published findings for LAVA [4].
The results obtained in human cells do not provide any evidence for coevolution between L1 ORF1p and VNTR composite elements across hominoids, suggesting that host factors most likely were, or are, involved in shaping the interaction between the autonomous and non-autonomous partners, at the root of each of the lineages (Hylobatidae, Ponginae, Homininae) and/or in the cellular environment of the present-day species.
Amplification and cloning of human and orangutan SVA elements
Elements were amplified from genomic DNA using primers in the flanking sequence and Phusion HSII (Thermo Scientific). Orangutan DNA was obtained from the Gene Bank of Primates at the German Primate Center. Primer sequences are provided in Additional file 2: Table S6. To facilitate melting of the VNTR, the denaturation time was extended to 30s and 3% DMSO was added to the reaction mix. Amplicons were subcloned into pJET 1.2 (Thermo Scientific) and sequenced. To obtain complete VNTR sequences, subclones containing the VNTR 5′ and 3′ ends, respectively, were generated using SmaI (H8_43) or MscI (OU3, OU4). 5′ primers localized directly upstream of the CT hexameric repeats and 3′ primers designed to exclude the elements' polyadenylation signals were used for re-amplification. KpnI and NheI recognition sites, respectively, were introduced into the upstream and downstream re-amplification primers. Amplicons were again subcloned into pJET 1.2, sequenced and transferred into pCEP Neo [12] and pCEP_mneoM via KpnI/NheI. The human SVA H8_43 displays an 11 bp deletion in the SINE-R region when compared to SVA_E and to the subgroup consensus sequences. To obtain a plasmid with a consensus SVA_E SINE-R for cross-species comparison, the missing 11 bp were introduced by site-directed mutagenesis (NEB Q5 kit).
Modification of the mneoI reporter cassette: pCEP_mneoM
The minimal polyA signal [25] was excised NotI/ClaI from pGL3basic (Promega) and subcloned into pBII (KS+) yielding pB_syn_pA. The 3′ end of the mneoI cassette (lacking the HSV TK pA signal) was amplified from pCEP Neo [12] using the primers Neo_STOP_Not 5′ GGCGGCCGCCCTCAGAAGAACTCGTC 3′ and mneo_Xho_REV 5′ CCTCGAGACTAAAGGCAAC 3′, subcloned into pJET 1.2 (Thermo Scientific), and subsequently cloned upstream of the minimal pA signal in pB_syn_pA via SacI/blunt-XbaI/blunt and NotI. The polyA signal present in the antisense orientation was then changed to AACAAA by site-directed mutagenesis using the NEB Q5 kit. The fragment containing the modified minimal pA signal, the 3′ part of the neoR coding sequence and the 5′ part of the mneoI intron was then transferred to pCEP Neo NheI/blunt-ClaI/blunt and XhoI to replace the respective part of the original mneoI cassette.
Retrotransposition reporter cassette-containing constructs
All SVA and LAVA elements were cloned KpnI/NheI upstream of the respective reporter cassette. Details on the construction of the human SVA_E H8_43 deletion mutants can be obtained from the author. L1PA chimeras were generated by exchanging ORF1 in pBS-L1PA1-CH-mneo [26] NheI/BsmBI with the respective gibbon or orangutan sequence obtained as synthesized and cloned fragments in pMA-RQ (Invitrogen).
Tissue culture and retrotransposition assays
Hela HA cells (a gift from J. Moran) were cultured in DMEM (Gibco) with 4.5 g/l glucose and 10% FCS. Cell-based retrotransposition assays were carried out as described previously [12,40]. Briefly, 1.5 × 10⁵ cells per well were seeded in 6-well plates. 24 h after seeding, cells were transfected with 0.5 μg each of the L1 expression plasmid and the mneoI/mneoM-tagged reporter construct using X-tremeGENE 9 (Roche) according to the manufacturer's instructions. G418 selection (Sigma; 400 μg/ml) was started 72 h after transfection and continued for 12 days. Resulting colonies were then stained with Giemsa and counted.
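For orientation, the sketch below shows how colony counts from such an assay could be expressed relative to a reference construct (as in the figure legends, mean +/− SEM relative to the reference set to 100%); the counts and object names are hypothetical.

```r
# G418-resistant colonies per well from replicate transfections (placeholders)
ref     <- c(210, 180, 195)   # reference element
element <- c(60, 75, 66)      # element of interest

rel <- element / mean(ref) * 100               # percent of reference activity
c(mean = mean(rel), sem = sd(rel) / sqrt(length(rel)))
```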
Genomic DNA isolation and characterization of H8_43 mneoM de novo insertions
Genomic DNA of expanded G418-resistant colonies was isolated using the Monarch Genomic DNA Purification Kit (New England Biolabs). The 3′ ends of the insertions were determined using EPTS-LM PCR as described previously [12]. Subsequently, the 5′ ends of the de novo integrations were amplified using primers in the upstream genomic sequence.
Generation of codon-optimized orangutan and gibbon L1 ORF1
As a basis for codon optimization, consensus sequences for the evolutionary youngest subgroups of gibbon (N. leucogenys) L1PA4 (L1Nomleu) and orangutan L1PA3 were generated: the sequences of all full-length L1PA3 and L1PA4 elements were retrieved using the UCSC Genome Browser table browser function (P. abelii, ponAbe3; N. leucogenys, nomLeu3). The sequences were aligned and, in the case of orangutan, filtered manually to identify elements displaying the 129 bp 5′UTR deletion [41] characteristic of the evolutionary youngest L1PA3 subgroup. Sequences were sorted manually into subfamilies and subfamily consensus sequences were generated. Final alignments of the subfamily members to the respective subfamily consensus sequence were inspected and random mutation rates (coding sequence only; ORF1 and ORF2 assessed separately) were determined. Finally, the ORF1p consensus sequences of the subfamilies displaying the least deviation from the subfamily consensus were selected as the basis for codon optimization. Codon optimization used the sequence and codon frequency of the target pBS-L1PA1-CH-mneo [26] as template. The optimized sequences were complemented with the pBS-L1PA1-CH-mneo ORF1-flanking sequences for cloning and synthesized by Thermo Scientific. The subcloned fragments obtained were transferred into pBS-L1PA1-CH-mneo, yielding pBS-L1PA3/PA1-CH-mneo (orangutan) and pBS-L1PA4/PA1-CH-mneo (gibbon).
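The consensus-building step can be illustrated with a small base-R sketch; this is not the pipeline actually used, and the input vector 'aligned' (gapped, equal-length sequences exported from a multiple alignment) is a hypothetical toy example.

```r
# majority-rule consensus from aligned, equal-length sequences
majority_consensus <- function(aligned) {
  mat <- do.call(rbind, strsplit(toupper(aligned), ""))   # one row per sequence
  cons <- apply(mat, 2, function(column) {
    counts <- table(column[column != "-"])                # ignore gap characters
    if (length(counts) == 0) "-" else names(which.max(counts))
  })
  paste(cons, collapse = "")
}

aligned <- c("ATG-CA", "ATGGCA", "ATG-CT")                # toy aligned sequences
majority_consensus(aligned)                               # returns "ATGGCA"
```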
Additional file 1: Human and orangutan SVAs referred to in the study. Human SVA_E1: human SVA_E elements displaying a 6 bp insertion in the SINE-R. Genomic positions, target site duplications (TSD), polymorphic status and the VNTR subunit structure are shown. Arrays of VNTR subunits are boxed. Boxes highlighted in red indicate VNTR subunits providing splice acceptors for splicing to the mneoI cassette. Orangutan SVAs: orangutan SVAs genotyped and amplified. Buschi, Babu, Dunja, Kiki and Elsi are P. abelii individuals for which short read archives are available. Numbers (011 etc.) refer to individuals genotyped on genomic DNA. Positional information refers to the primary amplicon. Fields highlighted in yellow indicate the animals from which the respective element was amplified. TSD, target site duplication; TD, transduction. Orangutan SVA VNTR: VNTR subunit structure of the orangutan SVAs tested for their retrotranspositional activity. Arrays of VNTR subunits are boxed.
Additional file 2: Figure S1. Reference (hg19) and amplicon sequence of human SVA_E H8_43. Binding sites of amplification primers are highlighted in yellow; Alu-like domain and SINE-R are highlighted in green; the amplicon part marked in red could not be resolved using Sanger sequencing. Target site duplications are italicized and underlined. Figure S2. Reference and amplicon sequences of orangutan SVA OU3.
Binding sites of amplification primers are highlighted in yellow; Alu-like domain and SINE-R in green. Target site duplications are italicized and underlined. The 3′ transduction is highlighted in grey (not included in the re-amplification product). Figure S3. Reference and amplicon sequences of orangutan SVA OU4. Binding sites of amplification primers are highlighted in yellow; Alu-like domain and SINE-R in green. Target site duplications are italicized and underlined. Figure S4. The minimal polyA signal used in the mneoM cassette facilitates correct polyadenylation of neo cDNA. 3′ RACE analysis to assess correct polyadenylation of the neomycin phosphotransferase gene using the minimal functional polyA signal [25]. The minimal polyA signal (pGL3-derived) was tested downstream of an SV40 promoter-driven neomycin phosphotransferase cDNA. The stop codon is shown in red; the polyA signal and GU-rich tract are underlined. The polyA signal mediating premature polyadenylation of elements upstream of the reporter cassette is italicized and underlined. Figure S5. Human SVA H8_43 mneoM de novo integrations. The L1 endonuclease cleavage site on the bottom strand is indicated in blue. Extra G residues at the 5′-ends of the insertions are shown in green; target site duplications in red. Neo, neomycin phosphotransferase gene. Table S6. Sequences of oligonucleotides used in amplification and re-amplification of human and orangutan SVA elements. Restriction enzyme recognition sites present in the re-amplification primers are underlined.
Additional file 3: Subgroups of human SVA_E elements containing both fixed and polymorphic elements. SVA_E3: VNTR subunit structure of SVA_E subgroup E3 containing four fixed and four polymorphic elements. Based on divergence from subgroup consensus (div; Alu-like region and SINE-R) and VNTR structure, the two fixed elements on chromosome 1 would be candidates to test for activity. AF - allele frequency as provided in Stewart.
Additional file 4: VNTR structure of orangutan SVA_PA_7 elements. Ten orangutan SVA_PA_7 elements in ponAbe3 were selected at random and their VNTR subunit structure was determined based on the code developed in Lupan et al. (2015). CONSENSUS ponAbe2 - consensus VNTR structure of the SVA_PA_7 elements identifiable in ponAbe2. The VNTR structure of the two elements tested (OU3, OU4) is given for comparison.
|
v3-fos-license
|
2020-10-19T18:11:33.339Z
|
2020-09-19T00:00:00.000
|
224953304
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://bpspsychub.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/lcrp.12182",
"pdf_hash": "847cf5d84e4ac061dac0da6471e781d25ab79c4e",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43038",
"s2fieldsofstudy": [
"Psychology",
"Law"
],
"sha1": "d6eaa111b3987548b849c297b7d6c84b0c49d8d3",
"year": 2020
}
|
pes2o/s2orc
|
The effects of cognitive load during an investigative interviewing task on mock interviewers’ recall of information
Purpose. Although investigative interviewers receive training in interviewing techniques, they often fail to comply with recommended practices. Interviewers are required to actively listen, accurately remember information, think of questions to ask, make judgements, and seek clarification, whilst conducting interviews with witnesses, victims, or suspects. The current study examined the impact of increased cognitive load on mock interviewers' recall of a witness's account. Method. Participants took the role of an investigative interviewer in one of three conditions, high cognitive load (HCL), moderate cognitive load (MCL), or no cognitive load (NCL). Participants watched a video-recorded free narrative of a child witness during which they followed condition-relevant task instructions. Each participant rated their perceived cognitive load during their task and then recalled (free and cued recall) the content of the witness's account. Results. Participants in the HCL and MCL conditions perceived higher cognitive load and demonstrated poorer performance on the free recall task than those in the NCL condition. Participants in the HCL condition demonstrated poorer performance on the cued recall task compared to participants in the NCL condition. Conclusions. The cognitive demands required to complete an investigative interview task led to an increased perceived cognitive load and had a negative impact on recall performance for mock interviewers. Accurately recalling what has been reported by a witness is vital during an investigation. Inaccurate recall can impact on interviewers' questioning and their compliance with recommended interviewing practices. Developing and practising interview techniques may help interviewers to better cope with the high cognitive demands of investigative interviewing.
Investigative interviewing guidelines and protocols have been developed for use in criminal investigations, child protection enquiries, and intelligence-gathering settings. For example, the PEACE model is recommended for interviewing suspects and witnesses (Bull & Soukara, 2009; Kassin et al., 2010; Milne & Bull, 1999). Similarly, the Achieving Best Evidence guidelines (ABE; Ministry of Justice [MoJ], 2011) and the National Institute for Child Health and Human Development protocol (NICHD; Lamb et al., 2018; Orbach et al., 2000) have been developed, and are recommended for interviewing vulnerable witnesses 1 .
There is, therefore, an abundance of guidelines which provide advice to practitioners on the optimal approach to obtaining precise and complete statements from interviewees (Bull, 2010; Hershkowitz, 2011; Oxburgh et al., 2015). However, adhering to these guidelines remains a challenge for investigative interviewers (Lamb, 2016; Schreiber-Compo et al., 2012). This may be because interviewing is a complex cognitive task for the interviewer (Lafontaine & Cyr, 2016; Powell, 2002). In an exploratory study, the cognitive load experienced by interviewers was identified as a possible barrier to compliance with recommended techniques (Hanway & Akehurst, 2018). Contrary to recommendations, interviewers' cognitive burden may result in them interrupting the witness or asking questions that have already been answered (Schreiber-Compo et al., 2012). However, as noted by Kleider-Offutt et al. (2016), the impact of multiple cognitive demands on investigative interviewers has not been empirically examined. The current study explored the cognitive demands of a mock interview task and tested the effects of cognitive load on the recall of a witness's account.
Cognitive load and task performance
Cognitive load is the mental workload placed on individuals when they are required to undertake activities (Hart & Staveland, 1988; Van Acker et al., 2018). It signifies working memory use and the demands placed on cognitive resources when carrying out multiple and competing tasks (Dias et al., 2018; Engström et al., 2013). The capacity limitations of working memory mean that without the rehearsal of received sensory information, the processing of information is restricted (van Merrienboer & Sweller, 2010). This can lead to an attentional bottleneck where attending to one element of information causes other cognitive processes, and the associated information, to be neglected (Strayer & Drews, 2007).
Controlled processing is needed to complete cognitive tasks that require attention and the management of information (Bargh, 1984). However, this type of processing is slow and effortful and relies on our limited attention capacity (Strayer & Drews, 2007). High levels of focused attention can be accomplished with effort (Bargh, 1984;Schneider & Shiffrin, 1977), but errors occur if an individual cannot meet the mental demands required to effectively complete the tasks (Paas & van Merrienboer, 1993). Additionally, during complex tasks, there is an increase in cognitive demand; thus, the amount of mental effort required also increases (Kleider-Offutt et al., 2016). The attentional demands required to perform complex tasks may lead to cognitive load and errors, or a reduction in performance (Engle & Kane, 2004;O'Donnell & Eggemeier, 1986).
Cognitive load theory (CLT) identifies three types of load (Sweller, 1988, 1994; Sweller et al., 1998) that are relevant in a variety of applied settings (Galy et al., 2018). The first type, intrinsic load, relates to the load imposed by the fundamental nature of the information being processed and the natural complexity of the task (Schnotz & Kurschner, 2007). The second, extraneous load, is induced by other external factors, such as time pressure (Galy et al., 2012). The third type of load described within CLT is germane load, which is the load used for learning, the development of skills, and the application of skills in a novel situation (Paas et al., 2004). Notably, germane load is required for the construction and automation of schemas for a particular task (Galy et al., 2018).
Cognitive load in investigative interviews
For investigative interviewers, there are several inherent (i.e., intrinsic) features of interviewing that may contribute to a cognitive load, including the generation of questions, identifying topics to pursue, and seeking clarification from interviewees. Interviewers are required to actively listen to, and accurately remember, what interviewees are saying (Fisher et al., 2014). They may also be required to take notes and formulate hypotheses to account for the events described. As such, interviewers must attend to multiple cognitive processes (Kleider-Offutt et al., 2016). At the same time, they are required to adhere to best practice guidance, such as building rapport and forming appropriate questions (Hanway & Akehurst, 2018).
Open questions typically lead to detailed, free narrative responses from interviewees (Dale et al., 1978; Hershkowitz, 2001). Hence, asking open questions is an important feature of an investigative interview (Danby et al., 2017). Interviewers must then accurately remember the often-numerous details provided by interviewees, but interviewers' recall of information may be limited and inaccurate (Hyman-Gregory, 2009). The interviewer may introduce this erroneous information to the witness, which may have an impact on the subsequent accuracy and reliability of the witness's testimony (Gudjonsson, 2010; Loftus & Pickrell, 1995). In doing this, interviewers can affect the amount and quality of evidence provided by witnesses (Brown & Lamb, 2015; Gudjonsson, 2010).
In sum, obtaining accurate and detailed accounts from witnesses during investigative interviews can be difficult (Hope & Gabbert, 2019;La Rooy & Dando, 2010). Interviewers hold information provided by witnesses in their memory, whilst at the same time assessing that information, thinking of questions to ask, and identifying the correct order in which to ask those questions (i.e., which topic to ask questions about first; Hanway & Akehurst, 2018). The complex cognitive functions required to complete these tasks are likely to have an impact on interviewers' performance and their judgements (Ask & Landstrom, 2010;Nordstrom et al., 1996).
The current research
The current research examined the effect of increased cognitive demands on participants' perceived cognitive load during a mock interview task. The tasks for each condition were designed to replicate the cognitive demands present during an investigative interview (i.e., to listen to the witness, remember information, judge information, and think of questions to ask; Fisher et al., 2014;Hanway & Akehurst, 2018). We explored the effect of increased cognitive demands on the amount and accuracy of information recalled from a witness's statement by participants who took on the role of interviewers.
Based on previous cognitive load research (e.g., Dias et al., 2018; Nordstrom et al., 1996), we hypothesized that during the interview and recall tasks, participants in a high cognitive load (HCL) condition would report higher perceived cognitive load (PCL) compared to those in a moderate cognitive load (MCL) condition, who would report higher PCL than those in a no cognitive load (NCL) condition. Second, we hypothesized that participants in the HCL condition would recall fewer details, and would have a lower accuracy rate for their free recall of a witness's statement, than those in the MCL condition, who would recall fewer details and have a lower accuracy rate than those in the NCL condition. Third, we predicted that participants in the HCL condition would have a lower percentage accuracy score when answering questions about a witness's statement than those in the MCL condition, who would have a lower percentage accuracy score when answering questions about the witness's statement than those in the NCL condition.
Design
For this independent-groups study, there was one between-subjects factor, cognitive load, with three levels: high cognitive load (HCL); moderate cognitive load (MCL); and no cognitive load (NCL; control). The dependent variables were perceived cognitive load (PCL), the amount and accuracy of statement details provided by participants during free recall, and the accuracy of their cued recall.
Participants
An a priori G*Power analysis (Faul et al., 2009) for an omnibus one-way ANOVA with three groups indicated that a sample size of 102 participants was required. This was based on power = 0.95, a large effect size of f = 0.40, and the traditional alpha = .05. A large effect on recall accuracy was predicted on the basis of research showing large effects of working memory capacity on memory accuracy (e.g., Jarrold et al., 2011) and large effects of cognitive load on recall accuracy for the spoken word (e.g., Hunter & Pisoni, 2018). In total, 102 participants (staff and students) were recruited via a university participant pool and workplace advertisements at the university. Participants were invited to take part in a study that examined what it is like to be an investigative interviewer. No monetary incentives were offered to participants, but first-year undergraduate psychology students were offered one course credit for their participation. Participants attended for one test session, which lasted approximately 45 minutes. Only adults with English as a first or primary language were recruited. The aim of the study was to assess participants' recall of information provided by a witness when under varying degrees of cognitive load. Therefore, as experience can have an impact on task performance when under cognitive load (Paas et al., 2004), prior investigative interviewing experience was an exclusion criterion.
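For illustration, the same a priori calculation (three-group one-way ANOVA, Cohen's f = 0.40, alpha = .05, power = .95) can be reproduced approximately with statsmodels rather than G*Power; small rounding differences between the two tools are possible, so this is a sketch rather than a re-derivation of the study's figure.

```python
# Sketch of the a priori sample-size calculation described above.
# effect_size is Cohen's f; the result should be close to the reported N = 102.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(effect_size=0.40, alpha=0.05,
                                        power=0.95, k_groups=3)
print(round(n_total))  # total sample size across the three conditions
```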
The sample comprised 68 females and 34 males. Participants were aged 18 to 71 years (M age = 25.95 years, SD = 10.02, the median age was 22 years). To ensure equal numbers of participants (N = 34) in each condition, they were pseudo-randomly allocated to one of the three conditions (HCL, MCL, NCL). Data from one participant were removed from the analysis as their responses suggested a poor understanding of the task and a z-score for accuracy rate of the witness's account was an outlier at −3.41 (Field, 2013). Data from two further participants were removed due to recording equipment failure. The final sample, therefore, comprised 99 participants who were aged 18 to 71 years (M age = 26.03 years, SD = 10.09, median age = 22 years) 2 . There were 67 females and 32 males. For the final analyses, there were 34 participants in the high cognitive load (HCL) condition, 33 in the moderate cognitive load (MCL) condition, and 32 in the no cognitive load (NCL) condition.
Stimulus event
To enable an accurate reflection of a real-world interview, the interview room setting, interview procedure, and recording of the interview were designed to correspond with published guidance for interviewing child witnesses (MoJ, 2011). An eight-year-old child witness was interviewed about an event she had experienced (a recent birthday party). The witness was given an open prompt by the interviewer (i.e., 'Please tell me everything you can remember about the party you went to'). This question and the witness's subsequent free recall were digitally recorded. The recording of the interview captured a head and shoulders view of the witness. The child's recorded free recall account lasted for 6 minutes and 30 seconds.
Perceived cognitive load measure
To measure participants' perceived cognitive load, the National Aeronautics and Space Administration, Task Load Index (NASA-TLX) was used. This questionnaire combines information about the magnitude and source of six related factors to derive a sensitive and reliable estimate of workload (Hart & Staveland, 1988).
The NASA-TLX uses a multi-dimensional rating scale questionnaire to evaluate participants' subjective ratings of mental workload; the scale items are mental demand, physical demand, temporal demand, performance, effort, and frustration. These items were selected following analysis of the primary factors that do (and do not) define a subjective experience of workload (Hart, 2006). Each item is measured on a 20-point scale from low to high (except for performance which is measured on a scale from good to poor). A weighted score is obtained by completing 15 pairwise comparisons of the six scale items. For each pair, one item is selected that is more relevant for the participant when completing the task (Hart & Staveland, 1988). For this study, and following the scoring procedure devised by Hart and Staveland (1988), a PCL score out of 100 was calculated by multiplying each scale item score (rating score) by the number of times that item was selected in the pairwise comparisons (adjusted score); the six weighted item scores were then totalled and divided by 15 to obtain an overall PCL score. The NASA-TLX was designed to be used during, or immediately after, a task and has been widely used in a variety of settings to measure the cognitive load perceived by participants when they complete a task (e.g., Hart, 2006;Rizzo et al., 2016).
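As a worked illustration of the weighting procedure just described, the sketch below computes an overall NASA-TLX score from six subscale ratings (expressed on the conventional 0-100 scale) and the number of times each subscale was chosen across the 15 pairwise comparisons. The example ratings and weights are invented and are not taken from the study data.

```python
# Hart & Staveland (1988) weighted NASA-TLX score, as described above:
# each rating is multiplied by the number of times its subscale was chosen
# in the 15 pairwise comparisons, summed, and divided by 15.
def nasa_tlx_weighted(ratings, weights):
    """ratings: subscale -> rating (0-100); weights: subscale -> times chosen (sums to 15)."""
    assert sum(weights.values()) == 15, "weights must come from 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in ratings) / 15

ratings = {"mental": 70, "physical": 10, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 35}   # invented
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}       # invented
print(nasa_tlx_weighted(ratings, weights))  # overall workload score out of 100
```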
Procedure
After reading the information sheet and providing written informed consent, participants were allocated to one of the three conditions: HCL, MCL, or NCL. The lead author conducted the research and followed written instructions for all conditions. The experimenter was aware of each participant's condition. To reduce experimenter effects, instructions for each condition were read out verbatim from a written script and all questions were asked verbatim from a prepared script. All participants were instructed to take the role of a police interviewer and were informed that a child had witnessed an event, which the participant needed to investigate. Participants were asked to watch and listen to the witness's recorded interview and were informed that they would be asked some questions after they had watched the interview. In the HCL condition, participants were given the following additional instructions, 'Whilst watching the interview, I would like you to consider carefully what the witness is telling you so that you clearly understand the witness's experience of the event she is describing. Your other task is to identify follow-up questions to ask the witness once she has given her statement. So, whilst you are listening to the child, please think about the wording of your questions and in what order the questions should be asked'. In the MCL condition, participants were given the following additional instructions, 'Whilst watching the interview, I would like you to consider carefully what the witness is telling you, so that you clearly understand the witness's experience of the event she is describing'. In the NCL (control) condition, no further instructions were given to participants.
After receiving their specific instructions, all participants watched the recorded interview on a computer screen wearing headphones to reduce distractions. Immediately after watching the interview with the child witness, all participants completed the first PCL measure (i.e., they recorded their perceived cognitive load during the interview task, using the NASA-TLX scale presented via an android tablet application). Participants then carried out a 15-minute distraction task, which required them to work through some unrelated number puzzles.
Following the distraction task, participants were asked to recall as much information, in as much detail as they could, from the witness's recorded statement. After participants finished their free recall, they were asked if there was anything further they could recall about the interview. Once participants had completed the free recall task, they were asked 40 cued recall questions about the content of the witness's interview (e.g., 'What did the witness say was 'quite tricky'?'; 'Who drove the witness home?'). The order of these questions was randomized across participants. All participants were audio-recorded whilst they gave their free narrative and answered the cued recall questions. Participants then completed a second self-report of their PCL for the recall task (i.e., their perceived cognitive load when they were recalling the child's statement and answering the 40 questions). This was again completed using the NASA-TLX scales.
For completeness, as participants in the HCL condition had been asked to think about questions to ask the witness, we then asked them to write down 10 follow-up questions they would ask the witness if they were the investigator in the case. To ensure all participants completed the same tasks, those in the MCL and NCL conditions were also asked to write down 10 questions they would like to ask the witness 3 .
Finally, participants were asked to rate, using 7-point scales, their confidence in their memory accuracy, from [1] not at all confident to [7] extremely confident; the extent to which they felt motivated to remember the content of the child's interview, from [1] not at all motivated to [7] extremely motivated; the extent to which they found remembering the child's statement easy or difficult, from [1] very easy to [7] very difficult; and the extent to which they found coming up with questions easy or difficult, from [1] very easy to [7] very difficult. Participants in the HCL condition were also asked to rate how motivated they were to think about questions whilst they were listening to the child's statement, from [1] not at all motivated to [7] extremely motivated.
As a manipulation check, participants were then asked to write down the instructions they were given by the researcher before they watched the child's account. Demographic details including age and gender were also recorded. A verbal debrief was provided for all participants and they were thanked for their time and effort.
Coding
Free recall coding
Verbatim transcripts of the participants' audio-recorded free recall of the witness's statement were coded for quantity and accuracy of details reported. Details were coded as person, action, object, setting, or temporal details. For example, participant accounts were coded as follows: 'Amelia (1-person) trotted (1-action) on her horse (1-object) in the stables (1-setting)'. If the participant mentioned a detail relating to time (e.g., 'at the end of the day'), it was coded as a temporal detail. Consistent with previous research and to facilitate assessment of overall accuracy, details were coded as correct, incorrect, or confabulations (Wright & Holliday, 2007). A detail was deemed (1) correct, if it was present in the witness's account and was correctly reported by the participant (e.g., 'she was called Amelia'); (2) incorrect, if a reported detail was discrepant from the witness's account (e.g., the participant recalls 'pull the reins back to go' but the witness actually said 'pull the reins back to stop'); and (3) confabulated, if a detail was reported in the participant's account which was not mentioned at all by the witness (e.g., the participant reported 'they got into a car' but the witness did not mention a car at all during her account). The accuracy rate for the free recall accounts was calculated by dividing the total number of correct details reported by the total number of details reported (i.e., correct plus incorrect plus confabulations). Additionally, to assess indicators of uncertainty in participants' recall of the witness's account, ambiguities were coded (e.g., 'I'm not sure, it was something like. . .').
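The accuracy rate defined above is a simple proportion; a minimal sketch with invented counts:

```python
# Free-recall accuracy rate: correct details divided by all details reported
# (correct + incorrect + confabulated), as defined above. Counts are invented.
def free_recall_accuracy(correct, incorrect, confabulated):
    total = correct + incorrect + confabulated
    return correct / total if total else float("nan")

print(f"{free_recall_accuracy(correct=48, incorrect=4, confabulated=2):.2%}")  # 88.89%
```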
Inter-coder reliability for the free recall accounts was assessed by selecting 20 interview transcripts (20%), which were coded by an independent scorer. Intra-class correlation coefficients (ICC) using absolute agreement were computed for total details, correct details, incorrect details, confabulations, and ambiguities. This analysis indicated that the inter-coder reliability was 'good' for the coding of incorrect details and ambiguities, and 'excellent' for the coding of total details, confabulations, and correct details (Koo & Li, 2016).
Cued recall coding
Answers to 40 cued recall questions were scored as fully correct (e.g., in relation to the location of the event, 'Pink Mead Farm': 2 points), partially correct (e.g., 'Mead stables': 1 point), don't know response (0 points), and incorrect (e.g., 'Crofton stables': −1 point). Total accuracy could therefore range from −40 (all questions answered incorrectly) to 80 (all answers fully correct). The scores were added, and a percentage accuracy score for each participant was calculated.
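A sketch of this scoring rule follows, assuming the percentage accuracy score is expressed relative to the 80-point maximum (the text does not state the denominator explicitly); the example answers are invented.

```python
# Cued-recall scoring: +2 fully correct, +1 partially correct, 0 don't know,
# -1 incorrect, summed over 40 questions and expressed as a percentage of the
# 80-point maximum (an assumption about the denominator).
SCORES = {"full": 2, "partial": 1, "dont_know": 0, "incorrect": -1}

def cued_recall_percentage(answers, n_questions=40):
    assert len(answers) == n_questions
    total = sum(SCORES[a] for a in answers)
    return 100 * total / (2 * n_questions)

answers = ["full"] * 25 + ["partial"] * 6 + ["dont_know"] * 5 + ["incorrect"] * 4
print(cued_recall_percentage(answers))  # 65.0
```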
Manipulation check
All 99 participants passed the manipulation check and accurately reported their instructions. As per their instructions, participants in the NCL condition confirmed they were required to watch the interview carefully and participants in the MCL condition confirmed they were to watch the interview and consider what the witness was saying. Participants in the HCL condition confirmed that they were asked to think of questions to ask the witness, as if they were the interviewer in the case, and to watch the interview carefully.
Hypothesis testing
To examine our hypotheses, we conducted a series of between-groups ANOVAs.
Perceived cognitive load
For the 'encoding of interview' task that the participants were first asked to undertake, Levene's test indicated that the assumption of homogeneity of variance for PCL scores had been violated, F(2, 96) = 3.94, p = .023. Therefore, the more robust Welch equality of means test was examined. As predicted, there was a significant difference in PCL scores between the three conditions, F(2, 62.10) = 7.70, p = .001, with a large effect size, ηp² = .20 (see Table 1). Tukey HSD post-hoc comparisons showed there was no significant difference between PCL scores for participants in the HCL and MCL conditions (p = .209). However, participants in the HCL and MCL conditions scored higher for PCL than those in the NCL condition (HCL, p < .001; MCL, p = .033). For the 'recall' task, there was no significant difference between the three conditions in terms of PCL scores, F(2, 96) = 1.21, p = .304, ηp² = .02 (see Table 1).
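The analysis pipeline for this comparison can be sketched as follows; the PCL arrays are synthetic placeholders rather than the study data, and pingouin's Welch ANOVA stands in for the Welch test reported here since scipy does not provide one directly.

```python
# Sketch: check homogeneity of variance (Levene) and, if it is violated,
# fall back to a Welch ANOVA, as in the analysis reported above.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(0)
pcl = pd.DataFrame({
    "condition": ["HCL"] * 34 + ["MCL"] * 33 + ["NCL"] * 32,
    "PCL": np.concatenate([rng.normal(60, 12, 34),      # placeholder values
                           rng.normal(55, 15, 33),
                           rng.normal(45, 20, 32)]),
})
groups = [g["PCL"].to_numpy() for _, g in pcl.groupby("condition")]

levene = stats.levene(*groups)                 # homogeneity of variance check
if levene.pvalue < .05:
    print(pg.welch_anova(data=pcl, dv="PCL", between="condition"))
else:
    print(stats.f_oneway(*groups))             # standard one-way ANOVA
```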
Free recall
With respect to the total number of free recall details reported about the witness's statement, there were no significant differences between the three experimental conditions, F(2, 96) = 2.20, p = .117, ηp² = .04 (see Table 2). In terms of accuracy rate of the details recalled, there was a difference between the three conditions with a large effect size, F(2, 96) = 8.54, p < .001, ηp² = .15. Post-hoc comparisons of percentage accuracy indicated that there was no significant difference in percentage accuracy for participants in the HCL condition compared with those in the MCL condition (p = .476). However, percentage accuracy for participants in the HCL condition was lower than for those in the NCL condition (p < .001). Accuracy was also lower for those in the MCL condition compared with those in the NCL condition (p = .015), as shown in Table 2. For details of mean scores for correct details, incorrect details, confabulations, and ambiguity, see the Supplementary Materials.
Cued recall questions
For the accuracy of cued recall question responses, there was a difference between the three conditions for percentage accuracy score, with a large effect size, F(2, 96) = 7.87, p = .001, ηp² = .14. Tukey HSD post-hoc comparisons indicated that the percentage accuracy score for participants in the HCL condition was not significantly different from those in the MCL condition (p = .114). The percentage accuracy score for participants in the MCL condition was also not significantly different from those in the NCL condition (p = .130). However, the percentage accuracy score for participants in the HCL condition was significantly lower than for those in the NCL condition (p < .001; see Table 3). For details of mean scores for correct, partially correct, incorrect, and don't know responses, see the Supplementary Materials.
Motivation, confidence, and task difficulty
A series of Pearson's correlations were calculated to determine whether the dependent variables of motivation, confidence, and task difficulty were correlated with each other. There were significant, but moderate, correlations between the majority of variables (see the Supplementary Materials). Therefore, the assumption of an absence of multicollinearity was met, and to reduce type 1 error, a one-way between-groups MANOVA was conducted to investigate differences between the conditions for participants' motivation, confidence, and how difficult they found the tasks. The MANOVA indicated that there was no significant multivariate effect: Wilks' Λ = .95, F(8, 186) = .62, p = .764, ηp² = .03 (for details of scores across each of the dependent variables for each condition, see the Supplementary Materials). There were no significant differences at the univariate level.
Exploratory analysis
As our confirmatory analysis showed that increased cognitive demand for participants in the HCL and MCL conditions was associated with increased perceived cognitive load during the 'encoding the interview' task and also a lower recall accuracy for the free recall and question tasks, we conducted further exploratory analyses. A Pearson's correlation showed that there was a relationship between PCL and accuracy of free recall, r = −.279, p = .003. When the sample was split by condition, a linear regression analysis indicated that in the HCL condition, PCL was a predictor of participants' free recall accuracy rate (b = −.40, p = .018), accounting for 16% of the variance. However, PCL was not a predictor of free recall accuracy for participants in the MCL (b = −.08, p = .653) or NCL conditions (b < .001, p = 1.00) (see Figure 1). PCL was also not a predictor of cued recall percentage accuracy scores across any of the conditions (HCL, b = −.042, p = .815; MCL, b = −.121, p = .502; NCL, b = −.047, p = .797).
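A sketch of these per-condition regressions follows; both variables are z-scored so that the slope corresponds to a standardized beta, and the data frame, its column names, and the values in it are hypothetical stand-ins for the study data.

```python
# Per-condition simple regression of free-recall accuracy on PCL (sketch).
# Variables are standardized so the slope is a beta coefficient; R^2 = rvalue^2.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({                         # synthetic stand-in for the data
    "condition": np.repeat(["HCL", "MCL", "NCL"], 33),
    "PCL": rng.normal(55, 15, 99),
    "accuracy": rng.normal(0.85, 0.08, 99),
})

def standardized_beta(sub, x="PCL", y="accuracy"):
    zx = (sub[x] - sub[x].mean()) / sub[x].std(ddof=1)
    zy = (sub[y] - sub[y].mean()) / sub[y].std(ddof=1)
    res = stats.linregress(zx, zy)
    return res.slope, res.pvalue, res.rvalue ** 2

for cond, sub in df.groupby("condition"):
    beta, p, r2 = standardized_beta(sub)
    print(cond, round(beta, 2), round(p, 3), round(r2, 2))
```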
Discussion
We examined the effects of increased cognitive demands on perceived cognitive load and subsequent recall of an interviewee's account in a mock investigative interviewing task. As predicted, participants who were required to complete tasks that are intrinsic to investigative interviewing (i.e., listening, remembering, judging the information provided, and generating follow-up questions to ask) perceived a higher cognitive load than did participants who were required to complete tasks with fewer cognitive demands (i.e., merely watching and listening to a witness's statement). Participants who were asked to complete more cognitively demanding tasks were less accurate, when freely recalling information provided by the witness, than those who were asked to perform less cognitively demanding tasks. Additionally, when asked cued questions about the witness's account, participants who had completed the more cognitively demanding tasks whilst watching the interview provided less accurate responses than those who had performed the less demanding tasks. Taken together, these results suggest that the demands placed on the participants' cognitive resources when carrying out the multiple tasks of an investigative interview resulted in a reduction in performance on the tasks.
In exploratory analyses, we found a relationship between PCL and recall accuracy rate. When participants' scores for the three conditions were examined separately, we found the relationship was moderated by the tasks undertaken by participants (i.e., for the HCL condition, higher levels of perceived cognitive load predicted performance in terms of free recall accuracy). When more controlled and focused attention was required for the task of generating questions to ask, there was an increase in perceived cognitive load and a reduction in performance. The reduction in recall performance may have been due to a limited capacity to carry out multiple cognitive tasks in working memory (Kahneman, 1973;Reisberg, 2007). However, more automatic processes (i.e., listening and watching the witness) were less affected by cognitive load (Schneider & Shiffrin, 1977). This research provides the first empirical evidence that increased cognitive demands inherent in an investigative interviewing task result in higher perceived cognitive load as well as reduced recall performance for participants. For the current experimental task, which was designed to reflect real-world interviewing procedures, participants were asked to focus on certain intrinsic features of interviewing, including listening, remembering information, and thinking of questions to ask. Whilst our experimental design included a manipulation of cognitive load based on realistic processes for interviewers, we recognize that investigative interviewing in the field is a complex task and likely requires more cognitive processing than was required for our participants. In practice, interviewers are required to build rapport, interact with the witness, and consider other aspects of the case (Schreiber-Compo et al., 2012). Interviews, therefore, occur in a social context, whereby interviewers also perceive witnesses' actions and make judgements about their credibility, reliability, and well-being (Ask & Landstrom, 2010;Hanway & Akehurst, 2018). These extraneous factors, and that of time pressure (i.e., temporal demand), were not present during the current study. However, cognitive load is additive (Leppink et al., 2015). Therefore, the additional factors identified as present when conducting investigative interviews will likely contribute to a higher cognitive load for interviewers in practice (Hanway & Akehurst, 2018;Nordstrom et al., 1996).
Cognitive load theory suggests that automatic processing relies on schemas to reduce effort (Paas et al., 2004). With training, and skill development, more schemas are potentially built. However, if a task is cognitively demanding, and the intrinsic and extraneous load exceeds capacity, then there is little opportunity to form these schemas (Schnotz & Kurchner, 2007). Cognitive load, therefore, may also have an impact on interviewers' skill development. It may be that, despite their training and knowledge of best practice guidance, the intrinsic and extraneous cognitive demands imposed on investigative interviewers each time they conduct a unique interview leaves little capacity for building schemas. Consequently, interviewers are not afforded the opportunity to rely on more automatic processing and they experience significant cognitive load. Thus, interviewers do not always comply with their training (CJJI, 2014;Cross & Hershkowitz, 2017;Powell & Barnett, 2015).
For this study, our aim was to examine the effect of holding information in mind whilst judging that information and thinking of questions to ask a child witness. We also aimed to reduce extraneous load not directly related to the task. Note-taking can be cognitively demanding in itself and may divide attention between listening to the witness, formulating questions, and recording information (Piolat et al., 2005;Schreiber-Compo et al., 2012). Therefore, in the HCL condition, participants were not permitted to note down the questions they were thinking about whilst they were listening to the child. An inevitable limitation of this design was that we could not be sure what participants were thinking during their task. To mitigate this limitation, and to ensure participants had understood their instructions, we included a manipulation check after the recall phase to check participants' understanding of what they had been asked to do. Future research might examine the effects of note-taking for the interviewer.
Whilst the design of this study replicated the cognitive demands experienced by interviewers during real-world interviews, a limitation is that our participants were novice interviewers, who had not received any training in investigative interviewing. As such, the current findings may have limited generalizability to trained or experienced interviewers. However, interviewers in the real world are also required to think about, and comply with, their training when undertaking interviews, which may increase their cognitive load (Hanway & Akehurst, 2018;Schreiber Compo et al., 2012). Considering this, and the additional intrinsic and extraneous factors, it is possible that interviewers in the field will experience more cognitive load than the novice participants in our study. In turn, interviewers' performance in the field may be impacted to a greater extent than was the case for participants in the current experiment. Further research should focus on aspects of investigative interviewing in context. It would be interesting to explore the impact that training and experience have on interviewers' cognitive load as well as the effects of cognitive load on other aspects of interviewer performance, such as the types of questions asked. As some of the variation seen in the current study may be accounted for by individual differences in cognitive ability, this may also be an interesting area for further research, for example, individual differences in working memory capacity (Engle, 2002).
Finally, the sample size estimation may also be a limitation for this study. The sample size was based on a predicted large effect size, which has practical relevance in an applied setting. We considered the approach to be appropriate and in line with similar research in the investigative interviewing literature (e.g., Hoogesteyn et al., 2020;Kontogianni et al., 2018). However, given the sample sizes in each condition (N = 32, 33 and 34), a larger sample would be needed to detect smaller effects, and significant differences between conditions, in the post-hoc analyses.
The current findings suggest that the cognitive demands required to complete an investigative interview can lead to an increased cognitive load and a reduction in recall accuracy of what was said by an interviewee, which may have an impact on interviewers' questioning and compliance with recommended interviewing practices. Providing interviewers with the opportunity to develop and practise their techniques, so that skills relating to interviewing become more automatic, along with better management of factors which may contribute to additional cognitive load, such as time pressure, may help interviewers to better cope with the high cognitive demands of investigative interviewing.
Supporting Information
The following supporting information may be found in the online edition of the article: Table S1. Mean correct, incorrect, confabulations, and ambiguity free recall scores for each condition. Table S2. Mean correct, partially correct, incorrect, and don't know cued-recall scores for each condition. Table S3. Pearson correlations, means, and standard deviations associated with confidence, motivation, and task difficulty. Table S4. Questionnaire scores for each condition.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2001-03-01T00:00:00.000
|
2890890
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.nature.com/articles/6691692.pdf",
"pdf_hash": "b6b80e04f38b3a25b15def5134856aa2404f3130",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43039",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "b6b80e04f38b3a25b15def5134856aa2404f3130",
"year": 2001
}
|
pes2o/s2orc
|
Sensitivity to radiation-induced chromosome damage may be a marker of genetic predisposition in young head and neck cancer patients
We previously showed that levels of chromosome damage induced by ionizing radiation were, on average, higher in G 2 and G 0 lymphocytes of breast cancer patients than of normal healthy controls, but that there was no correlation between the results in the two assays. We proposed that enhanced sensitivity to G 2 or G 0 irradiation was a marker of low-penetrance predisposition to breast cancer, and have recently demonstrated heritability of sensitivity in families of breast cancer cases. We have now applied these assays to patients with head and neck cancers, for whom there is epidemiological evidence of inherited predisposition in addition to environmental causes. The mean frequency of radiation-induced G 2 aberrations was higher in the 42 patients than in 27 normal controls, but not significantly so. However, cases less than 45 years old were significantly more sensitive than normals of the same age range (P = 0.046), whereas there was no difference between patients and normals of more than 45 years. Also, there was an inverse correlation between G 2 sensitivity and age for patients but not for normals. Radiation-induced micronuclei in G 0 cells were more frequent in 49 patients than in 31 normals (P = 0.056) but, as with the G 2 assay, the greatest difference was seen between early-onset patients and young normals. Again there was an inverse correlation with age for patients but not for normals. Six patients with enhanced toxicity to radiotherapy were G 2 tested and four other such patients were G 0 tested; levels of chromosome damage were not significantly greater than in patients with normal reactions. Both assays were used on 64 individuals (39 patients, 25 normals) and there was no significant correlation between the results. We suggest that a proportion of early-onset head and neck cancer patients are genetically predisposed and that each of the two assays detects a different subset of these cases. © 2001 Cancer Research Campaign http://www.bjcancer.com
We have shown that lymphocytes of breast cancer patients are, on average, more sensitive than those of normal healthy controls to the induction of chromosome damage by ionizing radiation. This was true for cells irradiated in either the G 2 or G 0 phases of the cell cycle (Scott et al, 1994, 1998). The G 2 assay involved the analysis of metaphase cells for structural aberrations whereas, in the G 0 assay, chromosome damage was measured as the induction of micronuclei (MN). Our G 2 observations have now been confirmed in three independent studies in different laboratories (Parshad et al, 1996; Patel et al, 1997; Terzoudi et al, 2000).
Using the G 2 assay on 105 normal individuals we found a skewed distribution of induced aberration yields, with 5-10% of donors being sensitive outliers. This proportion was much higher (42%) among 135 breast cancer patients . With the MN assay we found that 27% (35 of 130) of patients were of elevated sensitivity, compared with 10% (7 of 68) of normals. When we performed both assays on the same 80 patients we found no evidence of a correlation between aberration yields in the G 2 assay and MN yields in the G 0 assay suggesting that the cellular defects leading to enhanced sensitivity are different in these cell cycle stages.
We have recently shown that the degree of sensitivity in the G 2 assay is an inherited characteristic in the families of patients with breast cancer and could be attributed to the segregation of one or two genes in each family (Scott et al, 2000). We also have preliminary evidence that elevated sensitivity in the G 0 /MN assay is a heritable trait in first-degree relatives of breast cancer patients.
These observations in breast cancer patients and their families have led us to suggest that such enhanced chromosomal radiosensitivity may be a marker of cancer-predisposing genes. Support for this hypothesis comes from the demonstration that many inherited cancer-prone conditions (e.g. ataxia-telangiectasia, Li-Fraumeni syndrome, hereditary retinoblastoma) exhibit evidence of this type of elevated radiosensitivity (reviewed in Scott et al, 1999) but, in contrast to the situation in our breast cancer studies, the gene defects responsible for cancer predisposition in these rare syndromes are generally strongly expressed (highly penetrant). We propose that the defects leading to the enhanced radiosensitivity that we have seen in our studies are associated with a lesser risk of cancer and therefore do not lead to a strong family history (low-penetrance genes). There is good epidemiological evidence that the inherited risk of breast cancer is greater than can be accounted for by mutations in the highly penetrant genes BRCA1, BRCA2 and TP53 (Teare et al, 1994;Lichtenstein et al, 2000;Peto and Mack, 2000).
There is also indirect evidence for the existence of low-penetrance, inherited, predisposing factors for cancers other than breast; for example, lung (Sellers, 1996), colorectal (Cannon-Albright et al, 1988) and head and neck cancers. For the latter group, Foulkes et al (1995) found, in a case-control study, that even when allowing for the known environmental risk-factors such as alcohol and tobacco consumption, cancer in a first-degree relative was a significant independent risk-factor.
In the present study, we have investigated the chromosomal radiosensitivity of head and neck cancer patients and normal healthy controls, using both the G 2 and G 0 assays. Because it has been suggested that genetic factors may be particularly important in young patients with head and neck cancers, where there will be a reduced impact of cumulative environmental factors (Son and Kapp, 1985), our selection of cancer cases has been biased in favour of such early-onset patients. Our sample of patients also included a small number of cases who had shown adverse reactions to radiotherapy, because we have previously shown that the average radiosensitivity of breast cancer patients of this type may be greater than that of normally-reacting patients, depending upon the nature of the reactions and the type of assay .
Patients and normal controls
Individuals tested with the G 2 and/or the G 0 assay comprised 4 groups:
1. Healthy subjects (normals), mainly from within the staff of this Institute but including a small number of spouses of patients
2. Head and neck cancer patients at the Christie Hospital before they received radiotherapy (pre-therapy cases)
3. Patients after radiotherapy (9 months to 10 years post-therapy, mean 5.7, SD 2.5 years). These will be referred to as post-therapy cases
4. A small group of patients after radiotherapy (2-5 years, mean 3.7, SD 1.2) for whom the treating clinician identified radiation necrosis as a late complication following a standard radiotherapy schedule. These are designated 'highly radiosensitive' (HR) patients according to the nomenclature of Burnet et al (1998).
The majority (36 of 50) of the patients had tumours of the larynx, other sites being mouth, tongue, tonsil, oral cavity and oropharynx. The distribution of sites was not significantly different between patients groups 2-4. Over the period of this study the proportion of early onset (<45 years) laryngeal cancer cases admitted to this hospital was 3.3%, whereas in our sample the proportion was 21% (11 of 52), indicating our preferential selection of younger cases. Details of tobacco and alcohol consumption were obtained from those patients who volunteered this information, but not from normals. Table 1 shows the characteristics of the various patient groups. Permission for the study was obtained from the local Ethics Committee.
The G 2 assay
Full details are given in Scott et al (1999). Briefly, whole-blood cultures were set up in pre-warmed (37˚C) and pre-gassed (5% CO 2 , 95% air) medium. One hour later, lymphocytes were stimulated with phytohaemagglutinin (PHA) and cultured for 70 h, at which time the culture medium was replaced, without centrifugation, with fresh medium. Cells were irradiated (or mock-irradiated) at 72 h with 0.5 Gy 300 kV X-rays, colcemid was added 30 min later and at 90 min after irradiation culture vessels were plunged into ice chippings. Subsequent centrifugation, hypotonic treatment and fixation were carried out at 4˚C. From 1 h before irradiation to the time of harvesting, cultures were kept at 37˚C.
Metaphase preparations were made with standard procedures and Giemsa stained. Slides were randomized and coded for analysis and 50-100 metaphases were scored from both irradiated and control samples. The frequency of aberrations in control samples was subtracted from that in irradiated samples to give the induced yield. The majority of aberrations were chromatid breaks which were misaligned with respect to the intact sister chromatid or, if aligned, had an achromatic region of greater than the width of the chromatid. Smaller achromatic lesions (gaps) and occasional radiotherapy-induced chromosome-type aberrations in patients were ignored.
The G 0 micronucleus assay
These experiments were performed before we had standardized our MN assay so the procedures differ in several respects from those used in our studies of breast cancer patients.
Heparinized whole blood was kept overnight (16-24 h) at room temperature, then 0.5 ml aliquots were added to 4.5 ml of culture medium which comprised 82% RPMI 1640 (Flow Laboratories, Ashby de la Zouche, UK), 15% fetal calf serum (FCS) (Gibco BRL, Lewes, UK), 1% L-glutamine (Gibco BRL) and 2% of a mixture of penicillin and streptomycin (both at 5000 units ml -1 ). The medium was in T-25 flasks (Corning Costar, High Wycome, UK) and was pre-warmed (37˚C) and pre-gassed (5% CO 2 , 95% air). One hour after setting up the cultures they were irradiated (or mock-irradiated) with 3.0 Gy 137 Cs gamma rays at 3.3 Gy min -1 and returned to the incubator for 1 h, at which time PHA was added at a final concentration of 1.0 µg ml -1 . At 24 h after PHA stimulation, 3 ml of culture medium was pipetted from each culture flask and replaced with fresh, pre-warmed and pre-gassed medium, then cytochalasin-B was added at a final concentration of 6 µg ml -1 to enable the identification of post-mitotic cells as binucleates (Fenech and Morley, 1985). At 72 h after stimulation, 'clean' cytospin preparations were made, first by separating the lymphocytes from other cells (mainly erythrocytes) in the culture medium by layering the contents of each flask onto 5 ml of Lymphoprep (Nycomed, Amersham, UK) in a 12.5 ml centrifuge tube and centrifuging at 1100 rpm for 30 min. Then, an aliquot of the lymphocyte-rich buffy coat was removed with a small pipette, suspended in 5 ml of PBS and centrifuged at 1500 rpm for 5 min. The latter procedure was repeated and cells were resuspended in 1 ml of PBS. Aliquots of 100-200 µl were then pipetted into cytofunnel chambers and spun onto clean microscope slides by cytocentrifugation for 2 min at 1000 rpm. Cells were fixed in 90% methanol, dried, stained with 10% Giemsa for 10 min, rinsed in distilled water, dried and mounted.
Slides were randomized and coded and a minimum of 100 binucleate cells was scored for MN from both irradiated and control samples.
The principal differences between this protocol and that which has now become our standard procedure are: a radiation dose of 3 Gy (3.5 Gy in our standard assay), a delay of 1 h between irradiation and addition of PHA (cf. 6 h), fixation at 72 h after stimulation (cf. 90 h) and cell preparation by cytocentrifugation (cf. conventional harvesting with a short hypotonic treatment). Cells were scored using similar criteria for both assays but by different microscopists.
Statistical methods
Assay variability was assessed using standard one-way analysis of variance. Aberration yields were compared using Mann-Whitney U-tests, supplemented with Kruskal-Wallis tests where there were more than two groups being compared. Proportions of sensitive cases were compared using Fisher's exact tests. Spearman's rank correlations were used to look at associations between aberration yields and age. A significance level of 0.05 was used throughout.
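For illustration, this statistical toolkit can be sketched with scipy on placeholder data; the only figures below taken from the paper are the sensitive/non-sensitive counts reported later in the Results (13 of 42 patients, 4 of 27 normals), and all other values are synthetic.

```python
# Sketch of the tests used: Mann-Whitney U (yields), Kruskal-Wallis
# (more than two groups), Fisher's exact test (proportions of sensitive
# cases) and Spearman's rank correlation (yield vs. age).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
patients = rng.normal(122, 17, 42)   # induced G2 aberrations per 100 cells
normals = rng.normal(118, 15, 27)
hr_cases = rng.normal(125, 18, 6)

print(stats.mannwhitneyu(patients, normals, alternative="two-sided"))
print(stats.kruskal(patients, normals, hr_cases))

# 2x2 table: rows = patients/normals, columns = sensitive/not sensitive
print(stats.fisher_exact([[13, 29], [4, 23]]))

ages = rng.integers(25, 80, size=42)
print(stats.spearmanr(patients, ages))
```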
RESULTS
A total of 69 individuals were tested with the G 2 assay, 80 with the G 0 assay (Table 1) and 64 with both (see Figure 5). When both assays were used, this was with the same blood sample.
The G 2 assay
The mean spontaneous yield of aberrations in the various patient groups was slightly, but not significantly, above the level of 1.2 ± 1.5 per 100 cells in normal donors.
To assess assay reproducibility, six normal donors were tested on two (four donors) or three (two donors) occasions. The intraindividual coefficient of variation (CV) for radiation-induced aberration yields, which is a measure of assay error, was 7.3%, very similar to the value of 7.0% which was our previous estimate from repeat assays on 28 normal donors .
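As an illustration, the intra-individual CV can be computed as each donor's CV across their repeat induced yields, averaged over donors; whether the published value was derived in exactly this way (rather than, say, from ANOVA variance components) is an assumption, and the repeat values below are invented.

```python
# Intra-individual coefficient of variation across repeat assays (sketch).
import numpy as np

repeats = {"donor_1": [110, 121], "donor_2": [98, 104],
           "donor_3": [130, 119, 126]}          # invented repeat yields
cvs = [np.std(v, ddof=1) / np.mean(v) for v in repeats.values()]
print(f"mean intra-individual CV = {100 * np.mean(cvs):.1f}%")
```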
The mean yield of induced aberrations in the 27 normals tested in this study was 117.7 ± 14.5 per 100 cells (Table 2), which is higher than that from our earlier investigation of 105 normals (97 ± 15, Scott et al, 1999). This is likely to be because the samples from the two studies were scored by different microscopists and probably reflects differences in the inclusion of small gaps in the scores (see above). Although the mean yield in the 42 patients was higher than in normals, for none of the three patient subgroups (pre-therapy, post-therapy or highly-radiosensitive) was this increase statistically significant (Table 2, Figure 1). The highest yields were seen in the post-therapy patients (127.0 ± 19.7) but this level was not significantly (P = 0.13) above that in the pre-therapy group (117.3 ± 14.4). There was no indication that the scores for the six highly-radiosensitive (HR) patients were higher than those of the 20 post-therapy cases with normal reactions to radiotherapy.
A method of comparing different groups of individuals, other than simply using mean values, is to choose a cutoff value between a normal and a sensitive response based on the healthy donors, and to compare the proportion of normals above this value with the proportion of patients whose yields are above the cutoff value (Table 2, Figure 1). Previously, we have chosen the 90th percentile as the cutoff. Using this criterion, the cutoff value in the present study was 135 aberrations per 100 cells. This actually gave 15% (4 of 27), not 10%, sensitive normals because the G 2 score for several individuals fell exactly on the cutoff value. For all 42 patients, the proportion of sensitive cases was 31% (13 of 42) but this was not significantly higher (P = 0.16) than the 15% of sensitive normals. Of the various patient subgroups, only the post-therapy group had a sensitive proportion (45%, 9 of 20) that was significantly higher than normals (P = 0.045). This proportion of sensitive post-therapy patients was higher than that for pre-therapy cases (13%, 2 of 16), but the difference did not quite reach statistical significance (P = 0.067).
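A sketch of this cutoff approach on placeholder data (the study's individual yields are not available here):

```python
# 90th percentile of the normal donors' induced yields as the sensitivity
# cutoff; individuals at or above the cutoff are classed as sensitive.
import numpy as np

rng = np.random.default_rng(3)
normals = rng.normal(118, 15, 27)     # placeholder induced yields
patients = rng.normal(122, 17, 42)

cutoff = np.percentile(normals, 90)
print(f"cutoff = {cutoff:.0f} aberrations per 100 cells")
print(f"sensitive normals:  {np.mean(normals >= cutoff):.0%}")
print(f"sensitive patients: {np.mean(patients >= cutoff):.0%}")
```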
There was no indication of any influence of age on radiosensitivity for normal donors (r = 0.002, P = 0.99, Figure 2), but for patients there was an inverse correlation with age at diagnosis (r = 0.32, P = 0.038, Figure 2). It should be pointed out that the average age of the patients was greater than that of the normals (Table 1). To further investigate the influence of age on sensitivity in the assay we have stratified the patients into early (≤45 years) and normal (>45 years) onset cases. The mean induced G 2 yield of early-onset cases (127.2 ± 18.6, Table 3) was greater than that of young (<45 years) normals (112.9 ± 13.5, P = 0.12) and, when the difference between patients and normals was expressed in terms of the proportion of sensitive cases, the difference was statistically significant (38% sensitive patients, 0% sensitive normals, P = 0.046). On the other hand, mean yields and sensitive proportions were very similar for patients and normals above the age of 45 years (Table 3). There was a wide range in smoking and alcohol consumption in both groups, the mean consumption being higher in the older patients, the difference reaching statistical significance for smoking but not for alcohol use (Table 4). There was no significant correlation between the induced G 2 yield and smoking or alcohol consumption. There was no influence of gender on either spontaneous or induced aberration frequencies.
[Figure 1 caption (partial): The cutoff used to define the sensitive population is indicated by the solid vertical line, and the mean aberration yields of each group are shown as broken vertical lines; see also Table 2.]
[Figure 2 caption: The relationship between induced G 2 aberration yields and age at diagnosis (patients = closed symbols) or at the time of testing (normals = open symbols); see also Table 3.]
[Figure caption (partial): The vertical and horizontal lines indicate the cutoff values used to define sensitivity in the two assays.]
The MN assay
The spontaneous MN yield in the patients was not significantly different from the level of 3.5 ± 2.6 in normals.
Assay error for induced MN yields, estimated from repeat tests on six normal donors (three tested twice and three tested three times) was 6.2%, less than our previous estimate of 13% from repeat tests on 14 normals .
The mean yield of induced MN for all 49 patients (55.6 ± 5.8 per 100 cells) was higher than that of the 31 normals (50.6 ± 10.2), on the borderline of significance (P = 0.056, Table 2). When the patients were stratified into their various subgroups (Table 2, Figure 3) mean yields were higher than normals but the level of statistical significance was less, because of the relatively small numbers of patients in each subgroup, except for the four HR patients whose mean yield (62.0 ± 9.8) was significantly above the normals (P = 0.011). However, the more appropriate group to compare with the HR cases are the post-therapy patients with a normal response to therapy. The yield in HR patients was not significantly higher than that in these normal responders (54.1 ± 8.4). The response of pre-and post-therapy patients was not significantly different. The range of values for patients was greater than that of normals (Figure 3).
Using the 90th percentile of healthy donors to distinguish sensitive from normal responses gave a cutoff value of 60 MN per 100 cells.
[Figure caption fragment: radiation-induced MN yields and G2 aberrations for the same 64 donors, using the same blood sample for both assays (see also Table 4). Closed symbols are patients and open symbols are normals. The vertical and horizontal lines indicate the cutoff values between normal and sensitive responses in the G0 and G2 assays, respectively.]
[Table 4 caption: smoking and alcohol consumption in early- or normal-onset patients tested with the G2 or G0 assays. Not all patients volunteered this information; the number of responses is indicated.]
There was no difference in spontaneous or induced yields of MN between males and females.
Both assays
A total of 64 individuals were tested with both assays on the same blood sample. These comprised 25 normals and 39 patients (16 pre-therapy, 19 post-therapy and four HR cases). There was no significant correlation between the results of the two assays (r = 0.05, P = 0.81 for normals, r = 0.40, P = 0.13 for patients, see Figure 5). The proportion of individuals who were sensitive in both assays (5% of those tested, Figure 5) was very close to that predicted if the results of both assays are completely uncorrelated (6%). This was also true for the various subgroups of donors.
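The "predicted if uncorrelated" figure is simply the product of the two marginal sensitive proportions in this 64-donor subset. A trivial illustration follows; the marginal values are placeholders chosen only to roughly reproduce the 6% quoted above, not the study's actual proportions.

```python
# Expected fraction sensitive in both assays if the two responses are independent.
p_sensitive_g2 = 0.25   # fraction of the 64 donors sensitive in the G2 assay (assumed)
p_sensitive_mn = 0.25   # fraction sensitive in the G0 micronucleus assay (assumed)
print(f"expected sensitive in both if independent: {p_sensitive_g2 * p_sensitive_mn:.1%}")
```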
DISCUSSION
We have previously argued that enhanced chromosomal radiosensitivity may be a marker for low-penetrance predisposition to breast cancer. We have now applied both the G2 and G0 micronucleus assays to patients with head and neck cancers, for which there is epidemiological evidence of inherited risk in spite of a strong environmental influence, particularly through tobacco and alcohol usage (Morita et al, 1994; Copper et al, 1995; Foulkes et al, 1995).
The G2 assay
With the G2 assay, although the mean yield of aberrations and the proportion of sensitive cases were higher for all of the patient groups compared with the normals, this increase was not statistically significant (Table 2). However, when patients were stratified on the basis of age of onset of disease, early-onset cases (<45 years) were significantly more sensitive than normals in this age group, whereas later-onset cases (>45 years) were of very similar sensitivity to normals of corresponding age (Table 3).
Also, there was a significant negative correlation between aberration yields and age for patients but not for normals. If G 2 chromosomal radiosensitivity is indicative of genetic predisposition to head and neck cancers, as we have suggested for breast cancer, the above results would indicate that for early-onset cases there is a genetic contribution to risk, but not so for normal-onset cases. For the latter, environmental influences may predominate. It should be noted that smoking and alcohol consumption were higher in the latter group (Table 4). There is some evidence that head and neck cancers in young adults may be clinically different from those in older patients, tending to be more anaplastic and consequently more aggressive (Son and Kapp, 1985) although this difference has not been seen in all studies (Von Doersten et al, 1995).
These results for head and neck cancer patients differ from those for breast cancer cases in that there was no age-dependence for G 2 sensitivity in the latter group . The proportion of young head and neck cases that were sensitive (38%) was similar to that for all breast cancer patients (42%), but since early-onset head and neck cancers represent <5% of all cases (references in Son and Kapp, 1985), our results with the G 2 assay would suggest a considerably lower genetic component in the overall risk of head and neck cancer than for breast cancer. Terzoudi et al (2000) recently reported that the mean G 2 sensitivity of 185 patients with various cancers was significantly higher than that of 25 normals. Among the patients were 20 cases of laryngeal cancer whose G 2 scores were higher than those of the normals, although the statistical significance of this increase was not given and the ages of the patients were not specified.
Enhanced sensitivity of G 2 lymphocytes of head and neck cancer patients to the chromosome-damaging agent, bleomycin, has been reported in several studies (references in Cloos et al, 1996). In a large case-control study of risk-factors for head and neck cancer, in which age, history of tobacco and alcohol usage, and bleomycin G 2 sensitivity were recorded, it was shown that the latter parameter is a biomarker of cancer susceptibility, since it modulates the risk from carcinogen exposure (Cloos et al, 1996). It has also been shown that, as in the case of G 2 X-ray sensitivity , there is a strong inherited component in G 2 bleomycin sensitivity (Cloos et al, 1999). However, G 2 response to X-rays cannot simply be regarded as a surrogate for response to bleomycin because, although breast cancer cases show enhanced X-ray sensitivity, they exhibit a normal bleomycin response (Hsu et al, 1989). Also, unlike our present observations on head and neck cancer patients, Cloos et al (1996) found a significant positive correlation between age and G 2 bleomycin sensitivity in 313 such patients.
The fact that we were unable to distinguish between patients who had shown late HR reactions or normal responses to radiotherapy with the G2 assay agrees with our studies on breast cancer patients, where this assay was only able to distinguish patients with acute HR reactions. In the present study and that on breast cancer patients there was an indication that non-HR patients tested post-therapy were more sensitive than pre-therapy patients, but in neither case was this difference statistically significant. The possibility that radiotherapy may alter the response of lymphocytes in the G2 assay requires further investigation on the same group of patients tested before and after treatment.
The micronucleus assay
As we found in our studies of breast cancer patients , in the present investigations we found no significant correlation between the results of the G 2 and G 0 assays. This suggests that different mechanisms are responsible for enhanced sensitivity in the two tests and that these assays are independent markers of predisposition to both breast and head and neck cancers.
Using either the mean MN yields or the proportion of sensitive cases, there was better discrimination between patients and normals with this assay than with the G 2 assay (Table 2). However, as with the G 2 assay, this difference was seen mainly in early-onset patients where 54% were sensitive compared with 9% normals ( Table 3). The inverse correlation between MN yields and patient age differs from that for breast cancer patients, where no significant trend was seen . Further quantitative comparisons with the MN and breast cancer data are probably of limited value because of differences between the assays used in the two studies (Materials and methods). Rached et al (1998) showed that the average sensitivity of 15 cancer patients was greater than that of 15 normals, using a lymphocyte MN assay. The patients included eight cases of head and neck cancer but their individual MN scores and ages were not given.
There was a suggestion of enhanced mean sensitivity of the four patients who had shown adverse late reactions to radiotherapy, compared with 23 normally-reacting cases, but the difference was not significant. In a study of a larger number of breast cancer patients we obtained better discrimination between severe late reactors and normal reactors but, again, there was a complete overlap of values for the two groups, which obviously limits the value of the assay for predictive purposes .
Our main finding is that both assays are able to identify chromosomally radiosensitive groups of early-onset patients who may be genetically predisposed to head and neck cancer, each assay detecting a different subgroup of these patients.
|
v3-fos-license
|
2021-12-04T16:04:23.947Z
|
2021-12-02T00:00:00.000
|
244857998
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://granthaalayahpublication.org/journals/index.php/granthaalayah/article/download/4258/4466",
"pdf_hash": "796eb27c766e42b19ea8955f8f870bdf9f16e2c0",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43041",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"sha1": "5b1b4852e7a20a45e31461d93e3574141d307b47",
"year": 2021
}
|
pes2o/s2orc
|
EFFECT OF CORPORATE SOCIAL RESPONSIBILITY ON INVESTMENT EFFICIENCY OF QUOTED OIL AND GAS FIRMS IN NIGERIA
How to cite this article (APA): Ibrahim, K.F.A., Onyekachi, A.S. (2021). Effect Of Corporate Social Responsibility on Investment Efficiency of Quoted Oil and Gas Firms in Nigeria. International Journal of Research GRANTHAALAYAH, 9(11), 122–137. doi: 10.29121/granthaalayah.v9.i11.2021.4258
INTRODUCTION
Although oil has played a key role in the degradation of the natural environment and in the emergence of a monocultural economy in Nigeria, it must be acknowledged that the oil and gas sector has, on balance, brought more benefit than harm to the country at large (Ugwukah and Ohaja, 2016), even though the substantial expenditures involved in social responsibility activities may affect a company's performance (Ishola and Ishola, 2019). Petroleum companies are major players in the energy market and carry considerable responsibility in the international industry as the world's principal fuel source (Vassiliou, 2018). The sector is classified into three segments: upstream, the business of oil and gas exploration and production; midstream, transportation and storage; and downstream, which comprises the refining and marketing of oil and gas (Vassiliou, 2018).
Nevertheless, these activities are not performed in a vacuum; firms operate within an environment in which their operations are carried out for profit maximization and business success (Idowu, 2014). Corporate Social Responsibility (CSR) refers to the integration of financial considerations, the surroundings in which a corporation operates, and people's wellbeing and welfare into the firm's activities, conduct, morality and conscientious efforts, in a way that can be monitored and evaluated by the company's management and auditors (Carroll, 2016; Mogaka, 2016).
CSR generally deals with social matters. Typical CSR activities include improving access to schooling and learning among local communities and the wider public, providing better healthcare opportunities, and improving their ecological environments (Freeman and Dmytriyev, 2017).
In light of the above, this study seeks to investigate the effect of CSR on the investment efficiency (IE) of quoted oil and gas firms in Nigeria, in order to ascertain whether CSR has a positive or negative effect on CSR-performing firms.
STATEMENT OF THE PROBLEM
A lot of studies have been carried out on the effect of CSR on financial performance, but very few have examined the effect of CSR on investment efficiency. Past works studied the effect of CSR on investment efficiency in developed countries (Benlemlih and Bitar, 2015; Cook et al., 2018), but very few studies have been conducted in developing countries, especially Nigeria. Several researchers claim that strong CSR participation is associated with higher corporate performance and greater firm value (Benlemlih and Bitar, 2015). CSR undertakings could also cause conflicts of interest among interested parties (Krüger, 2015).
The main difference between this study and the past studies above is that it examines the effect of CSR on the investment efficiency of quoted oil and gas firms in Nigeria, since little work has been done specifically on this sector, particularly in Nigeria. The study is based on quantitative analysis of the effect of CSR using the audited annual reports and accounts of oil and gas firms quoted on the floor of the Nigerian Stock Exchange (NSE) as at 31st December 2019.
RESEARCH HYPOTHESES
The null hypotheses formulated for this study are as follows:
Ho1: CSR Charitable Donation Expenditure does not significantly affect the investment efficiency of oil and gas firms in Nigeria.
Ho2: There is no significant relationship between CSR Expenditure on Education and the investment efficiency of oil and gas firms in Nigeria.
Ho3: There is no significant relationship between CSR Societal Expenditure and the investment efficiency of oil and gas firms in Nigeria.
Ho4: CSR Health Expenditure does not significantly affect the investment efficiency of oil and gas firms in Nigeria.
Ho5: CSR Environmental Expenditure does not have a significant effect on the investment efficiency of oil and gas firms in Nigeria.
Ho6: There is no significant relationship between CSR Sports Expenditure and the investment efficiency of oil and gas firms in Nigeria.
SCOPE OF THE STUDY
The study is delimited to seven (7) oil and gas firms quoted on the floor of the NSE as at 31st December 2019, owing to the availability of disaggregated CSR data, and covers a period of ten (10) years (2010-2019). Year 2010 was chosen as a significant starting point because it is the year in which the "Nigerian Oil and Gas Industry Content Development (NOGICD)" was established, and ending in 2019 makes the period a full decade, ensuring reasonable and reliable results since the annual reports are available for all of these years.
LITERATURE REVIEW AND THEORETICAL FRAMEWORK
2.1. CONCEPT OF CSR
According to Mallouh and Tahtamouni (2018), when a company moves beyond an operationally myopic view of its charitable obligations to the locality in which it operates, values the wellbeing of its immediate surroundings, and behaves as a responsible corporate citizen, it takes a further step towards boosting its financial wellbeing. CSR can be viewed as a deliberate action towards both internal and external stakeholders that goes beyond the purview of the law (Amodu, 2017). Carroll (2016) stated that, in general, CSR can be understood to mean the strategies and practices that businesses introduce to ensure that people and corporate investors, other than the business owners, are protected and considered in the way the business is planned and operated.
1) Corporate Social Responsibility Charitable Donation Expenditure (CSRCDE):
Such humanitarian gestures may be financial (cash assistance) or non-financial, such as food, clothing, housing, relief materials, services, toys, vehicles, and blood or organ donation (Madugba and Okafor, 2016; Ohaka and Ogaluzor, 2018).
2) Corporate Social Responsibility Expenditure on Education (CSREDE):
Examples of these expenses are scholarship awards, youth development programmes, the provision of learning amenities, the donation of buildings and equipment to improve teaching and the educational setting, the purchase of stationery, the donation of school buses, the construction of classroom blocks, the donation of computers to schools, scholarships for indigent pupils and students from poor homes and for gifted children as encouragement, and the building and donation of staff quarters for teachers (Ezeji and Okonkwo, 2016; Madugba and Okafor, 2016; Tijani et al., 2017).
3) Corporate Social Responsibility Societal Expenditure (CSRSE):
This includes, among other things, rewards paid to workers as expenses incurred on behalf of the public, gifts to the public and the community, rural development, investing in women through vocational training, employment and healthcare spending, security and work for the needy, aid, provision of public benefits, social privileges, consumer safety, physical wellbeing, and giving a proportion of profits to community development (Mandal and Banerjee, 2015; Mentor, 2016; Cho et al., 2019).
4) Corporate Social Responsibility Health Expenditure (CSRHE): This is the firm's spending on health, both for its employees and for the community (stakeholders), including the health needs of the needy within the society and the reduction of transport-related contamination and pollution in the surroundings where the firm operates (Iqbal et al., 2013).
5) Corporate Social Responsibility Environmental Expenditure (CSREE):
These are costs expended on preserving the surroundings where operations take place. The expenses cover environmental investment, pollution performance, preservation of natural resources, disposal of manufacturing waste by safe means, and prevention of noise and air pollution (Jarbou, 2007, as cited in Mallouh and Tahtamouni, 2018; Cho et al., 2019).
6) Corporate Social Responsibility Sports Expenditure (CSRSPE):
According to Jajić and Jajić (2021), in line with the view that there is something beyond profit, businesses continually support joint schemes and gifted persons in education, sports, science, technology, wellbeing, culture and the arts. In addition to giving considerable attention to popular sports such as football and basketball, significant backing is directed towards developing sports associations in disciplines to which government often gives little or no attention, such as judo, wrestling, karate, cycling, skiing and gymnastics, as well as other forms of sport that can bring people, including physically challenged youths, together (Jajić and Jajić, 2021).
CONCEPTS AND MEASUREMENT OF INVESTMENT EFFICIENCY (IE)
The IE of a company is its capacity to invest in all positive net present value (NPV) projects (Anwar and Malik, 2020). IE refers to undertaking those ventures with positive NPV in the absence of market frictions such as adverse selection or agency costs (Ibrahim and Ibrahim, 2021).
THEORETICAL FRAMEWORK
Social Contract Theory
According to Omran and Ramdhony (2015), the historical precedence of this theory can be traced to Hobbes (1946), Rousseau (1968) and Locke (1986). Donaldson (1982) views the relationship between the corporation and society from a theoretical point of view. His opinion is that there is an unspoken social agreement between the corporation and the people and community. Under this theory, the licence to operate originates from the standpoint that all firms require implicit and explicit authorization from governments, societies and other interested parties in order to participate (Mwangangi, 2018).
Stakeholders Theory
The leading scholars of stakeholder theory include Freeman (1984) (Omran and Ramdhony, 2015). Stakeholder theory proposes that a firm's aim is to create stakeholder value to the best of its ability. Since stakeholder theorists view the company as a coalition of interested parties within and outside the firm (e.g., shareholders, employees, customers, suppliers, creditors, and neighbouring communities), where "stakeholders" were originally defined as individuals who are affected by and/or can affect the accomplishment of the firm's objectives (Freeman, 1984), stakeholder theory is adopted as the underpinning theory for this research work. Yazdani and Barzegar (2017) investigated the relationship between CSR and the investment performance of firms listed on the Tehran Stock Exchange (TSE). The study covers companies quoted on the TSE over the period 2010-2013. The CSR disclosure score and IE were taken as the independent and dependent variables, respectively. Content analysis was used to determine the level of CSR, employing a disclosure checklist, the social responsibility model of Barzegar (2013) and a binary scoring technique. The study is applied and correlational in terms of objective, nature and technique. Ninety-three firms were sampled with the aid of systematic elimination. The results reveal a significant association between CSR and the investment performance of firms quoted on the TSE.
EMPIRICAL REVIEW
Ho et al. (2021) studied how corporate social performance affects investment inefficiency, examining the association between corporate social performance (CSP) and IE in the Chinese stock market. Using the distinctive CSP rating scores from the Rankins CSP Ratings (RKS), the study finds that socially responsible firms are more efficient in their investment. It also finds that the effect of CSP in decreasing investment inefficiency is more pronounced in overinvestment situations, and it provides strong and robust evidence that CSP significantly increases IE in state-owned enterprises.
Kirsten et al. (2018) study two important channels through which corporate social responsibility (CSR) influences firm value: investment efficiency and innovation. They establish that companies with greater CSR implementation invest more efficiently: such companies are less disposed to invest in negative-NPV projects (overinvestment) and less prone to forgo positive-NPV projects (underinvestment). They also find that companies with greater CSR implementation generate more patents and patent citations. Mediation analysis shows that companies with greater CSR implementation are more profitable and more highly valued, outcomes partly attributable to efficient investment and innovation. These results, robust to alternative model specifications, provide support for stakeholder theory.

Lee (2020) investigated CSR and investment efficiency in an emerging Asian market, examining conflicting views of the association between CSR and IE in a major Asian emerging stock market. The empirical results reveal that CSR significantly alleviates investment inefficiency (II) among Taiwanese companies. This finding corroborates the view that socially responsible Taiwanese companies have fewer agency problems and less information asymmetry, hence reducing II. The results also show that CSR has a more pronounced effect in alleviating II for Taiwanese companies with more effective corporate governance. In particular, because of the compulsory preparation of CSR reports, CSR is associated with lower IE for Taiwanese companies with weak governance mechanisms over 2014-2017. The findings carry implications for government institutions, company executives, and shareholders in terms of CSR policy formulation, execution of CSR plans, and management of investment portfolios.

Benlemlih and Bitar (2015) researched CSR and investment efficiency using a sample of 21,030 US firm-year observations, representing more than 3,000 different companies over the years 1998-2012. Consistent with the expectation that high-CSR companies benefit from low information asymmetry and high stakeholder solidarity, the study found significant and robust evidence that high CSR involvement reduces investment inefficiency and thereby enhances investment efficiency. The findings also suggest that CSR components directly linked to firms' core activities are more relevant in decreasing investment inefficiency than those related to secondary stakeholders (e.g., human rights and community involvement). Finally, the results indicate that the effect of CSR on IE is more pronounced during the subprime crisis. Together, the results highlight the important role CSR plays in shaping a firm's investment behaviour and efficiency. This is supported by Benlemlih and Bitar (2018), whose study indicated that high CSR disclosure decreases investment inefficiency and increases investment efficiency.

Zhong and Gao (2017) established that CSR disclosures influence investment efficiency by reducing the problem of information asymmetry. CSR disclosures motivate companies to finance the environment: a firm is obliged to participate in environmental protection activities to meet its stakeholders' demands. This indicates that CSR disclosure assists in reducing information asymmetry and improving investment efficiency. The impact of governance on IE is also stronger in the presence of CSR disclosures.
Samet and Jarboui (2017) investigated how CSR contributes to investment efficiency, examining the direct and indirect association between CSR performance and IE using a panel of 398 European firms listed in the European STOXX 600 from 2009 to 2014. The initial results reveal that companies with higher CSR performance invest more efficiently. Analyses distinguishing two different circumstances, underinvestment and overinvestment, were then conducted. For under-investing firms, the study highlights that CSR performance improves investment levels by moderating information asymmetry, whereas for over-investing firms, CSR performance decreases excess investment by alleviating free-cash-flow problems. Overall, the findings suggest that CSR indirectly improves firm-level IE by helping firms address agency problems and information asymmetry. The positive link between CSR performance and IE presents CSR implementation as a means of satisfying stakeholders, and a firm's CSR undertakings can create competitive advantage, especially by caring for the environment and mitigating the firm's information asymmetry problem. Erawati et al. (2020) examined the role of CSR in investment efficiency, asking whether it is important. The study considered how CSR disclosure mediates the impact of family ownership and corporate governance (CG) on investment efficiency. STATA was employed to analyse the data. The sample comprised 210 industrial firms quoted on the Indonesian Stock Exchange in the family-business category from 2016 to 2018. The first finding reveals that CSR moderates the impact of family ownership on investment efficiency; the second reveals that CSR disclosure can mediate the impact of CG on investment efficiency. CSR undertakings play a key role in decision making, and through CSR disclosure, CG has a greater impact on investment efficiency.
GAPS IN THE LITERATURE
Most studies on the relationship between CSR and IE have been conducted in developed countries; few have been conducted in developing countries like Nigeria. The majority of past studies employed an aggregated measure of CSR rather than a disaggregated one. In addition, very few studies have examined the oil and gas sector, which has necessitated this study in order to contribute to knowledge (Ibrahim and Ibrahim, 2021).
RESEARCH DESIGN
The study adopted an ex post facto research design because the secondary data already existed. The information relating to IE in the financial statements of oil and gas firms appears under their cash flow statements. Accordingly, the net investment cash flows, which represent the NPV of projects and investments, are used to measure the IE of the quoted oil and gas firms in Nigeria.
POPULATION OF THE STUDY AND SAMPLING DESIGN
The population of this study consists of the twelve (12) oil and gas firms quoted on the Nigerian Stock Exchange as at 31st December 2019. A sample of seven (7) firms whose CSR reporting is in disaggregated form was chosen. Some listed oil and gas firms did not quantify (in figures) their CSR activities in their audited annual reports and accounts, if they practise CSR at all, which led to the selection of the seven firms (Eterna, Forte Oil, 11plc (Mobil), Mrs Oil, Oando, Seplat and Total Nig. Plc) for which data are available on a disaggregated basis.
METHODS OF DATA COLLECTION
The study used secondary data collected from the audited annual reports and accounts of the sampled oil and gas firms quoted on the NSE as at 31st December 2019, covering the ten-year period 2010-2019 defined in the scope of the study above.
TECHNIQUES OF DATA ANALYSIS
The statistical methods used are descriptive statistics, multiple regression and variance inflation factor (VIF) to test for multicollinearity. The analysis was conducted with the aid of SPSS Version 23 Software package.
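For illustration, the following is a minimal sketch of such an analysis in Python (pooled OLS, variance inflation factors and the Durbin-Watson statistic); the study itself used SPSS 23, and the file name and column names below are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("csr_panel.csv")  # hypothetical firm-year panel (7 firms x 10 years)
predictors = ["CSRCDE", "CSREDE", "CSRSE", "CSRHE", "CSREE", "CSRSPE"]

X = sm.add_constant(df[predictors])
model = sm.OLS(df["NICF"], X).fit()      # NICF = net investment cash flows (IE proxy)
print(model.summary())                   # coefficients, p-values, R-squared

# Multicollinearity check: VIF for each CSR proxy (skip the constant at index 0)
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
                index=predictors)
print(vif)

# Residual autocorrelation: values near 2 indicate little autocorrelation
print("Durbin-Watson:", durbin_watson(model.resid))
```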
DISCUSSION OF FINDINGS
4.1. REGRESSION RESULTS
Regression analysis was employed for this study because it overcomes the limitations of the correlation matrix, which can indicate the direction of the relationship among the proxies but not the strength of their combined effect (Kajola et al., 2017; Kajola et al., 2018). The results in Table 4.5.1 above show that four (4) of the six (6) null hypotheses formulated for this study are rejected, while two (2) are accepted. In other words, four independent variables have a significant effect on the IE of oil and gas firms in Nigeria, while two have an insignificant effect. The decision rule is that where the p-value is below 5% (0.05) the null hypothesis is rejected, and where it exceeds 5% (0.05) the null hypothesis is accepted (Al Qaisi, 2019), as in the cases of null hypotheses four and five (Ho4 and Ho5) above (CSRHE and CSREE), with p-values of 0.478 and 0.738 respectively. The model summary reveals a Sig. F change (Prob. > F) of 0.000, which is below the 5% (0.05) level of significance, implying that the overall model is fit for analysis. The coefficient of determination (R²) reveals the extent to which the independent variables explain the dependent variable: 43.80% of the variation in the dependent variable (NICF) is explained by the independent variables. The Durbin-Watson statistic of 1.627, being reasonably close to 2, suggests that autocorrelation in the residuals is not a serious concern.
SUMMARY OF FINDINGS
From the results of the findings above, the significant variables corroborate Benlemlih and Bitar (2015), whose results highlight the vital role CSR plays in shaping a company's investment performance and efficiency, and Yazdani and Barzegar (2017), whose results reveal a significant association between CSR and the investment performance of firms quoted on the TSE. The results are also in line with Erawati et al. (2020), who examined the role of CSR in investment efficiency and found that CSR is indeed important, and with Zhong and Gao (2017), who found that CSR disclosures affect IE significantly.
This study also substantiates Kirsten et al. (2018), who find that companies with greater CSR implementation generate more patents and patent citations, and whose mediation analysis shows that such companies are more profitable and more highly valued, outcomes partly attributable to efficient investment and innovation.
This study upholds Samet and Jarboui (2017), whose study reveals a positive link between CSR performance and IE and shows that executing a CSR approach is a potent means of inspiring firm progress and safeguarding stakeholders' interests; a firm's CSR undertakings can create competitive advantage, especially by caring for the environment and mitigating the firm's information asymmetry problem. This study is also consistent with Lee (2020), whose empirical results reveal that CSR significantly alleviates investment inefficiency among Taiwanese companies.
CONCLUSION AND RECOMMENDATIONS
This study investigated the effect of CSR on the IE of quoted oil and gas firms in Nigeria. The proxy for IE used by the study, drawn from the audited annual reports of the firms, is net investment cash flows (NICF), while the independent variables are the six (6) CSR proxies found in the annual reports and accounts of the oil and gas firms. Having analysed the results of the findings and tested the formulated null hypotheses, the study makes the following recommendations:
1) The oil and gas firms should improve their CSR Charitable Donation Expenditure (CSRCDE) and monitor it meticulously against window dressing.
2) The executives of these firms should be consistent in CSR Expenditure on Education (CSREDE), since it shows a positively significant effect on their investment efficiency.
3) These firms ought to keep up CSR Societal Expenditure (CSRSE), as it indicates a positively significant effect on their investment efficiency.
4) CSR Health Expenditure (CSRHE) on the citizenry, the community and the environment should be prioritized irrespective of its current degree of association with investment.
5) CSR Environmental Expenditure (CSREE) ought to be stimulated, since it is a short-term investment with a long-term benefit.
6) The firms should be consistent in CSR Sports Expenditure (CSRSPE) so as to benefit in the long run, even though it may seem unprofitable now.
SUGGESTIONS FOR FURTHER RESEARCH
This study investigated the effect of CSR on the IE of oil and gas firms quoted on the floor of the NSE as at 31st December 2019. As stated by Erawati et al. (2020), investment efficiency is a complex issue that has yet to be investigated broadly, so there exist several prospects for future studies. For the purpose of generalizing the study's key findings, prospective studies should examine the effect of CSR on IE in other countries. The study also leaves room for prospective researchers to examine other factors that could influence investment efficiency, and future research should investigate sectors other than oil and gas, such as mining, quarrying and construction.
|
v3-fos-license
|
2018-05-04T14:36:09.000Z
|
2018-05-04T00:00:00.000
|
19154102
|
{
"extfieldsofstudy": [
"Computer Science",
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-3417/8/8/1213/pdf?version=1532429236",
"pdf_hash": "7a988081e368eddde05f96fef13b8f88a981de95",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43043",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "4fa144fff30a8b00dce44b19241f3048c768b89c",
"year": 2018
}
|
pes2o/s2orc
|
Unsupervised learning for concept detection in medical images: a comparative analysis
As digital medical imaging becomes more prevalent and archives increase in size, representation learning exposes an interesting opportunity for enhanced medical decision support systems. On the other hand, medical imaging data is often scarce and short on annotations. In this paper, we present an assessment of unsupervised feature learning approaches for images in the biomedical literature, which can be applied to automatic biomedical concept detection. Six unsupervised representation learning methods were built, including traditional bags of visual words, autoencoders, and generative adversarial networks. Each model was trained, and their respective feature space evaluated using images from the ImageCLEF 2017 concept detection task. We conclude that it is possible to obtain more powerful representations with modern deep learning approaches, in contrast with previously popular computer vision methods. Although generative adversarial networks can provide good results, they are harder to succeed in highly varied data sets. The possibility of semi-supervised learning, as well as their use in medical information retrieval problems, are the next steps to be strongly considered.
Introduction
In an era of steadily increasing use of digital medical imaging, image recognition poses an interesting prospect for novel solutions supporting clinicians and researchers. In particular, the representation learning field has grown fast in recent years [1], and many of the breakthroughs in this field are occurring in deep learning methods, which have also been strongly considered in healthcare [2]. Leveraging representation learning tools for the medical imaging field is feasible and worthwhile, as they can provide additional levels of introspection of clinical cases through content-based image retrieval (CBIR).
Although multiple initiatives for the provision of medical imaging data sets exist, the process of annotating the data with useful information is exhaustive and requires medical expertise, as it often boils down to a medical diagnosis. In the face of few to no annotations, unsupervised learning stands as a possible means of feature extraction for a measure of relevance, leading to more powerful information retrieval and decision support solutions in digital medical imaging.
Although unsupervised representation is limited for specific classification tasks when compared to supervised learning approaches, the latter requires an exhaustive annotation process from experts. Unsupervised learning, which avoids this issue, can also provide a few other benefits, including transferability to other problems or domains, and can often be bridged to supervised and semi-supervised techniques. We have hypothesized that a sufficiently powerful representation of images would enable a medical imaging archive to automatically detect biomedical concepts with some level of certainty and efficiency, thus improving the system's information retrieval capabilities over non-annotated data.
In this work, we present an assessment of unsupervised mid-level representation learning approaches for images in the biomedical literature. Representations are built using an ensemble of images from the biomedical literature. The learned representations were validated with a brief qualitative feature analysis and by training simple classifiers for the purpose of biomedical concept detection. We show that feature learning techniques based on deep neural networks can outperform techniques that were previously commonplace in image recognition, and that models with adversarial networks, albeit harder to train, can improve the quality of feature learning.
Related Work
Representation learning, or feature learning, can be defined as the process of learning a transformation from a data domain into a representation that makes other machine learning tasks easier to approach [1].The concept of feature extraction can be employed when this mapping is obtained with handcrafted algorithms rather than learned from the original data in the distribution.Representation learning can be achieved using a wide range of methods, such as k-means clustering, sparse coding [3] and Restricted Boltzmann Machines (RBMs) [4].In image recognition, algorithms based on bags of visual words have been prevalent, as they have shown superior results over other low-level visual feature extraction techniques [5], [6].More recently however, image recognition has had a strong focus on deep learning techniques, often with impressive results.Among these, approaches based on autoencoders [7], [8] have been considered and are still prevalent to this day.
Research on representation learning is even more intense on the ground-breaking concept of generative adversarial networks (GANs) [9].GANs devise a min-max game where a generator of "fake" data samples attempts to fool a discriminator network, which in turn learns to discriminate fake samples from real ones.As the two components mutually improve, the generator will ultimately produce visually-appealing samples that are similar to the original data.The impressive quality of the samples generated by GANs have led the scientific community into devising new GAN variants and applications to this adversarial loss, including for feature learning [10].
Representation learning has been notably used in medical image retrieval, although even in this decade handcrafted visual feature extraction algorithms are frequently considered in this context [11], [12]. Nonetheless, although the interest in deep learning is relatively recent, a wide variety of neural networks have been studied for medical image analysis [13], as they often exhibit greater potential for the task [14]. The use of unsupervised learning techniques is also well regarded as a means of exploiting as much of the available medical imaging data as possible [15]. On the other hand, the amount of medical imaging data may be scarce for many use cases, which makes training deep neural networks a difficult process.
Methods
We have considered a set of unsupervised representation learning techniques, both traditional (as in, employing classic computer vision algorithms) and based on deep learning, for the scope of images in the biomedical domain. These representations were subsequently used for the task of biomedical concept detection. Namely:
- We have experimented with creating image descriptors using bags of visual words (BoWs), for two different visual keypoint extraction algorithms.
- With the use of modern deep learning approaches, we have designed and trained various deep neural network architectures: a sparse denoising autoencoder (SDAE), a variational autoencoder (VAE), a bidirectional generative adversarial network (BiGAN), and an adversarial autoencoder (AAE).
Bags of Visual Words
For each data set, images were converted to greyscale without resizing and visual keypoint descriptors were subsequently extracted.We employed two keypoint extraction algorithms separately: Scale Invariant Feature Transform (SIFT) [16], and Oriented FAST and Rotated BRIEF (ORB) [17].While both algorithms obtain scale and rotation invariant descriptors, ORB is known to be faster and require less computational resources.The keypoints were extracted and their respective descriptors computed using OpenCV [18].Each image would yield a variable number of descriptors of fixed size (128-dimensional for SIFT, 32-dimensional for ORB).In cases where the algorithm did not retrieve any keypoints, the algorithm's parameters were adjusted to loosen edge detection criteria.All procedures described henceforth are the same for both ORB and SIFT keypoint descriptors.
From the training set, 3000 files were randomly chosen and their respective keypoint descriptors collected to serve as template keypoints. A visual vocabulary (codebook) of size k = 512 was then obtained by performing k-means clustering on all template keypoint descriptors and retrieving the centroids of each cluster, yielding a list of 512 keypoint descriptors V = {V_i}.
Once a visual vocabulary was available, we constructed an image's BoW by determining the closest visual vocabulary point and incrementing the corresponding position in the BoW for each image keypoint descriptor. In other words, for an image's BoW B = {o_i}, for each image keypoint descriptor d_j, o_i is incremented when the smallest Euclidean distance from d_j to all visual vocabulary points in V is the distance to V_i. Finally, each BoW was normalized so that all elements lie between 0 and 1. We can picture the bag of visual words as a histogram of visual descriptor occurrences, which can be used as a global image descriptor [19].
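As an illustrative sketch of this pipeline (keypoint descriptors, a k-means codebook of 512 visual words, and a normalized histogram per image), the snippet below uses OpenCV's ORB and scikit-learn; the file paths, ORB settings and the exact normalization to [0, 1] are assumptions rather than the authors' configuration.

```python
import glob
import cv2
import numpy as np
from sklearn.cluster import KMeans

def orb_descriptors(path, orb=cv2.ORB_create()):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

# Template keypoints from a subset of training images (hypothetical directory)
template_paths = glob.glob("train/*.png")[:3000]
all_desc = np.vstack([orb_descriptors(p) for p in template_paths]).astype(np.float32)

# Visual vocabulary of 512 words = k-means centroids over template descriptors
codebook = KMeans(n_clusters=512, n_init=4, random_state=0).fit(all_desc)

def bag_of_words(path, k=512):
    desc = orb_descriptors(path).astype(np.float32)
    bow = np.zeros(k, np.float32)
    if len(desc):
        words = codebook.predict(desc)   # nearest visual word per descriptor
        np.add.at(bow, words, 1.0)
        bow /= bow.max()                 # one way to scale counts to [0, 1]
    return bow
```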
Deep Representation Learning
Modern representation techniques often rely on deep learning methods.We have considered a set of deep convolutional neural network architectures for inferring a late feature space over biomedical images.These models are composed of parts with very similar numbers of layers and parameters, in order to obtain a fairer comparison in the evaluation phase.This also means that the models will have very similar prediction times.
Training samples were obtained through the following process: images were resized so that their shorter dimension (width or height) was exactly s_g pixels. Afterwards, the sample was augmented by feeding the networks random crops of size s × s (out of 9 possible kinds of crops: 4 corners, 4 edges and center). Validation images were resized to fit the s × s dimensions. In all cases, the images' pixel RGB values were normalized to fit in the range [-1, 1]. Unless otherwise mentioned, the networks assumed a rescale size of s_g = 96 and a crop size s = 64.
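A rough sketch of this preprocessing step is given below; the interpolation method is unspecified in the text, and a uniformly random crop is used here as a simplification of the nine fixed crop positions described above.

```python
import tensorflow as tf

def preprocess(image, s_g=96, s=64, training=True):
    """image: uint8 RGB tensor of shape [H, W, 3]."""
    shape = tf.cast(tf.shape(image)[:2], tf.float32)
    scale = s_g / tf.reduce_min(shape)                 # shorter side -> s_g pixels
    new_hw = tf.cast(tf.round(shape * scale), tf.int32)
    image = tf.image.resize(image, new_hw)
    if training:
        image = tf.image.random_crop(image, [s, s, 3])  # random s x s crop
    else:
        image = tf.image.resize(image, [s, s])          # validation: fit s x s
    return image / 127.5 - 1.0                          # scale pixel values to [-1, 1]
```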
Models with an encoding or discrimination process for visual data were based on the same convolutional neural network architecture, described in Table 1 and Table 2. These models were influenced by the work on deep convolutional generative adversarial networks [20]. Each encoder layer is composed of a 2D convolution, followed by an optional (case-dependent) normalization algorithm and a model-dependent non-linearity. At the top of the network, global average pooling is performed, followed by a fully connected layer, yielding the code tensor z. The Details column in both tables may include the normalization and activation layers that follow a convolution layer.
Tbl. 1: A tabular representation of the SimpleNet layers' specifications.The Details column may include the normalization and activation layers that follow a convolution layer (where LN stands for layer normalization and ReLU is the rectified linear unit max(0, x)).
Sparse Denoising Autoencoder
The first tested deep neural network model is a common autoencoder with denoising and sparsity constraints (Figure 1). In the training phase, Gaussian noise of standard deviation 0.05 was applied over the input x, yielding a noisy sample x̃. As a denoising autoencoder, its goal is to learn the pair of functions (E, D) so that the reconstruction D(E(x̃)) is closest to the original input x. The aim of making E a function of x̃ is to force the process to be more stable and robust, thus leading to higher quality representations [7]. Sparsity was achieved with two mechanisms. First, a rectified linear unit (ReLU) activation was used after the last fully connected layer of the encoder, turning negative outputs from the previous layer into zeros. Second, an absolute value penalization was applied to z, thus adding the extra minimization goal of keeping the code sum small in magnitude. The final decoder loss function was therefore L(x) = (1/r) ||x - D(E(x̃))||² + s · P(z), where P(z) = Σ_i |z_i| is the sparsity penalty function, r = 64 × 64 is the number of pixels in the input images, and x represents the original input without synthesized noise. s is the sparsity coefficient, which we left defined as s = 0.0001. This network used batch normalization [21] and (non-leaking) ReLU activations.
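A compact sketch of this objective is shown below; the encoder and decoder are placeholders standing in for the architecture of Tables 1 and 2, and the exact averaging over pixels is our reading of the loss described above.

```python
import tensorflow as tf

s = 1e-4  # sparsity coefficient

def sdae_loss(x, encoder, decoder):
    """x: batch of images scaled to [-1, 1], shape [B, 64, 64, 3]."""
    x_noisy = x + tf.random.normal(tf.shape(x), stddev=0.05)        # denoising corruption
    z = encoder(x_noisy)                                            # ReLU-activated code, shape [B, code_dim]
    x_rec = decoder(z)
    # Reconstruction term: squared error averaged over pixels (and channels here)
    recon = tf.reduce_mean(tf.square(x - x_rec), axis=[1, 2, 3])
    # Sparsity term: s * sum_i |z_i|
    sparsity = s * tf.reduce_sum(tf.abs(z), axis=1)
    return tf.reduce_mean(recon + sparsity)
```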
Variational Autoencoder
The encoder of the variational autoencoder (Figure 2) learns a stochastic distribution which can be sampled from, by minimizing the Kullback-Leibler divergence with a unit normal distribution [8]. As in the SDAE, convolutions were followed by batch normalization [21] and (non-leaking) ReLU activations.
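For reference, the Kullback-Leibler term being minimized has the usual closed form for a diagonal Gaussian posterior against a unit normal prior; the sketch below assumes the encoder outputs a mean and a log-variance vector, which the text does not state explicitly.

```python
import tensorflow as tf

def kl_to_unit_normal(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, 1) ), summed over code dimensions."""
    return -0.5 * tf.reduce_sum(1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=1)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps so gradients flow through the encoder."""
    eps = tf.random.normal(tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * eps
```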
Bidirectional GAN
While GANs are known to show great potential for representation purposes, the basic GAN architecture does not provide a means to encode samples to their respective prior. The bidirectional GAN, depicted in Figure 3, addresses this concern by including an encoder component, which learns the inverse process of the generator [10]. Rather than only observing data samples, the BiGAN discriminator's loss function depends on the code-sample pair.
The encoder component of the network used the same design as the discriminator, with the exception that the original data was fed with a size s of 112, the outcome of cropping the data after the shortest dimension was resized to 128 pixels (as in, s_g = 128 for the encoder). Images were still downsampled to 64 × 64 to be fed to the discriminator. Like in [10], all constituent parts of the GAN were optimized simultaneously in each iteration. The encoder and the discriminator of this model used layer normalization [22] and leaky ReLU with a leaking factor of 0.2 on all except the last respective convolutional layers.
Adversarial Autoencoder
The adversarial autoencoder (AAE) is an autoencoder in which a discriminator is added to the bottleneck vector [23]. While reducing the L2-norm distance between a sample and its decoded form, the full network includes an adversarial loss for distinguishing the encoder's output from a stochastic prior code, thus serving as a regularizer to the encoding process.
Our AAE used a simple code discriminator composed of 2 fully connected layers of 128 units with a leaky ReLU activation for the first two layers, followed by a single neuron without a non-linearity.During training, the discriminator is fed a prior z sampled from a random normal distribution N (0, 1) as the real code, and the output of the encoder E(x) as the fake code.The model uses layer normalization [22] on all except the last layers of each component, and leaky ReLU with a leaking factor of 0.2.Like in [10], all three components' parameters were updated simultaneously in each iteration.
Network Training Details
The networks were trained through stochastic gradient descent, using the Adam optimizer [24]. The β1 hyperparameter was set to 0.5 for the BiGAN and the AAE, and 0.9 for the remaining networks.
Each neural network model was trained over 206000 steps, which is approximately 100 epochs, with a mini-batch size of 64.The base learning rate was 0.0005.The learning rate was multiplied by 0.2 halfway through the training process (50 epochs), to facilitate convergence.
All neural network training and latent code extraction was conducted using TensorFlow, and TensorBoard was used during the development for monitoring and visualization [25].Depending on the particular model, training took on average 120 hours (a maximum of 215 hours, for the adversarial autoencoder) to complete on one of the GPUs of an NVIDIA Tesla K80 graphics card in an Ubuntu server machine.
Evaluation
The previously described methods for representation learning were aimed towards addressing the domain of biomedical images. A proper validation of these features was made with the use of the data sets from the ImageCLEF 2017 concept detection challenge [26]. As one of the sub-tasks of the caption prediction challenge, the goal of the challenge is to conceive a computer model for identifying the individual components of medical images, from which full captions could be composed. This task was accompanied by three data sets containing various images from biomedical journals: the training set (164614 images), the validation set (10000 images) and the testing set (10000 images). These sets were annotated with lists of biomedical term identifiers from the UMLS (Unified Medical Language System) vocabulary for each image. The testing set's annotations were hidden during the challenge, but were later provided to participants.
Each of the set of features, learned from the approaches described in the previous section, were used to train simple classifiers for concept detection.In both cases, the same training and validation folds from the original data set were considered, after being mapped to their respective feature spaces.In addition, data points in the validation set with an empty list of concepts were discarded.
These simple models were used to predict the concept list of each image by sole observation of their respective feature set.Therefore, the assessment of our representation learning methods is made based on the effectiveness of capturing high-level features from latent codes alone.
Logistic Regression
Aiming for low complexity and classification speed, we performed logistic regression with stochastic gradient descent for concept detection, treating the UMLS terms as labels. More specifically, linear classifiers were trained over the features, one for each of the 750 (seven hundred and fifty) most frequently occurring concepts in the training set. All models were trained using FTRL-Proximal optimization [27] with a base learning rate of 0.05, an L1-norm regularization factor of 0.001, and a batch size of 128. Since the biomedical concepts are very sparse and imbalanced, the F1 score was considered as the main evaluation metric, which was calculated with respect to multiple fixed operating point thresholds (namely, 0.025, 0.05, 0.075, 0.1, 0.125, 0.15, 0.175, and 0.2) for each sample and averaged across the 750 labels. The threshold which resulted in the highest mean F1 score on the validation set was recorded, and the respective precision, recall, and area under the ROC curve were also included. Subsequently, the same model and threshold were used for predicting the concepts in the testing set, the F1 score of which was retrieved with the official evaluation tool from the ImageCLEF challenge.
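A simplified sketch of this per-concept evaluation is given below; it uses scikit-learn's logistic regression instead of the TensorFlow FTRL-Proximal setup described above, so the optimizer and regularization details differ, and the variable names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def train_concept_classifiers(X_train, Y_train):
    """Y_train: binary matrix [n_samples, n_concepts] for the most frequent concepts."""
    return [LogisticRegression(max_iter=1000).fit(X_train, Y_train[:, c])
            for c in range(Y_train.shape[1])]

def best_threshold(models, X_val, Y_val,
                   thresholds=(0.025, 0.05, 0.075, 0.1, 0.125, 0.15, 0.175, 0.2)):
    # Per-concept positive-class probabilities, stacked as [n_samples, n_concepts]
    probs = np.stack([m.predict_proba(X_val)[:, 1] for m in models], axis=1)
    # Mean F1 across concepts for each fixed operating threshold
    scores = {t: np.mean([f1_score(Y_val[:, c], probs[:, c] >= t, zero_division=0)
                          for c in range(Y_val.shape[1])])
              for t in thresholds}
    return max(scores, key=scores.get), scores
```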
Since it is also possible to combine multiple representations with simple vector concatenation, we have experimented training these classifiers using a mixture of features from the SDAE and AAE latent codes.This process is often called early fusion, and is contrasted with late fusion, which involves merging the results of separate models.Each model undertook a few dozens of training epochs until the best F 1 score among the thresholds would no longer improve.In practice, training and evaluation of the linear classifiers was done with TensorFlow.
k-nearest neighbors
A relevant focus of interest in representation learning is its potential in information retrieval. While concept detection is not a retrieval problem, and the use of retrieval techniques is a naive approach to classification, it is fast and scales better in the face of multiple classes. Furthermore, it enables a rough assessment of whether the representation would fare well in retrieval tasks where a similarity metric was not previously learned, which is the case for the Euclidean distance between features.
A modified form of the k-nearest neighbors algorithm was used as a second means of evaluation.Each data point in the validation set had its concepts predicted by retrieving the n closest points from the training feature set (henceforth called neighbors) in Euclidean space and accumulating all concepts of those neighbors into a boolean sum of labels.This tweak makes the algorithm more sensitive to very sparse classification labels, such as those found in the biomedical concept detection task.All natural numbers from 1 to 5 were tested for the possible k number of neighbors to consider.Analogous to the logistic regression above, the k which resulted in the highest F 1 score on the validation set was regarded as the optimal parameter, and predictions over the testing set were evaluated using the optimal k.The actual search for the nearest neighbors was performed using the Faiss library, which contributed to a rapid retrieval [28].Feature fusion was not considered in the results, as they did not seem to bring any improvement over singular representations.
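A sketch of this modified nearest-neighbour prediction is shown below, using Faiss for the exact Euclidean search; the label union over the k neighbours mirrors the "boolean sum of labels" described above, and the function signature is hypothetical.

```python
import faiss
import numpy as np

def knn_concept_prediction(train_feats, train_labels, query_feats, k=3):
    """train_feats/query_feats: float32 arrays [n, 512]; train_labels: binary [n, n_concepts]."""
    index = faiss.IndexFlatL2(train_feats.shape[1])   # exact Euclidean search
    index.add(train_feats)
    _, neighbours = index.search(query_feats, k)      # [n_queries, k] neighbour indices
    # Boolean sum (union) of the neighbours' concept labels
    return train_labels[neighbours].max(axis=1)
```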
Qualitative results
Each representation learning approach described in this work resulted in a 512-dimensional feature space. Figure 5 shows the result of mapping the validation feature set of each learned representation into a two-dimensional space, using principal component analysis (PCA). The three primary colors (red, green, and blue) were used to label the points with the three most commonly occurring UMLS terms in the training set, namely C1696103 (Image-dosage form), C0040405 (X-Ray Computed Tomography), and C0221198 (Lesion), each painted in an additive fashion.
While extreme outliers were removed from the figures, it can be noted that the ORB, SIFT and BiGAN representations had more outliers than the other three representations. A good representation would enable samples to be linearly separable based on their list of concepts. Even though the concept detection task is too hard for a clear-cut separation, one can still identify regions in the manifold in which points of one of the frequent labels are mostly gathered. The existence of concentrations of random points in certain parts of the manifold, as further observed from the classification results, is noticeable mostly in poorer quality representations.
[Figure 5 caption: the 2D projections of the latent codes in the validation set, for each learned feature space. Best seen in color.]
The latent space regularization in representations based on deep learning is also apparent in these plots: both the AAE (with the approximate Jensen-Shannon divergence from the adversarial loss) and the VAE (with the Kullback-Leibler divergence) manifest a distribution that is close to a normal distribution.
Linear Classifiers
Table 3 shows the best resulting metrics obtained with logistic regression on the validation set, followed by the final score on the testing set. Mix is the identifier given to the feature combination of SDAE and AAE. We observed that, for all classifiers, a threshold of 0.075 yielded the best F1 score. This metric, when obtained on the validation set, assumes the existence of only the 750 most frequent concepts in the training set. Nonetheless, these metrics are deemed acceptable for a quantitative comparison among the trained representations, and indeed established the same ordering as the metrics on the testing set. The adversarial autoencoder obtained the best mean F1 score in concept detection, surpassed only by the combination of its features with those from the sparse denoising autoencoder.
These metrics, although seemingly low, are within the expected range of scores in the domain of concept detection in biomedical images, since the classified labels are very sparse. As an example, only 10.9% of the training set is positive for the most frequent term; for the second and third most frequent terms, the figures are 9.8% and 8.6%, respectively. The mean number of positive labels among the 750 most frequent concepts is 876.7, with a minimum of 203 positive labels for the 750th most frequent concept in the training set. We find that most concepts in the set do not have enough positively labeled images to train a valuable classifier.
The scores obtained here are on par with some of the results from the ImageCLEF 2017 challenge. The best F1 scores on the testing set, without the use of external resources that could severely bias the results, were 0.1583 (with a pre-trained neural network model [29]) and 0.1436 (with no external resources [30]) [26]. The use of additional information outside of the given data sets is known to significantly improve the results. In the list of submissions where no external resources were used, these techniques were only outperformed by the submissions from the IPL team [26], [30]. While that work also relied on building global unsupervised representations, our representations are significantly more compact, and thus more computationally efficient in practice.
It is understood that the thresholds could be fine-tuned further to increase these numbers [31]. Rather than performing a methodical determination of the optimal threshold, we chose to avoid overfitting the validation set by selecting a few thresholds within the interval known to contain the optimal threshold.
k-Nearest Neighbors
The results of classifying the validation set with similarity search are presented in Table 4.
Conclusion
This paper takes unsupervised representation learning techniques from the state of the art and pits them against a more traditional bags-of-visual-words approach. The methods were evaluated on the biomedical concept detection sub-task of the ImageCLEF 2017 caption prediction task. We tested the hypothesis that a powerful image descriptor can contribute to efficient concept detection with some level of certainty, without observing the original image. Results are presented for six different approaches: two of them rely on visual keypoint extraction and description algorithms, and another two are based on generative adversarial networks. Overall, these methods significantly outperformed our previous participation and are on par with other techniques in the challenge.
As identified in [32] and corroborated in this work, it is possible to obtain more powerful representations with modern deep learning approaches than with previously popular computer vision methods such as SIFT bags of visual words. Deep learning techniques based on GANs can provide good results, but the additional complexity, the difficulty of convergence, and the possibility of mode collapse can significantly cripple their performance in representation learning. Nonetheless, these issues are currently a strong focus of attention and will likely lead to substantial improvements in GAN design and training.
It is also important that these approaches are augmented with non-visual information. In particular, a medical imaging archive should take full advantage of the available data beyond pixel data. Future work will consider semi-supervised learning as a means of building more descriptive representations from known categories and other annotations. Subsequently, these representations are to be evaluated in a medical information retrieval scenario, as well as on other data sets in the medical imaging domain.
Tbl. 3: The best metrics obtained from logistic regression for each representation learned, where Mix is the feature combination of SDAE and AAE.
The combined representation obtained by concatenating the feature spaces of the SDAE and AAE resulted in even better classifiers. Although the results of the combined representation are shown here, this improvement is not to be overstated, given that it relies on a wider feature vector and on training two representations that were meant to perform individually. Another relevant observation is that the representations based on BoWs were generally less effective for the task than the deep representation learning methods. Although the SIFT BoWs resulted in a slightly better area under the ROC curve, the chosen operating points led to ORB slightly outperforming SIFT.
The presence of lower F1 scores than those obtained with the linear classifiers is to be expected: the linear classifier can be interpreted as a model which learns a custom distance metric for each label, whereas k-NN relies on a fixed Euclidean distance metric. With k-nearest neighbors, the best mean F1 score of 0.07505 was obtained with the SDAE. The AAE follows with a mean F1 score of 0.06910. The passive fitting over the validation set, through the choice of k, is much less greedy than the training process of the logistic regression, which included a choice of operating threshold and a halting condition based on the outcome on the validation set. Therefore, it is expected that the final F1 score on the testing set closely resembles the values obtained on the validation set.
Tbl. 4: The best F1 scores obtained from vector similarity search for each representation learned.
|
v3-fos-license
|
2021-05-23T05:14:33.436Z
|
2021-05-01T00:00:00.000
|
235085921
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ccr3.4113",
"pdf_hash": "5d4d3653f8ac9a75eeb6eb06c1e610d6520d636b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43045",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "5d4d3653f8ac9a75eeb6eb06c1e610d6520d636b",
"year": 2021
}
|
pes2o/s2orc
|
Tuberculosis with Evans syndrome: A case report
Abstract Evans syndrome and tuberculosis could be predisposing factors for one another, or there may be a common pathophysiological denominator for the co‐occurrence. Further research is needed for a better understanding of pathophysiology and treatment.
| INTRODUCTION
We herein report an exceedingly rare case of Evans syndrome with associated tubercular pleural effusion. The patient was initially treated as autoimmune hemolytic anemia. However, the development of thrombocytopenia led to the subsequent diagnosis of Evans syndrome. The co-existence of tuberculosis resulted in additional difficulty during treatment with immunosuppressive medications.
Evans syndrome (ES), a hematological entity described by Evans and colleagues in 1951, is characterized by the presence of Coombs positive hemolytic anemia and immune thrombocytopenia, and less commonly, autoimmune neutropenia. 1 Although the exact pathophysiology of ES remains unknown, immune dysfunction with subsequent production of antibodies targeting the erythrocytes and platelets is a likely mechanism. Glucocorticoids and intravenous immunoglobulin (IVIG) have mostly been used as first-line therapy for ES. Other treatment options include immunosuppressants, blood transfusion, splenectomy, and hematopoietic stem cell transplant. 2 ES has a more variable clinical course as compared to isolated autoimmune hemolytic anemia (AIHA), with more frequent exacerbations and mortality. 3 The coexistence of tuberculosis (TB) with Evans syndrome makes therapy even more challenging.
| CASE PRESENTATION
A 20-year-old nonalcoholic, nonsmoker Nepalese man with no significant past medical history presented to the outpatient department with melena for 10 days. This was associated with fatigue and generalized weakness of the body. He denied any abdominal pain, nausea, vomiting, or recent weight loss. He also gave a history of low-grade fever with an evening rise of temperature for the same duration. It was not associated with chest pain, cough, hemoptysis, runny nose, watery eyes, or sore throat. He had no history of similar illness in the past or any recent sick contacts. His family history was unremarkable.
Vitals on presentation revealed blood pressure 100/70 mm Hg, pulse 120/min, respiratory rate 24/min, and temperature 100.5°F. On physical examination, he was pale and icteric. There were no edema, ecchymosis, or any palpable lymph nodes. Chest auscultation revealed diminished breath sounds over the right infrascapular region. The rest of the systemic examination was normal.
Serologies were negative for human immunodeficiency virus (HIV), hepatitis B virus, and hepatitis C virus. Serological evaluations for antinuclear antibody (ANA), anti-double-stranded deoxyribonucleic acid antibody (anti-dsDNA), and anti-smooth muscle antibody (ASMA) were negative. Immunoglobulin levels were normal. A presumptive diagnosis of autoimmune hemolytic anemia was made, and the patient was started on oral prednisone 60 mg daily.
Due to the presenting complaint of melena, upper gastrointestinal endoscopy was done, which revealed a peptic ulcer in the first part of the duodenum. A tissue sample was taken during endoscopy, which did not reveal Helicobacter pylori. He was treated with proton pump inhibitors for his peptic ulcer disease.
Meanwhile, a chest X-ray showed a right-sided pleural effusion (Figure 1). The pleural fluid analysis showed an exudative effusion with lymphocyte predominance, a lymphocyte to neutrophil ratio of 4, and high adenosine deaminase activity (89 units/L). A computed tomography (CT) scan of the chest with and without contrast showed multiple centriacinar nodules giving a tree-in-bud appearance, fibrotic changes in the right upper lobe, and a moderate right-sided pleural effusion suggestive of tubercular pathology. Acid-fast bacilli (AFB) were not visualized on microscopic analysis of the sputum. However, due to high clinical suspicion and the corroborative pleural fluid analysis and chest imaging findings, he was started on antitubercular treatment with isoniazid, rifampin, pyrazinamide, and ethambutol.
Follow-up laboratories showed Hb stable at 5-7 g/dL. The platelet count had decreased to 127,000/mm3 at the time of discharge (Figure 2). Following discharge on antitubercular medications and prednisone, he was lost to follow-up.
The patient presented to our clinic after 3 months, following an exacerbation of his symptoms after the inadvertent stopping of his prednisone. He presented with increased fatigue, generalized weakness, and jaundice. He had no preceding upper respiratory tract symptoms. Blood investigations this time showed Hb of 4 g/dL, WBC of 6,500/mm3, and platelets of 10,000/mm3. Liver function tests showed total bilirubin of 48.05 μmol/L and direct bilirubin of 13.51 μmol/L. LDH was elevated at 851 U/L.
The development of thrombocytopenia in a patient with pre-existing autoimmune hemolytic anemia led us to consider the possibility of Evans syndrome as the cause of bicytopenia. He was started on intravenous methylprednisolone 1 g once daily for 3 days, followed by oral prednisone 60 mg daily. However, there was no improvement in his Hb or platelet count. We considered other immunosuppressive therapies like intravenous immunoglobulin, rituximab, and cyclosporine as possible treatment options, but we faced two challenges with their use: fear of reactivation of TB and the prohibitive cost. Eventually, we decided to start him on cyclosporine, given its affordable cost despite the risk of reactivation of TB. He was discharged on oral cyclosporine 10 mg daily for a month. Unfortunately, he was again lost to follow up.
| DISCUSSION
Evans syndrome is an autoimmune disorder characterized by the combination of AIHA and immune thrombocytopenia. 1 Although the worldwide incidence of ES has not been reported in the literature, a French national observational study of 265 patients with AIHA showed that 37% of the patients had ES, while Pui et al reported that 73% of a cohort of 15 children with AIHA had ES. 4,5 The pathophysiology underlying ES is not clearly defined but is most likely related to a generalized dysregulation of the immune system, involving both cellular and humoral immunity. 6,7 Downregulation of the T-cell control over autoreactive B-cell clones results in a deranged Th1/Th2 ratio, with subsequently increased production of IL-10 and IFN-γ and decreased generation of TGF-β. The increased secretion of IFN-γ (a Th1 cytokine) stimulates the autoimmune B-cell clones to produce autoantibodies against red cell-specific and platelet-specific antigens. 8 ES has also been reported in association with common variable immunodeficiency (CVID), 22q11.2 deletion syndrome, and IgA deficiency, indicating immunodeficiency as a possible predisposing factor for this autoimmune phenomenon. [9][10][11]

Few case reports have described the co-occurrence of TB and ES. [12][13][14] Sharma et al reported a case of ES presumed to be secondary to TB. The authors hypothesized that the occurrence of ES in a TB patient may be due to the production of antibodies against blood cells by lymphocytes in response to the tubercular pathogen. Molecular mimicry involving unknown antigens of tubercular bacilli and platelet surface antigens could be responsible for the thrombocytopenia seen in ES patients with TB. 13 Kim et al reported a case of tuberculosis cutis orificialis in a patient with pre-existing ES. They discussed the possibilities of impaired cellular immunity and the long-term use of immunosuppressive medications in ES as predisposing factors for TB. 14 Hence, ES could predispose to TB and vice versa. In our case, the patient presented with symptoms of TB and ES concurrently. It is possible that a common, yet to be determined, pathophysiological denominator could be responsible for the coexistence of these two seemingly disparate conditions.

Frequent relapses characteristic of ES make its treatment an uphill task. There have been no randomized controlled trials comparing the effectiveness of different modalities of treatment for ES. Corticosteroids have been the mainstay of treatment based on studies from small cohorts, although frequent relapses have been reported. Pui et al, in their study cohort, reported remission with corticosteroid therapy in all six children who required treatment. However, relapse was reported during viral infections or on tapering of the corticosteroid dose. 5 Those who fail to respond or require a high dose of corticosteroids have reportedly been treated with IVIG. 15 Other treatment options that have been used for refractory cases include immunosuppressants, blood transfusion, splenectomy, and hematopoietic stem cell transplant. 2 Further research is needed to unravel the pathophysiology behind the concurrence of ES and TB, so that the common pathophysiological culprit, if any, could be targeted. As mentioned above, there are no well-validated guidelines for the treatment of ES and, by extrapolation, for concurrent ES and TB. The coincidence of TB with ES adds a layer of complexity, as the treatment of ES may exacerbate TB.
Furthermore, the manner in which antitubercular medications, with their nuclear targets, affect the course of ES remains to be studied. Thus, there is a significant knowledge gap in our understanding of this condition. The impetus for research, the locus of which is primarily situated in developed countries, could be dampened as TB is mostly a third world problem. Furthermore, the cost-effectiveness of therapy should also be an important consideration while devising therapeutic interventions for concurrent TB and ES.
| CONCLUSION
We report a rare case of co-occurrence of TB and ES. The exact pathophysiology behind the concurrence remains to be elucidated. We believe that ES and TB could be predisposing factors for one another, or there may be a common pathophysiological denominator for the co-occurrence. Further research is needed for a better understanding of pathophysiology and treatment.
ACKNOWLEDGMENT
Published with written consent of the patient.
|
v3-fos-license
|
2019-01-18T14:13:06.310Z
|
2019-01-01T00:00:00.000
|
58244603
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "c33dab5f23b58188041ee1c175def1f909a5d8f4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43046",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "c33dab5f23b58188041ee1c175def1f909a5d8f4",
"year": 2019
}
|
pes2o/s2orc
|
False Negativity of Tc-99m Labeled Sodium Phytate Bone Marrow Imaging Under the Effect of G-CSF Prescription in Aplastic Anemia: A Case Report
Granulocyte colony-stimulating factor (G-CSF) is a hematopoietic cytokine which controls the differentiation and growth of hematopoietic cells in the bone marrow. We report a severe aplastic anemia (SAA) patient with false-negative 99mTc sodium phytate bone marrow imaging findings under concurrent G-CSF therapy. The first bone marrow imaging showed a normal bone marrow activity. However, the bone marrow biopsy pathology report revealed a lack of hematopoietic cells. Furthermore, the complete blood count indicated severe pancytopenia resulting in the diagnosis of aplastic anemia (AA). A second marrow scan implemented after the stoppage of G-CSF showed an abnormal bone marrow activity, which matched the pathology reports. Accordingly, the concurrent administration of G-CSF was considered as the cause of false-negative bone marrow imaging findings obtained in the first scan. Consequently, it should be kept in mind that a 99mTc sodium phytate bone marrow scintigraphy during the concurrent administration of G-CSF may lead to the achievement of false negative results because it induces changes in bone marrow mimicking a normal marrow scan in patients with AA.
Introduction
Granulocyte colony-stimulating factor (G-CSF) is a hematopoietic cytokine which controls the differentiation and growth of hematopoietic cells in the bone marrow (1). G-CSF is often used to treat neutropenia associated with aplastic anemia (AA), thereby shortening the length of hospitalization. This blood growth factor affects not only neutrophils but also mononuclear macrophages. Moreover, it influences bone marrow imaging, which is based on the uptake of radioactive tracer by mononuclear macrophages. Herein, we present a severe aplastic anemia (SAA) case with false-negative 99mTc sodium phytate bone marrow imaging findings obtained during concurrent G-CSF therapy.
Case report
A 28-year-old male presented to the hospital with a history of fatigue for 8 days, fever for 7 days, bleeding nose and gums for 6 days, and hematuria for 2 days. These signs and symptoms had no obvious cause. His past medical history and physical examination were unremarkable with the exception of patchy bleeding seen in both lower limbs.
On admission, the complete blood count of the patient showed severe pancytopenia (Table 1). Bone marrow biopsy demonstrated no hematopoietic cells ( Figure 1); furthermore, no reticular fiber was seen (MF-0) on reticular fiber staining. Bone marrow aspiration report revealed bone marrow suppression and AA.
In addition, bone marrow smear and T-cell subset analysis showed CD4+CD3+ lymphocytes of 33.24%, CD8+CD3+ lymphocytes of 24.38%, and a CD4:CD8 ratio of 1.36. The results of flow cytometry were as follows: 1) a low proportion of CD34+ cells (0.08% of nucleated cells), 2) a significant decrease in the proportion of neutrophils and monocytes, 3) a significant reduction in the proportion of nucleated red blood cells, 4) a significant elevation in the proportion of lymphocytes, and 5) no phenotypic abnormalities. Finally, the diagnosis of SAA was made based on the patient's history, laboratory results, and marrow biopsy and aspiration.
The patient was put on cyclosporine and androgen for the management of the primary disease, and G-CSF was administered concurrently. The first bone marrow scan, performed during G-CSF therapy, showed normal bone marrow activity. Table 2 presents the grading of bone marrow activity and its clinical significance in detail. This marrow scan did not match the pathological and laboratory findings. The G-CSF was eventually stopped, together with the other medications. A second bone marrow scan was performed 4 months after the first marrow imaging. The result was abnormal and corresponded with the pathological and laboratory findings: there was only slight uptake of the radioactive tracer in the bone marrow, the radioactivity of the marrow was only slightly higher than the background of the surrounding soft tissue, and the marrow outline was not clear, indicating mild to moderate bone marrow suppression.
Discussion
Severe aplastic anemia is a bone marrow disease in which stem cells are damaged, resulting in hematopoietic cell deficiency (1). Hematopoietic growth factors are marrow regulators that support the growth and differentiation of hematopoietic cells and the function of mature hematopoietic cells (2)(3)(4). In clinical trials, G-CSF was reported to cause a transient increase in neutrophil count and was beneficial for the management of complicated bacterial and fungal infections in AA patients (4).
The G-CSF promotes the proliferation, differentiation, and maturation of myeloid hematopoietic progenitor cells, and regulates the proliferation and differentiation of neutrophil cell lines. Moreover, G-CSF is a powerful stimulator and activator for monocytes and macrophages. 99m Tc sodium phytate is the radiotracer of bone marrow imaging in SPECT. When the tracer is injected into the body, it combines with Ca 2+ in the blood to form a phytin colloid. 99m Tc phytin colloid could be absorbed by mononuclear macrophages. The administration of G-CSF significantly enhances mononuclear macrophages in the bone marrow, which signifies an increase in hematopoietic activity. Regarding this, the amount of the absorbed imaging agent reflects the hematopoietic function of the bone marrow. Therefore, these changes in the bone marrow can be detected by bone marrow scintigraphy.
Multiple reports and case studies have defined a similar diagnostic impasse with marrow stimulation on PET imaging (5-7). The G-CSF leads to the reconversion of the fatty bone marrows to the hematopoietic marrows. This reconversion is presumably attributable to the residual hematopoietic cells stimulation in the predominantly fatty marrow (8). Differentiation of these changes without the knowledge of the bone marrow changes due to G-CSF administration might be problematic.
Our patient's laboratory results showed severe pancytopenia. Furthermore, the bone marrow biopsy pathology report revealed the lack of hematopoietic cells or reticular fibers, which is a typical presentation of AA. These findings did not match the first marrow scan, which was considered as a false negative result due to the effect of the concurrent administration of G-CSF. Furthermore, the follow-up scan performed 4 months after the stoppage of G-CSF matched the laboratory and pathological findings.
Figure 3. Bone marrow imaging after suspending granulocyte colony-stimulating factor treatment. Bone marrow was slightly clear and slightly higher than the peripheral soft tissues; the silhouette was not clear, indicating mild to moderate bone marrow function inhibition. These findings correlate with level 1 of the standard bone marrow activity.
In conclusion, it should be kept in mind that 99m Tc sodium phytate bone marrow scintigraphy during the concurrent administration of G-CSF in AA patients may result in a false negative finding. Therefore, the conceptualization of the G-CSF mechanism of action within the bone marrow is a matter of significant importance for the accurate interpretation of bone marrow scintigraphy images. Radiologists and nuclear medicine physicians must be aware of any sorts of treatments the patient was on preceding any scan in order to avoid the misinterpretation of bone marrow scintigraphy images.
|
v3-fos-license
|
2019-07-03T13:05:27.052Z
|
2019-06-20T00:00:00.000
|
195771892
|
{
"extfieldsofstudy": [
"Medicine",
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.langmuir.9b01063",
"pdf_hash": "5cc184b70b840ae370f0825740d013f8968a2fde",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43047",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Physics"
],
"sha1": "482eb17ee66d3072f3d966bdaf130baf25739525",
"year": 2019
}
|
pes2o/s2orc
|
Femtosecond Laser-Structured Underwater “Superpolymphobic” Surfaces
In this work, the surfaces that repel liquid polydimethylsiloxane (PDMS) droplets in water were created by femtosecond laser treatment. We define this superwetting phenomenon as underwater “superpolymphobicity”. The resultant underwater superpolymphobic silicon surface shows a contact angle of 159 ± 1° and a sliding angle of 1.5 ± 0.5° to liquid PDMS droplets in water. This underwater superpolymphobicity can be achieved on a wide range of hydrophilic materials, including semiconductors, glass, and metals. The adhesion between the liquid polymer and a solid substrate is effectively prevented by the underwater superpolymphobic microstructures. The underwater superpolymphobicity will have a great significance in designing the adhesion between the polymer and a solid substrate, controlling the shape of the cured polymer materials, as well as nearly all the applications based on the polymer materials.
■ INTRODUCTION
In recent years, superhydrophobic and superoleophobic surfaces have attracted increasing interest because of their broad applications in liquid repellence, 1 self-cleaning coating, 2,3 droplet manipulation, 4−6 oil/water separation, 7−11 submarine drag reduction, 12 antifogging/icing, 13−16 anticorrosion, 1 water harvesting, 17,18 cell engineering, 19−21 lab chip, 22,23 and so on. A wide range of superhydrophobic and superoleophobic materials have been developed by the combination of proper surface microstructure and chemical composition. 24−31 However, natural or artificial superhydrophobic and superoleophobic surfaces can only repel either water solutions or oils, but not polymers. Compared to water and oil solutions, liquid polymers usually have a more complex composition, lower fluidity, and higher viscosity. Moreover, many liquid polymers can be transformed into the solid state, different from water and oil liquids. There are few surfaces that can repel liquid polymers. 24 −27 Polymers have been widely used in various manufacturing industries, agriculture, national defence, and our daily lives. Some polymers have liquid states. For example, the uncured polydimethylsiloxane (PDMS) mixture of prepolymer and curing agent is in the liquid state. 32,33 After curing at high temperature, it solidifies and its shape is formed permanently. Preventing the adhesion between liquid polymers and a solid substrate is important in polymer casting industry, polymer preparation, and three-dimensional printing technology. Following the definition of "super-hydro-phobicity" and "super-oleo-phobicity", we coin a new term "super-polym-phobicity" ("polym" is usually short for "polymer") to characterize that the contact angle (CA) of a liquid polymer droplet on a solid substrate is larger than 150°. However, the fabrication of superpolymphobic surfaces still remains a great challenge.
In this paper, hierarchical micro-and nanostructures were prepared on a wide range of materials by femtosecond laser processing, including silicon, glass, stainless steel, Al, and Cu. The wettabilities of underwater liquid PDMS droplets on the laser-structured surfaces were investigated. The resultant surface shows excellent underwater superpolymphobicity and has the ability to repel liquid PDMS in water. Such underwater superpolymphobicity is caused by the underwater Cassie wetting behavior between PDMS droplets and surface microstructures.
■ EXPERIMENTAL SECTION
Femtosecond Laser Treatment. Femtosecond laser processing is widely applied in the formation of micro/nanoscale structures on solid substrates and in controlling the wettability of material surfaces. 34−43 A Ti:sapphire femtosecond laser system was utilized to induce micro/nanostructures on the surface of different substrates, including silicon, glass, stainless steel, Al, and Cu. The experimental setup for ablating a sample surface by femtosecond laser is shown in Figure 1a. The sample, with an initially smooth surface, was mounted on a program-controlled translation stage. The laser beam (with a pulse width of 67 fs, a center wavelength of 800 nm, and a repetition rate of 1 kHz) was vertically focused onto the front surface of the samples by a plano-convex lens (focal length of 250 mm) in air. The size of the focused laser spot was about 100 μm. The typical line-by-line laser scanning manner was used (Figure 1b). The laser power, the scanning speed, and the spacing of the laser-scanning lines were set constant at 500 mW, 2.5 mm s−1, and 60 μm, respectively, in this experiment. The femtosecond laser-treated samples were finally cleaned with alcohol and distilled water.
Characterization. The surface morphology of the samples after femtosecond laser treatment was observed by a scanning electron microscope (S-4100, Hitachi, Japan) and a laser confocal microscope (VK-9700, Keyence, Japan). The wettabilities of in-air water droplets and underwater liquid PDMS droplets (∼10 μL) on the sample surfaces were investigated with a contact-angle measurement system (SL2000KB, Kino, America). Regarding the underwater wettability, the samples were fixed in a man-made glass container which was filled with distilled water. The uncured liquid PDMS was prepared by mixing the PDMS prepolymer and curing agent (v/v = 10:1) (DC-184, Dow Corning Corporation). Abundant nanoparticles with diameters of a few tens of nanometers decorate the surface of the microprotuberances (Figure 2c,d).
■ RESULTS AND DISCUSSION
The formation of the micro/nanoscale hierarchical structures is ascribed to the material removal and particle resolidification during femtosecond laser ablation. 33,44−46 When the femtosecond laser pulses are focused onto a sample surface, part of the laser energy is directly absorbed by electrons via the nonlinear effect, such as multiphoton absorption and avalanche ionization. Some energy is further transferred from the electrons to the lattice until the thermal equilibrium between electrons and ions occurs. A hightemperature/pressure plasma forms above the sample surface. As the plasma expands and bursts out of the laser-ablated spot, the sample surface will be strongly damaged. The material at the laser-focused point is removed with the plasma burst and is sputtered above the substrate in the form of ejected particles. This process usually leads to a microscale rough structure on the substrate. As the nanoscale-ejected particles that are at the molten state fall back to the sample surface and resolidify, the nanoparticles finally coat over the surface of the laser-ablationinduced microstructures, resulting in a kind of micro/ nanoscale binary structures.
The wettabilities of water droplets and underwater liquid PDMS droplets on the sample surfaces were investigated by measuring the CA and sliding angle (SA). The untreated flat silicon is inherently hydrophilic, with a water CA (WCA) of 44 ± 3° to a small water droplet (Figure 3a). Once a water droplet was dispensed onto the laser-structured surface, the droplet spread out quickly. The measured WCA is about 0° (Figure 3b), demonstrating the superhydrophilicity of the textured surface. The hydrophilicity of the silicon surface is enhanced by the laser-induced microstructure because a rough microstructure has the ability to amplify the natural wettability of a substrate. 24−27 The flat silicon surface shows ordinary polymphobicity, with a polymer CA (PCA) of 141.5 ± 2.5° and high adhesion to a liquid PDMS droplet in water (Figure 3c); the uncured liquid PDMS droplet can stick to the flat surface. Regarding the laser-treated surface, an underwater PDMS droplet could keep a spherical shape on the surface (Figure 3d). The PCA is measured to be 159 ± 1° and the CA hysteresis is only 5 ± 1.8°. When the sample was tilted by just 1.5 ± 0.5°, the PDMS droplet could slowly roll away (SA = 1.5 ± 0.5°) (Figure 3e). The results indicate that the laser-ablated silicon surface exhibits underwater superpolymphobicity and very low adhesion to the liquid PDMS droplet; that is, the laser-ablated surface strongly repels liquid PDMS in water.

A wetting model between the liquid PDMS droplet and the laser-structured surface is proposed to explain the underwater superpolymphobicity of the laser-structured surface. The wettability of a liquid droplet on a flat solid substrate is generally explained by Young's model. 47 Figure 4a shows the wetting state of a solid/PDMS/water three-phase system. The PCA (θ_PW) of an underwater polymer droplet on the flat surface can be expressed by

cos θ_PW = (γ_PA cos θ_P − γ_WA cos θ_W) / γ_PW,    (1)

where γ_PA, γ_WA, and γ_PW are the free energies of the polymer/air, water/air, and polymer/water interfaces, respectively, and θ_P and θ_W are the CAs of polymer and water droplets in air. The liquid PDMS has a much smaller surface tension than water (γ_PA ≪ γ_WA), so the values of cos θ_P and cos θ_W are both positive, and γ_PA cos θ_P − γ_WA cos θ_W is negative. 47 From eq 1, it can be predicted that a flat silicon surface presents polymphobicity underwater.
Regarding the laser-ablated surface with micro/nanoscale structures, water can fully wet the microstructures because of the superhydrophilicity and occupies the whole space between the surface microstructures when the sample is dipped into water. The water layer trapped in the microstructures provides a repulsive force to the PDMS droplet because of the insolubility between water and liquid PDMS. The trapped water cushion allows the liquid PDMS droplet to touch only the peaks of the surface microstructures; in fact, the liquid PDMS rests on a composite solid−water interface. The contact model between the underwater PDMS droplet and the rough surface microstructure agrees well with the underwater Cassie state (Figure 4b). 47 Therefore, the laser-structured silicon surface exhibits underwater superpolymphobicity. The high PCA (θ_PW*) of an underwater polymer droplet on the textured silicon surface can be expressed by

cos θ_PW* = f cos θ_PW + f − 1,    (2)

where f is the projected area fraction of the polymer touching the surface microstructures and θ_PW is the PCA on a flat surface underwater. From eq 2, f can be calculated as 0.306 based on the measured values of the CAs (θ_PW* = 159°, θ_PW = 141.5°), demonstrating that the underwater liquid PDMS droplet is in contact with only a small area of the laser-induced surface microstructures.
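As a quick numerical check of eq 2 (written above in the standard underwater Cassie form, since the typeset equation did not survive text extraction), the short sketch below recovers the reported contact fraction from the two measured contact angles; the function name is illustrative.

```python
import math

def cassie_area_fraction(theta_rough_deg, theta_flat_deg):
    # Solve cos(theta*) = f*cos(theta) + f - 1 for the contact fraction f.
    ct_rough = math.cos(math.radians(theta_rough_deg))
    ct_flat = math.cos(math.radians(theta_flat_deg))
    return (ct_rough + 1.0) / (ct_flat + 1.0)

print(round(cassie_area_fraction(159.0, 141.5), 3))  # prints 0.306
```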
Femtosecond laser pulse has two unique characteristics: ultrashort pulse width and ultrahigh peak intensity. Such features endow the femtosecond laser with the ability to ablate almost all of the known materials, so various hierarchical microstructures can be easily created on the surfaces of different material substrates through one-step femtosecond laser ablation. 34−37 In addition to the silicon surface, underwater superpolymphobicity can also be achieved on a wide range of other hydrophilic materials by femtosecond laser treatment. For example, Figure 5a−d shows the surface microstructures of laser-ablated glass, stainless steel, Al, and Cu substrates. Those materials are intrinsically hydrophilic (Figure 5e−h) and become superhydrophilic after laser treatment. When the laser-structured samples are submerged in water and liquid PDMS droplets are dispensed onto the sample surfaces, all the PDMS droplets are spherical with the PCA higher than 150° (Figure 5i−l). Therefore, the hydrophilic substrates exhibit excellent underwater superpolymphobicity after femtosecond laser ablation.
Different from water and oil liquids, many liquid polymers such as PDMS can be cured and become a solid state. We can selectively design the adhesion between liquid polymer and solid substrate or change the shape of the liquid polymer by using superpolymphobic microstructures. The shape of the liquid polymer will be fixed permanently when it solidifies, e.g., liquid PDMS can be cured at high temperature. Therefore, superpolymphobicity of the laser-induced microstructures can be applied to control the shape of cured polymer materials and enable designing the polymer−substrate adhesion.
■ CONCLUSIONS
In conclusion, underwater superpolymphobicity was achieved on various hydrophilic substrates by simple femtosecond laser processing, including semiconductors, glass, and metals. Femtosecond laser ablation endows the silicon surface with hierarchical micro/nanostructures. The liquid PDMS droplet on the resultant surface has a PCA of 159 ± 1° and an SA of 1.5 ± 0.5° in water, demonstrating that the laser-structured surfaces show excellent underwater superpolymphobicity and extremely low adhesion to the underwater PDMS droplet. The adhesion between the liquid polymer and a solid substrate can be effectively prevented by the underwater superpolymphobic microstructures. The underwater superpolymphobicity results from the underwater Cassie wetting state between the liquid PDMS droplet and the laser-induced surface microstructure. Following the formation mechanism of underwater superpolymphobicity and the building principle of underwater superpolymphobic microstructures reported in this paper, we believe that underwater superpolymphobicity can also be achieved on material surfaces by using various microfabrication methods besides laser processing. The current research will have wide potential applications in reducing the polymer/solid adhesion and controlling the shape of polymer materials.
|
v3-fos-license
|
2020-06-04T09:05:41.502Z
|
2020-05-28T00:00:00.000
|
219710981
|
{
"extfieldsofstudy": [
"Geology",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-4292/12/11/1733/pdf",
"pdf_hash": "bafa96aa11293075fb2fcd54eff9fec002576909",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43048",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "2e358e55184d3fe91fc4fc9f77b1b1f16f1d02b3",
"year": 2020
}
|
pes2o/s2orc
|
UAV Applications for Determination of Land Deformations Caused by Underground Mining
This article presents a case study that demonstrates the applicability of unmanned aerial vehicle (UAV) photogrammetric data to land surface deformation monitoring in areas affected by underground mining. The results presented include data from two objects located in the Upper Silesian Coal Basin in Poland. The limits of coordinate and displacement accuracy are determined by comparing UAV-derived photogrammetric products to reference data. Vertical displacements are determined based on differences between digital surface models created using UAV imagery from several measurement series. Interpretation problems related to vegetation growth on the terrain surface, which significantly affects vertical displacement error, are pointed out. Horizontal displacements are determined based on points of observation lines established in the field for monitoring purposes, as well as based on scattered situational details. The use of this type of processing is limited by the need for unambiguous situational details with clear contours. Such details are easy to find in urbanized areas but difficult to find in fields and meadows. In addition, various types of discontinuous deformations are detected and their development over time is presented. The results are compared to forecasted land deformations. As a result of the data processing, it has been estimated that the accuracy (RMS) of the determination of XY coordinates and horizontal displacements is, in the best-case scenario, on the level of 1.5–2 GSD, and about 2–3 GSD for heights and subsidence.
Introduction
Surface deformations caused by underground mining operations constitute a significant source of danger to building structures [1][2][3][4][5]. For this reason, they have been subject to geodetic observations for many decades. These observations allow for monitoring of their scope and scale and, if necessary, preventive action in the interest of public safety. Monitoring methods have changed as science and technology have advanced. Classical geodetic measurements aimed at identifying deformation indicators are carried out using observation lines along selected terrain profiles [6,7]. They are usually performed quite precisely using fixed points, which is expensive and time consuming. The need to minimize financial and time expenditures means that the number of such profiles observed is usually small and their location is often adjusted to fit the existing road network. Despite their advantages, the results of such measurements offer only a limited picture of deformation development.
that the methods offer similar precision levels when used to determine mining area subsidence. However, UAV technology prevails over the others in terms of measurement and data processing speed [27,28]. In practical terms, it turns out to be the most beneficial. UAV technology also allows observation of short-term dynamic land surface changes [29]. The frequency of UAV missions depends only on the detectable deformations expected. Research conducted thus far indicates that the UAV-derived point cloud obtained can be successfully used to determine subsidence and other parameters from the land surface deformation model [30]. In order to increase the precision of the resulting digital terrain model (DTM), point classification and filtering were performed using an algorithm [31]. The algorithm removed areas covered with dense vegetation and filled empty areas via interpolation using a flattening function.
UAV technology is already widely used to inventory open-pit mines [32,33]. UAV photogrammetric data allow one to determine the volume of material exploited and the stabilities of slopes. The concept of georeferencing a model generated from images is also interesting [32]. Ground control points (GCP) were selected based on the point cloud obtained via TLS. The scale-invariant feature transform (SIFT) algorithm was used to search for points that the UAV orthomosaics and TLS point cloud had in common. This approach allows the use of points located in dangerous areas (steep-sloped open-pit mines) to georeference UAV products using TLS, thus minimizing the number of points designated in the field via GNSS.
Land surface deformation can be caused by factors other than mining. Often, this phenomenon is caused by natural factors such as (mass) landslide movements [34][35][36][37][38][39]. Most studies have not focused on analysis of land surface displacements, but rather have analyzed earth movement from point clouds generated using data from several measurement series.
The research presented in this paper describes the spectrum of possibilities for use of UAV technology to monitor areas affected by mining operations. As in previous studies, field measurements were performed using both the tested technology and reference methods. In addition to subsidence analysis, this study contains a proposed methodology for determining horizontal terrain displacements and a comparison of analytical results to those from theoretical modeling. Development targets and ideas for adaptation of existing techniques are also proposed in order to increase the applications for and improve the use of UAVs. The main purpose of the study was to assess the accuracy of determining coordinates and displacements of points of the land surface using UAV. The study also made it possible to find strengths and weaknesses of the UAV photogrammetry and the limitations of its use in determining the displacement field.
Materials and Methods
The data presented contain observations from two independent study areas located in Poland within the Upper Silesian Coal Basin (USCB). Research in both areas has been carried out for many years [7]. The results of this research allow for an unambiguous assessment that the main source of land deformations is underground mining. Observations of both areas were performed using UAVs and classic surveying techniques such as static and kinematic GNSS measurements [40], total station measurements, and precise leveling.
The first research area contained demolished buildings from the now defunct coal mine in Piekary Śląskie. This area was affected by the underground mining of hard coal deposits once found within the protective pillar of the mine buildings. The observed impacts were difficult to associate with current mining operations due to the simultaneously revealed effects of undocumented, historical underground mining of shallow zinc and lead ore deposits. This undocumented mining activity caused large anomalies, including the appearance of discontinuous deformations in several areas.
The second data set was related to the Jaworzno area, which included an underground hard coal mining area. Studies of land surface deformation have been conducted in this area by the co-authors of the paper since 2009 [41]. In the analyzed period, i.e., starting in April 2016, the deformations were caused by the operation of three longwalls in two seams, including seam A at a depth of 607 m under the terrain surface. The data set also includes two measurement series carried out in Jaworzno, denoted as Jaworzno 04.2016 and Jaworzno 02.2020. The UAV used to conduct the photogrammetric flights in the Piekary area and for Jaworzno 04.2016 was a DJI S1000 octocopter with an A2 flight controller manufactured by the DJI Company (Shenzhen, China). The platform was fitted with a Sony ILCE A7R camera equipped with a Sony Zeiss Sonnar T* FE 35 mm F2.8 ZA lens (Tokyo, Japan) whose position was stabilized by a Zenmuse Z15-A7 gimbal (DJI, Shenzhen, China). The sensor used in the digital camera was 35.8 mm × 23.9 mm in size with a resolution of 7360 × 4912 px.
A UAV BIRDIE (Fly Tech UAV, Krakow, Poland) was used to conduct the photogrammetric flights in Jaworzno 02.2020. It was a fixed-wing system with a GNSS PPK (postprocessing kinematic) module for high-accuracy positioning [42]. The platform was fitted with a Sony DSC-RX1RM2 camera equipped with a Carl Zeiss Sonnar T* 35 mm F2 lens. The sensor used in the digital camera was 35.9 mm × 24.0 mm in size with a resolution of 7952 × 5304 px. A double perpendicular grid was used for flights in Jaworzno 02.2020 to increase photo overlap.
The photogrammetric mission plans were prepared after considering the specifications of the surveying equipment used, the site characteristics, the target ground sample distances (GSDs), and on-board UAV instrument errors, which were diagnosed [43]. Forward and side overlaps and flight altitudes were determined based on these parameters (Table 1). Because of the UAV flight duration, the missions were divided into parts. The GCP coordinates were obtained using the GNSS real-time kinematic (RTK) method. The 3D accuracy of reference point coordinate determination was estimated, based on the instrument specification, at 20-30 mm. The time dedicated to fieldwork did not exceed 1 day in any case (Table 1), including time spent establishing and measuring the GCPs. After image alignment, outliers were removed using the gradual selection tool in the software. Optimization was performed in the next stage, including a realignment of the aerotriangulation block and determination of camera calibration parameters. The root mean square (RMS) spatial errors for the GCPs are presented in Table 1. A dense point cloud was generated at the high-detail level, which means that the software algorithm sought to determine spatial coordinates for each group of four pixels in the image (2 × 2 pixels).
Piekary Site
In order to assess the accuracy of UAV-derived coordinate determination, 34 check points from the Piekary 11.2016 measurement series were measured using GNSS RTK. These check points were located along the P profile that runs in the west-east direction along the railway track. This route is marked with blue dots in Figure 1. Check point coordinate determination accuracy was estimated to be about 1 cm in the XY plane and about 1.5 cm for height, based on the instrument specifications.
The Jaworzno Site
For the Jaworzno study area, 63 points were included in the reference measurement ( Figure 2). Of these, eight were measured in two series (04.2016 and 02.2020) and the others were measured only in the 02.2020 series. The points measured in both series are marked in red and identified as W1-W8 in Figure 2, while the remaining points are light blue. The distance between points on the observation lines is approximately 25 m for most points but 50 m in a few cases.
For the Piekary Śląskie study area, the coordinates of points within the network of observation lines shown as red dots in Figure 1 were used as reference data when estimating the displacement determination accuracy. The observations used to determine reference point coordinates were obtained in two measurement series performed in parallel with the UAV missions marked Piekary 03.2016 and Piekary 09.2016. There were 93 reference points within the range of the UAV missions. Horizontal distances between reference points in the observation line network were approximately 25 m along traffic routes and 50 m in green areas.
The horizontal coordinates of the points that constituted this observation network were determined based on measurements performed using a precision total station (Leica TCRA 1102+, Canton St. Gallen, Switzerland). The total station measurements were tied to the points determined via static GNSS measurements performed using two Active Geodetic Network--European Position Determination System (ASG-EUPOS) network reference stations. The measurement vector lengths did not exceed 20 km. The RMS error in determining the horizontal coordinates of network points never exceeded 10 mm, and was less than 4 mm in most cases.
Reference point heights were determined with an accuracy of 1-2 mm (RMS error) using precision leveling (Leica DNA 03) tied to points beyond the range of the impact of mining operations. The set of reference points could be divided into 58 points located in urban areas (asphalt, concrete, and cobblestones) and 35 points located in areas covered by vegetation (meadows, green areas, wooded areas). The estimated accuracy (RMS) of displacement component determination for each point was not worse than 15 mm in the XY plane and ~3-4 mm for height.
The Jaworzno Site
The coordinates of the reference points in the Jaworzno 02.2020 series were determined based on GNSS RTK measurements. In addition, angular-linear measurements using a total station (Geodimeter 650 Pro, Zeiss, Oberkochen, Germany) and precision leveling (Zeiss DiNi 012, Sunnyvale, California, USA) were performed at each point (Figure 2). The base station used for the GNSS RTK measurements was tied via static GNSS measurements to two ASG-EUPOS network reference stations (KATO, KRA1), located ~23 km and ~44 km from the measured point, respectively. As a result of this adjustment, the position determination uncertainty (RMS) for the reference points was estimated to be ~1.5-2 cm for both the XY plane and height. The coordinates of the reference points in the Jaworzno 04.2016 series were determined based only on the GNSS RTN measurements conducted in reference to the ASG-EUPOS network. The estimated accuracy (RMS) for these coordinates was about 2-3 cm in the XY plane and about 3-4 cm for height.
The estimated accuracy (RMS) of displacement component determination is not worse than 3-4 cm in the XY plane and about 4 cm in the vertical plane for each point.
Determining and Estimating the Accuracies of Coordinates Obtained Using Unmanned Aerial Vehicles (UAVs)
When acquiring and processing data from UAVs, it is assumed that each detail is visible on a minimum of three photos. However, most objects are visible on a much larger number of photos due to their large front and side overlaps. Thanks to direct measurement of pixel coordinates on many photos, it is possible to determine 3D coordinates from the intersection of several rays that converge at the measured point, i.e., using a photogrammetric intersection. This approach to determining point coordinates is called AERO in this article. However, most UAV photogrammetry users use DSMs and orthogonal processes that form orthomosaics instead of performing measurements of photos. Thus, it is possible to obtain horizontal coordinates from orthomosaics and heights from DSMs. This method of determining coordinates from UAV measurements will hereinafter be referred to as ORTO.
The advantages of the ORTO measurement are the speed and ease of measurement, because any geographic information system (GIS) can be used for this purpose. In general, it is always worth measuring orthomosaics because many images are used for each pixel when they are created. The AERO measurement mentioned earlier is theoretically more accurate because its data processing path is shorter than that of the ORTO method. Its error budget includes errors in the image orientation elements obtained via aerotriangulation and errors in the identification and measurement of points on the images. From this point of view, ORTO measurements should produce less accurate results than AERO measurements because they are performed using a data source of lower quality than the photos. In addition, the accuracy of the ORTO method depends on the quality of the dense point cloud from which the DSM is generated and that is used to orthorectify the UAV images.
Coordinates of points from UAV data used in the presented research were obtained via both ORTO and AERO. This allowed us to compare the actual accuracies of both methods using the observation data and to make decisions regarding data processing optimization in later parts of the study. The accuracy analysis included a comparison of the errors of the two methods (ORTO and AERO), as determined by comparing their results to the coordinates determined via reference methods.
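For point sets expressed in the same reference system, the RMS comparison against the reference coordinates amounts to a few lines of code; the sketch below is a generic illustration (array names are assumptions) that can be run once for the ORTO results and once for the AERO results.

```python
import numpy as np

def rms_errors(uav_xyz, ref_xyz):
    """RMS differences between UAV-derived and reference coordinates;
    both inputs are N x 3 arrays (X, Y, Z) over the same check points."""
    diff = np.asarray(uav_xyz, dtype=float) - np.asarray(ref_xyz, dtype=float)
    rms_xy = np.sqrt(np.mean(diff[:, 0] ** 2 + diff[:, 1] ** 2))  # horizontal
    rms_h = np.sqrt(np.mean(diff[:, 2] ** 2))                     # height
    return rms_xy, rms_h
```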
For the Piekary 11.2016 site, the analysis was based on the coordinates of 34 points measured via RTK GNSS. Most of these points were located along the railway embankment and thus on hard ground not covered by vegetation ( Figure 1, blue dots). In the case of the Jaworzno site, an analogous analysis of coordinate determination accuracy was performed using the data from the Jaworzno 02.2020 series. In total, 63 points divided into two sets were used for the analysis: 28 points were on hard ground and 35 points were in fields ( Figure 2, blue and red dots).
Determining and Estimating the Accuracies of Displacements Obtained Using UAVs
In order to estimate the uncertainties of spatial (3D) displacements determined based on UAV measurements, their values were compared to reference values. The analysis included the 93 reference points at the Piekary site and the eight points measured in both series at the Jaworzno site. Only displacements were compared; point coordinates from the individual series were not compared. This seems justified, given the purpose of these types of measurements. In addition, not all points were visible in the UAV images (in the case of the Piekary site). In such cases, displacements of characteristic points located not more than 5 m from the measurement point were determined. Characteristic points included sewage wells, lines separating road lanes, kinks in curbs, etc.
In order to highlight the potential of the UAV photogrammetric method for both research sites, many situational details located far from existing observation lines were also identified and their horizontal displacements determined. These points generally formed an even grid over the entire research site. They were selected in such a way that they were clearly identifiable and visible on images collected for all observation series.
Identifying Discontinuous Deformations
Detection of discontinuous deformations at the Piekary site was performed via visual analysis of DSM models, DSM differential models, and orthomosaics. First, the DSMs of the first and last measurement series (03.2016 and 04.2017) were compared. Areas that indicated discontinuous deformations were verified using orthomosaics for exclusion of possible vegetation influence. Then, we searched for these discontinuous deformation areas in the intermediate series (09.2016 and 11.2016).
In the case of the Jaworzno site, extensive discontinuous deformation was identified during a field inspection at the beginning of 2009. This was confirmed via measurement of spatial observation network point displacement near the deformation. At that time, it was not possible to determine the course of this deformation and its changes over time. In 2016, UAV mission results made this analysis possible. The development of DSM provided the basis for determining the location and extent of discontinuous deformation (imaging of spatial data via the shaded relief method). Obtaining the subsidence of the ground surface based on subsequent UAV missions (04.2016 and 02.2020) made it possible to study further deformation geometry changes and their impacts on development of continuous terrain surface deformations.
Assessment of Point Coordinate Determination Accuracy
Analysis of the results indicates that the RMS accuracy of horizontal coordinate determination is approximately 3-5 cm at the Piekary site (Table 2) and 1.5 cm at the Jaworzno site. There is a clear correspondence between accuracy and the size of the GSD, which is about twice as large at the Piekary site as at the Jaworzno site (Table 1). It is also worth noting the lack of correlation between land cover and horizontal coordinate accuracy: the accuracies of points located in fields and on asphalt are similar. The situation is different for height determination, where land cover is of some importance. For the Jaworzno site, the height determination errors for points located on asphalt (~2 cm) are noticeably smaller than those for points located on dirt roads (~3 cm). The accuracy differences are small but noticeable. This is probably closely related to the season in which the measurement was performed (winter) and the associated vegetation conditions.
Comparison of the accuracies of the heights of points obtained in the two research areas reveals a significant dependence on the GSD, as is noted with horizontal coordinates. The height errors are larger at the Piekary site (about 3 cm) than for comparable land cover (asphalt) at the Jaworzno site (about 2 cm).
Further analysis of the results summarized in Table 2 shows that there is no improvement in accuracy when using the AERO method of determining coordinates instead of the ORTO method. In some cases, errors obtained via the ORTO method are even smaller than those from the AERO method. For this reason, it was decided to perform further analyses using only the ORTO method.
Determination and Assessment of Point Displacement Accuracy
Given the results of the analysis of coordinate determination accuracy, the analysis of displacement determination accuracy was performed using only the ORTO method. It was performed using the displacement values determined for 93 points from the Piekary site (series 03.2016 and 09.2016) and eight points from the Jaworzno site (series 04.2016 and 02.2020). During development of UAV data from the Piekary site, we were unable to determine displacements for one point located on hard ground and three points located in green areas. This means that a total of 89 points were used to calculate error values for this area; of these, 57 were located on hard surfaces and 32 were in green areas. In the case of the Jaworzno site, the analysis was performed on all points that were observed in both series. According to the methodology applied, if it was impossible to directly indicate a point, an attempt was made to find and determine the displacement of a situational detail located within 5 m. However, it was not always possible to find such a detail. The reference values of displacements at the Piekary site ranged from 0.00 m to −1.80 m vertically and reached 0.50 m horizontally. At the Jaworzno site, reference vertical displacements ranged from −0.17 m to −0.42 m, while horizontal displacements were within the range of 0.12 m to 0.24 m. Appendix A contains Figures A1-A4, which summarize the differences between displacement values determined via the UAV photogrammetric (ORTO) and reference methods.

Table 3 presents the RMS errors of displacement determination for both sites as a function of the land cover on which the analyzed points are located. This parameter reaches approximately 1.5-2 times the GSD (Table 1) for horizontal coordinates and 2-3 times the GSD for heights. This analysis also shows the differences associated with the characteristics of the land cover on which a point is fixed, especially in the case of height determination. The displacement determination accuracy at the Piekary site is noticeably worse in the areas covered by vegetation, which is particularly apparent from the errors of the respective vertical displacements.

Differences between DSMs developed for individual measurement series allow us to determine approximate subsidence value distributions. However, these subsidence values are affected by the influence of vegetation, the significance of which depends on the vegetation type, density, and growth cycle.
Under favorable conditions, such imaging allows us to determine the range of land surface deformation and the maximum subsidence. It also allows us to reveal possible land deformation anomalies. Figure 3 presents differential imagery (DSM differences) of the Piekary site during the periods 03.2016-09.2016, 03.2016-11.2016, and 03.2016-04.2017. The results of this analysis may contain disturbances associated with periodic variability of the vegetation that covers the area. When creating the differential images, data for which apparent uplifts exceeded 25 cm were omitted; these areas are presented in white. This is clearly visible in the image of the developing subsidence basin in the north-western part of the area. Comparing the 03.2016 and 09.2016 series reveals significant gaps in data related to afforestation in the area, which was still producing vegetation gains in September. Comparison of the 03.2016 and 11.2016 series allows a wider analysis due to the partial lack of foliage, while comparing the data obtained from the two series measured in early spring gives the best results. The image of the subsidence basin formed in the south-eastern part of the study area is not significantly disturbed because of the lack of large clusters of high vegetation. Horizontal displacements of selected situational details were also determined for this site; they supplement the observation lines in areas where there are often no clearly identifiable points. It can be assumed that the accuracy of point displacement determination is similar to that presented in Table 3. The directions and lengths of the displacement vectors correspond well to the approximate subsidence values determined via DSM differences.

Figure 4 shows DSM differences for the Jaworzno site (series 04.2016 and 02.2020). As with the Piekary site, the influence of vegetation is visible and dominates the observed subsidence values in most of the analyzed area. As the base and current series were measured in April (spring) and February (winter), respectively, the influence of vegetation is visible here with a sign opposite to that noted at the Piekary site. Vegetation disturbs the subsidence values, increasing them by up to half a meter. This is particularly evident in the eastern part of the area subject to UAV measurement. As the analysis of reference measurements shows that this area is already beyond the reach of underground mining, we can easily identify these errors. The clear, almost rectilinear border between colors visible in the north-western part of Figure 4 is the result of a discontinuous deformation in this part of the area, which manifests as a long "hump". This deformation and its impact are described in detail in subsequent sections. As with the Piekary site, horizontal displacements of selected, generally evenly distributed situational details were identified and determined. The analysis of displacement vector length and direction corresponds well to the approximate subsidence values determined from DSM differences as well as to the mining performed between the series. In this area, it is not possible to locate clear objects in agricultural and forest areas; however, finding such details in urban areas is easy.
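A differential model of the kind discussed above can be obtained by subtracting co-registered DSM rasters and masking implausible values. The sketch below is illustrative only; the 25 cm uplift threshold follows the description above, while the file names and the use of rasterio are assumptions.

# DSM differencing for two measurement series, masking apparent uplifts that
# exceed a threshold (treated as vegetation/matching artefacts, shown as nodata).
# Assumes the two DSMs are co-registered GeoTIFFs on the same grid.
import numpy as np
import rasterio

UPLIFT_THRESHOLD = 0.25  # m, as in the processing described in the text

def dsm_difference(base_path, current_path, out_path):
    with rasterio.open(base_path) as base, rasterio.open(current_path) as cur:
        diff = cur.read(1).astype("float32") - base.read(1).astype("float32")
        diff[diff > UPLIFT_THRESHOLD] = np.nan   # mask implausible uplifts
        profile = base.profile
        profile.update(dtype="float32", count=1, nodata=np.nan)
        with rasterio.open(out_path, "w", **profile) as dst:
            dst.write(diff, 1)

# dsm_difference("dsm_03_2016.tif", "dsm_04_2017.tif",
#                "subsidence_03_2016_04_2017.tif")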
Profiles were generated along the observation lines based on the DSMs created. These are marked with blue lines in Figures 3 and 4 as profiles P and W, respectively. When processing UAV data, the profiles can be used as an additional tool to facilitate the interpretation of results. They also facilitate the assessment of noise levels in selected areas of the DSMs. Figure 5 shows a profile of changes in terrain heights at the Piekary site relative to reference data (points P1-P24). Profile line disturbances related to vegetation are clearly visible and sometimes reach up to ±0.50 m. For most of the profile, the measurement noise associated with the accuracy of the method and the influence of vegetation does not exceed ±0.10 m.
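Profiles such as P and W can be extracted by sampling the (difference) DSM along a straight line between two endpoints. The following sketch illustrates the idea; the endpoints and file name are hypothetical.

# Sample a (difference) DSM along a straight profile between two map points.
# File name and endpoint coordinates are placeholders.
import numpy as np
import rasterio

def sample_profile(raster_path, start_xy, end_xy, n_samples=500):
    """Return (distance_along_profile, value) arrays for a straight profile."""
    xs = np.linspace(start_xy[0], end_xy[0], n_samples)
    ys = np.linspace(start_xy[1], end_xy[1], n_samples)
    dist = np.hypot(xs - xs[0], ys - ys[0])
    with rasterio.open(raster_path) as src:
        vals = np.array([v[0] for v in src.sample(zip(xs, ys))], dtype=float)
    return dist, vals

# dist, dz = sample_profile("subsidence_03_2016_04_2017.tif",
#                           (565000.0, 5585000.0), (565800.0, 5585200.0))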
Figure 5. A profile of Piekary site subsidence along the P profile (marked by the blue line in Figure 1) based on UAV photogrammetric data, compared to reference measurements.

Figure 6 shows Jaworzno site terrain subsidence along the W line (points W1-W8). There is a clear difference between the more regular course of the western part of the subsidence profile and the eastern part, which has larger subsidence value fluctuations. The first part of the profile, which is about 200 m long, runs along a partly asphalted field road, while the remainder runs through an area covered in tall grass.
Figure 6. A profile of Jaworzno site subsidence along the W profile (marked by the blue line in Figure 2) based on UAV photogrammetric data, compared to reference measurements.

Detection of Discontinuous Deformations and Analysis of Their Development over Time

In the case of the Piekary site, the occurrence of discontinuous deformations in the form of sinkholes over voids in the rock mass produced by shallow mining of zinc and lead ores during the 19th century is common. Several such sinkholes were found in the study area during the site visit, while others were found during orthomosaic analysis. Figures 7-10 show the development of two selected discontinuous deformations.

The second of the discontinuous deformations, denoted C-D (Figure 1), is located on the southern edge of the study area. Its development is presented in Figures 9 and 10 in a manner analogous to that used before. There are no signs of a developing discontinuous deformation in the first measurement series. In the 09.2016 and 11.2016 series, slow deformation development is noticeable to some extent but masked by vegetation. Orthomosaics generated based on UAV imagery collected during the 04.2017 series allow one to note significant deformation development relative to previous measurement series. These observations are confirmed in the images that show the differences between DSMs and the C-D profile created on this basis (Figure 10).
Due to the geological structure of the mining area, the Jaworzno site also exhibits a tendency towards discontinuous deformations. However, these deformations are not caused by shallow historical mining. Geophysical surveys performed in this area show that discontinuous deformations appear in places with discontinuous geological and tectonic structures. These areas also have a thin overburden layer of loose Quaternary, Paleogene, and Neogene rocks, under which a thick bench of compact Triassic rocks dominated by limestone is located [44].
The deformation presented in Figure 11 was initially identified when deformation measurements were carried out in this area in 2009. Originally, its length was estimated to be about 200 m. It has the form of a linearly extended hump. Geophysical research performed using electrical resistivity tomography suggests that this form is the result of rock mass deformation, which leads to squeezing of plastic rocks (such as clays) into the spaces created by specific dislocation systems (faults) or karst forms that occur within more rigid limestone blocks. Therefore, it has a completely different character and is larger than the discontinuous deformations described previously. Its full extent could be determined only after the UAV DSM analysis was performed (Figure 11). Repeated UAV measurements allow for analysis of changes in the geometry of this deformation.
Figure 11. A discontinuous deformation identified at the Jaworzno site using 02.2020 series data collected via UAV.
At its most visible point (E-F profile), the deformation cross section has a width of about 26 m and a relative height that ranges from 0.8 m to 1.8 m (Figure 12). To the southeast of this area, the hump dimensions decrease and the deformation gradually disappears.
Changes in discontinuous deformation morphology against the background of land surface subsidence caused by underground mining in 2016-2020 are shown in Figure 12. Analyzing the terrain profiles and land surface subsidence distributions on both sides of the deformation and at the deformation site (in several cross sections) lets one identify heterogeneous deformation of the terrain surface in the deformation area. Subsidence is greater on the southern side of the hump than on the northern side. On the other hand, the deformation itself exhibits little subsidence, which may indicate further extrusion of the plastic material during discontinuous deformation formation.
Discussion
The accuracies of coordinate and displacement determination summarized in Tables 2 and 3 should be treated as limit values that can be obtained only under favorable conditions. In practice, height RMS error values of two to three times the GSD are achieved only for selected points. These are objects with clear contours or areas for which the influence of vegetation is quite small or even negligible. During the development of the Piekary site UAV data, it was not possible to determine the displacements using UAV-derived products for four points on the observation lines (one on hard ground and three in green areas); for these points, no clear details located less than 5 m from the analyzed point could be identified. At the Jaworzno site, a comparison of coordinates indicates that two observation line points located on hard ground are not visible in the UAV data. When planning similar observations in the future, it will be necessary to ensure that points are visible on aerial images when designing classical observation lines.
A comparison of coordinate determination accuracy using the AERO and ORTO methods does not show significant differences. Given the speed of data acquisition, the use of ORTO is sufficient, especially for ground elements on roads and pavements.
In most green areas, the accuracies of terrain height and subsidence determinations are significantly burdened by the impact of vegetation. The height model obtained from the raw measurement data is DSM, not DEM. The difference in the heights between DSM and DEM often reaches tens of centimeters for meadows and fields and several meters for bushes and trees. These values are many times greater than the internal errors (noise) associated with the measurement methods themselves.
The transition from DSM to DEM requires additional data processing to minimize the impact of vegetation. This approach is often referred to in the literature as vegetation filtration or ground filtering and is the subject of many past and current studies. Most of the studies performed in this field target algorithms used with LIDAR data [45][46][47][48]. However, these data have characteristics that are different from UAV photogrammetric data. On the one hand, the resolution of LIDAR data is lower. On the other hand, LIDAR data support the analysis of many reflections. For this reason, there is a need to create and develop algorithms dedicated to filtration and classification of UAV-derived point clouds [49][50][51].
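As a simple illustration of ground filtering, the sketch below keeps, within each grid cell, only the points lying close to the lowest point of that cell. This is a crude minimum-based baseline given for illustration, not one of the dedicated algorithms cited above, and the cell size and tolerance are arbitrary assumptions.

# Crude ground filtering of a photogrammetric point cloud: within each grid
# cell, keep only points within a tolerance of the lowest point in the cell.
# This is an illustrative baseline, not a production-grade filter.
import numpy as np

def grid_minimum_filter(points, cell_size=1.0, tolerance=0.15):
    """points -- (N, 3) array of x, y, z; returns a boolean mask of ground points."""
    pts = np.asarray(points, dtype=float)
    ix = np.floor((pts[:, 0] - pts[:, 0].min()) / cell_size).astype(int)
    iy = np.floor((pts[:, 1] - pts[:, 1].min()) / cell_size).astype(int)
    cell_id = ix * (iy.max() + 1) + iy

    ground = np.zeros(len(pts), dtype=bool)
    for cid in np.unique(cell_id):
        idx = np.where(cell_id == cid)[0]
        zmin = pts[idx, 2].min()
        ground[idx[pts[idx, 2] <= zmin + tolerance]] = True
    return ground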
When determining vertical displacements, the impact of vegetation causes systematic errors whose sign depends on the phase of the vegetation-cultivation cycle that occurs in the base series versus subsequent series. The most favorable conditions from this point of view are usually in the winter months or early spring, provided that the area is not covered with snow. The least favorable conditions with regard to determining the terrain height, and thus vertical displacement, prevail in the summer months when both the vegetation density and height are maximized. Unfortunately, the measurement periods used to monitor surface geometries are often imposed in advance. For this reason, the development and use of algorithms that can minimize the impact of vegetation is important.
Our research also allows one to estimate the accuracy with which surface horizontal displacements driven by underground mining are determined. The results are promising, as can be seen in both the analysis of horizontal coordinates and the horizontal displacement RMS error values. Thus, one can conclude that UAV-derived photogrammetric products allow determination of horizontal displacements with an accuracy of two to three times the GSD. The values listed in Table 3 were determined using observation lines located both on hard surfaces and in areas covered with grass. However, only clearly identifiable points were selected for the analysis. These are usually observation line points visible on the orthomosaic, or terrain details near a measuring point that cannot itself be identified effectively (or at all) on an orthomosaic.
Interesting results were also obtained when analyzing horizontal displacements against the background of differential models made via UAV-derived DSMs (Figures 3 and 4). The results obtained correspond well with the image of the developing subsidence basin. The points analyzed are details that are uniquely identifiable on UAV orthomosaics from the various measurement series. In this way, quasi-surface data showing horizontal displacements and their changes over time are obtained. One can assume that the accuracy of the determined parameters is comparable to that produced when analyzing the observation line results; in both cases, horizontal displacements of points that could be identified clearly on orthomosaics are determined. However, finding and identifying points for these analyses is problematic and time-consuming. The solution to this problem is the implementation of algorithms that automatically compare orthomosaics from different measurement series to determine horizontal displacements. Image-matching algorithms based on cross-correlation or feature detection, such as SIFT, SURF (speeded up robust features), ORB (oriented FAST and rotated BRIEF), and BRISK (binary robust invariant scalable keypoints), may be particularly useful for this purpose [52][53][54]. However, one must consider that the aforementioned algorithms were created to match photos that are usually taken almost simultaneously. When analyzing orthomosaics from different measurement periods, it is necessary to solve problems related to changes in various objects over time that are not the result of mining operations. The main source of error is vegetation, as its development and variability are associated with the growing season. For this reason, the impact of vegetation should be minimized when conducting such analyses. One possible approach to this problem is image segmentation, the results of which allow pixels containing vegetation to be eliminated from further processing. One can also limit the use of UAV products in horizontal displacement determination to developed areas.
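To illustrate the matching idea, the sketch below estimates displacement vectors between two co-registered orthomosaics using ORB features from OpenCV. It assumes both orthomosaics share the same extent and GSD so that pixel offsets can be converted to metric displacements; the file names and parameter values are illustrative, and no vegetation masking or outlier rejection is included.

# Estimate horizontal displacements between two co-registered orthomosaics by
# matching ORB keypoints. Pixel offsets are converted to meters with the GSD.
# Assumes both rasters have the same extent and resolution; values are examples.
import cv2
import numpy as np

GSD = 0.03  # m/pixel, illustrative

def match_displacements(img_path_t0, img_path_t1, max_matches=500):
    img0 = cv2.imread(img_path_t0, cv2.IMREAD_GRAYSCALE)
    img1 = cv2.imread(img_path_t1, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=5000)
    kp0, des0 = orb.detectAndCompute(img0, None)
    kp1, des1 = orb.detectAndCompute(img1, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)[:max_matches]

    # Displacement vector (in meters, image axes: x right, y down) per match.
    vectors = np.array([
        np.subtract(kp1[m.trainIdx].pt, kp0[m.queryIdx].pt) * GSD
        for m in matches
    ])
    return vectors  # (N, 2) array of displacement components

# vectors = match_displacements("ortho_04_2016.tif", "ortho_02_2020.tif")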
Similar analyzes can be performed based on satellite images. However, their resolution is lower than that provided by UAV orthomosaics. High-resolution satellite imagery has a GSD of 0.3 m (WorldView-4), while free satellite data has significantly lower resolutions (e.g., 10 m for Sentinel). Therefore, only horizontal displacements with large values can be detected using these data.
The detection of discontinuous deformations was performed manually using DSM and orthomosaic analysis. Determining precise dimensions in the horizontal plane is possible when a change occurs but becomes difficult when an object is overgrown with vegetation. With deep, collapsed structures, it is also problematic to faithfully recreate the shape of an object's bottom due to frequent shading of these structures during UAV missions. The clear advantages of this method with regard to detection of discontinuous deformations include the possibility of measuring a large area in a relatively short time and the lack of need for physical access to the deformation area. Considering the safety risks associated with conducting such inventories, this advantage appears important. Automated detection processes are an important field of research in discontinuous deformation detection. One possible approach is the creation of a hybrid system that combines the detection of potential discontinuous deformations via other methods, e.g., InSAR [55], with detailed verification and inventory using images obtained from UAV platforms.
Forecasting the influence of mining operations on land deformations is of fundamental importance to the safety of building structures in mining areas. These forecasts predict the sizes of continuous land surface deformations and are used to assess the risk of damage to building structures [56]. Deformation forecasts require verification as they do not consider all factors involved in deformation formation. In many cases, results from surveying of observation lines are still used for this purpose. UAV measurements allow verification of a whole deformation forecast area using points outside of existing observation networks. The ability to determine the parameters of calculation models based on this type of data can significantly improve the reliability of forecasted results. One important feature is the ability to detect anomalous deformations that deviate from the direct mining influences predicted by theoretical models.
For example, the results of UAV measurements at the Jaworzno site were subject to additional analysis via comparison with mining deformation modeling results. Model calculations were made for the period (04.2016-02.2020) for which deformations were determined using UAV measurements.
Deformation rates were calculated using the Knothe model, which is the most widespread model of this type in Poland [57,58]. The parameter values were determined based on subsidence recorded on observation lines at the Jaworzno site; measurement data from 2009-2012 were used for this purpose.
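For illustration, the sketch below numerically evaluates the Knothe influence-function model for final subsidence above a rectangular extraction panel. It is not the calibrated model used in this study; all parameter values (subsidence coefficient, seam thickness, depth, angle of main influence range) are hypothetical.

# Numerical sketch of the Knothe influence-function model: final subsidence at
# surface points caused by extraction of a rectangular panel. All parameter
# values below are hypothetical and serve only to illustrate the computation.
import numpy as np

def knothe_subsidence(surface_xy, panel, depth, beta_deg,
                      a=0.8, thickness=2.5, d=2.0):
    """surface_xy -- (N, 2) surface points; panel -- (x0, x1, y0, y1) extraction
    extent; d -- integration step for the extraction elements (m)."""
    r = depth / np.tan(np.radians(beta_deg))      # radius of main influence range
    w_max = a * thickness                          # maximum possible subsidence

    x0, x1, y0, y1 = panel
    xi = np.arange(x0 + d / 2, x1, d)              # centres of extraction elements
    eta = np.arange(y0 + d / 2, y1, d)
    XI, ETA = np.meshgrid(xi, eta)

    pts = np.asarray(surface_xy, dtype=float)
    w = np.empty(len(pts))
    for i, (px, py) in enumerate(pts):
        f = np.exp(-np.pi * ((px - XI) ** 2 + (py - ETA) ** 2) / r ** 2) / r ** 2
        w[i] = -w_max * np.sum(f) * d * d          # negative values = subsidence
    return w

# Example: subsidence profile above a 200 m x 300 m panel at 400 m depth.
# xs = np.linspace(-400, 400, 81)
# w = knothe_subsidence(np.column_stack([xs, np.zeros_like(xs)]),
#                       panel=(-100, 100, -150, 150), depth=400, beta_deg=60)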
The resulting image of modeled vertical and horizontal displacements (Figure 13) considers only direct and continuous impacts because Knothe's theory only allows modeling of such deformations. Comparison of modeling and geodetic measurement results provides a basis for verification of the computational model [59] and for detection of terrain surface deformation anomalies, i.e., the occurrence of indirect influences or discontinuous deformations that result from the geological and tectonic structures of a given area.

Figure 13. Forecasted subsidence and horizontal displacement at the Jaworzno site.
The discrepancies between modeled and UAV-determined vertical and horizontal displacements are presented in Figure 14. This colored map includes subsidence differences and vectors that illustrate horizontal displacement differences. In the case of subsidence, the discrepancies between the modeled and UAV-determined values range from −0.5 m to +0.4 m. To a large extent, these discrepancies result from the impact of vegetation, but they are also affected by forecasting uncertainty [60]. However, the influence of the discontinuous deformation is quite clear in the northwest section. The deformation shape detected via UAV data is much clearer than it would be if only observation line data were used for this purpose.
Comparison of the calculated and observed horizontal displacements is possible at points where displacement vectors are determined via UAV imagery analysis (see Figure 4). The difference vector lengths range from 0.01 m to 0.22 m. The changes in difference vector directions visible in Figure 14 result from the effects of discontinuous deformation. It is a barrier to deformation propagation on the surface. For this reason, the nature of the displacement on one side of this deformation is different from that noted on the other side. This can be seen by comparing the displacement vector fields in Figures 4 and 13. The modeled horizontal displacement vectors generally do not change direction in the analyzed area, while the displacement vectors obtained via UAV point in different directions on opposite sides of the discontinuous deformation.

Figure 14. Differences between forecasted and observed subsidences (color map) and horizontal displacements (differential vectors).
Conclusions
In recent years, technical and scientific advances have made UAVs increasingly inexpensive, accessible, and versatile measurement tools. The accuracies of coordinates and displacements determined based on UAV photogrammetry are already sufficient for many purposes, including surface monitoring of mining areas. UAV photogrammetry can be treated as another tool available for observation of surface geometries and their changes, especially as it allows collection of surface data.
In UAV photogrammetry, the main limitations on accuracy and the ability to use the measured data are largely associated with vegetation coverage in the monitored areas. For this reason, one important focus in the development of this technology is minimization of the impact of vegetation during processing of measurement results. Other important research directions are automation of displacement determination and discontinuous deformation detection.
This study showed that UAV photogrammetry can be used to determine many key parameters associated with the current state of land deformation caused by underground mining operations. In the coming years, dynamic development is expected to expand the range of use for this technology and further automate the processing of collected data.

Figure A2. Discrepancies between vertical displacements determined via the UAV photogrammetric method (ORTO method) and reference measurements at the Piekary site.
Figure A3. Horizontal displacements determined via the UAV photogrammetric method (ORTO method) and reference measurements at the Jaworzno site.

Figure A4. Discrepancies between vertical displacements determined via the UAV photogrammetric method (ORTO method) and reference measurements at the Jaworzno site.
|
v3-fos-license
|
2018-05-31T08:34:10.219Z
|
2001-01-01T00:00:00.000
|
21115319
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/s0021-9258(17)37278-2",
"pdf_hash": "7f8ae4c2dd1aaabfd93ba996a6d004acf60acad6",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43049",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "742486218da8bf53c67f86601ebfe8ca9c193616",
"year": 2001
}
|
pes2o/s2orc
|
In vivo manipulation of the xanthophyll cycle and the role of zeaxanthin in the protection against photodamage in the green alga Chlorella pyrenoidosa
A protection mechanism against photodamage was first suggested by Krinsky (1971). Chl triplet states can relax via triplet energy transfer to zeaxanthin, followed by dissipation of the excited triplet via the trans-cis isomerization of zeaxanthin; the latter reaction is exothermic. More recently, another energy-dissipating process (non-photochemical quenching, qNP) connected with the xanthophyll cycle was introduced by Demmig-Adams (1990). Contrary to the relaxation of Chl triplet states in the former process, qNP has been proposed to be a singlet-singlet exchange between chlorophyll and carotenoids (Owens et al., 1992).
The scheme of reactions that take place in the light involves two de-epoxidation steps through which violaxanthin, via the intermediate antheraxanthin, becomes zeaxanthin (Hager and Stransky, 1970). In this way, the latter compound accumulates in the light. In darkness, the reactions are reversed towards violaxanthin. All reaction steps have been well characterized, except for the epoxidizing step from zeaxanthin to antheraxanthin, in which a "mixed-function oxygenase" (Hager, 1981) was suggested to be involved. The different pH ranges at which the respective enzymes operate give rise to a scheme in which the steady-state concentrations of the components of the xanthophyll cycle are determined by the pH of the lumen. The de-epoxidation reactions yielding the final product zeaxanthin rely on enzymes that become activated at a thylakoid lumen pH of 5.2 and thus operate in the light. The back-reactions involve enzymatically catalyzed epoxidation steps that rely on a higher pH of the thylakoid lumen and consequently operate in darkness (Pfundel and Dilley, 1993; Gilmore and Yamamoto, 1993). Thus, according to these observations, pH transitions between light and dark effect the differences in the presence of violaxanthin and zeaxanthin relative to one another.
Dithiothreitol has been applied as a successful inhibitor of the violaxanthin de-epoxidation steps (Yamamoto and Kamite, 1972). However, additional effects of dithiothreitol under in vivo conditions on several other thioredoxin-regulated reactions, such as carbon metabolism enzymes (Rowell et al., 1986) or the ATP synthase (Mills, 1986), may obscure the answer to the question of whether, in addition to the decreased availability of zeaxanthin, other inhibitory effects of dithiothreitol are responsible for the observed increased sensitivity to photodamage in the presence of dithiothreitol. In addition, the use of an inhibitor in the study of a cyclic process excludes the possibility of retrieving information about the dynamic properties of such a cycle.
The aim of the present study was to evaluate the photoprotective potential of the xanthophyll cycle with different steady-state contents of violaxanthin and zeaxanthin generated in vivo, without disturbance of the cellular metabolism by external additions other than light. The data presented indicate that epoxidation of zeaxanthin also proceeds in the light as a result of the photoprotective (excited oxygen quencher activity) processing of zeaxanthin. This observation reveals that the dynamic function of the xanthophyll cycle in vivo is larger than would be predictable from existing data. Our approach to assess photodamage in a constant background of photoprotection, established by introducing continuous background illumination, may be useful in other areas of photosynthesis research.
MATERIALS AND METHODS
Cu1ture"Two types of steady-state continuous cultures of Chlorella pyrenoidosa were used, both were grown in 2-liter chemostats in BG-11 medium (Rippka et aZ., 1979) at 20 "C. One was grown at 30 pE.m-2.s-1 (low light, LL) the other one at 240 pE.m-2.s-1 (high light, HL). Circular fluorescent tubes (Philips TLE 32W/33) were used for continuous illumination. The set up of the culture system was as in Van Liere and Mur (1978). Aeration at 60 litedmin provided adequate mixing and COP supply. The cultures were maintained at an A,,, of 0.18-0.20.
Preadaptation and Flash Experiments—Samples from the HL and LL cultures were preadapted for 30 min at 20 °C either in darkness or in the presence of actinic (background) light. The actinic light intensities for the LL and HL samples were 430 and 600 µE·m⁻²·s⁻¹, respectively. These light conditions were found to be saturating from the photosynthesis versus irradiance (P/I) curves (see Fig. 1). Preadaptation proceeded directly in the 12-ml oxygen electrode measuring chamber. This device has been described elsewhere (Dubinsky et al., 1987). The samples were bubbled with air to ensure a constant partial oxygen pressure. Next, while maintaining the conditions of preadaptation (i.e. background light or darkness), one group of samples was exposed to one thousand supersaturating flashes (see below) in order to incite photodamage. The other group was not exposed to flashes and remained under the preincubation conditions during this time.
A delay of 3 s between the flashes was chosen for the samples without background light. At this frequency, controls demonstrated that the oxygen consumption rate (dark respiration) remained identical, i.e. no oxygen production was revealed, with or without flashes. The light-preadapted samples (which received the flashes in the continued presence of saturating background light) already performed photosynthesis at a maximal rate. This allowed a faster flashing regime with 300-ms intervals. A General Electric FT 230 flash tube was used at a discharge voltage of 1.3 kV, which provided flashes of 5 µs half-width with an energy output of 2 J/flash in the forward direction. Calculated over the surface of the incubation chamber, this amounts to a supersaturating photon flux of approximately 10,000 µE·m⁻² per flash. The flash tube was connected directly to the incubation chamber (i.e. the one used for the oxygen and fluorescence measurements, cf. below). During the flashes, aeration was continued. The number of flashes was selected to yield appreciable photodamage (as judged from changes in the pigment content and physiological activity presented), while avoiding lethality. All flash-treated samples used for the photosynthesis activity assays were allowed to recover for 15 min in darkness to equalize the metabolic conditions of the various samples. Although the pigment composition changes during this 15-min period (especially zea- and antheraxanthin disappear, whereas violaxanthin increases; data not shown), the overall losses of pigment are not replenished in this short period. Samples for pigment analysis were taken immediately after the incubation period (with or without flashes) but before the relaxation time introduced in the other assays.
P/I Curves and Fluorescence Measurements—After the preadaptation, flash, and recovery periods were terminated, the cuvette was closed and P/I curves were recorded according to standard procedures (Dubinsky et al., 1987). Fluorescence measurements included two types of experiments. A, photochemical quenching was monitored with a pulse-amplitude-modulated chlorophyll fluorescence measuring system (Walz, Germany) as described by Schreiber et al. (1986). During the measurements, the fiber-optic light guide was placed directly against one side of the oxygen measuring chamber. In this way, oxygen production and photochemical quenching (qP) could be estimated simultaneously. qP was estimated every 120 s by firing saturating pulses of 500-ms duration (Schott KL-1500 light source, 12,000 µE·m⁻²·s⁻¹).
B, relative energy transfer efficiency from carotenoids to Chl was measured by comparing the fluorescence yield after excitation with broad blue and orange light. The former excites both carotenoids and chlorophyll, the latter chlorophyll only. Data were normalized to the emission resulting from the chlorophyll excitation in the orange. These measurements were done with a Perkin-Elmer 1000 spectrofluorometer, emission wavelength 685 nm, slit width "M." Before the measurements, the samples were preadapted to light or dark conditions as described above. Samples were treated with 3-(3,4-dichlorophenyl)-1,1-dimethylurea (10 µM) for 1 min, either in the dark or in the light analogous to the preadaptation conditions, immediately before the assay. Excitation was done with orange light through a 628-nm interference filter (Schott) and with blue light through a 2-mm BG28 cut-off filter (Schott).
Data shown are the average of three separate experiments. Differences between comparable data points in the three experiments were below 10% of each of the numeric values given.
PS I and PS II Activity Measurements—Cells were harvested and resuspended in a buffer containing 0.33 M sorbitol, 2 mM EDTA, 1 mM MgCl₂, 1 mM MnCl₂, and 50 mM Hepes-KOH, pH 7.6. Cells were broken by one passage through a French press (Aminco) at 8,000 megapascals. Oxygen uptake was measured with a Clark oxygen electrode (Yellow Springs Instruments) in a thermostated 1-ml laboratory-built oxygen electrode cuvette. Saturating white light was supplied with a Schott 1500 light source equipped with a light guide. Full-chain (PS II and PS I) electron transfer capacity was estimated in the presence of 100 µM methylviologen and 1 mM sodium azide, following the recommendations of Allen and Holmes (1986a). PS I electron transfer was measured with 2 mM ascorbate and 50 µM dichlorophenolindophenol as the electron donor system and 100 µM methylviologen as the electron acceptor. The PS I assay was done in the presence of 3-(3,4-dichlorophenyl)-1,1-dimethylurea (10 µM) and 1 mM sodium azide (Allen and Holmes, 1986b).
Pigment Analysis and Zeaxanthin Conversion—HPLC analysis was done as in Mantoura and Llewellyn (1983), as specified in Van der Staay et al. (1992). Detection wavelengths were chosen at 440 and 480 nm. In this way, zeaxanthin and lutein contents (which have very similar retention times in the HPLC procedure used) could be estimated separately. The specific extinction coefficients were taken from Mantoura and Llewellyn (1983). At the end of the flash periods, samples were immediately removed from the reaction chamber and processed for pigment content estimation. This involved immediate centrifugation for precisely 1 min and mixing of the pellet with ice-cold acetone. Before the actual HPLC analysis, the samples were stored at −18 °C until used. The zeaxanthin samples used in the in vitro degradation studies (cf. Fig. 2) were from Chlorella and were purified by HPLC.
Absorbance and Chl Estimation—Absorbance was measured on a Pharmacia Novaspec II photometer. Chlorophyll was measured in acetone extracts (Jeffrey and Humphrey, 1975).
RESULTS
Changes in photosynthetic activity (O₂ production) and photochemical quenching (qP) after exposure of C. pyrenoidosa cells to control or photodamaging conditions are shown in Fig. 1. Control samples of LL and HL Chlorella cells behave differently. The LL cells have a lower maximal photosynthesis activity per Chl than the HL ones. The LL cells show a stronger qP decrease than the HL cells. The rate of O₂ evolution decreases at higher irradiance in the LL cells. Preadaptation in the light or in darkness did not induce major changes; both HL and LL cells retain comparable activities. Exposure to the photodamaging flash treatment in the continued presence of actinic background light gave rise to relatively minor losses of activity through photodamage, both in the LL and HL cells. In contrast, clear photodamage is obvious in the samples that were kept in darkness during preadaptation and while being exposed to the photodamaging flashes. Especially the LL cells show an appreciable loss of oxygen evolution and qP at increasing actinic light intensities over the course of the P/I curve determination.
The observed differences in the photosynthetic activities were related to changes in the pigment composition of the samples. Table I depicts the pigment analysis of the HL and LL cultured cells. The data reflect that in the LL cells the total Chl to carotenoid ratio is at least twice that of the HL cells, and the Chl to summed xanthophyll cycle components ratio is 3-fold higher. Differences produced by the dark or light preadaptation conditions are mainly restricted to the three xanthophyll cycle pigments. The violaxanthin content decreases in the light and the zeaxanthin content increases. This way, variable pool sizes of the xanthophyll cycle components were established before exposure to potentially photodamaging conditions.
Compared to Table I, the overall picture depicts photodamage of most pigments, including Chl a, lutein, and β-carotene, with the marked exception of the antheraxanthin content in the light-preadapted samples. The neoxanthin content decreases in the LL cells only. In general, the damage is small in the light-preadapted HL cells and somewhat more pronounced in the dark-preadapted HL cells. Noticeable damage is induced in the dark-preadapted cells of the LL culture. As opposed to the HL grown cells, in which the total of xanthophyll cycle components becomes reduced by 17% in the dark-flashed group, the loss in the analogous LL experiment amounts to 66% (Table II).
Zeaxanthin is the predominantly disappearing compound in the dark-flashed HL cells with reference to the dark-incubated HL control cells. In the absence of zeaxanthin, β-carotene is a target for breakdown, as can be seen most clearly in a comparison of the LL dark-adapted and dark-flashed samples. Lutein appears to be relatively little involved in the protection. To define the site where the actual photodamaging process occurs, and especially to locate the site at which the xanthophyll cycle provides protection against photodamage, the electron transfer capacity of the total electron transfer chain (PS II and PS I) was compared to the capacity of PS I alone (Table III).
Full chain electron transfer rates in the samples that had received the strong flashes in the presence of background light appeared to remain nearly unaltered. The samples that were exposed to the flashes in the absence of background light displayed more than 20% photodamage (both HL and LL), comparable to the data given in Fig. 1. As opposed to the full chain data, PS I capacity appeared to diminish even when the strong flashes were administered in the presence of background light. The inhibition was stronger in the LL samples. However, in the dark-flashed samples and in comparison to the full chain, the damage to PS I appeared relatively low. Compared to the full chain electron transfer rates, the PS I change in the light-flashed samples is already large, and the increased damage observed for the full chain rates in the dark-flashed samples does not correspond to a similar decrease in the PS I samples. The protective function of the xanthophyll cycle therefore appears to be predominantly effective for PS II.
TABLE III. The effect of photodamaging conditions on the electron transfer capacity of PS II plus PS I and of PS I measured separately. Results are given in μmol of oxygen per mg of Chl a per hour. Assay conditions are given under "Materials and Methods." The control cells were preadapted in light; otherwise sample preparation and exposure to photodamaging conditions were as in Fig. 1.
The observed changes in the relative abundance of the carotenoids may exert effects on the light energy transfer efficiency of PS II. If so, a lower light energy conductance would give rise to a lesser fluorescence output from PS II in the presence of 3-(3,4-dichlorophenyl)-1,1-dimethylurea. To eliminate effects of sample geometry, fluorescence excitation was done with broad blue as well as 628-nm orange light. The latter excites chlorophylls only, the former both chlorophylls and carotenoids. Normalizing on the fluorescence yield in the orange by using ratios equalizes any changes in fluorescence yield of Chl related, for example, to qNP. In HL and LL cultures, the lower fluorescence ratio observed in the dark-preadapted samples indicates that the light energy transfer efficiency from carotenoids to chlorophyll remains higher in darkness than following preadaptation in the light (Table IV). The difference in the efficiency of energy transfer from carotenoids to chlorophyll between the dark- and light-adapted samples is more pronounced in the HL cells.
The results presented in Table II indicate that the formation of the monoepoxide antheraxanthin can only in part be accounted for by conversion of violaxanthin; in addition, the disappearance of zeaxanthin appears to contribute to antheraxanthin formation as well. The apparent existence of two reactions by which antheraxanthin can be formed raises the question of the nature of the molecular conversions of zeaxanthin that occur as part of the protective function. In order to investigate the involvement of nonenzymatic processes, the breakdown of zeaxanthin under in vitro conditions was examined. The HPLC chromatograms shown in Fig. 2 indicate that when isolated zeaxanthin (retention time 13.4 min) was exposed to damaging conditions (10,000 μE·m⁻²·s⁻¹ white light, 50 °C, air), degradation occurred. With oxygen present, formation of violaxanthin (retention time of 8.6 min) became evident. The identity of the other "breakdown products" with retention times between 16 and 17 min has not yet been extensively determined; the cis peak in the UV region of the absorbance spectra (data not shown) indicated that these may be different cis-isomers of zeaxanthin. A similar experiment with zeaxanthin was performed in the presence of the singlet oxygen-generating agent eosin, in just room light and at room temperature. Violaxanthin was formed, and other zeaxanthin conversion products were nearly absent (data not shown). The in vitro conversion reactions of zeaxanthin support our view on the actual process of singlet oxygen quenching as part of a dynamic xanthophyll cycle in the light: zeaxanthin is recycled into violaxanthin in the light. In vivo, antheraxanthin is formed this way as well, either by monoepoxidation of zeaxanthin or by the normal enzymatically catalyzed viola- to antheraxanthin reactions of the xanthophyll cycle in the light.
TABLE IV. Light energy transfer efficiency. HL and LL cells were preadapted in light or in darkness. DCMU (10 μM) was added 1 min prior to the fluorescence measurements. Maximal fluorescence emission at 685 nm was measured after excitation with broad blue or orange light (cf. "Materials and Methods").
FIG. 2. HPLC chromatograms of zeaxanthin: A, after exposure to light (10,000 μE·m⁻²·s⁻¹) during 20 min at 50 °C in the presence of air; B, as A but in the presence of nitrogen gas instead of air; and C, rechromatographed after storage on ice in the dark in the presence of air. Other details are given under "Materials and Methods."
DISCUSSION
The two different types of cultures (i.e. LL- and HL-grown) allowed assays with different contents of xanthophyll cycle pigments present, i.e. relatively abundant in HL cells and low in LL cells, as in Thayer and Björkman (1990). Applying or omitting actinic background light appeared to be a useful approach to allow or avoid the conversion of violaxanthin to zeaxanthin (Blass et al., 1959; Yamamoto et al., 1962).
In earlier studies, dithiothreitol was used to study the role of zeaxanthin in the prevention of photodamage. Those experiments precluded the possibility of studies on a dynamically operating cycle. Our approach involved preadaptation of the cells in either darkness or light to install a stable pH in the thylakoid lumen. To this end, the light intensity was chosen to just reach the Pmax condition (Fig. 1), while avoiding the occurrence of appreciable photodamage (Table I). The lumen pH has been associated with the equilibria of the xanthophyll cycle (Rees et al., 1989; Pfündel and Dilley, 1993). By the preadaptation step, the ratio of the xanthophyll cycle pigments was fixed in a given status before the photodamaging flashes were given. The flashes were administered at a low frequency in order to prevent the build-up of a proton gradient in the dark-preadapted cells as much as possible. Obviously, if there had been a substantial acidification of the thylakoid lumen, then, in analogy with the samples prepared in the presence of actinic background light, a diminished breakdown of pigments through the installment of the xanthophyll cycle in the protective mode would have resulted.
Regardless of the growth conditions and the preincubations, the flashed light induces general photodamage of nearly all pigments, be it to different extents. A clear exception is the increase for antheraxanthin in the samples that were flashed in the presence of actinic background light. This increase is of great interest for the understanding of the physiological function of the xanthophyll cycle. Comparison of the pigment distribution between HL with background light only (Table I) and HL flashed with background light present (Table II) shows that the decrease of the violaxanthin content is less than the actual increase of antheraxanthin. The only feasible explanation for this observation is epoxidation of zeaxanthin. We conclude that epoxidation of zeaxanthin under photodamaging conditions in the light also contributes to antheraxanthin formation. Interestingly, earlier work (Hager, 1981; Pfündel and Dilley, 1993) established the regulatory function of the light-dependent proton gradient formation for the xanthophyll cycle. From that work it can be concluded that epoxidation occurs only after relaxation of the proton gradient, i.e. in darkness. Given the conditions in our experiment, changes of the content of antheraxanthin, other than at the expense of violaxanthin, would not be expected (see above). It is concluded that, in addition to the "mixed oxidase" function operating at high lumen pH, i.e. in darkness (Hager, 1981), a nonenzymatic epoxidation reaction occurs in the light as well, in accordance with Fig. 2.
FIG. 3. A scheme depicting the interactions between the various singlet and triplet states of chlorophyll, oxygen, and carotenoids and the role of the xanthophyll cycle in these processes. Reactions indicated with numbers are: 1, excitation of ground state chlorophyll; 2, direct singlet ground state relaxation of excited chlorophyll by photosynthesis, radiationless transfer, fluorescence, heat release, or singlet quenching via zeaxanthin (Demmig-Adams, 1990; Owens et al., 1992); 3, chlorophyll triplet quenching by ground state oxygen, which produces (via spin reversal) singlet excited oxygen, or by ground state carotenoid, which produces triplet excited carotenoid; 4, reaction of singlet excited oxygen with ground state non- or monoepoxy carotenoids resulting in the epoxidated compounds; 5, singlet energy transfer of violaxanthin-absorbed light to chlorophyll; 6 and 7, enzymatic conversions operating in the xanthophyll cycle. Further details are presented in the text.
Control experiments in which purified zeaxanthin was treated with light plus heat in the presence of air indeed gave rise to the formation of the (di-)epoxy compound violaxanthin. This is comparable to the earlier report on the oxidative degradation of antheraxanthin, for which in vitro treatment with heat and oxygen has been shown to facilitate the formation of violaxanthin (Thomas and Goodwin, 1965). A recent report describes that oxidative degradation of β-carotene yields mono- and diepoxides (Liebler and Kennedy, 1992). This explains our observation that, regardless of the continued presence of a stable proton gradient, formation of antheraxanthin in the light is possible via a nonenzymatic epoxidation of zeaxanthin. The nonenzymatic epoxidation of zeaxanthin results from its function as a photoprotective pigment, i.e. in quenching of singlet oxygen in this particular case. Thus, in the light a complete cycle is active. This includes enzymatic reutilization of nonenzymatically epoxidized zeaxanthin (i.e. recycled violaxanthin).
Our work shows that the quenching of excited oxygen by zeaxanthin involves an epoxidation reaction which effectively results in recycling to antheraxanthin and probably violaxanthin, as made likely in the in vitro assay. This means that after reaction of zeaxanthin with singlet oxygen, the zeaxanthin is not lost from the cycle but is actually converted into the epoxy compounds antheraxanthin and violaxanthin, from which, in the presence of the appropriate acidification of the lumen in the light, zeaxanthin can be made again. Table III showed that the xanthophyll cycle was most effective in relation to PS II, the site at which singlet oxygen generation is most likely to occur.
The position of the steady state of all the processes involved determines the actual distribution of viola-, anthera-, and zeaxanthin in a given sample. This way, the xanthophyll cycle has a truly dynamic function in the photoprotective process (Fig. 3).
The net decrease of xanthophyll cycle components over the course of exposure to photodamaging conditions is due to the limited number of times that zeaxanthin, in its function as quencher of excited chlorophyll triplet states, is able to withstand trans-cis-trans transitions. According to Krinsky (1971), zeaxanthin becomes damaged during the quenching at a statistical rate of one degradation per 1000 quenching events.
In addition to the chemical modifications associated with the operation of the xanthophyll cycle, a change of the energy transfer efficiency in the carotenoid absorbance region, related to the state of the xanthophyll cycle and to the amount of the xanthophyll cycle pigments as well, was observed (Table IV). The difference in the molecular absorbance coefficient between zeaxanthin and violaxanthin cannot be the only reason for this appreciable change. This points to differences in the transfer efficiency from violaxanthin and zeaxanthin to Chl. An explanation for these differences is the number of conjugated double bonds: 9 in violaxanthin and 11 in zeaxanthin. With an increasing number of conjugated double bonds the energy level of the excited states becomes lower, i.e. the zeaxanthin excited states (Ag, Bu) lie below those of violaxanthin, by which the possibility of an energy transfer from zeaxanthin to the S1 of Chl a becomes increasingly unfavorable (Owens et al., 1992; A. Friedman and H. Schubert, unpublished results). Violaxanthin has been shown to act as a light-harvesting pigment (Owens et al., 1987). This implies that the energy level of the first excited state of violaxanthin is higher than that of the final Chl acceptor.
The three ways in which the xanthophyll cycle provides protection against photodamage are qNP (singlet transfer), decreased light-harvesting capacity (singlet transfer), and photosensitizer-quenching reactions (triplet related). These processes are cooperative: if a carotenoid has a protective function, it also has a shadowing effect in the blue region of Chl absorbance and the possibility to quench excited chlorophylls (both singlet and triplet). This effect is important, not only with reference to the mole % numbers presented in Tables I and II, but more so because of the about 3.5 times higher molar absorbance coefficient of a carotenoid in comparison to Chl. In other words, in cases of excessive irradiation the shadowing effect is useful, but it should be reversed at less than optimal irradiance, which indeed occurs through the enzymatic epoxidation steps in darkness.
The advantages of the xanthophyll cycle are clear: its dynamically adjustable sun/shade function excludes the need for a constantly present shadowing pool of carotenoids, and the chemical trans-cis-trans heat release involved in triplet Chl a photosensitizer quenching strongly reduces the need for de novo synthesis to replace photodamaged molecules. To this, the observation in the present study that singlet oxygen quenching provides a means for recycling of zeaxanthin to violaxanthin in the light further extends the functional role of the xanthophyll cycle. The equilibria of the system can rapidly switch from a protective function (shadowing, Chl triplet and singlet quenching (Demmig-Adams, 1990)) to a light-harvesting function (singlet transfer from violaxanthin to Chl).
As stated by Hager (1981), the xanthophyll cycle is present in higher plants and green algae but is absent in phycobilisome-containing organisms. This remarkable difference may be related to another way of discarding excess excitation energy in cyanobacteria, via decoupling of the phycobilisome antennae (Mullineaux et al., 1990). Otherwise, the spectral region of light harvesting in phycobiliprotein-containing organisms is largely shifted outside the carotenoid region. This way, light harvesting in cyanobacteria in the blue spectral region is circumvented, through which the capacity losses by shadowing carotenoids are in principle negligible in comparison to Chl a- and b-containing organisms. Cyanobacteria indeed contain a high carotenoid to Chl ratio, to provide for a shading and a photosensitizer-quenching function. Likely, these two functions are confined to the cytoplasmic and thylakoid membranes, respectively. The apparent need for an appreciably higher pool size of carotenoids acting as photosensitizer-quenching pigments may be explained by the lack of a recycling system.
In conclusion, the xanthophyll cycle provides a dynamic tool for Chl a- and b-containing organisms, and possibly also for brown algae with the diatoxanthin/diadinoxanthin conversion: tailor-made photosensitizer quenching without loss of light-harvesting efficiency under changing light conditions.
|
v3-fos-license
|
2020-01-21T14:02:29.503Z
|
2020-01-20T00:00:00.000
|
210831923
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/bit.27273",
"pdf_hash": "7493bfa4bd30df0c6f0dad163dd14b5527140fa9",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43050",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "914fa25173a7fead2578c492d82bd85ca00bfaab",
"year": 2020
}
|
pes2o/s2orc
|
Plant‐derived protein bodies as delivery vehicles for recombinant proteins into mammalian cells
Abstract The encapsulation of biopharmaceuticals into micro- or nanoparticles is a strategy frequently used to prevent degradation or to achieve the slow release of therapeutics and vaccines. Protein bodies (PBs), which occur naturally as storage organelles in seeds, can be used as such carrier vehicles. The fusion of the N-terminal sequence of the maize storage protein, γ-zein, to other proteins is sufficient to induce the formation of PBs, which can be used to bioencapsulate recombinant proteins directly in the plant production host. In addition, immunostimulatory effects of zein have been reported, which are advantageous for vaccine delivery. However, little is known about the interaction between zein PBs and mammalian cells. To better understand this interaction, fluorescent PBs, resulting from the fusion of the N-terminal portion of zein to a green fluorescent protein, were produced in Nicotiana benthamiana leaves, recovered by a filtration-based downstream procedure, and used to investigate their internalization efficiency into mammalian cells. We show that fluorescent PBs were efficiently internalized into intestinal epithelial cells and antigen-presenting cells (APCs) at a higher rate than polystyrene beads of comparable size. Furthermore, we observed that PBs stimulated cytokine secretion by epithelial cells, a characteristic that may confer vaccine adjuvant activities through the recruitment of APCs. Taken together, these results support the use of zein fusion proteins in developing novel approaches for drug delivery based on controlled protein packaging into plant PBs.
the harsh conditions of the gastric system, such as low pH and digestive enzymes. To ensure that the active components remain intact upon arrival at their effector site, they need to be fortified to prevent degradation. One way to achieve such robustness is by encapsulating therapeutics into micro-or nanoparticles.
Alternatively, zein-containing protein storage organelles, so-called zein protein bodies (PBs), found in maize endosperm cells (Lending & Larkins, 1989), may offer natural bioencapsulation strategies for recombinant oral pharmaceuticals. This assumption has been substantiated by experiments with rice seeds showing that the sequestration of recombinant proteins in endogenous storage organelles containing rice prolamins confers protection from digestive proteolysis after oral administration in an animal model (Nochi et al., 2007). A faster and more versatile method for encapsulating proteins into the protective environment of zein micro/nanocarriers is to create a fusion protein in which the protein of interest is fused to a partial sequence of zein. Expression of such a fusion protein results in in vivo bioencapsulation in various production hosts, within newly induced storage organelles. Amongst the various classes of zeins (α, 19 and 22 kDa; β, 15 kDa; γ, 16, 27, and 50 kDa; δ, 10 kDa; Woo, Hu, Larkins, & Jung, 2001), the 27 kDa γ-zein was identified as the key element that induces the formation of endogenous as well as recombinant PBs. Furthermore, it was discovered that the N-terminal 93 amino acids of 27 kDa γ-zein (abbreviated gz93 from here on) are sufficient to produce PBs in other plants, and even in heterologous expression systems such as fungal, insect, and mammalian cells (Llop-Tous et al., 2010; Torrent et al., 2009). Various proteins with different properties in terms of molecular mass and function, including growth factors (Torrent et al., 2009), viral vaccine candidate proteins (Hofbauer et al., 2016; Mbewana, Mortimer, Pêra, Hitzeroth, & Rybicki, 2015; Whitehead et al., 2014), and enzymes (Llop-Tous, Ortiz, Torrent, & Ludevid, 2011), have been successfully incorporated into newly induced PBs in plants like Nicotiana benthamiana when fused to gz93. N. benthamiana is frequently used for the production of biopharmaceuticals because it is well suited for the transient expression of recombinant proteins, and this method offers advantages over other expression systems in terms of speed, safety, scalability, and reduced upstream production costs. However, the cost savings in the upstream process are sometimes offset by industrial downstream processes for the purification of biopharmaceuticals, which are often quite laborious and may account for approximately 70-80% of the total manufacturing costs regardless of the expression host (Schillberg, Raven, Spiegel, Rasche, & Buntru, 2019). In the case of orally delivered plant-made products, the complexity of the downstream process could be reduced and plant tissues could be administered after minimal processing, allowing maximum benefit to be taken of the competitive upstream production costs offered by plants.
Previously, it was reported that zein PBs can have an adjuvant effect when administered by injection. For example, the fusion of a therapeutic HPV vaccine candidate to the Zera® peptide, a self-assembly domain very similar to gz93, enhanced the immune responses in mice (Whitehead et al., 2014). Similarly, when we fused hemagglutinin-5 (H5) to gz93, the resulting PBs were able to elicit a strong immune response that was on par with soluble H5 plus Freund's complete adjuvant, while soluble H5 without adjuvant failed to induce an immune response (Hofbauer et al., 2016). Particulate formulations of antigens generally show this immunostimulatory effect, and one possible explanation is that upon internalization of a single particle, many copies of the antigen enter the cell, whereas a much higher dose must be administered to achieve comparable local concentrations surrounding the cell (Colino et al., 2009; Snapper, 2018). Alternatively, the enhanced immune response may also be due to superior antigen display and stability or other immunostimulatory signals (Smith, Simon, & Baker, 2013). In addition, gz93 harbors eight repeats of a proline-rich domain (VHLPPP)8 that closely resembles the sweet arrow peptide (VRLPPP)3, which is known for having cell-penetrating properties (Sánchez-Navarro, Teixidó, & Giralt, 2017).
In the present study, we focus on the potential of PBs for oral application. We explore a downstream procedure based on two consecutive tangential flow filtrations (TFFs) as a means to enrich the zein PBs from larger amounts of leaf tissue, and we investigate the internalization efficiency of zein PBs into cells of the mucosal lining by comparing the uptake of fluorescent gz93 PBs and polystyrene beads of comparable size. We demonstrate efficient PB internalization into intestinal epithelial cells as well as antigen-presenting cells (APCs). Finally, we analyze whether the epithelial cells secrete cytokines, which are known to recruit APCs.
| Molecular cloning
The coding sequences of gz93-enhanced green fluorescent protein (eGFP) and gz93-mTagBFP2 were designed in silico and synthesized by GeneCust, Europe. The sequences were then cloned into the pTRA vector, a derivative of pPAM (GenBank AY027531), by restriction cloning using SmiI and XbaI cut sites. The translated sequence starts with the N-terminus of 27 kDa γ-zein (GenBank accession number: AF371261) including its native signal peptide and the first 93 amino acids of the mature protein (hence gz93), followed by a short flexible (GGGGS) 2 linker, which finally connects to the eGFP or the monomeric blue fluorescent protein (mTagBFP2; Subach, Cranfill, Davidson, & Verkhusha, 2011). gz93-eGFP is expressed under control of a 35S promoter with a duplicated transcriptional enhancer and a 35S terminator, both originating from Cauliflower mosaic virus. In addition, the transcribed region contains a 5′-untranslated region from Tobacco etch virus, which confers the increased stability of the messenger RNA.
Two matrix attachment regions of tobacco Rb7 (Halweg, Thompson, & Spiker, 2005) flank the promoter and terminator up-and downstream, respectively, to suppress transgene silencing.
| Plant material and agroinfiltration
N. benthamiana plants were cultivated in the soil in a growth chamber with a 16 hr photoperiod at 70% relative humidity and day/night temperatures of 26°C and 16°C, respectively. The gz93-eGFP and gz93-mTagBFP2 plasmids were transferred into chemically competent Agrobacterium tumefaciens GV3101-pMP90RK. Cultures of this Agrobacterium strain were inoculated from glycerol cryo-stocks and cultivated in YEB medium containing 25 mg/L kanamycin, 25 mg/L rifampicin, and 50 mg/L carbenicillin. Cultures were incubated at 28°C while shaking at 200 rpm. Before infiltration, the cultures were pelleted and washed twice with infiltration medium (10 mM MES pH 5.6, 10 mM MgCl 2 , 100 µM acetosyringone) and adjusted to OD 600 0.2 with infiltration medium. The infiltration of N. benthamiana leaves was performed manually with 1 ml syringes. Leaves were harvested 8 days postinfiltration (dpi) for the production of PBs for uptake assays, while smaller samples for size determination were harvested at 4 and 12 dpi as well.
| PB size determination
The diameter of gz93-eGFP PBs was determined at 4, 8, and 12 dpi by analyzing the maximum projected z-stacks of confocal laser scanning microscopy (CLSM) pictures. For each sample, a 5 × 5 mm section was excised from the agroinfiltrated leaves of N. benthamiana and mounted on a glass slide with tap water as the immersion medium. The samples were observed under a Leica SP5 Confocal Laser Scanning Microscope using a ×63 water immersion objective (NA 1.20). The Argon laser power was set to 16% and the 488 nm laser line was set to 2% output for the excitation of eGFP. Forty-eight pictures along the z-axes were recorded at a resolution of 1,024 × 1,024 pixels for each picture with a step size of 1.1 µm (bidirectional scanning at 400 Hz, 2x line averaging). Maximum projections of z-stacks were exported from Leica Software and analyzed using Adobe Photoshop. In total, 832, 986, and 821 individual PBs from at least three samples per time point were measured for 4, 8, and 12 dpi, respectively.
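The diameter measurements described above were made manually in Adobe Photoshop on the maximum projections. As an illustration of how such measurements could alternatively be scripted, the sketch below uses scikit-image to segment a maximum-projection image and report equivalent diameters; the file name, the pixel size and the threshold choice are assumptions for the example and are not values taken from this study.

```python
# Minimal sketch: estimate PB diameters from a maximum-projection image.
# Assumptions (not from the paper): an 8-bit grayscale TIFF named
# "max_projection.tif" and a pixel size of 0.24 µm.
import numpy as np
from skimage import io, filters, measure, morphology

PIXEL_SIZE_UM = 0.24  # hypothetical µm per pixel

img = io.imread("max_projection.tif")
mask = img > filters.threshold_otsu(img)                    # PBs vs. background
mask = morphology.remove_small_objects(mask, min_size=9)    # drop pixel noise

labels = measure.label(mask)
diameters_um = [r.equivalent_diameter * PIXEL_SIZE_UM
                for r in measure.regionprops(labels)]

print(f"n = {len(diameters_um)}, "
      f"mean = {np.mean(diameters_um):.2f} µm, "
      f"SD = {np.std(diameters_um, ddof=1):.2f} µm")
```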
| Processing of plant material
N. benthamiana leaf material expressing gz93-eGFP or gz93-mTagBFP2 was harvested at 8 dpi and stored at −20°C until processing. Leaf material, 200 g, was homogenized in a Waring-type blender with the addition of 800 ml phosphate-buffered saline (PBS) extraction buffer (137 mM NaCl, 2.7 mM KCl, 10 mM Na 2 HPO 4 , 1.8 mM KH 2 PO 4 , pH 7.4) supplemented with 2% Triton X-100. The extract was further homogenized with a disperser (IKA ULTRA-TURRAX® S 25 N-10 G) and then repeatedly pelleted by centrifugation at 15,000 rcf for 30 min at 4°C.
The supernatants were discarded, and the pellets were washed twice with PBS extraction buffer including 2% Triton X-100 and twice with PBS lacking Triton X-100. The resulting suspension was then filtered through a 180 µm nylon mesh filter utilizing a vacuum-assisted bottletop filter holder. Small amounts of antifoam Y-30 were added when necessary. This was then subjected to the first TFF using a nylon filter cloth with a 10 µm cut-off. Since TFF systems with this pore rating were not available, we built a prototype TFF filter holder that can be equipped with any cloth or membrane. This filter holder provided a surface area of 96 cm 2 and was operated by a peristaltic pump. gz93-eGFP PBs passed through the 10 µm filter and the permeate was washed and concentrated using a second TFF with a 0.65 µm cut-off (C02-E65U-07-N; Spectrum Labs). Once some of the permeate had passed the first filter, both systems could be operated simultaneously. The concentrated retentate was subjected to low-speed density centrifugation over a cushion of 40% CsCl (1.4225 g/cm 3 ) at 4,800 rcf for 30 min at 20°C. The top layer was collected and washed twice with five sample volumes PBS, to remove CsCl, by pelleting at 21,000 rcf for 5 min at 20°C.
| Flow cytometry of PBs
Processed samples of gz93-eGFP PBs were measured in a V-bottom 96-well plate and data were collected for 10,000 events using a flow cytometer (CytoFlex S; Beckman Coulter). eGFP signal was excited at 488 nm and emission was measured at 525 nm. Forward, side scatter, and eGFP gain was set to 40, 24, and 50, respectively. To show the reproducibility of the method, three independent measurements, each including five replicates, were performed. Flow cytometry data were analyzed with CytExpert 2.3 (Beckman Coulter).
| Determination of nicotine content
Nicotine extraction was performed as described (Moghbel, Ryu, & Steadman, 2015). PBs derived from 50 mg of leaves (FW) were extracted for 2 hr in a 1-ml extraction solution (40% aqueous methanol containing 0.1% 1 N hydrochloric acid). The supernatant was collected and the pellet was re-extracted twice. A nicotine standard (N0267; Merck, Germany) was used for quantification. For high performance liquid chromatography-electrospray ionization-tandem mass spectrometry (HPLC-ESI-MS/MS) measurements, the sample was dissolved in 12 μl of 80 mM ammonium formate buffer (pH 3.0) and 5 μl was loaded on a BioBasic C18 column (BioBasic 18, 150 × 0.32 mm, 5 µm; Thermo Fisher Scientific, Waltham, MA) using a Dionex UltiMate 3000 system directly linked to a QTOF instrument (maXis 4G ETD; Bruker). A gradient from 99.0% to 6.2% of solvent A and 1.0-93.8% of solvent B (solvent A: 80 mM ammonium formate buffer at pH 3.0; B: 80% acetonitrile and 20% A) was applied over a 10 min interval at a flow rate of 6 μl/min. The mass spectrometer was equipped with the standard ESI source and measurements were performed in positive ion, DDA mode (i.e., switching to MS/MS mode for eluting peaks). MS scans were recorded (range, 100-1,500 m/z) and the four highest peaks were selected for fragmentation. Instrument calibration was performed using an ESI calibration mixture (Agilent).
| PB uptake and flow cytometry of HCEC cells
For uptake studies, the medium was supplemented with 100 units/ml of penicillin, and 100 μg/ml of streptomycin (Sigma-Aldrich) and 2 × 10 4 cells/cm 2 were seeded and differentiated for 48 hr until confluence was reached. On the basis of the results from the quantification of PBs using a flow cytometer, cells were incubated with 150 gz93-eGFP PBs/cell at 37°C (n = 3) for 2, 6, 12, 18, and 24 hr. Before cell detachment using 0.1%/0.02% Trypsin/EDTA for 5 min, the cells were washed thoroughly with PBS to remove the remaining particles. The uptake of fluorescent particles into the cells was analyzed in a flow cytometer (CytoFlex S; Beckman Coulter).
Yellow-green-labeled 1-µm polystyrene microspheres (F13081; Thermo Fisher Scientific) were used for comparison. As a negative control, cells were kept for 6 hr at 4°C to prevent active particle uptake. The negative control was carried out with 150 gz93-eGFP PBs or polystyrene microspheres (PS beads) per cell, respectively, and the signal obtained was subsequently subtracted from the fluorescent signal obtained from cells incubated at 37°C.

To obtain sufficient amounts of PBs, we developed a new downstream procedure for the enrichment of zein PBs that is based on a combination of filtration steps (Figure 2) and is therefore more easily scalable than previously described processes based on ultracentrifugation (Hofbauer et al., 2016; Whitehead et al., 2014).
Our procedure comprises initial washing steps with buffer containing Triton X-100 to solubilize membranes and to remove soluble host proteins and other compounds from the insoluble fraction. This was followed by coarse straining through a 180 µm mesh and two subsequent TFFs with pore sizes of 10 and 0.65 µm, respectively. The first TFF removes large cell debris while gz93-eGFP PBs pass through the filter. The second TFF step was carried out to remove additional soluble host proteins and particles that are smaller than gz93-eGFP PBs. Through this procedure, it was possible to reduce the sample volume and concentrate it by a factor of 100. As a result, much more of the sample could be subjected to centrifugation over a cushion of 40% CsCl (1.4225 g/cm 3 ) that allows separating particles with a higher density than gz93-eGFP PBs (e.g., starch granules). In addition, this step is performed at 4,800 rcf, and this enables more of the sample to be processed compared with procedures where centrifugation is done at ultrahigh speeds (>50,000 rcf).
The resulting preparations of gz93-eGFP PBs were evaluated by flow cytometry. This method allowed us to identify two populations of particles with distinct fluorescence properties ( Figure S2). In agreement with visual inspection by confocal microscopy, we concluded that the population of fluorescent particles represents gz93-eGFP PBs while the rest is probably cell debris. The mean concentration of fluorescent particles (n = 3) was 3.18E+06 events/µl (SD ± 13.2%) corresponding to 5.12E+07 gz93-eGFP PBs/g fresh weight of leaves.
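Converting the measured particle concentration into PBs per gram fresh weight requires the final suspension volume and the amount of leaf tissue processed. The short calculation below only illustrates the arithmetic: the final suspension volume is a hypothetical, back-calculated figure chosen so that the reported numbers are reproduced, while the 200 g of leaf material corresponds to the amount described in the processing section.

```python
# Illustrative arithmetic only: the final suspension volume is a hypothetical
# placeholder (back-calculated), not a value reported in the paper.
events_per_ul = 3.18e6        # measured PB concentration (events/µl)
final_volume_ul = 3_220.0     # hypothetical final volume of the PB suspension
leaf_fresh_weight_g = 200.0   # leaf material processed (see processing section)

pbs_per_g_fw = events_per_ul * final_volume_ul / leaf_fresh_weight_g
print(f"{pbs_per_g_fw:.2e} PBs per g fresh weight")   # ~5.1e+07
```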
The nicotine levels of N. benthamiana leaves and of the gz93-eGFP PB preparation were determined using HPLC-ESI-MS/MS (Table S1). The nicotine content in N. benthamiana leaves was around 47,500 ng/g, whereas the residual nicotine content in a PB sample derived from 1 g of leaves was 3.89 ng (SD ± 0.2), demonstrating that during the downstream procedure nicotine was depleted by a factor of 1.22E+04. The residual amount of nicotine is comparable with the nicotine content found in some vegetables. For example, the levels of nicotine in the edible parts of tomato and eggplant are 3-7 ng/g (Moldoveanu, Scott, & Lawson, 2016), and according to Andersson, Wennström, and Gry (2003), the average nicotine exposure from consumption of vegetables is approximately 1,000 ng/day.
FIGURE 2. A scalable process for the enrichment of zein PBs, based on two consecutive tangential flow filtrations. At the first step, cell debris is retained by a 10-µm nylon filter while PBs are able to pass, and the second step concentrates the PBs while allowing soluble contaminants to permeate. In this process flow chart, the path of PBs is highlighted in green. PBs, protein bodies.
The HCEC-1CT cell line retains the expression of cell-type-specific markers and functions of colon epithelial cells (Roig et al., 2010). The uptake of gz93-eGFP PBs into HCEC-1CT cells was demonstrated by CLSM and quantified by flow cytometry.
CLSM images showed that cells are able to take up gz93-eGFP PBs within 4 hr of incubation (Figure 3a-d). The cellular internalization of a gz93-eGFP PB was confirmed by providing optical sections (xy-) with xz-and yz-projections (shown in Figure 3e), which allowed a clear differentiation between extracellular and internalized PBs.
Furthermore, the internalization is proven by the overlay of the green signal, originating from the gz93-eGFP PB, and the red signal emitted by FM4-64 reported to stain endocytic membranes (Hansen, Rasmussen, Niels-Christiansen, & Danielsen, 2009).
A second experiment was carried out to quantitatively assess the uptake of PBs by flow cytometry and to compare the uptake efficiencies of PBs and PS beads. On the basis of the quantification of fluorescent events per µl, 150 gz93-eGFP PBs or PS beads per cell were added to in vitro cultures of HCEC-1CT cells and incubated for 2, 6, 12, 18, and 24 hr. Endocytosis of gz93-eGFP PBs occurred faster than that of PS beads, as indicated by a sharper increasing curve for the PBs, reaching a plateau after 12 hr (Figure 4). Mean values after 12 hr reached 66.5% (SD ± 6.2) and 43.5% (SD ± 4.9) for gz93-eGFP PBs and PS beads, respectively. The difference of 22.9% (SD ± 4.6) was significant in Student's t test (p < .01). Also, after exposure for 18 and 24 hr, the overall number of fluorescent cells incubated with PS beads remained below the levels obtained with gz93-eGFP PBs (t test; p < .05).
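For readers who wish to check the reported comparison, the significance of the 12-hr difference can be recomputed from the summary statistics quoted above (mean ± SD, n = 3 per group), assuming a standard two-sample Student's t test with equal variances. This is a re-derivation from the published summary values, not the authors' original per-replicate analysis.

```python
# Sketch: two-sample t test from summary statistics (mean, SD, n) for the
# 12-hr uptake comparison of gz93-eGFP PBs vs. PS beads.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=66.5, std1=6.2, nobs1=3,   # gz93-eGFP PBs
                            mean2=43.5, std2=4.9, nobs2=3)   # PS beads
print(f"t = {t:.2f}, p = {p:.4f}")   # p is expected to fall below .01
```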
Having confirmed that human colon epithelial cells are able to endocytose gz93-eGFP PBs, we investigated whether endocytosis might lead to the secretion of cytokines that can activate the immune system. Amongst others, the cytokine GM-CSF is known to have an activating effect on APCs, like macrophages and dendritic cells (Hamilton, 2002). We thus collected the culture medium supernatants from the uptake assays (n = 3) and subjected them to Luminex assays. The secretion of GM-CSF was only elevated upon administration of 150 gz93-eGFP PBs per cell but not after treatment with the same amount of PS beads (Figure 5a). IL-6 levels were also significantly increased upon incubation with gz93-eGFP PBs as compared with the same dose of PS beads (Figure 5b).
Endosomal compartments acidify after endocytosis (Johnson, Ostrowski, Jaumouillé, & Grinstein, 2016). Since eGFP fluorescence is not stable in the acidic environment of late endosomes, we also used PBs containing mTagBFP2, a blue fluorescent protein variant with a pKa of 2.7 ± 0.2 (Subach et al., 2011), for this experiment. In the gz93-mTagBFP2 PBs, GFP was replaced with mTagBFP2, but otherwise they were produced and recovered in the same manner as described for the gz93-eGFP PBs and had a similar appearance and size (Figure S1B). We were able to observe the colocalization of gz93-mTagBFP2 PBs in compartments stained with Dextran Alexa Fluor 647 (Figure 6c). It is, therefore, likely that the PBs are transported to the late endosomes, where antigen processing usually takes place.
In this study, we focused on using zein PBs as alternative oral drug delivery vehicles since they combine several beneficial properties: Zein PBs have been shown to be recalcitrant against digestion by various proteases (S. H. Lee & Hamaker, 2006), have an adjuvant effect (Hofbauer et al., 2016;Whitehead et al., 2014), and they can mediate the sustained release of in vitro encapsulated small molecule drugs and even DNA (Acevedo et al., 2018;Farris, Brown, Ramer-Tait, & Pannier, 2017;Regier, Taylor, Borcyk, Yang, & Pannier, 2012;Zhang et al., 2015).
In addition, the encapsulation in zein PBs can be achieved directly in the plant production host as an integral part of the upstream process.
For the induction of a mucosal immune response, uptake of an antigen across the intestinal epithelium is required. The proline-rich repeat domain of gz93 resembles the sweet arrow peptide and is assumed to have cell-penetrating effects that could promote cellular uptake (Fernández-Carneado, Kogan, Castel, & Giralt, 2004).
In addition to the uptake of fluorescent PBs, we also showed an immunostimulatory effect on the cells, resulting in an increased secretion of chemoattractant molecules such as GM-CSF. GM-CSF is involved in the differentiation of granulocytes and macrophages and in the activation and proliferation of neutrophils, macrophages, and dendritic cells (Hamilton, 2002). With respect to mucosal immunization, the presence of GM-CSF was shown to increase antigen-specific antibody production (Okada et al., 1997). GM-CSF also promotes IL-6 secretion (Evans, Shultz, Dranoff, Fuller, & Kamdar, 1998), and accordingly IL-6 levels were also elevated when cells were subjected to PBs. Both chemokines play a pivotal role in the initiation of a humoral response to antigenic proteins (Tada, Hidaka, Kiyono, Kunisawa, & Aramaki, 2018), and IL-6 has been explored as a molecular adjuvant for mucosal vaccines (Rath et al., 2013;Su et al., 2008;Thompson & Staats, 2011). The observed cytokine release indicates the PB formulation's potential to enhance immunity and to exert an adjuvant effect, which is in agreement with the findings of Whitehead et al. (2014) and Hofbauer et al. (2016).
In addition to antigen uptake via intestinal epithelial cells, dendritic cells can capture antigens directly from the intestinal lumen by extending dendrites through the epithelium (Rescigno et al., 2001). Since GM-CSF is known to recruit dendritic cells to the subepithelial layer (Egea, Hirata, & Kagnoff, 2010), it is feasible that its secretion would lead to an increased number of dendrites reaching through tight junctions. Therefore, we investigated the uptake of PBs into APCs using the monocytic model cell line U937 (Altaf & Revell, 2013). Orally delivered vaccines generally require larger antigen doses than injected formulations to induce comparable immune responses (Pavot, Rochereau, Genin, Verrier, & Paul, 2012). This presents a challenge in the development of oral vaccine applications, and the corresponding production platforms need to be highly scalable.
Even though plant-based production systems are very flexible with respect to upstream production, the downstream processing procedure often includes rate-limiting bottlenecks. For example, in most previous reports, the isolation of PBs from leaf material involved a density gradient ultracentrifugation step (Hofbauer et al., 2016; Joseph et al., 2012; van Zyl, Meyers, & Rybicki, 2017). In the present study, the PBs were recovered by a newly established enrichment process based on several low-speed centrifugations and TFF steps, which can be easily adapted to kg amounts of leaf material without the need to invest in expensive large equipment for continuous ultracentrifugation. The removal of nicotine during the process was demonstrated, and the residual amount of nicotine in the sample was comparable to the nicotine content found in widely consumed vegetables (Moldoveanu et al., 2016). We have also demonstrated that fluorescent zein PBs can be analyzed and quantified by flow cytometry. It is likely that the procedure can also be adapted for nonfluorescent particles by using antigen-specific antibodies with fluorescent labels, thereby providing a general procedure for quality control of particulate formulations. It is important to note that oral vaccine formulations do not require the extensive purification and sterile conditions necessary for injected formulations, and downstream processing procedures reported for plant-made oral vaccine candidates range from simple homogenization or minimal processing of plant material to partial purification (Chan & Daniell, 2015; Loza-Rubio et al., 2012; Merlin, Pezzotti, & Avesani, 2017; Pniewski et al., 2018). The presence of plant-derived contaminants such as cell wall debris or starch particles, which cannot be completely removed by filtration and density centrifugation steps, is therefore unlikely to constitute a regulatory problem. On the contrary, biocompatible plant constituents, such as starch microparticles, have even been studied as vaccine adjuvants (Rydell & Sjöholm, 2004; Stertman, Lundgren, & Sjöholm, 2006).
In conclusion, we have shown that zein PBs produced in
|
v3-fos-license
|
2017-07-31T00:48:42.578Z
|
2016-09-07T00:00:00.000
|
22714549
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.7717/peerj.2437",
"pdf_hash": "dc14e5618fa991fd733fecbd5c15e7b9c0f952dd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43055",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Materials Science"
],
"sha1": "9d044d54ba190b4c2f2fcdcf1dee9025b610737f",
"year": 2016
}
|
pes2o/s2orc
|
Genome-wide identification and characterization of WRKY gene family in Salix suchowensis
WRKY proteins are zinc finger transcription factors that were first identified in plants. They can specifically interact with the W-box, which can be found in the promoter region of a large number of plant target genes, to regulate the expression of downstream target genes. They also participate in diverse physiological and growth processes in plants. Prior to this study, many WRKY genes had been identified and characterized in herbaceous species, but no large-scale study of WRKY genes had been carried out in willow. With the whole genome sequencing of Salix suchowensis, we have the opportunity to conduct genome-wide research on the willow WRKY gene family. In this study, we identified 85 WRKY genes in the willow genome and renamed them from SsWRKY1 to SsWRKY85 on the basis of their specific distributions on chromosomes. Due to their diverse structural features, the 85 willow WRKY genes could be further classified into three main groups (group I-III), with five subgroups (IIa-IIe) in group II. With the multiple sequence alignment and a manual search, we found three variations of the WRKYGQK heptapeptide: WRKYGRK, WKKYGQK and WRKYGKK, and four variations of the normal zinc finger motif, which might execute some new biological functions. In addition, the SsWRKY genes from the same subgroup share similar exon-intron structures and conserved motif domains. Further studies of SsWRKY genes revealed that segmental duplication events (SDs) played a more prominent role in the expansion of the SsWRKY gene family. Expression profiling of SsWRKY genes with RNA sequencing data revealed diverse expression patterns among five tissues, including tender roots, young leaves, vegetative buds, non-lignified stems and barks. The analysis of the WRKY gene family in willow not only helps to complete the functional and annotation information for this gene family in woody plants, but also provides important references for investigating the expansion and evolution of this gene family in flowering plants.
The existence of either one or two highly conserved WRKY domains is the most vital structural characteristic of WRKY genes. The WRKY domain consists of about 60 amino acid residues, with a conserved WRKYGQK heptapeptide at its N-terminus and a zinc finger motif (C-X4-5-C-X22-23-H-X1-H or C-X7-C-X23-H-X1-C) at the C-terminal region. Previous functional studies indicated that WRKY proteins specifically interact with the W-box ([C/T]TGAC[T/C]), a cis-element found in the promoter region of plant target genes, to adjust the expression of downstream target genes (Ciolkowski et al., 2008). Additionally, SURE (sugar responsive element), another prominent cis-element that can promote transcription, was also shown to be bound by WRKY transcription factors in a convincing study (Sun, 2003). The proper DNA-binding ability of WRKY proteins can be influenced by variation of the conserved WRKYGQK heptapeptide (Duan et al., 2007; Maeo et al., 2001).
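As a simple illustration of how W-box elements can be located computationally, the following sketch scans a promoter sequence with a regular expression encoding [C/T]TGAC[T/C]. The example sequence is arbitrary (not from this study) and only the forward strand is searched; a reverse-complement scan could be added in the same way.

```python
# Minimal sketch: locate W-box elements ([C/T]TGAC[T/C]) in a promoter sequence.
import re

W_BOX = re.compile(r"[CT]TGAC[TC]")

promoter = "AGGTTGACCTTAAACTGACTCGT"   # hypothetical promoter fragment
hits = [(m.start(), m.group()) for m in W_BOX.finditer(promoter)]
print(hits)   # [(3, 'TTGACC'), (14, 'CTGACT')]
```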
The WRKY proteins can be classified into three main groups (I, II and III) on the basis of the number of their WRKY domains and the pattern of the zinc finger motif. Proteins from group I contain two WRKY domains followed by a C2H2 zinc finger motif, while the WRKY proteins from groups II and III contain only one WRKY domain followed by a C2H2 or C2HC motif, respectively (Yamasaki et al., 2005). Group II can be further divided into five subgroups, from IIa to IIe, based on additional amino acid motifs present outside the WRKY domain. Apart from the conserved WRKY domains and the zinc finger motif, some WRKY proteins appear to have a basic nuclear localization signal, leucine zippers (LZs) (Cormack et al., 2002), a serine-threonine-rich region, a glutamine-rich region and a proline-rich region (Ülker & Somssich, 2004). Throughout the studies of the WRKY gene family in many higher plants (Liu & Ekramoddoullah, 2009; Rushton et al., 2010; Wu, 2005), WRKY genes have been identified to be involved in various regulatory processes mediated by different biotic and abiotic stresses (Ramamoorthy et al., 2008). It has been well documented that WRKY genes play vital roles in plant defense against various biotic stresses, such as bacterial, fungal and viral pathogens (Cheng et al., 2015; Dong, Chen & Chen, 2003; Jaffar et al., 2016; Jiang et al., 2016; Kim et al., 2016; Li et al., 2006; Liu et al., 2016; Xu et al., 2006; Zhou et al., 2008). They are also involved in abiotic stress-induced gene expression. In Arabidopsis, the expression of AtWRKY25 and AtWRKY33 is markedly altered by either heat or salt treatment (Jiang & Deyholos, 2009). Furthermore, the expression of TcWRKY53 from alpine pennycress (Thlaspi caerulescens) is affected by salt, cold, and polyethylene glycol treatments (Wei et al., 2008). In rice, a total of 54 OsWRKY genes showed noticeable differences in their transcript abundance under abiotic stresses such as cold, drought, and salinity (Ramamoorthy et al., 2008). There is also accumulating evidence that WRKY genes are involved in regulating developmental processes, such as embryo morphogenesis (Lagacé & Matton, 2004), senescence (Robatzek & Somssich, 2002), trichome initiation (Johnson, Kolevski & Smyth, 2002), and some signal transduction processes mediated by plant hormones including gibberellic acid (Zhang et al., 2004), abscisic acid, or salicylic acid (Du & Chen, 2008).
The number of WRKY genes in different species varies tremendously. For instance, there are 72 members in Arabidopsis thaliana, at least 45 in barley, 57 in cucumber, 58 in physic nut (Jatropha curcas), 59 in grapevine, 104 in poplar, 105 in foxtail millet (Setaria italica), 112 in Gossypium raimondii and more than 109 in rice (Ding et al., 2015; Eulgem, 2000; Guo et al., 2014; He et al., 2012; Ling et al., 2011; Mangelsen et al., 2008; Muthamilarasan et al., 2015; Wu, 2005; Xiong et al., 2013). Zhang & Wang (2005) also identified the most basal WRKY genes in the lineages of non-plant eukaryotes and green algae. Interestingly, the WRKY genes in the unicellular green alga Chlamydomonas, the protist Giardia lamblia, the bryophyte Physcomitrella patens and the fern Ceratopteris richardii all belong to group I (Yu, Chen & Zhang, 2006; Ülker & Somssich, 2004; Zhang & Wang, 2005). For example, a study in the bryophyte Physcomitrella patens found at least 12 WRKY genes, and all of them belonged to group I (Ülker & Somssich, 2004). Additionally, a study in the gymnosperm Cycas revoluta identified at least 21 WRKY genes (Yu, Chen & Zhang, 2006), which were divided into two groups: 15 WRKY genes belonged to group I and the other six belonged to group II. Further study suggested that the core WRKY domains of groups II and III are similar to the C-terminal domain of group I; therefore, the group II WRKY genes might have been derived from the C-terminal domain of group I, and group III probably evolved from group II (Ülker & Somssich, 2004). All the above studies indicate that the group I WRKY genes might be the oldest type, present since the origin of eukaryotes, while groups II and III might have arisen after the origin of bryophytes (Xie et al., 2005; Zhang & Wang, 2005). In the evolution of WRKY genes, gene duplication events played prominent roles. As a matter of fact, gene duplication events can lead to the generation of new genes. For example, approximately 80% of OsWRKY (rice) genes are located in duplicated regions (Wu, 2005), as are 83% of PtWRKY (poplar) genes (He et al., 2012). However, no gene duplication events have been detected in cucumber (Ling et al., 2011).
In the last few years, the increasing consumption of fossil fuels has resulted in a substantial increase in CO2 concentration, which has adverse impacts on the global climate (Pleguezuelo et al., 2015). Therefore, an ever-increasing demand for energy from renewable sources has provided a new impetus to cultivate woody plants for bioenergy production. Due to their ease of propagation, rapid growth and high yield in short rotation systems, some willow species have been used as renewable resources since the 1970s. Additionally, owing to its favorable physiological characteristics, willow has become a prominent resource for basket production, environmental restoration, analgesic extraction, phytoremediation, both riparian and upland erosion control, and biomass production (Kuzovkina & Quigley, 2005). WRKY proteins participate in diverse physiological and developmental processes in plants. Given these considerations and the recently released Salix suchowensis genome sequence, which covers about 96% of the expressed gene loci (Dai et al., 2014), we have the opportunity to analyze the willow WRKY gene family. The characterization of WRKY genes in willow can provide interesting gene pools to be investigated for breeding and genetic engineering purposes in woody plants.
Identification and distribution of WRKY genes in willow
The procedure performed to identify putative WRKY proteins in willow was similar to the methods described for other species (Guo et al., 2014; He et al., 2012; Wu, 2005). The Hidden Markov Model (HMM) profile for the WRKY transcription factor domain was downloaded from the Pfam database (http://pfam.xfam.org/) under the accession 'PF03106' (Punta et al., 2012). The HMM profile was applied as a query to search against all willow protein sequences (Willow.gene.pep) using the BLASTP program (E-value cutoff = 1e-3) (Camacho et al., 2009). Another procedure was performed to validate the accuracy of the predictions. An alignment of WRKY seed sequences in Stockholm format from the Pfam database was used by the HMMER program (hmmbuild) to build an HMM model, and the model was then used to search the willow protein sequences with another HMMER program (hmmsearch) with default parameters (Eddy, 1998). Finally, we employed the SMART program (http://smart.embl-heidelberg.de/) to confirm that the candidates from the two procedures contained the expected WRKY structural features (Letunic, Doerks & Bork, 2015).
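The two search procedures described above could be scripted roughly as follows. This is a minimal sketch rather than a reproduction of the exact commands used by the authors: it assumes HMMER3 and BLAST+ are installed and on the PATH, and the names of the HMM file, the candidate FASTA file and the output files are placeholders.

```python
# Sketch of an HMM search plus a BLASTP validation step for WRKY candidates.
import subprocess

# 1) hmmsearch of the Pfam WRKY model (PF03106) against all willow proteins
subprocess.run(["hmmsearch", "--tblout", "wrky_hmm_hits.tbl",
                "-E", "1e-3", "WRKY_PF03106.hmm", "Willow.gene.pep"],
               check=True)

# 2) BLASTP check of the candidate sequences against the willow proteome
subprocess.run(["makeblastdb", "-in", "Willow.gene.pep", "-dbtype", "prot"],
               check=True)
subprocess.run(["blastp", "-query", "wrky_candidates.fasta",
                "-db", "Willow.gene.pep", "-evalue", "1e-3",
                "-outfmt", "6", "-out", "wrky_blastp_hits.tsv"],
               check=True)

# 3) collect the IDs reported by the HMM search (column 1 of --tblout output)
hmm_ids = {line.split()[0] for line in open("wrky_hmm_hits.tbl")
           if not line.startswith("#")}
print(len(hmm_ids), "candidate WRKY proteins")
```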
Sequence alignments, phylogenetic analysis and classification of willow WRKY genes
Using the online tool SMART, we obtained the conserved WRKY core domains of the predicted SsWRKY genes, and multiple sequence alignment based on these domains was then performed using ClustalX (version 2.1) (Larkin et al., 2007). After alignment, we used Boxshade (http://www.ch.embnet.org/software/BOX_form.html) to color the alignment result online. To gain a better classification of these SsWRKY genes, a further multiple sequence alignment including 103 SsWRKY domains and 82 WRKY domains from Arabidopsis (AtWRKY) was performed using ClustalW (Larkin et al., 2007), and a phylogenetic tree based on this alignment was built with MEGA 6.0 using the Neighbor-joining (NJ) method (Tamura et al., 2013). Bootstrap values were calculated from 1,000 iterations in the pairwise gap deletion mode, which helps to preserve the topology of the NJ tree when divergent sequences are included. Based on the phylogenetic tree constructed from the SsWRKY and AtWRKY domains, the SsWRKY genes were classified into different groups and subgroups. In order to obtain a better comparison of the WRKY family within Salicaceae, a phylogenetic tree including all SsWRKY domains and 126 WRKY domains from poplar (PtWRKY) was constructed with a method similar to that used for Arabidopsis. Additionally, a phylogenetic tree based on full-length SsWRKY genes was also constructed to obtain a better classification. The ortholog of each SsWRKY gene in Arabidopsis and poplar was assigned based on the phylogenetic trees of their respective WRKY domains, and members of group I were considered as orthologs only when the same phylogenetic relationship could be detected for both the N-terminal and C-terminal domains in the tree. Another method described by Zou et al. (2016), a BLAST-based method (bi-directional best hit), was used to verify the putative orthologous genes (E-value cutoff = 1e-20) (Chen et al., 2007).
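The tree building itself was done in MEGA 6.0. For completeness, a scripted neighbor-joining construction from a pre-aligned domain file could look like the sketch below, using Biopython; the input file name is an assumption, and the bootstrap step performed in MEGA is omitted here for brevity.

```python
# Sketch: neighbor-joining tree from a pre-aligned protein FASTA with Biopython.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read("wrky_domains_aligned.fasta", "fasta")   # assumed file name
calculator = DistanceCalculator("blosum62")                 # protein distances
constructor = DistanceTreeConstructor(calculator, method="nj")
tree = constructor.build_tree(aln)
Phylo.write(tree, "wrky_nj_tree.nwk", "newick")
```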
Evolutionary analysis of WRKY III genes in willow
The group of WRKY III genes, found only in flowering plants, is considered the evolutionarily youngest group and plays crucial roles in the process of plant growth (He et al., 2012; Wu, 2005). As described by Wang et al. (2015), WRKY III genes also have a prominent impact on disease and drought resistance. A previous study by Zhang & Wang (2005) held the opinion that duplications and diversifications were plentiful among WRKY III genes and that they appeared to have confronted different selection challenges. Phylogenetic analysis of WRKY III genes was performed using MEGA 6.0 with 65 WRKY III genes from Arabidopsis (AtWRKY), Populus (PtWRKY), grape (VvWRKY), willow (SsWRKY) and rice (OsWRKY). An NJ tree was constructed with the same method described before. Additionally, we estimated the non-synonymous (Ka) and synonymous (Ks) substitution rates of SsWRKY III gene pairs to verify whether selection pressure participated in the expansion of SsWRKY III genes. Each pair of these WRKY III protein sequences was first aligned using ClustalW. The alignments generated by ClustalW and the corresponding cDNA sequences were submitted to the online program PAL2NAL (http://www.bork.embl.de/pal2nal/) (Suyama, Torrents & Bork, 2006), which automatically calculates Ks and Ka using the codeml program in PAML (Yang, 2007).
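Ka and Ks were obtained here through the PAL2NAL web server coupled to codeml. An equivalent local calculation could be scripted with Biopython's PAML wrapper, as sketched below; the codon alignment and tree file names are placeholders, a local codeml installation is assumed, and the exact keys of the parsed results may vary with the Biopython/PAML versions used.

```python
# Sketch: pairwise dN/dS (Ka/Ks) with codeml via Biopython's PAML wrapper.
from Bio.Phylo.PAML import codeml

cml = codeml.Codeml(alignment="pair.cdn.phy",   # codon alignment (placeholder)
                    tree="pair.nwk",            # tree file (placeholder)
                    out_file="pair_codeml.out",
                    working_dir="./codeml_run")
cml.set_options(seqtype=1,      # codon sequences
                runmode=-2,     # pairwise comparison
                CodonFreq=2,    # F3x4 codon frequencies
                model=0,
                NSsites=[0])
results = cml.run()

pairwise = results["pairwise"]
for seq1 in pairwise:
    for seq2, stats in pairwise[seq1].items():
        print(seq1, seq2, "dN =", stats["dN"], "dS =", stats["dS"])
```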
Analysis of exon-intron structure, gene clusters, gene duplication events and conserved motif distribution of willow WRKY genes
The exon-intron structures of the willow WRKY genes were obtained from the gene annotation file assembled in-house (http://bio.njfu.edu.cn/ss_wrky/version5_2.gff3), and the diagrams were drawn with the online Gene Structure Display Server (GSDS: http://gsds.cbi.pku.edu.cn/) (Hu et al., 2015).
Gene clusters are very important for predicting co-expressed genes or the potential functions of clustered genes in angiosperms (Overbeek et al., 1999). A gene cluster can be defined as two or more genes located within 200 kb of one another on a single chromosome (He et al., 2012; Holub, 2001).
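A minimal sketch of this 200-kb clustering rule is given below; the list of (gene id, chromosome, start position) tuples is an assumed input taken from the genome annotation:

from itertools import groupby

WINDOW = 200_000  # two or more genes within 200 kb on one chromosome form a cluster

def find_clusters(genes):
    # genes: iterable of (gene_id, chromosome, start_position) tuples.
    clusters = []
    ordered = sorted(genes, key=lambda g: (g[1], g[2]))
    for _, members in groupby(ordered, key=lambda g: g[1]):
        members = list(members)
        current = [members[0]]
        for gene in members[1:]:
            if gene[2] - current[-1][2] <= WINDOW:
                current.append(gene)
            else:
                if len(current) >= 2:
                    clusters.append(current)
                current = [gene]
        if len(current) >= 2:
            clusters.append(current)
    return clusters

# Example with hypothetical coordinates:
# find_clusters([("SsWRKY1", "Chr1", 10_000), ("SsWRKY2", "Chr1", 150_000)])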
Gene duplication events are considered vital sources of biological evolution. Two or more adjacent homologous genes located on a single chromosome were considered tandem duplication events (TDs), while homologous gene pairs on different chromosomes were defined as segmental duplications (SDs) (Liu & Ekramoddoullah, 2009). BLASTP (E-value cutoff = 1e-20) was performed to identify gene duplication events among the SsWRKY genes using the following criteria (Gu et al., 2002; He et al., 2012): (1) the aligned sequence covers at least 80% of the longer gene; and (2) the similarity of the aligned regions is at least 70%. In this study, we relaxed the similarity cutoff for the aligned regions to 65%, because divergence outside the aligned regions may reduce the overall similarity value when comparing genes from different species.
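The duplication criteria described above could be applied to tabular BLASTP output along the following lines; the column layout of -outfmt 6 and the dictionary of protein lengths are assumptions of this sketch:

def duplicated_pairs(blast_tsv, protein_length, min_coverage=0.8, min_identity=65.0):
    # blast_tsv: BLASTP results in -outfmt 6 (qseqid sseqid pident length ...).
    # protein_length: dict of sequence id -> length in residues (assumed input).
    pairs = set()
    with open(blast_tsv) as handle:
        for line in handle:
            query, subject, identity, aln_len = line.split("\t")[:4]
            if query == subject:
                continue
            longer = max(protein_length[query], protein_length[subject])
            coverage = float(aln_len) / longer
            if coverage >= min_coverage and float(identity) >= min_identity:
                pairs.add(tuple(sorted((query, subject))))
    return pairs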
To better illustrate the structural features of SsWRKY proteins, the online tool MEME (Multiple Expectation Maximization for Motif Elicitation) was used to identify conserved motifs in the encoded SsWRKY proteins (Bailey et al., 2006). The following optimized parameters were employed: any number of repetitions, a maximum of 20 motifs, and an optimum motif width constrained to 6-50 residues. The online program 2ZIP (http://2zip.molgen.mpg.de/) was used to verify the existence of the conserved Leu zipper motif (Bornberg-Bauer, Rivals & Vingron, 1998), whereas some other important conserved motifs, HARF, LXXLL (X, any amino acid) and LXLXLX, were identified manually.
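The manual screen for the HARF, LXXLL and LXLXLX motifs can also be expressed with simple regular expressions, as in the sketch below; the protein dictionary is an assumed input, and MEME and 2ZIP themselves are run through their own interfaces:

import re

MOTIF_PATTERNS = {
    "HARF":   re.compile(r"RTGHARFRR[AG]P"),  # RTGHARFRR[A/G]P
    "LXXLL":  re.compile(r"L..LL"),           # L, any two residues, L, L
    "LXLXLX": re.compile(r"L.L.L."),          # alternating Leu pattern
}

def scan_motifs(proteins):
    # proteins: dict of protein id -> amino-acid sequence (assumed input).
    hits = {}
    for protein_id, sequence in proteins.items():
        hits[protein_id] = [name for name, pattern in MOTIF_PATTERNS.items()
                            if pattern.search(sequence)]
    return hits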
Expression analyses of willow WRKY genes
The sequenced S. suchowensis RNA-HiSeq reads from five tissues including tender roots, young leaves, vegetative buds, non-lignified stems and barks generated in our previous study were separately mapped back onto the SsWRKY gene sequences using BWA (mismatch 2 bp, other parameters as default) (Li & Durbin, 2009), and the number of mapped reads for each WRKY gene was counted. Normalization of the mapped reads was done using RPKM (reads per kilo base per million reads) method (Wagner, Kin & Lynch, 2012). The heat map for tissue-specific expression profiling was generated based on the log 2 RPKM values for each gene in all the tissue samples using R package (Gentleman et al., 2004).
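The RPKM normalisation and log2 transformation used for the heat map reduce to simple arithmetic, sketched below; the per-gene read counts, gene lengths and library sizes are assumed inputs, and the pseudocount is our own addition to avoid log2(0):

import math

def rpkm(mapped_reads, gene_length_bp, total_mapped_reads):
    # RPKM = reads * 1e9 / (total mapped reads in the library * gene length in bp)
    return mapped_reads * 1e9 / (total_mapped_reads * gene_length_bp)

def log2_rpkm(mapped_reads, gene_length_bp, total_mapped_reads, pseudocount=1.0):
    value = rpkm(mapped_reads, gene_length_bp, total_mapped_reads)
    return math.log2(value + pseudocount)

# Hypothetical example: 480 reads on a 1.2-kb gene in a library of 20 million reads.
print(round(rpkm(480, 1200, 20_000_000), 2))  # 20.0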
Identification and characterization of 85 WRKY genes in willow (Salix suchowensis)
In this study, we obtained 92 putative WRKY genes by using HMMER to search the HMM profile of the WRKY DNA-binding domain against the willow protein sequences, and we validated the result by BLASTP. After submitting the 92 putative WRKY genes to the online program SMART, seven genes without a complete WRKY domain were removed, while the remaining 85 WRKY genes were retained as possible members of the WRKY superfamily. WRKY genes contain one or two WRKY domains, comprising a conserved WRKYGQK heptapeptide at the N-terminus and a zinc finger motif (C-X4-7-C-X22-23-H-X-H/C) at the C-terminus (Eulgem, 2000). Variations in the WRKY core domain or the zinc finger motif may alter the binding specificities of WRKY genes, although this remains largely to be demonstrated (Brand et al., 2013; Rinerson et al., 2015; Yamasaki et al., 2005). To identify variations in the WRKY core domains, a multiple sequence alignment of the 85 SsWRKY core domains was conducted, and the result is shown in Fig. 1. Among the 85 selected WRKY genes, 81 (95.3%) contained the highly conserved WRKYGQK sequence, whereas the other four WRKY genes (SsWRKY14, SsWRKY23, SsWRKY38 and SsWRKY78) had a single mismatched amino acid in their core WRKY domains (Fig. 1). In SsWRKY14 and SsWRKY38, the WRKY domain has the sequence WRKYGKK, while SsWRKY23 contains a WKKYGQK sequence and SsWRKY78 contains a WRKYGRK sequence. Eulgem (2000) previously described the zinc finger motif (C-X4-5-C-X22-23-H-X1-H or C-X7-C-X23-H-X1-C) as another vital feature of the WRKY family. As illustrated in Fig. 1, four WRKY domains (SsWRKY76C, SsWRKY64, SsWRKY12 and SsWRKY28) do not contain any distinct zinc finger motif, but they were still retained in the subsequent analyses, as was done in barley and poplar (He et al., 2012; Mangelsen et al., 2008). Additionally, some zinc-finger-like motifs, including C-X4-C-X21-H-X1-H in SsWRKY23 and C-X5-C-X19-H-X1-H in SsWRKY73 and SsWRKY17, were identified in willow WRKY genes. Both zinc-finger-like motifs were also found in poplar (PtWRKY39, 57, 42 and 53).
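For illustration only, the heptapeptide variants and the zinc finger motif described above could be screened with regular expressions as sketched below; the pattern definitions are simplified and the protein dictionary is an assumed input, whereas the study itself relied on the multiple sequence alignment and SMART:

import re

HEPTAPEPTIDE = re.compile(r"W[A-Z]KYG[A-Z]K")        # matches WRKYGQK and variants such as WRKYGKK, WKKYGQK, WRKYGRK
ZINC_FINGER = re.compile(r"C.{4,7}C.{22,23}H.[HC]")  # C-X4-7-C-X22-23-H-X-H/C

def classify_domains(proteins):
    # proteins: dict of protein id -> amino-acid sequence (assumed input).
    for protein_id, sequence in proteins.items():
        heptapeptides = HEPTAPEPTIDE.findall(sequence)
        variants = [h for h in heptapeptides if h != "WRKYGQK"]
        has_finger = bool(ZINC_FINGER.search(sequence))
        yield protein_id, heptapeptides, variants, has_finger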
Detailed characteristics of the SsWRKY genes are listed in Table 1, including the group assignment, chromosomal distribution, and Arabidopsis and poplar orthologs of each gene. The molecular weight (MW), isoelectric point (PI) and length of each WRKY protein sequence are also shown in Table 1. As detailed in Table 1, the average length of these protein sequences is 407 residues, with lengths ranging from 109 residues (SsWRKY23) to 1,593 residues (SsWRKY78). Additionally, the PI ranged from 5.03 (SsWRKY38, SsWRKY60) to 10.27 (SsWRKY28), and the MW ranged from 12.9 kDa (SsWRKY23) to 179.0 kDa (SsWRKY78).
Locations and gene clusters of willow WRKY genes
In total, 84 of the 85 putative SsWRKY genes could be mapped onto the 19 willow chromosomes and were renamed SsWRKY1 to SsWRKY84 according to their positions on the chromosomes. Only one SsWRKY gene (willow_GLEAN_10002834), renamed SsWRKY85, could not be conclusively mapped onto any chromosome. As shown in Fig. 2, chromosome (Chr) 2 possessed the largest number of SsWRKY genes (11 genes), followed by Chr14 (10 genes). Eight SsWRKY genes were found on Chr6, six on Chr1 and Chr16, and five on Chr5. Four chromosomes (Chr4, Chr11, Chr17 and Chr18) each carried four SsWRKY genes, and three SsWRKY genes were found on each of Chr8, Chr13 and Chr19. Chr10 and Chr15 had two SsWRKY genes each, and only one SsWRKY gene was identified on Chr7, Chr9 and Chr12. The distribution of SsWRKY genes across chromosomes was highly uneven, consistent with the scarcity of TDs among willow WRKY genes. As described by Holub (2001), a chromosomal region containing two or more genes within 200 kb was defined as a gene cluster (He et al., 2012). According to this definition, a total of 23 SsWRKY genes fell into 11 clusters in willow (Fig. 2). The chromosomal distribution of the gene clusters was also irregular, and clusters were identified on only seven chromosomes. Three clusters, comprising seven SsWRKY genes, were found on Chr2, and two clusters were found on each of Chr6 and Chr14. Only one cluster was located on each of Chr3, Chr8, Chr10 and Chr18, whereas none was identified on the other 11 chromosomes. Further analysis of the SsWRKY chromosomal distribution revealed a high-density WRKY gene region spanning only 2.23 Mb on Chr2, a pattern that has also been observed in rice and poplar (He et al., 2012; Wu, 2005).
Figure 1: Comparison of the WRKY domain sequences from the 85 SsWRKY genes. WRKY genes with the suffix -N or -C indicate the N-terminal and C-terminal WRKY domains of group I members, respectively. "-" has been inserted for optimal alignment. Red indicates the highly conserved WRKYGQK heptapeptide, and the zinc finger motifs are highlighted in green. The position of a conserved intron is indicated by an arrowhead.
Phylogenetic analysis and classification of WRKY genes in willow
To obtain a better separation of the different groups and subgroups of SsWRKY genes, a total of 185 WRKY domains, including 82 AtWRKY domains and 103 SsWRKY domains, were used to construct the NJ phylogenetic tree. On the basis of the phylogenetic tree and the structural features of the WRKY domains, all 85 SsWRKY genes were clustered into three main groups (Fig. 3). Nineteen members were categorized into group I; all contain two WRKY domains and C2H2-type zinc finger motifs, except SsWRKY78, which contains only one WRKY domain but two zinc finger motifs. Domain acquisition and loss events appear to have shaped the WRKY family (Ross, Liu & Shen, 2007; Rossberg et al., 2001). Thus, SsWRKY78 may have evolved from a two-domain WRKY gene but lost one WRKY domain during evolution. Additionally, as shown in Fig. 3, SsWRKY78 shows high similarity to SsWRKY40N, implying a common origin of their domains. A similar phenomenon was also observed for PtWRKY90 in poplar (He et al., 2012).
The largest number of SsWRKY genes, each comprising a single WRKY domain and a C2H2 zinc finger motif, were categorized into group II. The SsWRKY genes of group II could be further divided into five subgroups: IIa, IIb, IIc, IId and IIe. As shown in Fig. 3, subgroups IIa (four members) and IIb (eight members) clustered into one clade, as did subgroups IId (13 members) and IIe (11 members). Strikingly, the SsWRKY genes of subgroup IIc (21 members) and group IC were classified into one clade, suggesting that the group II genes are not monophyletic and that the subgroup IIc WRKY genes may have evolved from group I genes through loss of the N-terminal WRKY domain. As shown in Figs. 3 and S1, SsWRKY23, SsWRKY34 and their orthologous genes (AtWRKY49, PtWRKY39, PtWRKY57, PtWRKY34 and PtWRKY32) seem to form a new subgroup closer to group III. However, SsWRKY23 and SsWRKY34 exhibit the zinc finger motifs C-X4-C-X21-H-X-H and C-X4-C-X23-H-X-H, as observed in subgroup IIc and group IC; they were therefore classified into subgroup IIc in this study. In contrast to the C2H2 zinc finger pattern of groups I and II, the group III WRKY genes (seven members), broadly considered to play vital roles in plant evolution and adaptability, contained one WRKY domain and a C-X7-C-X23-H-X-C zinc finger motif. In rice and barley, however, a distinct C-X7-C-Xn-H-X1-C (n ≥ 24) zinc finger motif has been identified in group III (Mangelsen et al., 2008; Wu, 2005); this motif has not been found in poplar, grape, Arabidopsis or willow, suggesting that it may be restricted to monocotyledonous species.
Figure 3: Phylogenetic tree of WRKY domains from willow and Arabidopsis. The phylogenetic tree was constructed using the neighbor-joining method in MEGA 6.0. WRKY genes with the suffix 'N' or 'C' indicate the N-terminal and C-terminal WRKY domains of group I, respectively. The different colors indicate the different groups (I, II and III) or subgroups (IIa, b, c, d and e) of WRKY domains. Circles indicate WRKY genes from willow, and diamonds represent genes from Arabidopsis. The purple trapezoid region indicates a new subgroup belonging to IIc.
To extend the comparison to another woody species, a phylogenetic tree based on the WRKY domains of willow and poplar was constructed (Fig. S1). The tree showed that most of the WRKY domains from willow and poplar clustered into sister pairs, suggesting that gene duplication events played prominent roles in the evolution and expansion of the WRKY gene family. Furthermore, a total of 20 SsWRKY domains were identical (similarity: 100%) to their poplar counterparts, e.g., SsWRKY39 and PtWRKY9, and so on. Further functional analyses of these genes in willow or poplar will therefore provide a useful reference for the other species.
The ortholog of SsWRKY genes in Arabidopsis and poplar
The clustering of orthologous genes highlights the conservation and divergence of gene families, and orthologs may retain the same functions (Ling et al., 2011). In this study, a phylogeny-based method was used to identify putative orthologs of the SsWRKY genes in Arabidopsis and poplar (Figs. 3 and S1), and a BLAST-based method (bi-directional best hit) was used to confirm the true orthologs. The WRKY genes of group I contain two WRKY domains, and both domains were used to construct the phylogenetic trees. To avoid misassignment of orthologs in group I, group I WRKY genes were considered orthologous only when the same phylogenetic relationship was detected for both the N-terminal and C-terminal domains in the phylogenetic tree. For example, SsWRKY37 and AtWRKY44 were considered an orthologous gene pair because both their N-terminal and C-terminal domains clustered together (Fig. 3), whereas SsWRKY80 and PtWRKY30 were excluded because their N-terminal and C-terminal domains clustered differently (Fig. S1). In total, 75 orthologous gene pairs were found between willow and Arabidopsis, fewer than the 82 orthologous gene pairs between willow and poplar (Table 1), which is congruent with the evolutionary relationships among the three plant species.
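A minimal sketch of the bi-directional best hit check is shown below; it assumes tabular BLASTP output (-outfmt 6) for both search directions and uses the E-value cutoff quoted above:

def best_hits(blast_tsv, max_evalue=1e-20):
    # Keep, for each query, the subject with the highest bit score below the E-value cutoff.
    best = {}
    with open(blast_tsv) as handle:
        for line in handle:
            cols = line.rstrip("\n").split("\t")
            query, subject = cols[0], cols[1]
            evalue, bitscore = float(cols[10]), float(cols[11])
            if evalue > max_evalue:
                continue
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return {query: subject for query, (subject, _) in best.items()}

def bidirectional_best_hits(willow_vs_other_tsv, other_vs_willow_tsv):
    forward = best_hits(willow_vs_other_tsv)
    backward = best_hits(other_vs_willow_tsv)
    return {(a, b) for a, b in forward.items() if backward.get(b) == a}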
Evolutionary analysis of WRKY III genes in willow
The WRKY III genes are considered the evolutionarily youngest group and play crucial roles in plant growth and resistance. To further probe the duplication and diversification of WRKY III genes after the divergence of the monocots and dicots, a phylogenetic tree was constructed using 65 WRKY III genes from Arabidopsis (13), rice (29), poplar (10), willow (7) and grape (6). As shown in Fig. S2, the willow SsWRKY III genes were closer to the eurosids I group (poplar and grape) than to the eurosids II group (Arabidopsis) and the monocots (rice). Meanwhile, most Arabidopsis and rice WRKY III genes formed relatively independent clades, suggesting that the two types of gene duplication events, tandem and segmental duplication, were probably the main factors in the expansion of WRKY III genes in Arabidopsis and rice. The results also indicate that most WRKY III genes might have arisen after the divergence of eurosids I (poplar, willow and grape) and eurosids II (Arabidopsis). The study by Ling et al. (2011) in cucumber reported similar results, supporting this interpretation. Additionally, we found that seven rice WRKY III genes (OsWRKY55, 84, 18, 52, 46, 114 and 97) contained the variant heptapeptide WRKYGEK, which was not found in the four dicots examined (Arabidopsis, poplar, grape and willow), implying that this may be a feature of monocot WRKY III genes and that these OsWRKY genes may respond to different environmental signals.
A comparison of the number of WRKY III genes among the five plants examined shows that the number is smaller in eurosids I (poplar, grape and willow) than in Arabidopsis (eurosids II) and rice (monocots), which may be caused by different patterns of duplication events. Genes generated by duplication events are not stable and can be retained or lost under different selection pressures during evolution (Zhang, 2003). To determine which selection pressure played the prominent role in the expansion of willow WRKY III genes, we estimated the Ka/Ks ratios for all pairs (21 pairs) of willow WRKY III genes. As shown in Table S1, all the Ka/Ks ratios were less than 0.5, suggesting that the willow WRKY III genes have mainly been subjected to strong purifying selection and are evolving slowly at the protein level.
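Interpreting the codeml output then amounts to a simple comparison of Ka and Ks, as in the hypothetical sketch below (the example values are invented and do not come from Table S1):

def selection_class(ka, ks):
    if ks == 0:
        return "undefined (Ks = 0)"
    ratio = ka / ks
    if ratio < 1:
        return f"purifying selection (Ka/Ks = {ratio:.2f})"
    if ratio > 1:
        return f"positive selection (Ka/Ks = {ratio:.2f})"
    return "neutral evolution (Ka/Ks = 1.00)"

# Hypothetical pair of willow WRKY III genes with invented substitution rates.
print(selection_class(ka=0.12, ks=0.45))  # purifying selection (Ka/Ks = 0.27)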
Exon-intron structures of SsWRKY genes
The exon-intron structures of gene family members carry important information about plant evolution. The SsWRKY gene phylogenetic tree and the corresponding exon-intron structures are shown in Figs. 4A and 4B, respectively. As shown in Fig. 4B, most WRKY genes (94%, 80 of 85) had one to five introns: eight genes contained one intron, 39 contained two, 13 contained three, 15 contained four and five contained five. The intron numbers of the remaining WRKY genes were quite different: SsWRKY49, SsWRKY76 and SsWRKY78 had 6, 11 and 10 introns, respectively; SsWRKY17 had the largest number of introns (17), while no intron was found in SsWRKY12. Intron acquisition or loss occurred during the evolution of the WRKY gene family, whereas WRKY genes within the same group shared a similar number of introns (Guo et al., 2014). In our study, most WRKY genes in group I had three to six introns, except SsWRKY76 and SsWRKY78, which might have acquired additional introns during evolution. The intron number of group II WRKY genes varied widely, ranging from one to five, except for SsWRKY17 (17 introns) and SsWRKY12 (no intron), which might have gained or lost introns during evolution. Strikingly, the group III WRKY genes had the most stable intron number, with all seven WRKY III genes carrying two introns, suggesting that WRKY III genes may be the most structurally stable under environmental stress. The stable intron number of the SsWRKY III genes is consistent with the Ka/Ks analysis, which indicated that purifying selection has played a major role in the evolution of willow WRKY III genes.
Numerous studies of WRKY genes have shown that nearly all WRKY genes contain an intron within their WRKY core domains (Eulgem, 2000; Guo et al., 2014; He et al., 2012; Huang et al., 2012; Ling et al., 2011; Zou et al., 2004). Further analysis of the SsWRKY genes revealed two major types of spliceosomal introns, R-type and V-type, in numerous SsWRKY domains. The R-type intron is spliced exactly at the R residue, about five amino acids before the first Cys residue of the C2H2 zinc finger motif. The V-type intron is located before the V residue, six amino acids after the second Cys residue of the C2H2 zinc finger motif. As shown in Fig. 4B, R-type introns were observed in more groups, including group IC, subgroups IIc, IId and IIe, and group III, whereas V-type introns were observed only in subgroups IIa and IIb. No intron was found within the group IN domains. Similar results have also been observed in Arabidopsis, poplar and rice, suggesting that this characteristic distribution of introns within WRKY domains is a general feature of the WRKY family (Eulgem, 2000; He et al., 2012; Wu, 2005).
Identification of gene duplication events and conserved motifs in willow
Gene duplication events have long been considered vital sources of biological evolution (Chothia et al., 2003; Ohno, Wolf & Atkin, 1968). TDs were defined as two or more adjacent homologous genes located on a single chromosome, while homologous gene pairs on different chromosomes were defined as SDs (Liu & Ekramoddoullah, 2009). In our study, a total of 33 homologous gene pairs, involving 66 SsWRKY genes, were identified as products of gene duplication events (Table S2). The proportion of duplicated genes in each group, in ascending order, was: group I, 73.7% (14 of 19); group II, 78% (46 of 59); and group III, 85.7% (6 of 7). Among the 33 homologous gene pairs, none appeared to have undergone TDs; on the contrary, all 66 genes (77.6% of all SsWRKY genes) participated in SDs, implying that SDs played the major role in the expansion of willow WRKY genes.
WRKY genes share greater functional and sequence homology within their conserved WRKY core domains (about 60 residues), whereas the remaining sequences share little homology (Eulgem, 2000). To gain a more comprehensive understanding of the structural features of SsWRKY proteins, the conserved motifs of the SsWRKY genes were predicted using the online program MEME (Fig. S3; Table S3). Among the 20 putative motifs, motifs 1, 2, 3 and 5, which are broadly distributed across the SsWRKY genes, were characterized as the conserved WRKY domains. Motif 6 was characterized as a nuclear localization signal (NLS) and was mainly distributed in subgroups IId and IIe and group III. Several other motifs whose functions remain poorly defined were also predicted by MEME: motif 4 was found only in group IC and subgroup IIc; motifs 7 and 9 were limited to subgroups IIa and IIb; motif 8 was found in group I and a few genes of subgroup IIc; motifs 10, 13, 15 and 17 were unique to subgroup IId; motif 12 was observed only in subgroup IIb; motif 16 was mainly found in group II; motif 18 was found in subgroup IIc; and motifs 19 and 20 were observed only in group I. The distinct conserved motifs of the different groups could be an important foundation for future structural and functional studies of the WRKY gene family.
Some other important motifs, including the Leu zipper, HARF, LXXLL and LXLXLX motifs, could also be identified in WRKY genes. Using the online program 2ZIP, the conserved Leu zipper motif, described as a common structural feature of DNA-binding proteins (McInerney et al., 1998), was identified in only two SsWRKY genes (SsWRKY61 and SsWRKY39). By manual inspection, the conserved HARF (RTGHARFRR[A/G]P) motif, whose putative function has not been clearly defined, was observed only in seven WRKY genes of subgroup IId, namely SsWRKY82, 33, 45, 81, 9, 30 and 56. Meanwhile, the conserved LXXLL and LXLXLX (L: leucine; X: any amino acid) motifs, defined respectively as co-activator and active repressor motifs, were also found in SsWRKY genes. A total of seven SsWRKY genes (SsWRKY19, 45, 72, 61, 76, 30 and 59) contained the helical motif LXXLL, whereas eight genes (SsWRKY66, 26, 35, 81, 83, 75, 73 and 3) shared the LXLXLX motif. The abundance of conserved motifs of different lengths and varied functions suggests that WRKY genes may play even broader roles in gene regulatory networks.
Figure 5: Expression profiles of the 85 SsWRKY genes in root, stem, bark, bud and leaf. The color scale represents RPKM-normalized, log2-transformed counts; red indicates high expression, blue indicates low expression and white indicates that the gene is not expressed in that tissue.
Distinct expression profiles of SsWRKY genes in various tissues
To gain more information about the roles of WRKY genes in willow, RNA-seq data from the sequenced genotype were used to quantify the expression levels of WRKY genes in five tissues of Salix suchowensis. As illustrated in Fig. 5, all 85 SsWRKY genes were expressed in at least one of the five examined tissues: 84 genes were detected in roots, 80 in stems, 84 in barks, all 85 in buds and 73 in leaves. Cluster analysis of the expression patterns across the five tissues showed that the SsWRKY expression profiles of stem and leaf were most similar to each other, as were those of bark and bud, and the root profile was closer to the clade formed by bark and bud. These results are consistent with the biological characteristics of the tissues. SsWRKY38, which was not detected in roots or leaves, was also expressed at low levels in the other tissues. Similarly, SsWRKY74, which was not detected in stems, barks or leaves, was expressed only in roots and buds and at extremely low levels. Among the five genes not expressed in stems, SsWRKY66, 74 and 79 were also undetected in leaves. All 85 genes were expressed in buds, whereas leaves had the largest number of unexpressed genes (12), suggesting that WRKY genes might play more roles in buds than in leaves.
According to the RPKM-based expression values of the 85 SsWRKY genes shown in Fig. 5 and Table S4, the total transcript abundance of SsWRKY genes in tender root (RPKM = 1,181.21), bark (RPKM = 1,363.01) and vegetative bud (RPKM = 928.58) was considerably higher than that in the other two tissues, non-lignified stem (RPKM = 537.88) and young leaf (RPKM = 349.84). As shown in Table S4, SsWRKY81 (RPKM = 97.75), the most highly expressed SsWRKY gene in roots, was also expressed in the other four tissues, although at relatively low levels; SsWRKY56 (RPKM = 32.54), the most highly expressed SsWRKY gene in stems, was also highly expressed in the other examined tissues. Similarly, SsWRKY67, the most highly expressed SsWRKY gene in barks (RPKM = 188.16), was also detected at high levels in vegetative buds (RPKM = 82.07) and young leaves (RPKM = 26.11). Likewise, SsWRKY6 (RPKM = 26.31), the most highly expressed gene in leaves, was readily detected in the other tissues. A few genes, i.e., SsWRKY52, SsWRKY2 and SsWRKY35, were highly expressed in barks but expressed at low levels in the other four tissues. These results may provide an important foundation for expression analyses of individual WRKY genes in willow.
DISCUSSION
The WRKY transcription factor gene family can specifically interact with the W-box to regulate the expressions of downstream target genes. They also play prominent roles in diverse physiological and growing processes, especially in various abiotic and biotic stress responses in plants. Previous studies about the features and functions of WRKY family have been conducted in many model plants, including Arabidopsis for annual herbaceous dicots (Eulgem, 2000), grape for perennial dicots (Guo et al., 2014), poplar for woody plants and rice for monocots (He et al., 2012;Wu, 2005), but there is no large-scale study of WRKY genes in willow. Here, the comprehensive analysis of WRKY family in willow (Salix suchowensis) would facilitate a better understanding of WRKY gene superfamily and provide interesting gene pools to be investigated for breeding and genetic engineering purposes in woody plants.
As described in many previous studies, the presence of highly conserved WRKY domains in WRKY proteins is the most prominent characteristic of the WRKY gene family (Ding et al., 2015; Eulgem, 2000; He et al., 2012; Huang et al., 2012; Wu, 2005). In our study, by comparing the two phylogenetic trees based on the conserved WRKY domains (Fig. 3) and the full-length proteins (Fig. 4A), we obtained nearly the same classification of all SsWRKY genes, suggesting that the conserved WRKY domain is an indispensable unit of WRKY genes. Variation of the WRKYGQK heptapeptide may influence the proper DNA-binding ability of WRKY genes (Duan et al., 2007; Maeo et al., 2001). A recent binding study by Brand et al. (2013) disclosed that a reciprocal Q/K change in the WRKYGQK heptapeptide might result in different DNA-binding specificities of the respective WRKY genes. For instance, the soybean WRKY genes GmWRKY6 and GmWRKY21, which contain the WRKYGKK variant, cannot bind normally to the W-box (Zhou et al., 2008). The NtWRKY12 gene in tobacco, with the WRKYGKK variant, recognizes the alternative binding sequence 'TTTTCCAC' instead of the normal W-box (van Verk et al., 2008). In our study, four WRKY genes (SsWRKY14, SsWRKY23, SsWRKY38 and SsWRKY78) had a single mismatched amino acid in their conserved WRKYGQK heptapeptide (Fig. 1). The variants detected in willow closely match those of another salicaceous plant, poplar, in which the same three variants occur in seven PtWRKY genes (He et al., 2012). Previous studies have shown that the binding specificities associated with variant WRKYGQK heptapeptides vary considerably (Brand et al., 2013); however, few studies have addressed the effect of a variable zinc finger motif. In this study, four WRKY domains (SsWRKY76C, SsWRKY64, SsWRKY12 and SsWRKY28) without a complete zinc finger motif may lack the ability to interact with the W-box, as may PtWRKY83, 40, 95 and 10 in poplar (He et al., 2012). Therefore, it remains necessary to further investigate the function, or the expression patterns of the regulated target genes, for WRKY genes with variant domain sequences (both the WRKYGQK heptapeptide and the zinc finger motif).
Different classification methods may lead to different numbers of WRKY genes in each group. The classification used in our study followed the scheme described for Arabidopsis, grape, cucumber, castor bean and many other plant species (Eulgem, 2000; Guo et al., 2014; Ling et al., 2011; Zou et al., 2016). According to this method, the willow WRKY genes were classified into three main groups (I, II and III), with five subgroups in group II (IIa, IIb, IIc, IId and IIe). However, the strategy described for rice and poplar is slightly different (He et al., 2012; Wu, 2005): subgroup IIc as categorized above was reclassified as a new subgroup Ib, on the grounds that the C-termini of group I and the domains of subgroup IIc share a more similar consensus structure. At the same time, the subgroups IId and IIe defined above were reclassified as subgroups IIc and IId, respectively. Using the same classification method as in Arabidopsis and many other plants, the group sizes in poplar and rice are listed in Table S5. Subgroup IIa, which has the smallest number of members, appears to play crucial roles in regulating biotic and abiotic stress responses (Rushton et al., 2010). As shown in Table S5, the numbers of willow WRKY genes in subgroups IIa and IIb are very similar to those of other plant species, suggesting that all SsWRKY genes of these subgroups have been identified. In addition, the numbers of WRKY III genes in the eurosids I group, such as cucumber (6), poplar (10), grape (6) and willow (7), are lower than those of eurosids II (Arabidopsis: 14) and monocots (rice: 36), suggesting that different duplication events or selection pressures acted on WRKY III genes after the divergence of the eurosids I and eurosids II groups. A previous study in Arabidopsis showed that nearly all WRKY III members respond to diverse biotic stresses, indicating that this group probably evolved with increasing biological requirements (Wang et al., 2015). The different numbers of WRKY III genes in willow, poplar, cucumber, Arabidopsis and rice are probably due to the different biotic stresses they experienced during evolution, and seven SsWRKY III genes may be sufficient for the biological requirements of willow.
WRKY transcription factors play important roles in the regulation of developmental processes and in responses to biotic and abiotic stress (Brand et al., 2013). The evolutionary history of the WRKY gene family promises significant insights into how biotic and abiotic stress responses evolved from single-celled aquatic algae to multicellular flowering plants (Rinerson et al., 2015). Previous studies hypothesized that group I WRKY genes were generated by domain duplication of a proto-WRKY gene with a single WRKY domain, that group II WRKY genes evolved through subsequent loss of the N-terminal WRKY domain, and that group III genes evolved through replacement of a conserved His residue with a Cys residue in the zinc finger motif (Wu, 2005). However, a recent study proposed two alternative hypotheses of WRKY gene evolution (Rinerson et al., 2015): the "Group I Hypothesis" and the "IIa + b Separate Hypothesis." Additionally, another recent study by Brand et al. (2013) concluded that subgroup IIc WRKY genes evolved directly from IIc-like ancestral WRKY domains, and that group I genes evolved independently through a duplication of the IIc-like ancestral WRKY domains. The phylogenetic analysis in our study shows that subgroup IIc and group IC are evolutionarily close, as are subgroups IIa and IIb and subgroups IId and IIe, and this result is consistent with the conclusion drawn by Brand et al. (2013). Additionally, the V-type introns of SsWRKY genes are found only in subgroups IIa and IIb, while R-type introns are found in the other groups except group IN. These observations are congruent with the "IIa + b Separate Hypothesis." The results presented here provide an important reference for further analyses of the evolutionary relationships within the WRKY gene family.
Gene duplication events have played prominent roles in successive genomic rearrangements and expansions and are a major driving force of plant evolution (Vision, Brown & Tanksley, 2000). Gene family expansion occurs via three mechanisms, namely TDs, SDs and transposition events (Maher, Stein & Ware, 2006); we focused only on TDs and SDs in this study. In willow, a total of 66 SsWRKY genes were identified as participating in gene duplication events, and all of these genes appeared to have undergone SDs. Similarly, in poplar, only one homologous gene pair arose by TD, while 29 of 42 (69%) homologous gene pairs were determined to have arisen by SD. The similar WRKY gene expansion patterns in willow and poplar indicate that SDs have been the main factor in the expansion of WRKY genes in these woody plants. In cucumber, however, no gene duplication events have occurred during CsWRKY gene evolution, probably because there has been no recent whole-genome duplication or tandem duplication in the cucumber genome (Huang et al., 2009). In rice and Arabidopsis, many WRKY genes were generated by TDs, which contrasts with the duplication patterns in willow, poplar and cucumber. The different WRKY gene expansion patterns of these plant species could be due to their different life histories and selection pressures, and this question warrants further investigation.
The WRKY gene family plays crucial roles in responses to biotic and abiotic stresses, as well as in diverse physiological and developmental processes in plant species. Because little research has addressed the function of willow WRKY genes, our study inferred putative functions of SsWRKY genes by comparison with their Arabidopsis orthologs. Details of the functions and regulation of AtWRKY genes can be obtained from TAIR (http://www.arabidopsis.org/). For example, AtWRKY2, the ortholog of SsWRKY6, which is highly expressed in all five examined tissues, plays important roles in seed germination and post-germination growth (Jiang & Yu, 2009). AtWRKY33, the ortholog of SsWRKY1, 35, 55 and 84, influences tolerance to NaCl and increases sensitivity to oxidative stress and abscisic acid (Jiang & Deyholos, 2009). A number of AtWRKY genes, e.g., AtWRKY3, 4, 18, 53 and 41, function in resistance to Pseudomonas syringae (Chen & Chen, 2002; Higashi et al., 2008; Lai et al., 2008; Murray et al., 2007); their willow orthologs (SsWRKY42, 47, 39, 79, 20 and 70) may therefore confer similar resistance. Based on the comparison of willow WRKY genes with their Arabidopsis orthologs, we speculate that the functional divergence of SsWRKY genes has played prominent roles in the responses to various stresses.
CONCLUSIONS
Based on the recently released willow genome sequence and RNA-seq data, we identified 85 SsWRKY proteins in this study using a bioinformatics approach. According to their phylogenetic relationships and the structural features of their WRKY domains, all 85 SsWRKY genes were assigned to group I, group II (subgroups a-e) and group III. The three variants of the WRKYGQK heptapeptide and of the canonical zinc finger motif found in willow WRKY genes might confer new biological functions. The evolutionary analysis of SsWRKY III genes will be helpful for understanding the evolution of WRKY III genes in plants. By comparing willow WRKY genes with their Arabidopsis orthologs, willow varieties with increased tolerance to adverse environments could be bred using transgenic technology. Our results will not only help complete the functional and annotation information of the WRKY gene family in woody plants, but also provide interesting gene pools to be investigated for breeding and genetic engineering purposes in woody plants.
|
v3-fos-license
|
2020-05-21T00:09:12.941Z
|
2020-02-24T00:00:00.000
|
219678228
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://innovareacademics.in/journals/index.php/ajpcr/article/download/36862/22202",
"pdf_hash": "3d7bb4d18c442ef832326aae64147976d14607fe",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43056",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6275675b6bd58ae1d36bcea4ea44623bb2462448",
"year": 2020
}
|
pes2o/s2orc
|
AN OBSERVATIONAL STUDY OF CLINICAL AND HEMATOLOGICAL PROFILE OF CIRRHOSIS OF LIVER
Objective: Efforts can be made to normalize the hematological parameters so that the morbidity and mortality in these patients can be effectively reduced. Methods: This observational study was carried out among 69 cirrhosis patients who fulfilled the inclusion and exclusion criteria, attended the medicine outpatient department, and were admitted to the medicine ward of PRM Medical College and Hospital, Baripada, Dist. Mayurbhanj, Odisha, India, from June 2018 to January 2019. Results: In our study, we had 59 male and 10 female patients with an average age of 49.8±13.19 years. About 92.75% of the patients were alcoholic. Abdominal distension (92.75%) and ascites (84.06%) were the most common presenting complaints. Pallor was present in 42 (60.87%) cases. Splenomegaly was present in 35 (50.72%) cirrhotic patients. Renal dysfunction was present in 23 (33.33%) cases. Sixty-six (95.65%) patients had anemia and 47 (68.12%) patients had thrombocytopenia. Conclusions: From this study, we can conclude that, in patients with cirrhosis of liver, various hematological changes are very common and need to be identified and corrected early to reduce morbidity and mortality.
INTRODUCTION
The liver is one of the most complex functioning organs with a wide array of functions in human body. It plays a major role in carbohydrate, protein, lipid metabolism, synthesis of plasma proteins and maintenance of immunity (Kupffer cells), inactivation of various toxins, metabolism of drugs, and hormones. The liver has an extremely important role in maintenance of blood homeostasis as it functions as a storage depot for iron, folic acid, and Vitamin B12, secretes clotting factors and inhibitors. Hence, it is not surprising that, in liver diseases, a wide range of hematological abnormalities can be seen.
In the general population, the global prevalence of cirrhosis from autopsy studies ranges from 4.5% to 9.5% [1][2][3]. Hence, taking the adult population into account, we estimate that more than 50 million people in the world are affected by chronic liver disease (CLD). At present, the most common causative factors globally are alcohol, nonalcoholic steatohepatitis and viral hepatitis. The prevalence of cirrhosis is likely to be underestimated, as almost one-third of patients remain asymptomatic.
In 2001, the estimated worldwide mortality from cirrhosis was 771,000 people; it ranked as the 14th leading cause of death in the world and the 10th in developed countries [4]. Deaths from cirrhosis are estimated to increase, making it the 12th leading cause of death by 2020 [5].
CLD in the clinical context is a disease process of the liver that involves progressive destruction and regeneration of the liver parenchyma, leading to fibrosis and cirrhosis [6].
CLD is frequently associated with hematological abnormalities. The pathogenesis of these hematological changes is multifactorial and includes portal hypertension-induced sequestration, alterations in bone marrow stimulating factors, viral and toxin-induced bone marrow suppression, and consumption or loss of blood cells.
Anemia of diverse etiology occurs in about 75% of patients with CLD [7]. Causes of anemia in CLD include iron deficiency, hypersplenism, anemia of chronic disease, autoimmune hemolytic anemia, folic acid deficiency, aplastic anemia, and the effects of antiviral drugs. Alcohol is the most commonly used drug whose consequences include suppression of hematopoiesis. These patients may suffer from nutritional deficiencies of folic acid and other vitamins that play a role in hematopoiesis, owing to malabsorption, malnutrition, or direct toxic effects. As a result, alcoholics may suffer from moderate-to-severe anemia, characterized by enlarged, structurally abnormal red blood cells (RBCs), mildly reduced numbers of leukocytes and neutrophils, and moderately to severely reduced numbers of platelets [8].
Thrombocytopenia is common in CLD, mainly due to portal hypertension-associated splenic sequestration, altered thrombopoietin levels, bone marrow suppression, consumptive coagulopathy, and increased blood loss. In CLD and cirrhosis, alterations in primary platelet hemostasis (platelet adhesion, activation and aggregation) have received less attention than changes in secondary hemostasis (coagulation). Increased intrasplenic platelet breakdown, with variable contributions from decreased platelet production and splenic pooling, appears to be the most important determinant. Regarding the functional change, there is decreased aggregability attributable to defective (transmembrane and intracellular) signaling, a storage pool defect, and upregulation of the inhibitory pathways [9]. Thrombocytopenia is associated with an increased bleeding tendency in CLD patients, so early detection of thrombocytopenia is important and helps reduce mortality and morbidity.
Abnormalities in hematological indices are associated with an increased risk of complications, including bleeding and infection. Efforts can be made to normalize the hematological parameters so that the morbidity and mortality in these patients with cirrhosis of liver could be effectively reduced. This could also help increase longevity in patients awaiting transplantation. Through our study, we have made an attempt to group the patients with deranged hematological indices and to analyze the variation of these indices accordingly. This could have clear therapeutic implications for managing these patients and reducing adverse events.
METHODS
This observational study was carried out among 69 cirrhosis patients who fulfilled the inclusion and exclusion criteria, attended the medicine outpatient department, and were admitted to the medicine ward of Pandit Raghunath Murmu Medical College and Hospital (PRMMCH), Baripada, Dist. Mayurbhanj, Odisha, India, from June 2018 to January 2019.
Inclusion criteria
• All cirrhosis of liver patients above 15 years of age with stigmata of chronic liver cell failure on clinical examination substantiated by ultrasonography (USG) were included in the study.
Exclusion criteria
Patients meeting any of the following criteria were excluded from the study:
• Patients previously diagnosed to have one of the following causes of CLD: primary biliary cirrhosis, Wilson's disease, hemochromatosis, or primary sclerosing cholangitis
• Patients of CLD presenting with associated comorbid diseases such as chronic renal failure and congestive heart failure
• Malignancy
• Pregnancy
• Previous history of hematological or coagulation disorders other than CLD
• Anemic patients already taking medications before being diagnosed with CLD.
After due consideration of the inclusion and exclusion criteria, a detailed history and clinical examination were undertaken in all subjects. Each subject was instructed to undergo the following investigations: complete blood count (Sysmex XS-800i), USG abdomen, liver function tests, hepatitis B surface antigen, hepatitis C virus antibody, serum urea, and creatinine. Twenty healthy persons were taken as controls.
In the present study, anemia was defined using the World Health Organization definition: hemoglobin (Hb) concentration <12 g/dl in females and <13 g/dl in males. The severity of anemia was classified as mild (Hb concentration between 11 and 12.9 g/dl for males and 11 and 11.9 g/dl for females), moderate (Hb concentration between 8 and 10.9 g/dl), and severe (Hb concentration <8 g/dl) [10]. Thrombocytopenia was defined as a platelet count <150×10^3/µl.
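The grading rules above translate directly into a short helper, sketched here for illustration; the function names are our own and the thresholds are those quoted in this section:

def anemia_grade(hb_g_dl, sex):
    # WHO-based definition used in this study: anemia below 13 g/dl (males) or 12 g/dl (females).
    normal_cutoff = 13.0 if sex.lower() == "male" else 12.0
    if hb_g_dl >= normal_cutoff:
        return "no anemia"
    if hb_g_dl >= 11.0:
        return "mild"
    if hb_g_dl >= 8.0:
        return "moderate"
    return "severe"

def has_thrombocytopenia(platelet_count_per_ul):
    return platelet_count_per_ul < 150_000  # <150 x 10^3 per microlitre

print(anemia_grade(7.5, "male"), has_thrombocytopenia(120_000))  # severe True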
Statistical analysis
All the data were entered into an Excel spreadsheet, and statistical analyses were performed using SPSS version 21.0 software. Results were expressed as mean±standard deviation, frequencies, and percentages. Continuous data were compared using Student's t-test. p<0.05 was considered statistically significant for all tests conducted.
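As an equivalent to the SPSS comparison, a Student's t-test between patients and controls could be run as sketched below; SciPy is used here only for illustration and the value lists are hypothetical, not data from this study:

from scipy import stats

patient_hb = [7.2, 8.1, 6.9, 9.0, 7.6]      # hypothetical hemoglobin values (g/dl)
control_hb = [13.5, 14.2, 12.9, 13.8, 14.0]

t_statistic, p_value = stats.ttest_ind(patient_hb, control_hb)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")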
RESULTS
During the study period, 69 patients with cirrhosis of liver admitted to the medicine ward of PRMMCH, Baripada, fulfilled the inclusion and exclusion criteria. All the cases were studied for clinical presentation, risk factors, and laboratory parameters. Fig. 3 shows that microcytic hypochromic anemia was predominant in cirrhosis patients. Macrocytic anemias were more common in males. Table 6 shows the comparison of hematologic indices between cirrhosis patients and healthy controls, with a significant association of anemia, decreased RBC count, and thrombocytopenia in cirrhotic patients [12]. In our study, the M:F ratio was 5.9:1, which may be due to cultural and traditional influences in our country. About 55.07% of the patients were between 40 and 60 years of age, which shows a high prevalence of this disease in the productive age group. In our study, we found that 92.75% of the patients were alcoholic. Out of 10 females, nine gave a history of alcohol use, and out of 59 males, 55 were alcoholic.
In a study by Suthar et al. [11], splenomegaly was seen in 60% of cases. Only 1 (1.45%) case presented with hepatic encephalopathy.
In our study, the blood urea was raised (>40 mg/dl) in 36.23% (25 cases) of the patients, indirectly pointing toward acute renal injury (compared with 49.1% in a study by Pathak et al. [13] and 37% in a study by Hegde et al. [15]).
In our study, the creatinine was raised in 22 patients (i.e., 31.88% of the study group) which were comparable with 39.4% in a study by Pathak et al. and 20% in a study by Hegde et al. It was observed that 23 patients had glomerular filtration rate (GFR) <60 ml/min; thus, 33.33% of the patients had significantly reduced GFR. Hegde et al. studied that 30% of the patients had significantly reduced GFR.
The mean Hb level in our study was 7.99±2.18 g%, compared with 9.12 g% reported by Hegde et al. In our study, we found that 66 (95.65%) of the patients had anemia, of whom 37 (53.62%) had Hb ≤8 g/dl, i.e., severe anemia. A study by Gonzalez-Casas et al. [16] reported a 75% prevalence of anemia in CLD patients. The study by Hegde et al. also found severe anemia in 43% of cases. In our study, 36.23% had a normal mean corpuscular volume, 60.87% had a microcytic blood picture, and 2.9% had a macrocytic blood picture. Macrocytic anemia was more common in males than females. Microcytic hypochromic anemia was predominant in cirrhosis patients. This may be due to the low socioeconomic and poor nutritional status of most of the cases in this part of Odisha.
In our study, the differences in hematological parameters (Hb, RBC count, and platelet count) between cirrhosis of liver patients and controls were statistically significant (p<0.05) (Table 6).
CONCLUSIONS
Many conclusive results regarding the hematological abnormalities in cirrhosis of liver were obtained with this limited study involving 69 patients with decompensated cirrhosis. The results of this study established most of the known facts about chronic alcoholic liver disease in this part of the world. Numerous clinical observations support the notion that alcohol adversely affects the production and functioning of virtually all types of blood cells. Long-term excessive alcohol consumption leads to liver cirrhosis which interferes with various physiological, biochemical, and metabolic processes involving the blood cell production and maturation, leading to these adverse effects. Not only liver function tests, patients with alcoholic liver disease have abnormal hematological and renal function too. Renal dysfunction is common in alcoholic liver disease, especially in patients with ascites. From this study, we can conclude that, in cirrhosis of liver patients, various hematological changes are very common which need to be identified and corrected early to reduce morbidity and mortality.
|
v3-fos-license
|
2019-03-18T14:02:58.502Z
|
2017-11-25T00:00:00.000
|
55262897
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://aijournals.com/index.php/aanat/article/download/618/481",
"pdf_hash": "c8b5c9dac5c84fd190465c97a18d5e9b65373794",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43057",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "0c560c0de9cf2b0da4a4916dd95aeb2b3224cbe2",
"year": 2019
}
|
pes2o/s2orc
|
Determination of Sex from Epicondylar Breadth of Femur
Introduction: Sex determination from skeletal remains allows one to narrow the search in individual identification. As the femur shows 75% variation between individuals, it has been useful for sex determination. There are many possible femoral measurements, but the epicondylar breadth is routinely considered to be useful for sexing individuals. To date, nothing has been published on other measurements of the distal femur, which may be useful if the bone is fragmented and only the lower part of the bone is available. This study is an attempt to evaluate sex determination from distal femora of Indian origin. Methods: For the present study, a total of 208 normal dry human femur bones of known sex were studied, which were collected from various medical colleges of Gujarat. A digital sliding vernier caliper was used to measure the epicondylar breadth of the femur. Results: The discriminant function equation for sex determination from the epicondylar breadth (EB) of the femur is: Y = 0.143 * EB + (-8.334). If Y < 0, the sex of the bone is female, and if Y > 0, the sex of the bone is male. Conclusion: With the use of the discriminant function score, sex was correctly predicted for 87% of the femur samples. The results of this study are particularly useful in cases of sex determination in which the skeletal remains of an individual are incomplete or damaged and thus more accurate bones such as the pelvis or cranium are absent. [Rakesh V IJBAA 2018; 1(1):8-13]
The femur is used by anthropologists for many different applications. Paleoanthropologists use the femur to determine stature and locomotion patterns. More specifically, because the femur shows 75% variation between individuals, it has been useful for sex determination. 6,7,8,9,10,11,12,13 Sex determination has been accomplished using a variety of femoral dimensions. Many of the above methods for sex determination from the femur depend on the presence of well preserved (mostly complete) femora; unfortunately, this is often not the case in badly decomposed forensic remains. Femoral shafts are most commonly encountered in forensic situations. Data concerning the sexing potential of the femur are available in the literature, and it is well known that these data vary a great deal according to the population sample from which they were taken.
There are many possible femoral measurements, but the epicondylar breadth is routinely considered to be useful for sexing individuals. The study of an isolated measurement like the epicondylar breadth may be of interest in cases of partial, badly preserved, fragmentary remains or cremains, especially when the superior epiphysis, recognized as a good sex estimator, is absent. Furthermore, previous research has indicated that, for sex determination from long bones, longitudinal distances such as lengths are often less discriminating than breadth and circumference.
Citing the example of the femur, studies have been reported on various populations including the Chinese; 6,7 Spanish; 12 Nigerians; Croatians; 14 Thais; 11 South African whites and blacks; 15,16 and Germans. 8 Nakahashi and Nagai, 17 Tagaya, 18 Yoshino and Kato et al. 19 and Sakaue 20 studied sex determination from various bones in both recent and prehistoric Japanese populations.
Little work on the subject has been reported from India except for the study by Singh SP and Singh S 21,22 on the head of femur, the study by Purkait R on proximal femur, 23 the study by Shrivastava R 25 and Soni G 26 on various measurements of femur. To date, nothing has been published on other measurements of distal femur, which may be useful if the bone is fragmented, and only lower part of bone is available.
This study is an attempt to evaluate sex determination from distal femora of Indian origin.
This study on sex determination is based on the principle that the axial skeleton weight of the male is relatively and absolutely heavier than that of the female, 27 and the initial impact of this weight is borne by the femur in transmission of the body weight. Another factor that makes its indentation on the femur is the modification of the female pelvis with respect to its specialized function of reproduction. Therefore, the stress and strain experienced by the femur is different in a male than it is in a female.
The factors that make the femur different in males and females are as follows: the axial skeletal weight of the male is relatively and absolutely larger than that of the female, and most of this weight is borne directly by the femur in transmission of the body weight; and there is an obvious modification of the female pelvis with respect to its specialized function in reproduction. Some of the powerful methods of sex determination from skeletal elements are based upon the application of statistical analysis to osteological material. Discriminant function analysis is one such sophisticated mathematical approach. Long bones are especially favorable for metric analysis. In this regard, the femur has been studied most extensively. Methods of sex determination by discriminant analysis from the adult femur have been described in several populations by many authors.
Methods: For the present study, a total of 208 normal dry human femur bones of known sex were studied. These femur bones were collected from various medical colleges of Gujarat. Out of the 208 bones, 60 bones were obtained from Government Medical College, Bhavnagar; 08 bones were obtained from B.J. Medical College, Ahmedabad; and 08 bones were obtained from Smt. Any bone showing deterioration, extreme osteophytic activity, diffuse osteoarthritis or prosthesis was excluded from the study. Bones with any injury, deformity or artifact were also discarded. A thorough visual inspection was done keeping the bone on an anthropometry board with graph paper.
For measurement of the variables, a digital sliding vernier caliper (with an accuracy of ± 0.05 mm) was used. The following parameter of the femur was considered: Epicondylar Breadth: the distance between the two most laterally projecting points on the epicondyles parallel to the infracondylar plane. The measurement was taken with the bar of the caliper touching the points of both epicondyles in the infracondylar plane and the arms of the caliper touching the lateral condyle and the medial condyle of the femur, respectively.
The measurement was repeated three times at three different sessions by the same observer, using the resulting mean value to reduce intra-observer error. After data collection, in order to assess bilateral variation in the measurements, the measurements of the femora were subjected to a paired t-test. Statistical functions carried out included univariate analyses for sex and epicondylar breadth. For classification purposes, a discriminant function analysis was used to study sexual dimorphism in this population. The statistical data extracted from the calculations and analysis are tabulated in Table-1 to Table-4 to show the different parameters at a glance. To assess sexual dimorphism of the epicondylar breadth of the femur, an independent-sample t test was applied. The results of the independent-sample t test, shown in Table-3, indicate a highly significant (p<0.0001) t-value for epicondylar breadth, so sex determination from the epicondylar breadth measurement is strongly supported. The discriminant function coefficients are shown in Table-4, and the discriminant function equation is derived from them. The sex can be determined from this formula by multiplying the value of the epicondylar breadth measurement by its corresponding coefficient (β coefficient) and adding the constant shown in Table-4. 24,25,26 The sexing potential of the epicondylar breadth is well known. The present study has confirmed that, for sex determination, the measurement of the epicondylar breadth of the femur displays high classification accuracy. According to the results, with the use of the discriminant function score, a total of 181 out of 208 femur samples were correctly sexed (87% accuracy), among which 39 out of 58 female samples and 142 out of 150 male samples were correctly predicted (female: 67.2% accuracy; male: 94.7% accuracy).
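Applying the reported univariate function is straightforward, as in the sketch below; the coefficient and constant are taken from the results above, and it is assumed that the epicondylar breadth is entered in the same unit (millimetres) used when the function was derived:

def classify_sex_from_epicondylar_breadth(eb_mm):
    # Y = 0.143 * EB + (-8.334); Y > 0 -> male, Y < 0 -> female.
    y = 0.143 * eb_mm + (-8.334)
    if y > 0:
        return "male", y
    if y < 0:
        return "female", y
    return "indeterminate", y

print(classify_sex_from_epicondylar_breadth(75.0))  # hypothetical measurement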
Single postcranial osteometric measurements may be of interest when only fragmented skeletal remains are available, and several investigators assessed that the use of discriminant functions based on several variables does not always significantly improve the prediction obtained by using a single variable. A review of the literature shows that there is considerable intra-and inter-population variability in femoral dimensions and standard formulae cannot be transposed from one population to another. In this study, we first attempted to evaluate sexual dimorphism from the measurement of the epicondylar breadth in Indian population sample, and secondly, we analyzed the geographical accuracy of this specific variable.
In all groups, a substantial part of the over-50 population suffers from osteoarthritis of the knees. However, the literature indicates that osteoarthritis occurs on the tibio-femoral and patello-femoral surfaces. Our work consisted of analyzing the anthropometric characteristics of the medial and lateral condyles of the femur. There is no articular cartilage on the medial and lateral epicondyles, so these are not affected by osteoarthritis. 28 Another study determined that knee height (the distance from the sole of the foot to the anterior surface of the thigh while the ankle and the knee are both flexed at a right angle) does not vary significantly with age.
Measurements of the epicondylar breadth of the femur have routinely been used for sexing individuals, and have often provided the best measure for sex determination. A comparison of the epicondylar breadth of the femur between the present study and other studies is shown in Table-5. The highest rate of accuracy for sexual determination using the distal epicondylar breadth of the femur is obtained in the Spanish population (97.5%). 12 In the French population, 13 the South African white population, 15 some Chinese populations, 7 Japanese populations 9 and Thai populations, 11 the epicondylar breadth is the most sexually dimorphic element of the femur, yielding better results even than the head diameter.
On the other hand, other populations, such as the Germans, 8 the Croatians 14 and Black South Africans, 10 showed a poorer accuracy rate in sexing individuals using epicondylar breadth. In the German sample, 8 only 81.4% of the cases were correctly classified using epicondylar breadth, and the authors did not recommend using this variable alone for sex determination. Furthermore, the overlap between the sexes was greater in that sample. In Black South Africans, 10 the vertical head diameter and the medial condylar length were most successful in sex identification from the upper and lower ends of the femur, respectively. In the Croatian sample, the maximum diameter of the femoral head was more successful in sexing individuals than the epicondylar breadth. 14 In a Chinese sample, 6 the maximum head diameter was the best discriminating factor, with an accuracy rate of 85.1%. This success rate is quite low, but some studies have shown that Chinese femora display little sexual dimorphism, leading to lower accuracy rates of sex determination when only a single variable is used. 17 There is a correlation between epicondylar breadth and body size. A higher accuracy rate has been achieved in other Chinese population samples according to Iscan and Ding. 7 This could be understood as intra-population variation from one area of the country to another, 6 especially if there are significant differences in height or robustness between the people. This demonstrates that sexual dimorphism is expressed differently within contemporary populations, and geographical variations are apparent in metric values.
An Indian population study by Purkait R 23 confirmed that, for sex determination, measurements of the femoral extremities display higher classification accuracy (91.9-93.5% for head diameters and 90.3% for epicondylar width) than shaft dimensions. Thus, the study of Purkait R in Indians showed a poorer accuracy rate in sexing individuals using epicondylar breadth compared with head diameters. 24 In another north Indian population study, by Shrivastava R, 25 multivariate discriminant function analysis using epicondylar breadth, the antero-posterior diameter of the lateral condyle, and proximal breadth produced a higher accuracy of 90.2% (male: 91.5%, female: 85.7%), whereas univariate discriminant function analysis using epicondylar breadth alone produced a higher accuracy (83.7%) than any of the other single measurements used in that study.
Conclusion:
In the present study, a total of 208 femur bones of known sex (150 male and 58 female) were obtained from various medical colleges of Gujarat. Of these bones, 104 were from the right side and 104 from the left side. The epicondylar breadth of the femur bones was recorded with a digital sliding vernier caliper.
To assess sexual variation of the epicondylar breadth of the femur, an independent-sample t-test and discriminant function analysis by the stepwise method were performed. The t-value for epicondylar breadth is 10.901, which is highly significant (p<0.0001). The discriminant function equation for sex determination from the epicondylar breadth of the femur is: Y = 0.143 * EB + (-8.334), where Y = discriminant function score and EB = epicondylar breadth.
With the use of the discriminant function score, 87% of the femur samples (181 out of 208 bones) were correctly sexed. The results of this study are particularly useful in forensic anthropological cases of sex determination in which the skeletal remains of an individual are incomplete or damaged and more accurate bones such as the pelvis or cranium are therefore absent.
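A minimal sketch of how the derived equation could be applied to a new specimen is given below; the sectioning point used for classification is an assumption introduced here for illustration, since the cut-off value is not restated in this conclusion.

```python
# Sketch of applying the reported discriminant function; the sectioning
# point below is an assumption, not a value taken from the study.
def discriminant_score(eb_mm: float) -> float:
    """Y = 0.143 * EB + (-8.334), the equation given in the conclusion."""
    return 0.143 * eb_mm - 8.334

MALE_CENTROID = 1.0       # hypothetical group centroid scores (not from the source)
FEMALE_CENTROID = -1.0
SECTIONING_POINT = (MALE_CENTROID + FEMALE_CENTROID) / 2.0

def predict_sex(eb_mm: float) -> str:
    """Classify a specimen as male if its score exceeds the sectioning point."""
    return "male" if discriminant_score(eb_mm) > SECTIONING_POINT else "female"
```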
Acknowledgement:
I acknowledge Dr. S P Rathod (MS Anatomy), Professor & Head, Dept. of Anatomy, PDU Medical College, Rajkot whose constant inspiration and involvement in my study helped me a lot to make this task possible.
|
v3-fos-license
|
2017-08-17T08:41:43.648Z
|
2008-05-08T00:00:00.000
|
39647230
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2008/251978.pdf",
"pdf_hash": "7a6abceec4258133a59d17f96db3c5214849dd30",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43058",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "a1ffed190d31666f456e85bba34ae4b71076a021",
"year": 2008
}
|
pes2o/s2orc
|
On the Evection Resonance and Its Connection to the Stability of Outer Satellites
1Departamento de Estatística, Matemática Aplicada e Computação (DEMAC), Instituto de Geociências e Ciências Exatas (IGCE), Universidade Estadual Paulista (UNESP), Caixa Postal 178, 13500-970 Rio Claro, SP, Brazil 2 Grupo de Dinâmica Orbital e Planetologia, Faculdade de Engenharia de Guaratinguetá (FEG), Universidade Estadual Paulista (UNESP), Campus de Guaratinguetá, Caixa Postal 205, CEP 12516-410 Guaratinguetá, SP, Brazil
Introduction
Among the puzzling questions in the solar system inventory, the problem of the irregular moons of the Jovian planets is a crucial challenge and a controversial topic. Their orbits are highly tilted and very eccentric and, in contrast to the inner satellites, these moons orbit their mother planets at very large distances, being strongly disturbed by the Sun. Recently, the number of known distant moons has increased by at least one order of magnitude with respect to the pre-CCD era (Sheppard et al.). A large number (about 60000) of irregular satellites were numerically integrated by Nesvorný et al. 9. Among other interesting new results, they also confirmed the role played by the Kozai-Lidov and evection resonances, which in general provoke the escape of these objects. The evection resonance is caused by the 1:1 commensurability between the longitude of the satellite pericenter, ϖ, and the Sun's mean longitude, λ_⊙, where the index ⊙ refers to the Sun's elements. Nesvorný et al. 9 show that once a prograde satellite is in an evection resonance, the critical angle ϖ − λ_⊙ can librate around 180°, resulting in a cumulative perturbation which can cause the escape of the satellite. For the retrograde satellites the phenomenon is similar, but the definition of the critical angle must be changed and the libration center is 90° or 270°. Nesvorný et al. 9 show that both resonances occur in the vicinity of some fixed values of the semimajor axis of the satellite. Some investigations of these values were done by Alvarellos and Dones 3 and by Hamilton and Krivov 4, using the Jacobi constant and a generalization of the Tisserand constant for the restricted three-body problem. In this work we derive an alternative and simple way to obtain these semimajor axis values theoretically. Through the steps outlined here, the main idea, which associates the libration of the critical angle with the onset of the instability, becomes very clear.
Disturbing function: second-order expansion
Let us assume a Cartesian system fixed on Jupiter. Initially, the reference plane is the equator of the planet. Figure 1 shows the geometry of the problem. The disturbing function for the motion of a satellite perturbed by the Sun is

R = (k² M_⊙ / r_⊙) (r/r_⊙)² (3 cos²S − 1)/2,

where third-order terms in the ratio of the distances r/r_⊙ are neglected; k² is the gravitational constant, M_⊙ is the mass of the Sun, and r and r_⊙ are the position vectors of the satellite and of the Sun, respectively. S is the angular distance between the Sun and the satellite. From the geometry we have

cos S = (x x_⊙ + y y_⊙ + z z_⊙) / (r r_⊙).
We adopt the classical notation: a, e, I, l, ω, Ω, f stand for the semimajor axis, eccentricity, inclination, mean anomaly, argument of pericenter, longitude of the node, and true anomaly, respectively, of the satellite. The same variables with the index ⊙ are used for the Sun. Now let us choose the Sun-Jupiter orbital plane as the reference plane, so that z_⊙ = I_⊙ = 0.
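As a concrete illustration of this geometry, the short sketch below evaluates cos S from the Cartesian coordinates of the satellite and the Sun, together with the corresponding second-order (quadrupole) term of the disturbing function; the truncation shown is the standard one to which the text refers, and the numerical values are purely illustrative.

```python
# Minimal sketch of the second-order (P2) solar disturbing function; the
# numerical values below are illustrative, not taken from the paper.
import numpy as np

K2 = 2.959122082855911e-4   # Gaussian gravitational constant squared (AU^3 / Msun / day^2)
M_SUN = 1.0                 # solar mass in solar masses

def cos_S(r_sat, r_sun):
    """cos S = (x*x_sun + y*y_sun + z*z_sun) / (r * r_sun)."""
    return np.dot(r_sat, r_sun) / (np.linalg.norm(r_sat) * np.linalg.norm(r_sun))

def disturbing_function_P2(r_sat, r_sun):
    """Quadrupole term R = k^2 M_sun * r^2 / r_sun^3 * (3 cos^2 S - 1) / 2."""
    r = np.linalg.norm(r_sat)
    rs = np.linalg.norm(r_sun)
    c = cos_S(r_sat, r_sun)
    return K2 * M_SUN * r**2 / rs**3 * (3.0 * c**2 - 1.0) / 2.0

# Example: satellite 0.05 AU from Jupiter, Sun at 5.2 AU along the x-axis.
r_sat = np.array([0.05, 0.0, 0.0])
r_sun = np.array([5.2, 0.0, 0.0])
print(cos_S(r_sat, r_sun), disturbing_function_P2(r_sat, r_sun))
```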
Prograde satellites
Considering this disturbing function, we integrate the Lagrange variational equations (Danby 11). Figure 2(a) shows the behavior of the critical angle ϖ − λ_⊙ and of the eccentricity. Note that if the libration of this angle is centered at 180°, the apocenter of the satellite will always be close to the Sun. In Figure 3 we show this situation, which is a critical case when Jupiter, the Sun, and the apocenter are aligned, that is, when the amplitude of the libration is zero.
In this case, no matter what the period of the satellite is, each time it passes through the apocenter S the Sun, the satellite, and Jupiter are aligned, with the first two at their closest approach.
Therefore, a cumulative perturbation will occur. In particular, note that this is the worst situation for the orbital stability of a massless object. Now suppose that this configuration occurs repeatedly. After some passages the eccentricity and the semimajor axis of the satellite will certainly be strongly disturbed; they should increase, approaching some dangerous limit. This is clearly shown in Figure 2(b). Note, however, that while ϖ − λ_⊙ remains in libration, the eccentricity is not so high (∼0.3). The significant increase of the eccentricity appears only when ϖ − λ_⊙ enters a circulation regime. This dynamics will be discussed in Section 5. Of course, since we are using averaged equations in this model, the semimajor axis is by definition kept constant. Therefore, within the limits of our model the escape does not necessarily occur during this short integration time. The initial conditions we used for the Sun's apparent orbit were a_⊙ = 5.202603 AU, e_⊙ = 0.048497, ω_⊙ = 0°, and Ω_⊙ = 0° (R_J denotes Jupiter's equatorial radius and R_H is Hill's radius).
Once we have this basic information we can go further and confirm very easily some results shown in Hamilton and Krivov 4. Since we have shown that the appearance of the evection resonance causes a large variation of the eccentricity, we assume that the corresponding semimajor axis of the evection resonance is related to the limit of stability of the satellite around the planet. Therefore, we take this statement for granted and search for the value of this semimajor axis. To this end, in the averaged disturbing function (2.7), let us consider only the secular and resonant terms due to the evection; all the remaining terms can be neglected. In particular, since we showed a libration of ϖ − λ_⊙ around 180°, we fix this angle at 180°. Again, from Lagrange's equations we easily obtain the precession rate of the pericenter. Since the apparent motion of the Sun is Keplerian, we take n_⊙² = k²(M_⊙ + m_J)/a_⊙³.
Hill's radius is defined as R_H = a_⊙ (m_J/3M_⊙)^(1/3). The level curves of the above Hamiltonian confirm that both ϖ − λ_⊙ = 0° and ϖ − λ_⊙ = 180° are stable equilibrium points of the system. In Figure 4 we show the level curves in the plane (e cos(ϖ − λ_⊙), e sin(ϖ − λ_⊙)). This clearly shows that the longitude of the pericenter of the satellite can remain stably pointing toward the Sun or in the opposite direction. Of course, the net effect of this dynamics is to stretch the satellite orbit toward and away from the Sun (Figure 4). We can also say that ϖ − λ_⊙ = 180° is a critical configuration, so that we expect the escape of the satellite to occur mostly in this situation, not when ϖ − λ_⊙ = 0°. In Sections 5 and 6 we confirm numerically that escapes follow this kind of behavior.
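The resonant locations quoted in this work can be checked with a few lines of arithmetic, converting the fractions of the Hill radius into Jupiter radii; the sketch below does so using commonly quoted values for Jupiter's semimajor axis, mass ratio and equatorial radius, which are assumptions of the example rather than values taken from the paper.

```python
# Back-of-the-envelope check of the resonant semimajor axes quoted in the text.
# Physical constants are commonly quoted values, assumed here for illustration.
AU_KM   = 1.495978707e8      # km per AU
A_JUP   = 5.2026             # Jupiter's heliocentric semimajor axis (AU)
M_RATIO = 1.0 / 1047.35      # Jupiter/Sun mass ratio
R_JUP   = 71492.0            # Jupiter's equatorial radius (km)

# Hill radius: R_H = a_sun * (m_J / (3 M_sun))**(1/3)
r_hill_au = A_JUP * (M_RATIO / 3.0) ** (1.0 / 3.0)
r_hill_rj = r_hill_au * AU_KM / R_JUP

for label, fraction in [("prograde", 0.531), ("retrograde", 0.6933)]:
    print(f"{label}: a* = {fraction:.4f} R_H = {fraction * r_hill_rj:.1f} R_J")
# The results should be close to the ~395 R_J and ~515 R_J quoted in the text.
```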
These two libration centers play an important role, as we will see in Section 6.
Retrograde satellites
For the retrograde satellites, the definition of the longitude of the pericenter must be changed (Saha and Tremaine 13); let ϖ_I denote the longitude of the pericenter in this case.
Let ϖ = Ω + ω be the usual longitude of the pericenter of the prograde case. Then, with simple algebra, we can relate the two. As in the previous case, we integrate Lagrange's equations again, taking the disturbing function given by (2.7).
In the preceding figures we considered the following initial conditions: … This time we see that the critical angle ϖ_I − λ_⊙ librates around 90° or 270°. The situation is not as drastic as in the direct case. The schematic geometry given in Figure 6 repeats each time the satellite passes through S.
From Lagrange's variational equations we obtain the precession rate of ϖ_I, where G_C = k²M_⊙a²/(2a_⊙³). As before, we fixed some values corresponding to the current resonance: Ω − ω − λ_⊙ = 90°, I = 180°, and e = 0. Again equating ϖ̇_I = n_⊙, we get a* = 0.6933 R_H ≈ 515.3 R_J, which again coincides with the results given in Hamilton and Krivov 4.
Recall that the semimajor axis we used in Figures 5(a) and 5(b) was a = 0.7 R_H ≈ 520 R_J. It is worth noting that, compared with the previous direct case, the present resonance is not very strong, since the closest approach to the Sun is not like that of the direct case. Even so, the cumulative effect works quite efficiently in driving the eccentricity and semimajor axis to critical values, sometimes causing the ejection of the satellite. This is clear in Figure 5(b).
Again following the same steps outlined for the prograde case, the conservative Hamiltonian can be written in terms of the canonical conjugate variables P₁ = G = L(1 − e²)^(1/2) and σ₁ = ϖ_I − λ_⊙. Drawing level curves of H(P₁, σ₁), we can easily check that σ₁ = 90° and 270° are stable equilibrium solutions (Figure 7). This time, in this approximation, the tendency of the orbits is to elongate perpendicularly to the Sun-satellite direction, as suggested in Figure 6.
Numerical tests with exact equations
In this section we show some simulations considering the exact differential equations of a satellite of Jupiter disturbed by the Sun. In terms of the radius of the planet, the resonant semimajor axis for the prograde case is a* = 393 R_J. Time variations of the eccentricity and of ϖ − λ_⊙ are shown in Figure 8. The initial conditions were a = 355 R_J, e = 0.001, I = 1°, ω = 180°, and Ω = 0°. Initially, the critical angle remains librating around zero while the eccentricity remains almost bounded and less than 0.4.
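A stripped-down version of this type of integration is sketched below: the satellite is propagated in a Jupiter-centered frame, with the Sun on a fixed circular orbit supplying the direct and indirect perturbations. This is only an illustrative simplification (planar, circular solar orbit, massless satellite, commonly quoted constants); the integrations reported in this section use the full equations.

```python
# Minimal sketch: a massless satellite of Jupiter perturbed by the Sun on a
# fixed circular orbit (planar case). A simplification of the exact model.
import numpy as np
from scipy.integrate import solve_ivp

K2    = 2.959122082855911e-4          # k^2 in AU^3 Msun^-1 day^-2
M_SUN = 1.0
M_JUP = 1.0 / 1047.35
A_SUN = 5.2026                        # radius of the Sun's apparent orbit (AU)
N_SUN = np.sqrt(K2 * (M_SUN + M_JUP) / A_SUN**3)   # solar mean motion (rad/day)
R_JUP_AU = 71492.0 / 1.495978707e8    # Jupiter's equatorial radius in AU

def rhs(t, state):
    """Jupiter-centred equations of motion with direct and indirect solar terms."""
    x, y, vx, vy = state
    r_sat = np.array([x, y])
    r_sun = A_SUN * np.array([np.cos(N_SUN * t), np.sin(N_SUN * t)])
    d = r_sun - r_sat
    acc = (-K2 * M_JUP * r_sat / np.linalg.norm(r_sat)**3     # Jupiter's attraction
           + K2 * M_SUN * d / np.linalg.norm(d)**3            # direct solar term
           - K2 * M_SUN * r_sun / np.linalg.norm(r_sun)**3)   # indirect term
    return [vx, vy, acc[0], acc[1]]

a0 = 355.0 * R_JUP_AU                 # initial semimajor axis, as in Figure 8
v0 = np.sqrt(K2 * M_JUP / a0)         # near-circular prograde start
sol = solve_ivp(rhs, (0.0, 365.25 * 50.0), [a0, 0.0, 0.0, v0],
                rtol=1e-10, atol=1e-12)
print("integration successful:", sol.success)
```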
In Figure 9 we consider a = 355 R_J, e = 0.011, I = 1°, ω = 180°, and Ω = 0°. As before, the libration is initially around 0° and at the beginning the dynamics is very similar to that of the previous figure. A significant increase of the eccentricity is observed when ϖ − λ_⊙ enters the circulation regime, and escape occurs at about t ≈ 140 years.
Figure 10 shows a case in which the libration is centered only at 180°. As before, the increase of the eccentricity and the escape occur when ϖ − λ_⊙ changes to a circulation regime.
Figure 4 is very useful for interpreting the results of the previous simulations. Usually the region deep inside the libration, near its center, is very regular and is related to the existence of stable periodic orbits. For this reason, if ϖ − λ_⊙ is trapped inside a libration region, the eccentricity remains bounded, without suffering large excursions. On the other hand, the circulation regime allows large excursions of the eccentricity.
That said, we can analyze the dynamics of the three previous figures. In Figure 8, the satellite initially librates around 0° until a circulation appears, so that the motion is no longer trapped inside a region of bounded small eccentricity, as discussed above. Outside the libration curves the motion can very easily experience larger variations, mostly because the complete problem is not integrable and the domain of the regular region of the level curves of Figure 4 is certainly very reduced and modified. Indeed, numerical examples indicate that the librations of ϖ − λ_⊙ are, in general, not permanent and the perturbations always cause transitions to circulation. In Figure 8, the eccentricity remains below 0.4 and escape occurs only after the resonant angle changes to the circulation regime. The jump of the eccentricity when ϖ − λ_⊙ changes to circulation is best illustrated in Figure 9. In Figure 10 the libration is always centered at 180°, but even so there are some brief transitions when ϖ − λ_⊙ attains 0° or 360°. Again, as predicted, the escape occurs after circulation appears. Although it is not clear in these figures, we have checked that the escapes always occur in the neighborhood of ϖ − λ_⊙ = 180°. As pointed out in Hamilton and Krivov, the limit a = 393 R_J for the resonant semimajor axis is rather overestimated; in some cases, escapes can occur for a = 345 R_J. One of the reasons for this discrepancy is the fact that our simplified model in (2.1) considers Jupiter in a circular orbit, whereas in the numerical simulations Jupiter's eccentricity is e_J = 0.048497. Another important point is related to the expansion of the function given in (2.7), where terms of higher order in the ratio r/r_⊙ were neglected. The inclusion of higher-order terms is not difficult but laborious; we intend to investigate it in a future work.
Our numerical simulations also show that sometimes the initial value of the pericenter plays an important role in the stability. This seems to be more noticeable for values not so close to a = a*. For instance, in the case of a = 350 R_J we found stability if ω = 180°, while if ω = 0° the satellite is ejected in less than 1000 years.
For the retrograde case we have a* = 515.31 R_J. Figures 11 and 12 show two examples of escape, in which the center of libration changes several times, much more often than in the prograde case. From Figure 6 we see that the two centers of libration are completely symmetric, in opposition to the centers of the prograde case (Figure 3). Therefore, the behavior of the eccentricity around these two centers is similar. As before, the change of the libration center from 90° to 270° and vice versa is expected in the complete problem, due to its non-integrability. Figure 7 suggests that the occurrence of these changes is related to chaotic motion in the neighborhood of a separatrix. Again, each time the critical angle circulates, the trajectory is in a region where large excursions in eccentricity should occur. Therefore, sooner or later this can result in an escape.
Indeed, in Figures 11-12, after several changes of the center of libration, escape occurs, and in both cases we confirmed again the remarkable feature we have always observed, namely that the escape takes place only after the critical angle enters the circulation regime.
Some islands of stability
As mentioned before, the bounds 395 R_J and 515 R_J are approximate and overestimated. Certainly, a model using a higher-order expansion of the disturbing function R would provide a better determination of these values. Since the expansion in Legendre polynomials is critical for large values of the ratio of the distances, it is not surprising that there are some discrepancies in these numbers, as they were obtained with the simplest second-order expansion in (3.1). However, no matter the improvement in this determination, we show in this section the existence of several islands of stability beyond the values mentioned above.
In Figures 4 and 7 we found two stable equilibrium centers. Although the onset of the resonance can cause large variations and sometimes escape, if the satellite is trapped deep inside the libration curve its orbit can remain very stable, free of dangerous variations in eccentricity. In general these are quasi-stable periodic orbits and they are not isolated. Our numerical experiments have shown that there are some finite intervals of the semimajor axis, even for a > 395 R_J, where the satellite survives for at least 5 Myr. Figures 13 and 14 show typical examples, where the satellite remains trapped at ϖ − λ_⊙ = 0° and 180°, respectively. In the case of retrograde orbits we found many more interesting intervals of stability. We show only two, in Figures 15 and 16, where, although the resonant angle sometimes changes from 90° to 270°, the eccentricity remains quite safe from collision or escape. Note that in these two figures the semimajor axis is much larger than a*. Recall that this kind of stable region is possible thanks to the two stable libration centers predicted by our simplified model. In other words, the appearance of the two stable centers is related to the existence of a family of stable periodic orbits. In the complete problem, part of this region of stability is still preserved; therefore, we can numerically find some of these orbits even at large distances. Finally, we list some semimajor axis intervals a > a* (in units of R_J) where stability was found for at least 5 million years; within parentheses, on the right of each interval, we indicate the initial values of l, l_⊙, and ω, and for the remaining elements we considered e = 0.001, I = 179°, and Ω = Ω_⊙ = 0°: 604-640 (l = 180°, ω = l_⊙ = 0°); 600-642 (l = l_⊙ = 0°, ω = 180°); 564-676 (l = l_⊙ = ω = 180°); 568-677 (l = ω = 0°, l_⊙ = 180°); 576-620 and 652-680 (l = 0°, ω = l_⊙ = 180°); 572-618 and 654-682 (l = l_⊙ = 180°, ω = 0°). Most probably some of these orbits are the same as those pointed out by Winter 14 for the Earth-Moon problem.
Conclusion
We have derived analytically the values of the semimajor axis where evection resonances can occur. These values are important, since they define the approximate limits within which direct and retrograde orbits can remain stable around a planet. Through a simple model based on the restricted three-body problem these values were obtained and checked against exact numerical integrations. Using a completely different approach, we confirm previous results of other authors. Our methodology is based on the classical expansion of the disturbing function, which can be improved considerably if higher-order terms are considered. Therefore, we think that the current values, 395 R_J and 515 R_J, can perhaps be improved. We also showed that the existence of stable orbits beyond the above values is related to the stability of the region in the vicinity of the libration centers of the evection resonance, which persists even for large values of the semimajor axis.
Figure 4: Level curves of the Hamiltonian (3.7) with a = 396 R_J.
(Here ϖ and ϖ_⊙ denote the longitudes of pericenter of the satellite and of the Sun, respectively.) Note that terms like r²/a², (r²/a²) cos 2f, (r²/a²) sin 2f, and so forth can be averaged through simple formulae of the classical two-body problem (Yokoyama et al. 10). Let ⟨ • ⟩ indicate the average with respect to the mean anomaly of the satellite. Recall that the semimajor axes we used in Figures 2(a) and 2(b) were a = 0.531 R_H = 395 R_J and a = 0.538 R_H = 400 R_J, respectively. A simple inspection of (2.7) shows that the critical angle always appears in the form 2(ϖ − λ_⊙), where L_⊙ is the canonical momentum conjugate to λ_⊙. Considering the classical Delaunay canonical variables (Brouwer and Clemence 12), and writing e² in terms of G and L, we proceed with a new trivial canonical transformation. This allows us to write a one-degree-of-freedom problem, since α₂ becomes a kinosthenic variable, with e² expressed in terms of P₁ and L. Note that, in a completely different way, we obtained (3.4) of Hamilton and Krivov 4.
|
v3-fos-license
|
2020-01-09T15:42:14.637Z
|
2020-01-09T00:00:00.000
|
210088321
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-019-57007-4.pdf",
"pdf_hash": "e93241d26e93a65acd053d92808535345d975d25",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43059",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "36d97b0a1dd6153c6a336030ade280619ff89844",
"year": 2020
}
|
pes2o/s2orc
|
Spatial-Memory Formation After Spaced Learning Involves ERKs1/2 Activation Through a Behavioral-Tagging Process
The superiority of spaced over massed learning is an established fact in the formation of long-term memories (LTM). Here we addressed the cellular processes and the temporal demands of this phenomenon using a weak spatial object recognition (wSOR) training, which induces short-term memories (STM) but not LTM. We observed SOR-LTM promotion when two identical wSOR training sessions were spaced by an inter-trial interval (ITI) ranging from 15 min to 7 h, consistently with spaced training. The promoting effect was dependent on neural activity, protein synthesis and ERKs1/2 activity in the hippocampus. Based on the “behavioral tagging” hypothesis, which postulates that learning induces a neural tag that requires proteins to induce LTM formation, we propose that retraining will mainly retag the sites initially labeled by the prior training. Thus, when weak, consecutive training sessions are experienced within an appropriate spacing, the intracellular mechanisms triggered by each session would add, thereby reaching the threshold for protein synthesis required for memory consolidation. Our results suggest in addition that ERKs1/2 kinases play a dual role in SOR-LTM formation after spaced learning, both inducing protein synthesis and setting the SOR learning-tag. Overall, our findings bring new light to the mechanisms underlying the promoting effect of spaced trials on LTM formation.
Repeating a given experience does not always result in better memory of it. The time between experiences is crucial for the formation of a lasting memory. Since the pioneering work of Ebbinghaus 1 to date, more than three hundred studies on verbal learning in humans have been performed leading to the conclusion that retention increases when the interval between learning sessions increases (see Cepeda et al. 2 ). These and other observations led to the distinction between massed and spaced training, which rely on repeated short and long inter-trial intervals, respectively, and to the demonstration that the latter induces more robust memories than the former. This discovery was confirmed in various animal models as diverse as Aplysia, fly, bee, rodents and non-human primates trained in diverse learning protocols and contexts [3][4][5][6][7][8][9] . After a century of experimentation in this area, two major conclusions can be drawn: (1) the promnesic phenomenon induced by spaced training is evolutionarily conserved and, (2) the neurobiological bases of this phenomenon are not clearly known.
There are three well-known cognitive theories proposed to explain the superiority of spaced over massed training. While the first emphasizes the information coding processes, the second is based on the need to evoke the information learned at the time of the new training. The third considers that deficient processing of the information learned in massed learning would result in information loss 9 . Concerning this last theory, emerging data from behavioral and neuroscience studies point to memory consolidation as a potential process contributing to the advantages of spaced training. The classical theory of memory consolidation posits that the newly acquired information initially goes through a period of fragility and is stabilized with time, giving rise to long-term memory (LTM). During this period, different molecular and cellular changes occur in places where memory is formed, which affects that storage 10 .
A necessary condition for LTM formation is the induction of the synthesis of plasticity-related proteins (PRPs) [10][11][12][13] . This synthesis occurs when the acquired information contains a degree of novelty or stress, which activates attention systems 10,14 . However, weak learning experiences can utilize the proteins whose synthesis has been induced by other events adjacent in time to consolidate a memory trace. Synaptic plasticity and also learning and memory require input specificity for the encoding and storage of the information. Thus, in analogy to the synaptic tagging and capture hypothesis 15 , we postulated the behavioral tagging (BT) hypothesis 16 proposing that a learning session sets a learning-tag within task-specific neurons, where plasticity proteins can be captured to establish LTM 17,18 . The processes involved in the formation or improvement of LTM by retraining are frequently studied using training protocols with multiple trials and/or sessions. However, it has been sometimes observed that repetitions do not always contribute to improve memory 6,19 . Here, we used the spatial object recognition (SOR) task, which requires that animals learn the spatial location of objects in an arena and react afterward to changes in location, showing thereby their spatial memory, and which is hippocampus-dependent 20 . We used two consecutive weak SOR training sessions (wSOR) and studied the mechanisms underlying the "lag effect", i.e. the fact that longer intervals between sessions tend to produce better learning than shorter intervals (see Carpenter 21 ). Based on the BT hypothesis, we suggest that retraining will mainly retag the sites initially labeled by the prior training. Moreover, we postulate that PRPs required for memory consolidation can be synthesized as a result of the sum or synergy of the consecutive weak training sessions when they are experienced within an appropriate temporal window. We thus aimed at determining if LTM promotion achieved by retraining relies on these two processes and if blocking any of them abolishes such a promotion. In addition, as the activation of extracellular regulated kinases 1/2 (ERKs1/2) or protein kinase A (PKA) after retraining is associated with an improvement of memory 6,8,[22][23][24][25][26] , we studied the involvement of protein synthesis in the promotion of LTM by retraining, and the role of ERKs1/2 in either the tag setting or the protein synthesis process. We observed SOR-LTM promotion when two identical wSOR training sessions, which individually induce short-term memory (STM) but not LTM, were spaced by an inter-trial interval (ITI) ranging from 15 min to 7 h. The promoting effect was dependent on neural activity and protein synthesis. Moreover, our results suggest that ERKs1/2 activation in the dorsal hippocampus has a dual role, being a critical step for PRP synthesis and for the setting of the SOR learning-tag.
Results
Two consecutive weak SOR sessions induce LTM when spaced between 15 min and 7 h. We performed a wSOR training during which rats explored two identical objects inside an arena for 4 min. In the test session, one of the objects was displaced to a novel location in the same context, and we measured the exploration time allocated to both objects. Figure 1a shows that the group of rats trained with a single wSOR session and tested 30 min later explored the object in the novel location more, so that their preference index was significantly higher than that calculated for the training session (TR), exhibiting therefore SOR-STM. However, a parallel group of rats trained in the same wSOR task but tested 24 h later did not show SOR-LTM (p < 0.001, STM vs. other groups). In contrast, LTM formation was promoted when animals experienced a second, identical wSOR training session spaced by an inter-trial interval (ITI) ranging from 15 min to 7 h (Fig. 1b; p < 0.01 vs. TR and 1 Trial). On the contrary, the same second wSOR session was ineffective in promoting LTM if it was spaced from the first one by an ITI of 5 min, 9 h or 24 h (p > 0.05 vs. TR and 1 Trial), defining a critical time window in which the retraining protocol is effective for SOR-LTM formation.

Figure 1. Preference index, expressed as mean ± SEM, registered in a training session (TR) or a test session performed 30 min or 24 h after training. (a) Rats were exposed to a 4-min wSOR training session (TR, n = 12) and independent groups were tested either 30 min (STM, n = 12) or 24 h (LTM, n = 12) after training. Newman-Keuls analysis after one-way ANOVA, F(2,33) = 21.51; ***p < 0.001 vs. TR and LTM. (b) One-Trial group (n = 17) received a single 4-min wSOR training. Animals in the 2-Trials group were trained with two 4-min SOR sessions spaced by different ITIs spanning from 5 min to 24 h (5 min, n = 10; 15 min, n = 10; 30 min, n = 11; 1 h, n = 15; 4 h, n = 13; 7 h, n = 18; 9 h, n = 13; 24 h, n = 13). Representative first training session (TR, n = 18). Dunnett's test after one-way ANOVA, F(9,128) = 5.780; **p < 0.01 vs. 1 Trial and **p < 0.01 vs. TR.
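The group comparisons reported here follow a common workflow: a one-way ANOVA on the preference indices followed by a post-hoc test (Newman-Keuls or Dunnett). The sketch below illustrates that workflow in Python on placeholder data; Tukey's HSD is used as the post-hoc step only because Newman-Keuls is not available in the standard scipy/statsmodels toolset, so it stands in purely for illustration.

```python
# Illustrative analysis sketch; the preference-index arrays are placeholders,
# not the study's data. Requires scipy and statsmodels.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "TR":      rng.normal(0.00, 0.10, 12),   # hypothetical preference indices
    "1_Trial": rng.normal(0.05, 0.10, 17),
    "ITI_1h":  rng.normal(0.30, 0.10, 15),
}

# One-way ANOVA across groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

# Post-hoc pairwise comparisons (Tukey HSD as a stand-in for Newman-Keuls).
values = np.concatenate(list(groups.values()))
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
print(pairwise_tukeyhsd(values, labels))
```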
The promoting effect of retraining on SOR-LTM formation depends on neural activation and protein synthesis in the dorsal hippocampus. In the previous experiment, we showed that rats trained with two wSOR sessions spaced by 1 h form a LTM observable 24 h after training. We then used this ITI and performed hippocampal infusions of vehicle, the neural blocker muscimol, or the protein-synthesis inhibitors emetine or anisomycin after the second training session to determine the effect of these inhibitions on SOR-LTM formation assessed 24 h after training. Figure 2a shows that rats infused with vehicle expressed SOR-LTM (p < 0.01 vs. TR) while rats infused with muscimol did not express it (p < 0.01 vs. Veh). Moreover, the promotion of SOR-LTM induced by retraining was blocked by the intra-hippocampal administration of emetine (p < 0.001 vs. Veh) and anisomycin (p < 0.01 vs. Veh) (Fig. 2b,c, respectively). These results indicate that the formation of SOR-LTM induced by retraining requires hippocampal activity and the induction of protein synthesis in the dorsal hippocampus.
The learning-tag induced by wSOR is transient and depends on ERKs1/2 activation in the dorsal hippocampus. In a previous work, we coupled a wSOR training session, similar to the one used here, with an open-field (OF) session, and showed that the latter promotes the formation of SOR-LTM through the mechanism of behavioral tagging, which involves the setting of a learning tag by the wSOR training and the provision of the PRPs by OF exposure 27 . Thus, we used this protocol to show that this phenomenon occurs within a critical temporal window between the tasks and studied the molecular requirements of this process. We decided to explore the role of ERKs1/2 in establishing the SOR learning-tag because these kinases are required specifically for the setting of synaptic-tags associated with long-term depression (LTD) 28,29 , a cellular-plasticity model associated with the acquisition of spatial memory for object location in rodents 30,31 . We exposed rats to a 5 min OF session 1 h or 4 h after a wSOR training session. The group of rats exposed to OF 1 h after wSOR expressed SOR-LTM when they were tested 24 h after training (Fig. 3, p < 0.001 vs. TR). In contrast, control animals that were not exposed to the OF, and the group that was exposed to the novel OF 4 h after wSOR did not express SOR-LTM (Fig. 3, p < 0.001 vs. 1 h Veh). Moreover, the promoting effect of OF experienced 1 h after wSOR was prevented by the infusion of the specific MEK inhibitor U0126 15 min before the wSOR training session (Fig. 3, p < 0.001 vs. 1 h U0126). This experiment suggests that the initial wSOR training session induces a learning tag that depends on ERKs1/2 activation. In addition, the results from rats infused with vehicle into the hippocampus indicate that the wSOR learning tag is no longer active 4 h after training, which is in agreement with our previous results obtained in non-cannulated rats 27 . Overall, these results indicate that during the ITI of 4 h separating two wSOR sessions, an additional process other than tag setting occurs given that LTM is formed under these conditions (Fig. 1b).
The promoting effect of wSOR retraining on SOR-LTM formation depends on a dual role of ERKs1/2 activation in the dorsal hippocampus. We next assessed whether ERKs1/2 activation also participates in the regulation of protein synthesis required to form LTM after retraining. In order to ensure that the learning-tag induced by the first wSOR session has decayed at the moment of the second session, we trained rats with an ITI of 4 h. Note that this ITI resulted in LTM when rats were trained with two wSOR sessions (Fig. 1b and Veh group in Fig. 4a). Despite the fact that the second wSOR induced its learning-tag, the local infusion of U0126 15 min before the first wSOR session impaired the SOR-LTM (Fig. 4a, p < 0.001 vs. Veh), suggesting that ERKs1/2 were also involved in the process leading to the synthesis of PRPs. The inhibitory effect of U0126 was rescued by an OF session performed after the second wSOR session, which contributed PRPs to the second learning tag.
Infusion of U0126 in the dorsal hippocampus immediately after the second wSOR session impaired the SOR-LTM when an ITI of 4 h separated the two training sessions (Fig. 4b, p < 0.001 vs. Veh), consistently with an inhibition of the SOR learning-tag by this drug. This effect was not rescued by a novel OF session experienced 1 h after retraining (Fig. 4b, p < 0.001 vs. Veh). These results suggest that the intact learning-tag induced by the second wSOR session, which was spaced by 4 h from the first one, was necessary to utilize the PRPs provided by the novel experience.
When the ITI between wSOR sessions was 1 h, and thus sufficient for the first learning-tag to persist until the second session and to promote LTM (see Fig. 1b, and Veh group in Fig. 4c), the infusion of U0126 either before the first (Fig. 4c) or immediately after the second session (Fig. 4d) impaired SOR-LTM formation (p < 0.001, U0126 vs. Veh). In both cases, the exposure to an OF 1 h after retraining rescued the SOR-LTM. In the first case (U0126 infusion before the first wSOR session), LTM rescue was due to the provision of PRPs contributed by the OF session to the tag induced by the second wSOR session (Fig. 4c, p < 0.01 vs. 2 Trials U0126). This assumption was explicitly tested by administering emetine after the OF session. Inhibition of protein synthesis by emetine caused an amnesic effect, which was not present in control animals that experienced a vehicle injection after the OF session (see Supplementary Fig. S1). This result thus confirmed that the OF session contributed the PRPs necessary to rescue SOR-LTM. In the second case (U0126 infusion after the second wSOR session), LTM rescue can be explained by the supply of PRPs induced by the OF session to the learning-tag set by the first wSOR session, which was still available (Fig. 4d, p < 0.001 vs. 2 Trials U0126). These results suggest that, with an ITI of 1 h, the injection of U0126 either before the first or after the second wSOR session prevented PRP synthesis so that no SOR-LTM formation was observed. The inhibition of tag setting by U0126 was therefore overcome in our retraining protocol because at least one tag was always preserved and available for the PRPs induced by OF exposure.

Figure 4. Hippocampal inhibition of ERKs1/2 prevents SOR-LTM formation induced by wSOR retraining, acting on learning-tag and protein synthesis processes. (a-d) show the preference index as mean ± SEM registered in the first training session (TR), which is representative for all groups, or in the test session performed 24 h after training. (a) One-Trial group (n = 8) was injected with vehicle 15 min before a single wSOR training session and tested on the following day. Independent animals received intra-dorsal hippocampus infusions of vehicle (n = 11) or U0126 (n = 9) 15 min before being subjected to two identical wSOR training sessions spaced by 4 h; another group was also exposed to an OF session 1 h after both training sessions (n = 6). Training session (TR, n = 12). SOR-LTM was tested 24 h after the second training session. Newman-Keuls analysis after one-way ANOVA, F(4,41) = 12.18; ***p < 0.001 vs. TR, 1 Trial and 2 Trials U0126. (b) One-Trial group (n = 11) was injected with vehicle 4 h after a single wSOR training session and tested on the next day. Independent animals were subjected to two identical wSOR sessions spaced by 4 h and immediately after that received bilateral infusions of either vehicle (n = 18) or U0126 (n = 15); another group was also exposed to an OF session 1 h after that training (n = 10). Training session (TR, n = 18). SOR-LTM was tested 24 h after the second training session. Newman-Keuls analysis after one-way ANOVA, F(4,67) = 16.30; ***p < 0.001 vs. all other groups. (c) The experimental protocol is similar to (a), except that the ITI is 1 h. One-Trial group of animals (n = 10), 2-Trials group of rats infused with vehicle (n = 13) or U0126 (n = 11), and the retrained group plus an OF session (n = 7). Training session (TR, n = 13).
Newman-Keuls analysis after one-way ANOVA, F(4,49) = 11.64; **p < 0.01; ***p < 0.001 vs. TR, 1 Trial and 2 Trials U0126. (d) The experimental protocol is similar to (b), except that the ITI is 1 h. One-Trial group of animals was injected with vehicle 1 h after training (n = 6), 2-Trials group of rats infused with vehicle (n = 11) or U0126 (n = 10), and the retrained group plus an OF session (n = 8). Training session (TR, n = 12). Newman-Keuls analysis after one-way ANOVA, F(4,42) = 13.49; ***p < 0.001 vs. TR and 2 Trials U0126; ##p < 0.01 vs. 1 Trial.
Discussion
In this work, we described the temporal window of efficacy for the promotion of SOR-LTM after a retraining protocol using two consecutive weak training sessions. This promoting effect depends on hippocampal activity and protein synthesis and requires, in addition, the activation of ERKs1/2 at the time of the first and the second wSOR session. Our results suggest that ERKs1/2 activity is probably needed to induce the protein synthesis necessary to consolidate SOR-LTM. In addition, ERKs1/2 activity is also an essential step for the setting/maintenance of the SOR-learning tag. Based on these results, we postulated that a process of behavioral tagging (BT) operates in the formation of SOR-LTM after retraining, and that ERKs1/2 activity plays a dual role in it, acting both at the level of tag setting and maintenance and PRP synthesis.
Our results show that rats trained with a single wSOR session do not form SOR-LTM; however, when they were exposed to two identical wSOR sessions separated by an ITI ranging from 15 min to 7 h, they exhibited SOR-LTM. A critical step in the establishment of durable memories is the synthesis of proteins. In accordance with this, we observed that the infusion of the protein synthesis inhibitors anisomycin and emetine in the dorsal hippocampus, immediately after the second wSOR training session, fully blocked the expression of LTM. A similar result was observed after infusing muscimol, a GABA A receptor agonist that temporarily silences the infused area. Because the same behavioral output was observed after preventing the activation of ERKs1/2 through U0126 infusion in the hippocampus, we suggest that these kinases enable the process of protein synthesis after retraining. In that sense, the summation of the biochemical effects induced by retraining would be necessary to promote LTM. If this were the case, the observed ineffectiveness of the short 5 min ITI to promote SOR-LTM could be due to an incapacity of the second training session to enhance and/or extend the levels of ERKs1/2 activation that would be required for memory consolidation 9 . This molecular explanation constitutes an alternative view to the hypothesis suggesting that memories established on consecutive trials interfere with each other during shorter ITIs corresponding to a window of high susceptibility to interference 32 . To further discriminate between these two points of view, we observed that OF promoted SOR-LTM when it was experienced one hour after two wSOR sessions spaced by 5 min (see Supplementary Fig. S2). This result suggests that a short ITI does not interfere with a fundamental process that could not be overcome by providing PRPs, such as the tag setting process; in contrast, it seems to impair mechanisms associated with the synthesis of PRPs required for memory consolidation. In the scheme proposed to account for our findings (see Fig. 5), the absence of LTM after an ITI of 5 min does not result from interference between consecutive trials but from an absence of sustained or enhanced activity of ERKs1/2 induced by this ITI. ITIs longer than 7 h are also ineffective in promoting LTM because the effects of the first training session would no longer persist until retraining. However, we do not discard the possibility that other processes triggered by the first and the second waves of ERKs1/2 activation (and not necessarily its sustained level) facilitate the triggering of PRP synthesis. In that sense, recent findings reported that repeated experiences in contextual fear conditioning or the Morris water maze may be integrated within a time window of 5 h to possibly promote their LTM. Moreover, this depended on network activity and c-Fos expression, which was sufficient and necessary to determine what mice learn 33 .
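Purely as a didactic illustration of the timing argument developed above (and summarized in Fig. 5), the toy model below encodes the reported bounds as simple rules: a learning tag lasting less than 4 h and an effective spacing window of 15 min to 7 h for the summation of ERKs1/2-dependent signals. The numerical bounds come from this paper, but the rule-based formulation itself is only a sketch, not a quantitative model.

```python
# Didactic toy model of the timing rules reported here; not a quantitative
# or mechanistic simulation of ERKs1/2 signalling.
TAG_LIFETIME_H = 4.0                 # the SOR learning-tag persists < 4 h
ITI_MIN_H, ITI_MAX_H = 0.25, 7.0     # effective spacing window: 15 min to 7 h

def prp_synthesis(iti_h: float) -> bool:
    """Two weak sessions within the window are assumed to sum/synergize enough
    ERKs1/2-dependent signalling to trigger PRP synthesis."""
    return ITI_MIN_H <= iti_h <= ITI_MAX_H

def first_tag_still_active(iti_h: float) -> bool:
    """Whether the tag set by the first session still overlaps the second one
    (relevant to the U0126 rescue experiments with 1 h vs. 4 h ITIs)."""
    return iti_h < TAG_LIFETIME_H

def ltm_formed(iti_h: float) -> bool:
    """LTM requires PRPs plus an available tag; the second session always
    re-sets the tag, so the tag condition holds whenever PRPs appear."""
    return prp_synthesis(iti_h)

for iti in (5 / 60, 1.0, 4.0, 9.0, 24.0):
    print(f"ITI = {iti:g} h -> LTM: {ltm_formed(iti)}, "
          f"first tag overlaps retraining: {first_tag_still_active(iti)}")
```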
An important issue in retraining protocols is to know whether the population of cells activated by the first training coincides with that activated by the second one. The use of fluorescent in-situ hybridization and confocal microscopy to monitor the subcellular distribution of the immediate-early gene Arc revealed that rats exposed sequentially to the same environment exhibited twice the proportion of CA1 neurons with overlapping Arc expression compared with animals exposed sequentially to two different environments 34,35 . Moreover, Attardo et al. 36 used a fluorescent reporter of neural plasticity to image long-term cellular ensemble dynamics of live mice, and they observed that CA1 cell patterns representing the enriched environment were progressively stabilized after repeated episodes. As expected, exposure to the same environment evoked overlapping patterns about twice as high as those evoked by different environments. Also, Abdou et al. 37 showed that assemblies in the basolateral amygdala and the auditory cortex overlap more if the associative fear experience is the same. On the other hand, if the task involves associating the context with different tones, the degree of overlap of the cellular assemblies was lower than that corresponding to the repetition of the same experience. Moreover, the authors suggested that engram-specific synaptic plasticity is crucial and sufficient for information storage and keeps the identity of the distinct overlapping memories. Thus, they showed that it was possible to erase a fear memory from an engram network without affecting other memories stored in the shared ensemble by resetting the plasticity in a synapse-specific manner.
The aforementioned findings highlight two main facts: the repetition of a given episode activates similar neural substrates as those used by the original one, and a given memory requires synaptic plasticity specificity. In line with this statement, the BT hypothesis offers a conceptual framework to explain how PRPs could be used at sites activated by training. In this framework, the formation of LTM relies on two key cellular processes: the synthesis of PRPs and the setting of a learning-tag 18,38 , which provides the specificity for the memory storage. It has been proposed that tag setting does not require protein synthesis, but is based on post-translational changes and re-assembly of the cytoskeleton that lead to changes in spine morphology 39 . Kinase activity was postulated as a necessary step in the tag setting after synaptic plasticity or learning experiences 18 .
The participation of the BT process in a SOR paradigm was already shown by Ballarini et al. 27 , who demonstrated that a single wSOR session could result in a SOR-LTM if associated with a novel OF exposure occurring up to two hours after the training session. This phenomenon was dependent on the protein synthesis induced by the OF novelty. In the present work, we used this finding to assess the duration of the learning tag induced by the initial wSOR training session and to determine its dependence on ERKs1/2 activation (see Fig. 3). We observed that the infusion of the ERKs1/2 inhibitor U0126 in the dorsal hippocampus before the wSOR session impaired the SOR-LTM promotion induced by the OF exposure 1 h after the training session. This result suggests that ERKs1/2 activation at the moment of learning is necessary for the setting of the tag by the SOR session. We also confirmed in cannulated rats infused with the vehicle before the wSOR training session that OF exposure promoted SOR-LTM when given 1 h, but not 4 h, after the wSOR session. Overall, our results suggest that the SOR learning-tag persists less than 4 h and depends, at least in part, on ERKs1/2 activity in the dorsal hippocampus.
Finally, we studied if ERKs1/2 activation is also involved in the processes of tag setting and induction of protein synthesis after retraining. As the second training session will mainly retag the sites labeled by the first session, and to further test if SOR-LTM formation after retraining needs an active learning-tag, we used a 4 h ITI protocol to ensure that the transient learning-tag induced by the first wSOR had declined. We observed that the local infusion of U0126 after the second wSOR session impaired the SOR-LTM, and also prevented memory promotion induced by a subsequent OF exposure. This result suggests that ERKs1/2 activity is required to set the SOR learning-tag and that in its absence, the PRPs induced by the OF exposure are ineffective for SOR-LTM formation. In contrast, when a 1 h ITI retraining protocol was used, the local infusion of U0126 after the second wSOR session did not impair the OF promoting effect on SOR-LTM formation, because the PRPs induced by OF exposure could be used by the learning-tag set by the first wSOR session, which was still active during the OF session. On the other hand, the role of ERKs1/2 activity in signaling protein synthesis could be evidenced when U0126 was infused in the hippocampus before the first wSOR training session, both with ITIs of 1 h and 4 h. In both cases, the inactivation of ERKs1/2 impaired SOR-LTM, suggesting that even when the second learning-tag was active, because it was not reached by U0126, memory was not formed, probably due to a lack of protein synthesis. We speculate that PRPs required for memory consolidation can be synthesized as a result of the sum or synergy of two wSOR sessions that are experienced in an appropriate temporal window, and that ERKs1/2 activity is crucial for this phenomenon. This dynamic in protein synthesis is also compatible with a metaplasticity-like mechanism by which prior experience impacts subsequent learning 40 .

Figure 5. The first row shows that a single wSOR session does not induce LTM on the following day despite inducing tag setting and enhancing ERKs1/2 levels, as these would be insufficient to trigger the required synthesis of PRPs. Two consecutive and identical wSOR sessions separated by 5 min (second row) or 9 h (fifth row) do not induce LTM, as no PRP synthesis would occur in either case. Although each session would tag the same cellular substrates, in the first case the levels of ERKs1/2 induced by the first session would not be further enhanced or extended by the second one to reach the threshold necessary for PRP synthesis. This would be due to the necessity of a minimal ITI for the machinery inducing ERKs1/2 activation by the second session to be operational. In the second case, the tag and ERKs1/2 levels of the first session decay over time during the long ITI and do not reach the second session, thus impeding protein synthesis. The third and fourth rows correspond to ITIs of 1 and 4 h, respectively, in which PRP synthesis would occur. In both cases, ERKs1/2 levels would be enhanced and the same cellular substrates would be retagged by the second session. This convergence would lead to the PRP synthesis necessary for LTM formation. However, as ERKs1/2 activity was not quantified under these circumstances, alternative explanations (besides addition or synergy in ERKs1/2 activation level) of how ERKs1/2 functionality may relate to wSOR training and PRP synthesis cannot be excluded.
An important issue, both in synaptic plasticity models and in BT protocols, is the identification of the molecules responsible for setting the tags. Our results suggest that ERKs1/2 activation is required to set the SOR learning tag. These results are in accordance with the fact that ERKs1/2 are required specifically for the setting of synaptic-tags associated with LTD 28,29 , a cellular plasticity model associated with the acquisition of spatial memory for object location in rodents 30,31 . Also, hippocampal ERKs1/2, but not PKA, may serve as behavioral tags to promote LTM extinction of an aversive memory task 41 . In contrast, Moncada et al. 42 showed that CaMKII, PKA, and PKMζ, but not ERKs1/2, activities play an essential role in the setting of the learning tag resulting from an inhibitory avoidance task. This is in agreement with results showing the same kinase dependency of the synaptic tag induced by LTP protocols 28,29,43 , a cellular plasticity model associated with an inhibitory avoidance task 44 .
The involvement of ERKs1/2 in the formation of LTM after retraining found in our work is in agreement with other studies. In Aplysia, a 45-min interval between stimuli was effective for the induction of LTM for sensitization of the tail-elicited siphon withdrawal reflex, and for ERKs1/2 activation in the tail sensory neurons 8,19 . In olfactory conditioning of Drosophila, consisting of pairing an odor with an electric shock, Pagani et al. 6 demonstrated that the cycle of ERKs1/2 activation must decay to permit a resetting with the subsequent trial. Recently, Miyashita et al. 22 demonstrated that this ERKs activation is required for the increased expression of c-fos and dCREB2 during spaced training. Moreover, Li et al. 45 suggested that translocation of ERKs1/2 to the nucleus of mushroom body neurons is required for the consolidation of this LTM after retraining. ERKs1/2 activity is also a key step in LTM induced by retraining in rodents. The infusion of a MEK blocker into the striatum, both at the time of the second training and 3 h later, impaired the enhancement of an inhibitory-avoidance memory induced by retraining 46 . However, Parsons and Davis 23 reported the activation of ERKs1/2 in the amygdala one hour after the first fear-training session but not after the second one. Using a recognition-memory paradigm, similar to that used in the present work, Seese et al. 24 observed that synaptic ERKs1/2 activation was associated with the formation of object-location memory after spaced training in mice that are a model for the fragile X syndrome.
In conclusion, we report the existence of a temporal window ranging from 15 min to 7 h between two wSOR sessions, which is effective to promote SOR-LTM. Our results suggest that ERKs1/2 activity is: (1) necessary to induce protein synthesis required for memory formation after retraining and, (2) relevant to set the SOR learning-tag, which marks specific sites activated by re-learning. Finally, and in addition to a great body of evidence showing that the BT process accounts for LTM promotion by novel or stressful experiences 18,38,47 , the present results highlight that the formation of LTM after wSOR retraining is also in line with the assumptions of the BT hypothesis (Fig. 5).
Materials and Methods
Subjects. Male adult Wistar rats between 2 and 3 months of age (weight, 200-350 g) obtained from the breeding colony maintained at the Faculty of Exact and Natural Sciences of the University of Buenos Aires were used in this study. Animals were housed in groups of three per cage, with food and water available ad libitum, under a 12 h light/dark cycle (lights on at 07:00 A.M.) at a constant temperature of 23 °C. All behavioral testing was conducted during the light phase of the cycle. Animals were handled for 2 min on two consecutive days before each experiment to avoid emotional stress. During behavioral procedures, animals were individually moved from their home cages to the arena and returned immediately after each trial session. All experiments were conducted in accordance with the National Institutes of Health Guides for Care and Use of Laboratory Animals (Publication No. 80-23, revised 1996) and were approved by the Animal Care and Use Committee of the University of Buenos Aires (CICUAL).
Drugs. All drugs were purchased from Sigma (St. Louis, MO). The GABAA agonist muscimol was applied to temporarily inactivate the dorsal hippocampus (0.1 µg of muscimol in 0.5 µl saline solution per side). The protein synthesis inhibitors used were anisomycin (80 µg of anisomycin, dissolved in HCl, diluted in saline, adjusted to pH 7.4 with NaOH, and infused in a volume of 0.8 µl per side) and emetine (50 µg in 1 µl saline solution per side). U0126 (0.4 μg diluted in 10% DMSO in saline and infused in a volume of 0.8 µl per side) was used as an ERKs1/2 inhibitor given that it blocks the kinase activity of MEK1/2, thus preventing the activation of MAP kinases p42 and p44 encoded by the erk2 and erk1 genes, respectively.

Surgery and drug infusion. For cannulae implantation, rats were deeply anesthetized (70 mg/kg ketamine and 7 mg/kg xylazine). 22-G cannulae were stereotaxically aimed at the CA1 region of the dorsal hippocampus at coordinates A: −3.9 mm; L: ±3.0 mm; D: −3.0 mm, from Bregma 48 (see Supplementary Fig. S3) and were cemented to the skull with dental acrylic. Animals received a subdermal application of analgesics and antibiotics during surgery (meloxicam 0.2 mg/kg, gentamicin 3 mg/kg) and were allowed to recover from surgery for at least four days. Drugs were infused using a 30-G needle with its tip protruding 1.0 mm beyond the guide. The infusion needles were connected by an acrylic tube to a Hamilton microsyringe, and the entire bilateral infusion procedure lasted about 2 min. Needles were left in place for one additional minute after infusion to minimize back-flow. Histological examination of cannulae placements was performed after the end of the behavioral procedures by the infusion of 0.5 µl of 4% methylene blue in saline solution. Animals were killed by decapitation 15 min after the infusion and their brains were sliced to verify the infusion area 49. Only data from animals with correct cannulae implants (95%) were included in statistical analyses.

Spatial object recognition (SOR) task. In the SOR task, animals familiarized with two objects in a specific spatial environment should recognize that one of them has changed its location with respect to its original position and to the other object. Rats spend more time exploring the spatially displaced familiar object relative to a stationary familiar object, suggesting that they remember the location in which particular objects were previously encountered 50. The SOR arena was a 60 cm wide x 40 cm long x 50 cm high acrylic box, with different visual clues on its lateral white walls. The floor was white, the front wall was transparent and the back wall was hatched. For habituation to the context, all subjects explored the arena without objects for a 20 min daily session on two consecutive days before the training day. In the wSOR training session, two identical plastic or glass objects were placed in the arena in two adjacent corners and animals were left to explore it for 4 min. In the test session, one of the objects was moved to a new position and animals were allowed to explore this context for 2 min. Exploration time for each object, defined as sniffing or touching it with the nose or forepaws, was measured using a hand stopwatch.
Rats were excluded from the analysis when they explored one object for more than 65% of the total object-exploration time during training sessions or when they did not reach 10 s of total object-exploration time during the 2-min test session. Results are expressed as a preference index: [Exploration time of the object in the new location (Tn) − Exploration time of the object in the familiar location (Tf)] / [Tn + Tf]. Also, we calculated a preference index for the first training session (TR), considering Tf as the exploration time of the object that would be congruent in the test session and Tn as the exploration time of the other object. In all cases, the preference index calculated for TR was not different from zero (p > 0.05), thus showing an initial absence of preference for exploring a particular location. A positive preference index in the test session, differing significantly from that calculated for TR, indicates the presence of memory. The mean ± SEM of the total object-exploration time was 53.68 ± 1.87 s during the first wSOR training session, 45.83 ± 1.67 s during the wSOR retraining session and 23.01 ± 0.77 s during the test session.
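As a minimal illustration of the exclusion criteria and the preference index defined above, the following Python sketch computes the index from raw exploration times; the function and variable names are ours and do not come from the original analysis scripts.

```python
def preference_index(t_new: float, t_familiar: float) -> float:
    """Preference index: (Tn - Tf) / (Tn + Tf); positive values indicate
    more exploration of the object in the new (displaced) location."""
    total = t_new + t_familiar
    if total == 0:
        raise ValueError("no object exploration recorded")
    return (t_new - t_familiar) / total


def include_training(t_obj1: float, t_obj2: float, max_bias: float = 0.65) -> bool:
    """Exclude rats that explored one object for more than 65% of the total
    object-exploration time during a training session."""
    total = t_obj1 + t_obj2
    return total > 0 and max(t_obj1, t_obj2) / total <= max_bias


def include_test(t_new: float, t_familiar: float, min_total: float = 10.0) -> bool:
    """Exclude rats that explored the objects for less than 10 s in total
    during the 2-min test session."""
    return (t_new + t_familiar) >= min_total


# Example: a rat exploring the displaced object for 14 s and the static one for 9 s
print(preference_index(14.0, 9.0))  # ~0.22, consistent with memory expression
```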
Open field (OF) task. The OF task consists of placing an animal within an arena and recording its locomotor and exploratory behavior in this novel spatial context. The arena was a 50 cm wide x 50 cm long x 39 cm high square box, with black plywood walls and a floor divided into nine squares by white lines. The number of line crossings and rearings was measured in blocks of 1 min during 5 min under normal room lighting 16.

Data analysis. Behavioral data were analyzed by means of Newman-Keuls or Dunnett post-hoc comparison tests after one-way ANOVA. Analyses were performed in GraphPad Prism® version 8.00 (GraphPad Software, La Jolla, CA, USA). Effects were considered significant when p < 0.05. Results are presented as mean ± SEM.
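The statistical analysis itself was run in GraphPad Prism; purely as a hedged illustration of the same workflow, the sketch below performs a one-way ANOVA in Python with SciPy on invented preference-index values and, if significant, a Tukey HSD post-hoc test (SciPy 1.8+) as a stand-in for the Newman-Keuls/Dunnett comparisons used in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical preference-index samples for three groups (values invented for illustration)
vehicle = np.array([0.05, 0.10, 0.02, 0.08, 0.04])
weak_sor = np.array([0.25, 0.30, 0.22, 0.28, 0.35])
retrained = np.array([0.33, 0.29, 0.40, 0.31, 0.37])

f_stat, p_value = stats.f_oneway(vehicle, weak_sor, retrained)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons (Tukey HSD here, as a stand-in for Newman-Keuls/Dunnett)
if p_value < 0.05:
    posthoc = stats.tukey_hsd(vehicle, weak_sor, retrained)
    print(posthoc)
```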
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Development of the Quantitative Determination Method for a New Caries-Preventive Compound
Fluorides are the most important treatment and preventive additive in the composition of any form; they prevent development of caries by increasing the resistance of enamel, as well as production of acids by bacteria of dental plaque. At the Odessa National Medical University the work on searching for fluorine-containing compounds in a series of quaternary bases and their subsequent use in dentistry is conducted. The pharmacological studies have shown that “onium” hexafluorosilicates have a higher caries-preventive effectiveness compared to sodium fluoride. Cetylpyridinium hexafluorosilicate has been found to be the most active in the dose of 15 mg/kg when used in the form of oral applications of the gel; its mechanism of action is in activation of alkaline phosphatase and lysozyme of the pulp of the teeth. Development of reliable methods for identification and quantitative determination is a prerequisite for further use of this compound in medical practice. The aim of this work was to develop the method for quantitative determination of cetylpyridinium hexafluorosilicate. For further use of the method proposed for analysis of the compound under research its validation characteristics have been studied. According to the results of the research conducted it has been found that the method for quantitative determination of cetylpyridinium hexafluorosilicate in the substance corresponds to the following parameters: accuracy, precision, linearity (Δz = 0.50≤max Δz = 0.53, δ = 0.17≤max δ = 0.32, a = 0.80≤max a = 1.60, r = 1.0000≥min r = 0.9993).
Over the past decade there has been a significant increase in the prevalence of carious teeth in the population [8]. Caries is a disease in which demineralization of the teeth occurs under the action of bacteria. The risk of caries is associated with a number of causes, among them a deficiency of fluorine in food and drinking water, which leads to brittleness and thinning of the enamel; an excess of carbohydrate food and sugar; and dental plaque formed from the decomposition of food debris, which also stimulates bacterial growth. In turn, excessive amounts of fluoride lead to the binding of calcium salts into inert calcium fluoride and to hepatotoxic effects. Hexafluorosilicates (SiF6) are one of the fixed forms of fluorine; moreover, they are almost completely free of the drawbacks of fluorides [7]. In order to find substances with caries-protective and antibacterial properties, work on searching for new biologically active substances among hexafluorosilicate derivatives is conducted at the Odessa National Medical University [5,6].
One of the most active compounds in this series is the quaternary base salt cetylpyridinium hexafluorosilicate; the development of quality control methods is a necessary condition for its further application. The basic physical and chemical properties of cetylpyridinium hexafluorosilicate have been studied, and methods for its identification have been proposed [4].
Continuing the research on standardization of this compound, it was necessary to develop a method for its quantitative determination.
Materials and Methods
The experiments were carried out using a chromatographic-grade sample of the compound (impurity content of 0.5%). Class A measuring glassware, reagents meeting the requirements of the State Pharmacopoeia of Ukraine (SPhU), and "AXIS" analytical balances were used.
The quantitative determination method. Dissolve 2.000 g in distilled water and dilute to 100.0 ml with the same solvent. Transfer 25.0 ml of the solution to a separating funnel, add 25 ml of chloroform, 10 ml of 0.1 M sodium hydroxide and 10.0 ml of a freshly prepared 50 g/l solution of potassium iodide. Shake well, allow the layers to separate and discard the chloroform extract. To the aqueous layer add 40 ml of hydrochloric acid, cool and titrate with 0.05 M potassium iodate until a deep-brown colour that does not disappear is obtained. Add 2 ml of chloroform and continue to titrate, shaking vigorously, until the chloroform layer no longer changes its colour. Simultaneously carry out a blank titration of a mixture of 10.0 ml of the freshly prepared 50 g/l solution of potassium iodide, 20 ml of water and 40 ml of hydrochloric acid.
One ml of 0.05 M solution of potassium iodate corresponds to 37.56 mg of (C21H38N)2SiF6, which must be from 99.0% to 101.0%.
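Assuming the content is calculated from the difference between the blank and sample titration volumes and the 37.56 mg/ml equivalence stated above, a minimal Python sketch of the assay calculation could look as follows; the numerical values in the example call are illustrative only and are not taken from the paper.

```python
# Hedged sketch of the assay calculation implied by the procedure above.
# Assumptions (not stated explicitly in the text): the content is computed from the
# difference between the blank and sample titration volumes, and the aliquot mass
# follows from the weighed portion and the 25.0/100.0 ml dilution step.
EQUIVALENT_MG_PER_ML = 37.56   # mg of (C21H38N)2SiF6 per ml of 0.05 M KIO3


def assay_percent(v_blank_ml: float, v_sample_ml: float, weighed_mass_mg: float,
                  aliquot_fraction: float = 25.0 / 100.0,
                  correction_factor: float = 1.0) -> float:
    """Percentage content of cetylpyridinium hexafluorosilicate in the substance."""
    consumed_ml = (v_blank_ml - v_sample_ml) * correction_factor
    analyte_mg = consumed_ml * EQUIVALENT_MG_PER_ML
    aliquot_mg = weighed_mass_mg * aliquot_fraction
    return 100.0 * analyte_mg / aliquot_mg


# Illustrative numbers only (not taken from the paper):
print(f"{assay_percent(40.00, 26.75, 2000.0):.1f} %")   # ~99.5 %, within 99.0-101.0 %
```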
Results and Discussion
Cetylpyridinium hexafluorosilicate is a quaternary ammonium salt whose structure is based on the pyridinium residue. Cetylpyridinium hydrochloride is a structural analogue of the compound studied. For its quantitative assessment, the European and British Pharmacopoeias recommend titration with potassium iodate after appropriate sample preparation. First, the solution of potassium iodide in the alkaline medium was added, and the resulting compound was extracted with chloroform. The excess of potassium iodide in the aqueous layer was determined, after acidifying the reaction mixture, by titration with 0.05 M solution of potassium iodate [2,3].
We assumed that, when potassium iodide was added in the alkaline medium, cetylpyridinium hexafluorosilicate formed cetylpyridinium iodide, which has the properties of an ion associate and is readily extracted with chloroform.
We consider that the reaction proceeds by the following mechanism (Scheme 1).
We confirmed that the reaction occurred by exactly this mechanism in the following way. The chloroform layer was carefully evaporated to dryness on a water bath, the residue was suspended in water and acidified with dilute acetic acid to the yellow-green colour of the bromophenol blue indicator. While stirring thoroughly, the reaction mixture was titrated slowly with 0.1 M solution of silver nitrate to an emerald-green colour. In the course of the titration the precipitate dissolved, and a colloidal precipitate of silver iodide gradually formed (Scheme 2).
Some validation characteristics of the method proposed for titration of the cetylpyridinium hexafluorosilicate substance with the indicator fixation of the titration end point were studied according to the requirements of the SPhU [1]. To validate the titration method the experimental batch of the substance was used. The loss on drying was 2.0%. In calculations the content of the active substance was taken equal to 100%.
To reduce uncertainty, the titre of the 0.05 M potassium iodate solution was determined by the method of the SPhU [1]. The metrological characteristics obtained for the titration of the substance were as follows: mean value Z = 100.14%; relative standard deviation Sz = 0.50%; relative confidence interval ΔAs% = t(95%, 7) × Sz = 0.53%; critical value for convergence of results ΔAs% = 1.00%; systematic error δ = 0.17%; criterion of the systematic error insignificance: 1) δ ≤ ΔAs/√g = 0.33, 2) if 1) is not satisfied, then δ ≤ 0.32; overall conclusion: the method is correct. The mean value of 5 parallel titrations was obtained. The value of the correction factor to the nominal concentration of the titrant, KT, was 1.0000 with a relative standard deviation RSD = 0.10% and a confidence interval ∆(titr) = 0.10%. Thus, the results of the titre determination (≤0.2%) comply with the requirements of the SPhU for convergence [1].
The titration was carried out after adding a certain amount of potassium iodide solution, hydrochloric acid and chloroform; therefore, to reduce the error of titration it was appropriate to conduct the blank titration simultaneously.
To determine linearity, samples were prepared for different points (i) of the straight line, corresponding to 80, 85, 90, 95, 100, 105, 110, 115 and 120% of the nominal weight of 200 mg. To study the reproducibility of the results, 5 samples were taken for each point (i) (Tab. 1).
The results obtained were processed by the least squares method. The values Xi, Yi and Zi are given in Tab. 1.
The results of processing the linear dependence by the least squares method are given in Tab. 2 and in the Figure.
As can be seen from Tab. 2, the requirement of simultaneous statistical insignificance of the values |a| and |1−b| is fulfilled for the set of 9 points; this meets the requirements for practical acceptance of the linear dependence.
It should be noted that the systematic error value, both at 80% of the nominal content (δRL,80) and at 120% (δRL,120), does not exceed the maximal value (Tab. 2). The determination limit (DL) and the limit of quantification (LOQ) do not exceed 32%, i.e. they do not significantly affect the quantitative determination (Tab. 2).
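For readers who want to reproduce this kind of linearity processing, the following Python sketch fits the normalized found values Z against the nominal concentrations X by ordinary least squares; the Z values shown are invented for illustration, and acceptance would then be checked against the SPhU criteria quoted above.

```python
import numpy as np

# Nominal points used in the linearity study, % of the nominal weight (200 mg)
x = np.array([80, 85, 90, 95, 100, 105, 110, 115, 120], dtype=float)
# Hypothetical "found" values in normalized coordinates Z, % (illustration only)
z = np.array([80.3, 84.9, 90.2, 95.1, 99.9, 105.4, 109.8, 115.2, 120.1])

# Ordinary least squares: Z = b*X + a
b, a = np.polyfit(x, z, 1)
residuals = z - (b * x + a)
s_residual = residuals.std(ddof=2)        # residual standard deviation
r = np.corrcoef(x, z)[0, 1]               # correlation coefficient

print(f"b = {b:.4f}, a = {a:.2f}, r = {r:.5f}, residual SD = {s_residual:.2f}")
# Acceptance is then checked against the SPhU criteria quoted in the text,
# e.g. statistical insignificance of |a| and |1 - b|, and r not below the minimum value.
```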
From these calculations it is apparent that the maximum permitted value of the complete predicted uncertainty of the analytical procedure is greater than the total calculated uncertainty of the method developed for quantitative determination of the active ingredient in the substance. Therefore, the method of redox titration can be used for quantitative determination of cetylpyridinium hexafluorosilicate with a tolerance of the active substance content of ±1.0%.

CONCLUSIONS

The method for quantitative determination of cetylpyridinium hexafluorosilicate in the substance has been developed using redox titration. When determining the basic validation characteristics of the specified method, it has been found that the requirements for linearity, precision and accuracy are met, and this method can be recommended for use.
Stochastic Memristive Interface for Neural Signal Processing
We propose a memristive interface consisting of two FitzHugh–Nagumo electronic neurons connected via a metal–oxide (Au/Zr/ZrO2(Y)/TiN/Ti) memristive synaptic device. We create a hardware–software complex based on a commercial data acquisition system, which records a signal generated by a presynaptic electronic neuron and transmits it to a postsynaptic neuron through the memristive device. We demonstrate, numerically and experimentally, complex dynamics, including chaos and different types of neural synchronization. The main advantages of our system over similar devices are its simplicity and real-time performance. A change in the amplitude of the presynaptic neurogenerator leads to the potentiation of the memristive device due to the self-tuning of its parameters. This provides an adaptive modulation of the postsynaptic neuron output. The developed memristive interface, due to its stochastic nature, simulates a real synaptic connection, which is very promising for neuroprosthetic applications.
Introduction
The design of compact neuromorphic systems, including micro- and nanochips, capable of reproducing the information and computational functions of brain cells is a great challenge of modern science and technology. Such systems are of interest both for fundamental research in the field of nonlinear dynamics and the synchronization of complex systems [1-7] and for medical applications in devices for monitoring and stimulating brain activity in the framework of neuroprosthetic tasks [8-10]. Due to their importance, memristive devices have recently become the subject of intense research, especially in the area of neuromorphic and neurohybrid applications [11-17]. Neuromorphic technologies are especially relevant for intelligent adaptive automatic control systems (biorobots). It is also worth noting that the construction and creation of electronic neurons and synapses (connections between neurons) based on thin-film memristive nanostructures is a fast-growing area of interdisciplinary research in the development of neuromorphic systems [18-20].
The history of neuromorphic technologies began in the late 1980s with the emergence of computation machines, and since then, significant advances have been achieved in electronics, the physics of micro- and nanostructures, and solid-state nanoelectronics. The careful development of neuron-like electrical circuits made it possible to reproduce basic neural behaviors, such as resting, spiking, and bursting dynamics, as well as more sophisticated regimes, including chaos and multistability [21-25].
A memristive device is usually based on Chua's model [19], an element of an electrical circuit capable of changing its resistance depending on the electrical signal entering its input. In recent decades, various thin-film memristive nanostructures have been created. They are capable of changing their conductivity under the action of a pulsed signal [26,27], which makes the memristor an almost ideal electronic analogue of a synapse [13]. A synapse is known to be a communication channel between neurons that provides unidirectional signal transmission from a transmitting (presynaptic) neuron to a receiving (postsynaptic) neuron. This communication channel ensures the propagation of a nerve impulse along the axon of the transmitting cell.
The synaptic communication results in synchronization of postsynaptic and presynaptic neurons. Neural synchronization was extensively studied using various mathematical models and described in terms of periodic solutions [3,6,[28][29][30][31][32][33][34][35]. Such artificial synapses were implemented as electronic circuits that convert pulses of presynaptic voltage into postsynaptic currents with some synaptic amplification. Different strategies were used for the hardware implementation of synaptic circuits, e.g., an optical interface between electronic neurons [4,5,7].
Recent advances in nanotechnology allowed for miniaturization of artificial synapses by creating memristive nanostructures that mimic dynamics of real synapses. Among various candidates for the role of electronic synapses, memristive devices have a great potential for implementing massive parallelism and three-dimensional integration in order to achieve good efficiency per unit volume [36][37][38]. In this regard, it is important to create a memristor-based neuromorphic system capable of processing neuron-like signals.
Recently, the interaction between electronic neurons through a metal-oxide memristive device was successfully implemented in hardware [39]. The prerequisite for such a device was the study of the interaction of Van der Pol generators via a memristor [40]. Later, a significant effort was invested in theoretical research to study synchronization between neuron-like generators connected through a memristive device [14,41]. However, to the best of our knowledge, experimental studies of the dynamics of FitzHugh-Nagumo (FHN) neurons connected by a memristive synapse have not yet been carried out. We believe that the creation of neuromorphic memristive systems will lead to the production of simple and compact neuroelements based on memristive devices capable of imitating the electrophysiological behavior of real neurons.
At the same time, a memristive device made of metal oxides is of interest not only for experimental research, but also for theoretical studies. Neuromemristive models were found to exhibit complex dynamics, including chaos and chimeras [42,43], the study of which can contribute to the fundamental theory. On the other hand, many theoretical "memristive" neural models reported in the literature have nothing to do with the concept of memristive elements [44]. Therefore, the development of adequate mathematical models that can simulate real laboratory neuromemristive experiments is an actual problem.
Summarizing all the above, significant theoretical investigations of memristors and of the possibility of their use as a part of neuromorphic systems have been performed. In particular, not only were dynamical effects simulated, but also the simplest learning rules were implemented [45-52]. Currently, technologies are being developed to improve the characteristics of memristive devices in order to create reliable memristive networks capable of solving some mathematical tasks [53], classifying images [54-58], etc. [59-61]. Despite impressive theoretical results in the development of neuromorphic memristive systems, the experimental research of laboratory memristive devices, rather than of their substitutes based on transistors or resistors as parts of dynamical systems, has not been carried out because of the high complexity of this task, which requires the cooperation of nanotechnologists, physicists, and neuroscientists.

In this work, we experimentally implement a memristive interface based on a metal-oxide nanostructure that acts as a synaptic interface connecting two electronic FHN neural generators. The interface allows for the analog simulation of adaptive behavior and neural timing effects, which can be associated with synaptic plasticity. We also investigate the stochastic properties of the memristive device. For the first time, to the best of our knowledge, we perform an experimental study of such a memristive neural system and compare experimental results with numerical simulations.
Materials and Methods
In order to simulate neural dynamics, we explored two FHN neuron generators with cubic nonlinearity constructed using diodes [7,22]. The dynamics of the presynaptic FHN neuron was modeled by the normalized equations obtained with the Kirchhoff law [21], in which u1 is the membrane potential of the presynaptic neuron, ν1 is the "recovery" variable related to the ion current, f(u1) = u1 − u1³/3 is the cubic nonlinearity, I1 is the depolarization parameter characterizing the excitation threshold, and ε is a small coefficient. If u1 < 0, the function g(u1) = αu1, and if u1 ≥ 0, g(u1) = βu1 (α and β being the parameters that control, respectively, the shape and location of the ν-nullcline [22]).

The memristive device model was developed based on a standard approach to reflect the dynamical response of a memristor to electrical stimulation. The model describes a change in resistance, similar to potentiation and depression, based on physical laws identified in experiments [62]. The memristor model is given by a complex function (Equation (2)) built on an internal state variable w, which is determined by the fraction of the insulator region occupied by filaments. The change in this state is associated with the migration of oxygen ions (vacancies) over an effective migration barrier Em. In turn, the migration is driven by the Joule heating kT and the applied electric voltage u1. The total current density j through the memristor is the sum of the linear jlin and nonlinear jnonlin components. The former corresponds to ohmic conductivity with resistivity ρ, whereas the latter is determined by the transport of charge carriers through defects in the regions of the insulator not occupied by filaments (including those in the filament rupture region). It was previously found that, in the insulating state of the studied ZrO2-based memristive devices, the current transport is implemented by the Poole-Frenkel mechanism with an effective barrier Eb [62]. The smooth transition between the high- and low-resistance states (HRS and LRS, correspondingly) is determined by the dynamic contribution of the conductive filaments to the total current and, therefore, by the state variable. In Equation (2), b, α1, and A are coefficients derived from experimental data. In our numerical simulations we used Runge-Kutta integration methods for stochastic differential equations in Matlab [63-65].
In order to compare the experimentally observed dynamics of the memristive device with the results of numerical simulations, we needed to take into account the stochasticity of the microscopic processes leading to a change in the internal state w of the dynamical system. Random fluctuations with a normal distribution were added to the energy barrier Em for ion hopping (dispersion 10%), to the energy barrier Eb for electron jumps in the Poole-Frenkel conduction mechanism in the HRS (dispersion 1%), and to the ohmic resistance ρ of the structure in the LRS (dispersion 10%). This led to the scattering of the experimental current-voltage characteristics. The finite spread of the switching voltages is mainly related to the stochasticity of the energy barrier for ions, whereas the change in the resistive states from cycle to cycle is associated with the electron transport stochasticity.
One-way communication between the two neurons through the memristive device was modeled by coupled equations in which d is the equivalent load resistance, j(u1) is the current density through the memristive device, S is the area of the conductive filaments obtained from the experiment, and ε is a small recovery parameter. The signal from the presynaptic neural generator (u1) was sent to the postsynaptic neural generator (u2) through the memristive device.
Thus, the two neurogenerators were connected in such a way that part of the current j(u 1 ) generated by the presynaptic neuron passed through the load resistor, which was connected in series with the memristive device, before reaching the postsynaptic neuron.
The initial conditions and model parameters corresponded to the experimental conditions. In particular, both neural oscillators were initially in a self-oscillatory regime.
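Since Equations (1)-(3) and the memristor parameters are given only in outline here, the following Python sketch should be read as an illustrative stand-in rather than the authors' model: it couples two FitzHugh-Nagumo oscillators in a standard normalized form through a generic first-order "memristive" state variable with additive parameter noise; the Poole-Frenkel current model and all parameter values used in the paper are replaced by assumed placeholders.

```python
import numpy as np


def simulate(T=300.0, dt=0.005, eps=0.1, I=0.5,
             g_min=0.01, g_max=0.12, tau_w=50.0, noise=0.02, seed=0):
    """Two FHN units with unidirectional coupling whose strength is set by a
    noisy first-order state variable w (a crude stand-in for a memristor)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    u1, v1, u2, v2 = 0.1, 0.0, -0.1, 0.0
    w = 0.0                                   # internal memristive state in [0, 1]
    trace = np.empty((n, 3))
    for k in range(n):
        # presynaptic and postsynaptic FitzHugh-Nagumo units (standard normalized form)
        du1 = (u1 - u1**3 / 3.0 - v1 + I) / eps
        dv1 = u1 + 0.7 - 0.8 * v1
        g = g_min + (g_max - g_min) * w       # conductance set by the memristive state
        du2 = (u2 - u2**3 / 3.0 - v2 + I + g * (u1 - u2)) / eps
        dv2 = u2 + 0.7 - 0.8 * v2
        # Euler step for the neurons, Euler-Maruyama step for the noisy state variable
        u1 += du1 * dt; v1 += dv1 * dt
        u2 += du2 * dt; v2 += dv2 * dt
        drift = (np.tanh(abs(u1)) - w) / tau_w
        w = float(np.clip(w + drift * dt + noise * np.sqrt(dt) * rng.standard_normal(),
                          0.0, 1.0))
        trace[k] = (u1, u2, w)
    return trace


if __name__ == "__main__":
    out = simulate()
    print("final memristive state w =", round(float(out[-1, 2]), 3))
```

The point of the sketch is only the coupling scheme: a presynaptic voltage drives both the postsynaptic unit and the slow, noisy state variable that, in turn, sets the coupling conductance, which is the qualitative behavior described for the laboratory interface.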
The designed neuromorphic circuit consisted of an FHN electronic circuit, a memristive device formed by the thin-film metal-oxide-metal nanostructure based on yttria-stabilized zirconia (Au/Zr/ZrO2(Y)/TiN/Ti) [66], and a load resistor (Figure 1a). This memristive interface operated as follows. The electronic FHN neuron generated a pulse signal that affected the memristive device and thus modulated the oxidation and recovery of conductive filaments in the oxide film of the memristive device. The analog electronic FHN neuron consisted of the following blocks: an oscillatory contour unit, a nonlinearity unit, and an amplifier unit (see Figure 1b). The detailed design of this device is described in [7,22]. The FHN neural generator demonstrates the main qualitative features of neurodynamics: the presence of an excitability threshold and the existence of resting and spiking regimes. These regimes were controlled using a potentiometer. The spiking frequency was varied in the range of 10-150 Hz, the spike duration in the range of 10-25 ms, and the spike amplitude u1 in the range of 1-6 V.
In this work, we used a National Instruments USB-6212 data acquisition system, which consists of a digital-to-analog converter (DAC) and two analog-to-digital converters (ADCs). The data acquisition system was controlled using LabVIEW software. The prerecorded neuron-like signal was applied to the memristive device via the DAC at a sampling frequency of 5 kHz. The ADCs recorded the voltage drops across the memristive device and the load resistor, which made it possible to calculate the memristive device resistance in real time. The potential difference across the memristive device (Rm) and the load resistor (R2) was digitized at a sampling frequency of 10 kHz. Matlab was used to analyze the results.
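The real-time resistance calculation mentioned above follows directly from Ohm's law for the series connection; a minimal sketch is given below, where the load-resistor value is an assumed placeholder and the voltage samples are fabricated for illustration.

```python
import numpy as np


def memristor_resistance(v_mem, v_load, r_load):
    """Series circuit: the current through the load resistor, i = v_load / r_load,
    also flows through the memristive device, so R_m = v_mem / i."""
    i = np.asarray(v_load, dtype=float) / r_load
    r_m = np.full_like(i, np.nan)
    # avoid division by ~0 between spikes, where essentially no current flows
    np.divide(v_mem, i, out=r_m, where=np.abs(i) > 1e-9)
    return r_m


# Illustrative call with fabricated voltage samples; r_load = 10 kOhm is an assumed placeholder.
v_mem = np.array([0.8, 1.5, 2.1])
v_load = np.array([0.02, 0.05, 0.11])
print(memristor_resistance(v_mem, v_load, r_load=1.0e4))
```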
After testing and tuning, the neuron-like oscillators were connected through the memristive device. Both analog neurogenerators operated in the oscillatory regime. Under the action of the neuron-like signal, the memristive device changed its state from high-resistive to low-resistive. The amplitude of the presynaptic neuron was adjusted by the potentiometer in order to obtain a frequency-locking regime between the two oscillators.
Figure 1. The system description: (a) block diagram of the interaction between presynaptic (u1) and postsynaptic (u2) electronic neurons through a memristive device. The neurons are initially in an oscillatory regime. The output of the presynaptic neuron is increased during the experiment; (b) analog electrical circuit of the FitzHugh-Nagumo neuron. The inductance is implemented by the circuit with an operational amplifier, the cubic nonlinearity is set using diodes D1-D6, capacitor C2 is related to the capacitance of the neuron membrane, and potential V1 is associated with an equilibrium controlled by the power source.
Results and Discussion
The output signal of the presynaptic electronic neuron is shown in Figure 2a. This signal is applied to the memristive device. The used neuron-like signal (u 1 ) is asymmetric (the minimum voltage is −5 V and the maximum voltage is 4 V) due to the asymmetry of the current-voltage characteristic (I-V curves) of the memristive device. For a more detailed study of the effect of the neuron-like signal on the memristive device, the curve in Figure 2a is visually divided into four intervals with different colors. Each interval corresponds to a specific fragment of the I-V curves in Figure 2b. The I-V curves in Figure 2b display the switching between LRS and HRS. The RESET process (switching from LRS to HRS) occurs with a positive voltage and SET (switching from HRS to LRS) with a negative voltage. The scattering of the I-V curves in Figure 2b results from random fluctuations applied to the memristor parameters E m , E b , and ρ. Figure 2c demonstrates the increase in the amplitude of the presynaptic neuron from 1.558 V to 4 V. Figure 2d shows that, even when exposed to a small amplitude signal of 2 V (purple curve), the memristive device can switch from HRS to LRS.
The laboratory memristor demonstrates different responses to an input signal with a small stochastic spread. Figure 2d shows that, for one curve in Figure 2c, with the yellow curve used as an example, the numerical memristor model yields 10 possible curves (also a yellow color) with a small spread. The I-V curves in Figure 2d illustrate the effect of stochastic switching in the memristor response to the voltage signals of the corresponding amplitudes. Since memristor conductivity is adaptively changed according to the input signal, the memristive device demonstrates the property of plasticity.
There is a threshold value of the amplitude (u1) of the neuron-like signal at which the memristor state switches at each spike. At high amplitudes of the input signal (u1), the system enters a state of extreme resistance and does not respond to each spike anymore. The memristive device remains in this state. The switching degree strongly depends on the internal changes in the memristive device related to the interrelated transport phenomena in oxide dielectrics, due to electric potential gradients, ion concentration, and local heating [67,68]. These processes result in the partial recovery and oxidation of conducting filaments in the oxide film. The corresponding dynamical change in conductivity is limited by the applied voltage and leads to the modulation of the strength of neuron coupling and different types of synchronization. In the course of the study, the optimal coupling strength was z = j(u1)SR = 0.02-0.06 for 1:1 frequency-locking (Figure 3c) and z = 0.06-0.095 for intermittent synchronization (Figure 3d).
The experiments show that, when the amplitude of the presynaptic neuron u1 is varied from 1.6 to 2 V, the oscillation frequencies of the coupled neurons are locked either as 2:1 (Figure 4a) or 3:1 (Figure 4b), i.e., the presynaptic neuron u1 fires the postsynaptic neuron u2 twice or thrice. This ratio can change randomly when chaotic synchronization is reached at higher voltage amplitudes. Although the phase portraits obtained numerically and experimentally do not completely match, the experiment confirms the diversity of phase-locking regimes predicted by the model. Moreover, our model demonstrates dynamics close to the experimentally observed one, despite being a first-order memristor model, provided that the stochasticity of switching is accounted for.

The stochasticity is an inalienable property of resistive-switching devices, enabling the so-called stochastic plasticity used to mimic neural synchrony in a simple electronic cognitive system [69]. To the best of our knowledge, the present work is the first attempt to study this important phenomenon both numerically and experimentally. In our case, the stochasticity is modeled through the introduction of fluctuations in the model parameters in a way similar to [70]. Recently, Agudov et al. [71] developed a more generic stochastic model of a memristive device that can be further used to adequately describe the observed complex dynamics of the proposed memristive interface. Another option is to use deterministic, but at the same time higher-order, memristor models based on two or more state variables in order to simulate the experimentally observed intermittency route to chaos [72].
Conclusions
In this work, we have studied the dynamics of two FitzHugh-Nagumo neuron generators coupled through a memristive device of a metal-oxide type that adapts the synaptic connection according to the amplitude of the presynaptic neuron oscillations. The stochastic switching of the memristive device from a high-resistance state to a low-resistance state is achieved by the variation of its internal parameters. Therefore, the memristive synaptic device demonstrates the property of stochastic plasticity. Different synchronous regimes were observed, including 1:1, 2:1, and 3:1 frequency-locking, intermittent synchronization, and more complex dynamics. Its relative compactness and high sensitivity make the proposed neuromemristive device very promising for biorobotics and other bioengineering applications [73].
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
Simultaneous Imaging of CBF Change and BOLD with Saturation-Recovery-T1 Method
A neuroimaging technique based on the saturation-recovery (SR)-T1 MRI method was applied for simultaneously imaging blood oxygenation level dependence (BOLD) contrast and cerebral blood flow change (ΔCBF), which is determined by CBF-sensitive T1 relaxation rate change (ΔR1 CBF). This technique was validated by quantitatively examining the relationships among ΔR1 CBF, ΔCBF, BOLD and relative CBF change (rCBF), which was simultaneously measured by laser Doppler flowmetry under global ischemia and hypercapnia conditions, respectively, in the rat brain. It was found that during ischemia, BOLD decreased 23.1±2.8% in the cortical area; ΔR1 CBF decreased 0.020±0.004s-1 corresponding to a ΔCBF decrease of 1.07±0.24 ml/g/min and 89.5±1.8% CBF reduction (n=5), resulting in a baseline CBF value (=1.18 ml/g/min) consistent with the literature reports. The CBF change quantification based on temperature corrected ΔR1 CBF had a better accuracy than apparent R1 change (ΔR1 app); nevertheless, ΔR1 app without temperature correction still provides a good approximation for quantifying CBF change since perfusion dominates the evolution of the longitudinal relaxation rate (R1 app). In contrast to the excellent consistency between ΔCBF and rCBF measured during and after ischemia, the BOLD change during the post-ischemia period was temporally disassociated with ΔCBF, indicating distinct CBF and BOLD responses. Similar results were also observed for the hypercapnia study. The overall results demonstrate that the SR-T1 MRI method is effective for noninvasive and quantitative imaging of both ΔCBF and BOLD associated with physiological and/or pathological changes.
Introduction
Cerebral blood perfusion through the capillary bed is essential for brain function. Imaging of cerebral blood flow (CBF) provides valuable information regarding brain physiology, function, activation and tissue viability associated with a large number of brain diseases. Arterial spin labeling (ASL), a noninvasive MRI technique which utilizes the radiofrequency (RF) pulse to label the flowing arterial water spin as an endogenous and diffusible tracer for imaging CBF [1][2][3][4][5][6][7], plays an ever-growing role in scientific and clinical research. An inversion-recovery preparation is commonly applied for most ASL methods, and paired images (one control and another with spin tagging) are acquired with an appropriate inversion recovery time. The signal difference between the paired images can be used to determine the CBF value and it is particularly useful for imaging relative CBF changes, for instance, induced during brain activation [8,9] and/or physiology/pathology perturbations.
An alternative MRI-based CBF imaging method is to measure the apparent longitudinal relaxation time (T1app) directly by using an inversion-recovery preparation with varied inversion-recovery time (TIR) [2,6,10]. Because of the slow T1app relaxation process, this method requires a relatively long repetition time (TR) to acquire a series of images with different TIR values in order to generate T1app images, and thus results in a low temporal resolution for imaging CBF. Nevertheless, this method should be robust in quantifying CBF and its change on an absolute scale with the unit of ml blood/g brain tissue/min (or ml/g/min), owing to a simple relationship between T1app and CBF (see more details in Method and Theory).
One common and interesting observation related to T1app changes reported in the literature is the detection of a T1app increase at an early stage of ischemia, indicating a possible link between the brain tissue T1app change and the perfusion deficit caused by the ischemia [11-14]. However, the quantitative relationship between the observed T1app change and the extent of CBF reduction caused by acute ischemia has not been rigorously and quantitatively studied. A potential difficulty in quantifying CBF via the measurement of T1app lies in the confounding effect of brain tissue temperature change owing to physiological and/or pathological perturbation, which can also contribute to the T1app change [15-18]. Consideration and correction of this confounding effect might improve the accuracy of quantifying CBF based on the T1app measurement. Another layer of complexity is the phenomenon of concurrent changes in both perfusion and the blood oxygenation level dependence (BOLD) contrast [19,20] during either a physiological perturbation (e.g., brain stimulation) or a pathological perturbation (e.g., hypoxia via ischemia); these changes can affect the MRI intensity either via a T1app-based mechanism owing to the perfusion change or via a transverse (or apparent transverse) relaxation time (T2/T2*)-based mechanism owing to the BOLD contrast.
To address these issues, we conducted a study aiming to: i) develop a robust neuroimaging approach to simultaneously measure and image the CBF change and the BOLD contrast by using the saturation-recovery (SR)-T 1 MRI method; ii) validate this approach by conducting simultaneous in vivo measurements of CBF change using the SR-T 1 MRI method and the relative CBF change using laser Doppler flowmetry (LDF) recording under transient hypercapnia (increasing CBF) and acute ischemia (reducing CBF) conditions using a rat model at 9.4T; iii) investigate the effect of brain temperature change on T 1 app and the apparent longitudinal relaxation rate (R 1 app = 1/T 1 app ) after the induction of hypercapnia or ischemia (see S1 Supporting Information); iv) establish the quantitative relationship between the CBF-sensitive R 1 change (ΔR 1 CBF ) and the CBF change (ΔCBF); and v) quantitatively study the temporal relationships among R 1 CBF change, ΔCBF, relative CBF change and BOLD during and after hypercapnia or ischemia.
Method and Theory
The SR-T1 MRI pulse sequence for imaging T1app

The MRI perfusion method described herein relies on quantitative T1app mapping using a magnetization saturation-recovery preparation without slice selection and fast gradient-echo echo-planar imaging (GE-EPI) sampling [21]. The regional saturation-recovery preparation is confined by using an RF surface coil with a focal, intense RF field (B1) covering the rat brain [22]. Fig 1 is the schematic diagram showing the imaging pulse sequence and the principle underlying the SR-T1 MRI method. The magnetization saturation preparation is achieved by an adiabatic half-passage 90° RF pulse, which is insensitive to the inhomogeneous B1 field of a surface coil, immediately followed by dephasing gradients (GDephase) in three dimensions (Fig 1a). The adiabatic 90° pulse rotates the longitudinal magnetization (Mz) into the transverse plane in the spin-rotating frame, resulting in a transverse magnetization (Mxy) with the same magnitude as Mz. The Mxy rapidly loses its phase coherence because of the strong dephasing effect of GDephase. The overall effect of the magnetization saturation preparation in combination with the dephasing gradients is to approach zero magnetization for both the Mz and Mxy components (Fig 1b). This magnetization preparation is independent of the initial Mz value or its reduction from the previous scan owing to the partial magnetization saturation effect when a relatively short TR is applied. Therefore, no extra delay before the adiabatic 90° pulse is needed. This feature can significantly shorten the total image acquisition time and improve the temporal resolution for imaging T1app as compared to the conventional inversion-recovery preparation, in which the net magnitude of the inverted Mz depends on the initial Mz prior to the inversion pulse and on the effect of partial saturation as a function of TR; thus, a relatively long TR is preferred for each TIR measurement. During the period of saturation-recovery time (TSR), the longitudinal magnetization starts to relax and recover approximately according to an exponential function (Fig 1c). The recovered Mz after a period of TSR is rotated to the transverse plane again by a spin excitation RF pulse with a nominal 90° flip angle, and then sampled by EPI acquisition. There is no extra delay time between the imaging acquisition and the next saturation pulse. This imaging measurement is repeated n times with varied TSR.

Figure 1 caption (panels b-d): (b) Schematic diagram illustrating the principle underlying the imaging sequence in the spin-rotating frame. It shows the rotation of the longitudinal magnetization (Mz), the dephasing of the transverse magnetization (Mxy) and its evolution after TSR and the spin excitation pulse followed by GE-EPI acquisition. (c) Exponential recovery curve of the GE-EPI signal intensity (SI) as a function of TSR. The regression of this curve determines the apparent T1 (T1app) value that is sensitive to perfusion. (d) Schematic diagram of the two-phase model incorporated with the SR-T1 method developed in this study. The regional saturation zone (gray shaded area) achieved by a surface RF coil is overlapped on a rat brain sagittal anatomic image. The white arrows stand for the Phase 1 arterial spins in the saturated region traveling into the image slice within the time window ttran (i.e., when TSR < ttran). The gray arrows indicate the unsaturated Phase 2 arterial blood spins flowing into the image slice when the traveling time is longer than or equal to ttran (i.e., when TSR ≥ ttran).
The detected EPI signal intensity (SI), without considering the perfusion effect on T1app, obeys the following equation:

SI = SI0 · exp(−TE/T2*) · [1 − exp(−TSR/T1app)]   (Eq 1)

where TE is the spin echo time and T2* is the apparent transverse relaxation time, which is sensitive to magnetic field inhomogeneity and susceptibility effects, such as the BOLD contrast; SI0 is the EPI signal intensity when TE = 0 and TSR = ∞. This equation can be used for T1app regression based on a number of SI measurements with varied TSR. When TSR is sufficiently long (e.g., ≥5 T1app as applied in this study) and TE > 0, the second term in (Eq 1) approaches one, and (Eq 1) becomes:

SI* = SI0 · exp(−TE/T2*)   (Eq 2)

Under this condition, SI* is determined by the T2* relaxation process and becomes independent of T1app; thus, this signal can be used to quantify the "true" BOLD contrast without a confounding effect from the saturation effect caused by the perfusion contribution [19,20,23]. Moreover, the addition of the SI* measurement with a long TSR, and thus a long TR, is also critical to improve the reliability and accuracy of the T1app regression, which is essential for determining the absolute CBF change, though it leads to a relatively low temporal resolution for simultaneously obtaining T1app and BOLD images.
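Given the signal model reconstructed in (Eq 1), the per-pixel T1app regression can be illustrated with a short Python sketch using nonlinear least squares on synthetic data; the TSR values, noise level and "true" R1app below are assumptions for illustration, not the acquisition parameters of this study.

```python
import numpy as np
from scipy.optimize import curve_fit


def sr_signal(t_sr, s_inf, r1_app):
    """Saturation-recovery signal at fixed TE: SI(T_SR) = S_inf * (1 - exp(-T_SR * R1_app)),
    where S_inf absorbs SI0 * exp(-TE/T2*)."""
    return s_inf * (1.0 - np.exp(-t_sr * r1_app))


# Synthetic example: assumed "true" R1_app = 0.55 s^-1 (T1_app ~ 1.8 s), plus noise
rng = np.random.default_rng(1)
t_sr = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 9.0])          # seconds
si = sr_signal(t_sr, 100.0, 0.55) + rng.normal(0, 1.0, t_sr.size)

(p_sinf, p_r1), _ = curve_fit(sr_signal, t_sr, si, p0=(si.max(), 0.5))
print(f"fitted R1_app = {p_r1:.3f} s^-1, T1_app = {1.0 / p_r1:.2f} s")
```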
In this study, two brain conditions are defined: the subscript "RC" stands for the Reference Condition (i.e., control) before the induction of the physiological/pathological perturbation, and the subscript "PC" stands for the Perturbed Condition after the induction of either hypercapnia (hyper-perfusion) or ischemia (hypo-perfusion). Accordingly, the BOLD contrast can be quantified from the SI* values measured under the two conditions (Eq 3), where rBOLD stands for the relative BOLD.
Two-phase arterial spin modeling of the SR-T1 MRI method
The Bloch equation describes the dynamic behavior of brain water magnetization (Eq 4), where Ma, Mb and Mv are the longitudinal water magnetizations of the arterial blood, brain tissue and venous blood, respectively; Mb0 is the equilibrium value of Mb; T1 is the brain tissue water longitudinal relaxation time in the absence of blood flow; and f represents the CBF value. Associated with the proposed experimental MR preparation and acquisition, we solve (Eq 4) with a two-phase arterial spin model as illustrated in Fig 1d, which shows the schematic graph of the global brain region (shaded gray area) saturated by the RF coil overlapped on a rat brain sagittal anatomic image. Phase 1 represents the time window in which the image slice receives the inflowing saturated arterial spins from within the regional saturation region (white arrows in Fig 1d); Phase 2 characterizes the time window when the fresh, fully relaxed arterial spins located outside of the saturation region (gray arrows in Fig 1d) flow into the image slice. Assuming the arterial blood spins travel smoothly as bulk flow at a constant speed without any turbulence, the fresh spins at the edge of the saturation region will take the arterial transit time (ttran) to reach the image slice. For the SR-T1 measurement with the boundary condition of Mb(t = 0) = 0, the solution of (Eq 4) for Phase 1 (t < ttran) is given by (Eq 5), where T1a is the longitudinal relaxation time of arterial blood and the apparent relaxation rate is

1/T1app = R1app = 1/T1 + f/λ + R1temp   (Eq 6)

where λ (= 0.9 ml/g) is the brain-blood partition coefficient and T1temp (R1temp) is the contribution to the longitudinal relaxation time (rate) caused by brain temperature alteration during the physiological or pathological perturbation (see S1 Supporting Information). During Phase 2, the fully relaxed arterial spins in the blood outside the saturation region flow into the rat brain, approach the image plane and exchange with the brain tissue water spins when t ≥ ttran. The solution of (Eq 4) for Phase 2 (t ≥ ttran), with the initial condition given by (Eq 5) at the boundary t = ttran, is given by (Eq 7), where CA and CB are constants equal to A and B in (Eq 5), respectively, at t = ttran; they reflect the boundary condition and ensure the continuity of the function between Phase 1 and Phase 2. Therefore, when the saturation-recovery time is shorter than ttran (Phase 1), the magnetization of the brain tissue water relaxes following (Eq 5), whereas when the saturation-recovery time is longer than or equal to ttran it relaxes according to (Eq 7). It is clear that the brain magnetization recovery with a long saturation-recovery time of TSR ≥ ttran (Phase 2) follows a single exponential with relaxation time T1app (Eq 7), while the magnetization recovery in Phase 1 (TSR < ttran) is in theory influenced by both T1app and T1a (see (Eq 5)).
Close examination of (Eq 5) for Phase 1 shows that the signal recovery described by the term A depends only on T1app, whereas the signal recovery of the term B relies on both T1app and T1a. A simulation study comparing the relative contributions of the A and B terms was conducted using (Eq 5) and the parameters relevant to this study (see S1 Supporting Information); it shows that the term B in (Eq 5) is less than 4% of the term A within a reasonable arterial transit time range (100-500 ms) in the rat brain [1,24-26]. Therefore, a single exponential recovery according to the term A is a rational approximation for the rat brain application during Phase 1 (T_SR < t_tran), since the magnetization contribution from the term B is negligible, and the recovery functions for Phase 1 and Phase 2 can thus be unified into a single exponential recovery function governed by T1app. In summary, single exponential fitting of T1app based on multiple SR-T1 MRI measurements with varied T_SR values provides a simple approach and good approximation for imaging T1app or R1app, which can be quantitatively linked to CBF according to (Eq 6).
CBF-induced R1 change (ΔR1CBF) and CBF change (ΔCBF)
The T1 and R1 terms in (Eq 6) represent the intrinsic brain tissue longitudinal relaxation time and rate, respectively; they are usually insensitive to physiological changes and can be treated as constants. The R1app difference between the reference and perturbed conditions therefore becomes:

ΔR1app = ΔCBF/λ + ΔR1temp   (Eq 8)

where ΔCBF = CBF_PC − CBF_RC. Thus, the CBF change (ΔCBF) between the perturbed and reference conditions can be calculated from the following equation:

ΔCBF = λ·ΔR1CBF = λ·(ΔR1app − ΔR1temp)   (Eq 9)

where ΔR1CBF represents the R1 change that is solely attributed to the CBF change induced by the physiopathological perturbation; it can be imaged by the SR-T1 MRI method in three steps: i) image the brain MRI SI as a function of T_SR during both the control and perturbed conditions, and determine the T1app value in each image pixel by exponential regression of the measured SI as a function of T_SR according to (Eq 1); ii) subtract the control R1app (= 1/T1app) value from the perturbed R1app value, yielding ΔR1app; iii) determine ΔT1temp or ΔR1temp caused by the perturbation-induced brain temperature change (see S1 Supporting Information), then calculate ΔR1CBF and ΔCBF according to (Eq 9). The unit of CBF in (Eq 9) is ml/g/second, which can be converted to the conventional unit of ml/g/min by multiplying by 60.
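A hedged numerical sketch of these three steps is shown below; the R1app values and the (here negligible) temperature correction are illustrative assumptions, not measured results.

```python
# Steps i)-iii): dR1app -> temperature correction -> dCBF = lambda * dR1_CBF
LAMBDA = 0.9          # ml/g, brain-blood partition coefficient

def delta_cbf_ml_g_min(r1app_pc, r1app_rc, delta_r1_temp=0.0):
    """Eq 8/9: dR1app = dCBF/lambda + dR1temp  ->  dCBF = lambda*(dR1app - dR1temp)."""
    delta_r1_app = r1app_pc - r1app_rc           # step ii)
    delta_r1_cbf = delta_r1_app - delta_r1_temp  # step iii), temperature-corrected
    return LAMBDA * delta_r1_cbf * 60.0          # ml/g/s -> ml/g/min

# Example: R1app rises from 1/2.30 to 1/2.25 s^-1 under hypercapnia (illustrative)
print(f"dCBF = {delta_cbf_ml_g_min(1/2.25, 1/2.30):.2f} ml/g/min")
```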
Materials and MRI Measurements

Animal preparation and experiment design
All animal experiments were conducted according to the National Research Council's Guide for the Care and Use of Laboratory Animals and under the protocols approved by the Institutional Animal Care and Use Committee of University of Minnesota. Twelve male Sprague-Dawley rats weighing 328 ± 35 g were included in this study. The rat was initially anesthetized and intubated using 5% (v/v) isoflurane in N 2 O:O 2 (60/40) gas mixture. Both femoral arteries and left femoral vein were catheterized for physiological monitoring and blood sampling. Five rats were used for simultaneous MRI/LDF/temperature measurements. The LDF/Temperature instrument (Oxford Optronix, UK) was used to concurrently measure the percentage change of CBF or the relative CBF change that is defined as rCBF = CBF PC /CBF RC and the brain temperature change in the cortical region in one hemisphere by inserting the LDF/Temperature probe (0.5 mm diameter) into the brain tissue through a small hole (3×3 mm 2 ) passing both skull and dura (1.5-4 mm lateral, 1.5-3 mm posterior to the bregma, 1.9 mm deep). The soft tissue around the hole was kept to minimize magnetic susceptibility artifacts in MRI. After the surgical operation, the rat was placed in a home-built cradle incorporating ear bars and a bite bar to reduce head movement and to ensure proper positioning inside the MRI scanner. The animal anesthesia was maintained at 2% isoflurane. Rectal temperature was maintained at 37.0 ±0.5°C by a circulating/heating water blanket and the rate and volume of ventilation were adjusted to maintain normal blood gases. Mild transient hypercapnia was induced in eight of the twelve rats used in this study by ventilating the gas mixture of 10% CO 2 , 2% isoflurane and 88% N 2 O:O 2 (60/40) for 7 minutes; three of the eight rats were used for simultaneous measurements of ΔR 1 CBF , ΔCBF and BOLD using the SR-T 1 MRI method, and rCBF and temperature (T) change using the LDF/Temperature probe; and other five rats were used to conduct the MRI experiments only.
All twelve rats underwent a 1-minute occlusion of the two carotid arteries to achieve acute, global brain ischemia using the four-blood-vessel-occlusion rat model [27]. The transient hypercapnia experiment was performed first, followed by the acute ischemia experiment, and the rats were sacrificed at the end of the experiment by KCl injection to induce cardiac arrest. There was an adequately long waiting time between these studies to ensure stable animal conditions prior to each perturbation and measurement. The SR-T1 GE-EPI data were acquired for two minutes prior to each perturbation: transient hypercapnia (7 minutes), acute ischemia (1 minute) or KCl injection to induce cardiac arrest. This control (or pre-perturbation) imaging acquisition period is defined as Stage 1. The period during either the transient hypercapnia or acute ischemia perturbation is defined as the perturbation stage, or Stage 2. Finally, the relatively long post-perturbation period was divided into three stages (early Stage 3, middle Stage 4 and late Stage 5).
MRI measurement
All MRI experiments were conducted on a 9.4T horizontal animal magnet (Magnex Scientific, Abingdon, UK) interfaced to a Varian INOVA console (Varian, Palo Alto, CA, USA). A butterfly-shaped 1H surface coil (2.8×2.0 cm, with the short axis parallel to the animal spine) was used to collect all MRI data. Scout images were acquired using a turbo fast low angle shot (TurboFLASH) imaging sequence [28] with the following acquisition parameters: TR = 10 ms, TE = 4 ms, slice thickness = 2 mm, field of view (FOV) = 3.2 cm×3.2 cm, image matrix size = 128×128.
The magnetization saturation of water spin inside the rat brain was achieved by using the local B 1 field of the RF surface coil and the adiabatic 90°RF pulse followed by three orthogonal dephasing gradients. GE-EPI (TE = 21 ms; FOV = 3.2cm×3.2cm; image matrix size = 64×64; single slice coronal image with 2 mm thickness) combined with the saturation-recovery preparation was used to image T 1 app with seven T SR values of 0.004, 0.1, 0.2, 0.3, 0.4, 0.5 and 10 s, which resulted in a temporal resolution of 11.9 s for obtaining one set of T 1 app and BOLD images. This SR-T 1 GE-EPI imaging sequence (see Fig 1a) was applied to: i) measure ΔR 1 CBF resulting from either hypercapnia (CBF increase) or acute ischemia (CBF reduction) compared to the control condition; ii) determine the relationship between brain temperature change and ΔR 1 temp immediately after the cardiac arrest (i.e., CBF = 0) with a KCl bolus injection (see S1 Supporting Information for details); and iii) determine ΔCBF values and then compare and correlate the values with the LDF measurement results.
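The sketch below outlines the order of operations within one SR-T1 acquisition cycle; the helper functions are placeholders rather than a real scanner API, and per-image overhead is ignored.

```python
# Sketch of one SR-T1 acquisition cycle (seven saturation-recovery times per
# T1app/BOLD image set). saturate/wait/acquire_epi are hypothetical callables.
T_SR_VALUES = [0.004, 0.1, 0.2, 0.3, 0.4, 0.5, 10.0]   # seconds

def acquire_t1app_image_set(saturate, wait, acquire_epi):
    """Run one cycle: global saturation -> recovery delay T_SR -> GE-EPI readout."""
    images = []
    for t_sr in T_SR_VALUES:
        saturate()          # adiabatic 90-degree pulse + dephasing gradients
        wait(t_sr)          # saturation-recovery delay
        images.append(acquire_epi())
    return images

# The recovery delays alone sum to ~11.5 s, consistent with the quoted 11.9-s
# temporal resolution once the saturation and EPI readout times are added.
print(sum(T_SR_VALUES))
```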
Data analysis
MRI data analysis was performed using the STIMULATE software package (Stimulate, Center for Magnetic Resonance Research, University of Minnesota, USA) [29] and the Matlab software package (The Mathworks Inc., Natick, MA, USA). LDF data were sub-sampled to match the corresponding MRI sampling rate and processed with home-written Matlab programs. Both region of interest (ROI) and single pixel MRI data taken from the rat sensory cortical region were used to perform the T1app regression analysis and to determine ΔR1app, ΔR1temp (see S1 Supporting Information for details) and ΔCBF according to Eqs (8) and (9). The least-squares nonlinear curve-fitting program in the Matlab software was applied to perform the T1app regression analysis. The regression accuracy was estimated by the sum squared error (sse) and the square of the regression coefficient (R2).
To improve the quantification accuracy, the GE-EPI data were averaged within each stage as defined above and then applied to calculate the averaged values of ΔR1app, ΔR1temp and ΔCBF based on the transient hypercapnia or acute ischemia measurement. The reference control CBF (CBF_RC) was further estimated from the averaged ΔCBF values and the corresponding relative CBF changes (rCBF) measured by LDF under the ischemia condition and during the reperfusion period after the acute ischemia. ROIs (ranging from 24 to 52 pixels) were chosen from the cortical brain region in the intact hemisphere with the location being approximately contralateral to the LDF recording side for those experiments performing simultaneous MRI and LDF/Temperature measurements in order to avoid the MRI susceptibility artifacts caused by the LDF/Temperature probe. The GE-EPI data acquired with the longest T_SR of 10 s (i.e., ≈5T1) were used to calculate BOLD according to (Eq 3).
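The following minimal sketch illustrates the stage-wise averaging and the rBOLD calculation according to (Eq 3); the array shapes, stage boundaries and signal values are hypothetical placeholders rather than the actual data.

```python
# Stage-wise averaging of the fully relaxed (T_SR = 10 s) signal and Eq 3
import numpy as np

rng = np.random.default_rng(1)
si_star = 100 + rng.normal(0, 1, 60)            # SI* for 60 repetitions in one ROI (a.u.)
stages = {"control": slice(0, 10), "perturbation": slice(10, 45)}

si_rc = si_star[stages["control"]].mean()       # reference condition average
si_pc = si_star[stages["perturbation"]].mean()  # perturbed condition average
r_bold = si_pc / si_rc                          # Eq 3: relative BOLD
print(f"rBOLD = {r_bold:.3f}  (BOLD change = {100*(r_bold-1):.1f}%)")
```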
The R 1 app images at the control stage and a series of ΔCBF and BOLD images measured during and post hypercapnia and/or ischemia stages were created on a pixel-by-pixel basis (pixel size 0.25×0.25×2 mm 3 , with nearest neighbor interpolation) with two-dimensional median filtering and then overlapped on the anatomic image. Paired t-test was applied to compare the T 1 app values measured at reference and perturbation conditions obtained from either ROIs or single pixel, as well as to compare the regressed T 1 app values using ROI or single pixel data under a given condition. A p value of < 0.05 is considered as statistically significant.
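A simplified sketch of the pixel-wise post-processing and the paired t-test comparison is given below; the map dimensions, the filter kernel size and the T1app values are illustrative assumptions only.

```python
# 2D median filtering of a parametric map and a paired t-test on T1app values
import numpy as np
from scipy.ndimage import median_filter
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
delta_cbf_map = rng.normal(0.5, 0.2, (64, 64))          # ml/g/min, hypothetical map
delta_cbf_map = median_filter(delta_cbf_map, size=3)    # 2D median filtering before overlay

# Paired t-test of T1app values (s) from the same ROI under control vs. perturbation
t1_control = np.array([2.31, 2.29, 2.30, 2.32, 2.28])
t1_hypercap = np.array([2.26, 2.24, 2.27, 2.25, 2.23])
t_stat, p_value = ttest_rel(t1_control, t1_hypercap)
print(f"p = {p_value:.4f}  (significant if p < 0.05)")
```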
Reliability and sensitivity of T 1 app measurement using the SR-T 1 MRI method
The averaged T1app value measured using the SR-T1 MRI method in the rat cortex region under the normal physiological condition was 2.30±0.03 s (n = 12) at 9.4T. Fig 2 demonstrates a representative, single SR-T1 GE-EPI measurement under the control, hypercapnia and ischemia conditions, respectively, and the T1app regression results based on ROI (Fig 2a) and single pixel (Fig 2b) data analysis without signal averaging. All the experimental data fitted well with an exponential function (R2 ≥ 0.99 and sse < 2×10^-4). The T1app regression curves and the fitted T1app values measured under the control, hypercapnia and ischemia conditions were distinguishable and highly reproducible, and the results between the ROI and single pixel data analysis were consistent (Fig 2). For instance, no statistical difference was found between the T1app values obtained from the ROI analysis versus single pixel analysis under either the hypercapnia (p = 0.93, 12 image volumes using paired t-test) or the ischemia (p = 0.83, 5 image volumes using paired t-test) condition. These results reveal a high reliability of the proposed MRI method for imaging T1app and its change down to the pixel level, and this reproducibility is crucial in generating reliable T1app maps. Moreover, the determined T1app values under the hypercapnia and ischemia perturbations were statistically different from the control T1app value (p<0.01), indicating that the T1app relaxation process is sensitive to the perfusion changes induced by physiological/pathological perturbations. It is worth noting that adding more T_SR points in an intermediate T_SR range (e.g., a few seconds) could further improve the fitting accuracy of the T1app (or R1app) measurement, with a tradeoff of reduced imaging temporal resolution; nevertheless, the benefit for the outcome of the ΔR1app, and thus ΔCBF, measurement is insignificant because of the cancellation of systematic errors between the R1app measurements under the control and perturbation conditions (data not shown herein). One could optimize the T_SR values and the number of T_SR points to achieve a proper balance between the T1app fitting accuracy and imaging temporal resolution.

Fig 3 shows the time courses of rR1CBF and rBOLD measured by the SR-T1 MRI method from the ROI located in the rat sensory cortex and rCBF measured by LDF in the similar brain region before, during and after (a) transient hypercapnia and (b) acute ischemia perturbation from a representative rat. Despite some fluctuations, these time courses display the expected temporal behaviors and dynamics. First, there are approximately parallel trends among all of the measured time courses. Secondly, the transient hypercapnia led to significant increases in the measured parameters owing to the vascular dilation effect, thus increasing perfusion, followed by a recovery back to the baseline level after the termination of hypercapnia. Thirdly, the acute ischemia caused rapid reductions in all measured parameters followed by a substantial overshoot (reperfusion) after the termination of ischemia and a slow recovery to the baseline level. Nevertheless, a careful examination of Fig 3 suggests a stronger temporal correlation between the measured rR1CBF and rCBF (correlation coefficient = 0.84 for hypercapnia and 0.90 for ischemia) than between rBOLD and rCBF. Relative BOLD (rBOLD) shows a more significant undershoot after the hypercapnia (Fig 3a) and a smaller overshoot after the ischemia than rR1CBF and rCBF do (Fig 3b).
These results reveal the feasibility of the SR-T1 MRI method.

Fig 3. Time courses of relative R1CBF change (rR1CBF) and relative BOLD (rBOLD) measured by the SR-T1 MRI method, and relative CBF change (rCBF) measured by LDF, before, during and after (a) hypercapnia and (b) ischemia perturbation from a representative rat (data extracted from a region of interest). The bar graphs on top indicate the experimental acquisition protocol of (a) hypercapnia and (b) ischemia. Five stages are defined for imaging acquisition under varied animal conditions. Stage 1 represents the control (or pre-perturbation) period (2 minutes) prior to the induction of perturbations (i.e., hypercapnia or ischemia). Stage 2 represents the perturbation period, either the transient hypercapnia (7 minutes) or the acute ischemia (1 minute). Stages 3, 4 and 5 represent the three post-perturbation periods with varied time durations after either the transient hypercapnia or acute ischemia perturbation. Because the post-perturbation effects on the CBF and BOLD responses were much shorter for the 1-minute acute ischemia perturbation than for the 7-minute transient hypercapnia perturbation, the durations of these three stages differed between the two perturbation studies.

Relationships between rCBF, relative R1CBF and BOLD induced by perturbations

Fig 4 plots ΔR1CBF versus (rCBF−1) (Fig 4a) and versus (rBOLD−1) (Fig 4b) measured for the ischemia study. It indicates an excellent consistency and strong linear correlation between the CBF change and ΔR1CBF across all five stages studied; the correlation can be described by the following numerical equation obtained by linear regression:

rCBF − 1 = 45.9·ΔR1CBF + 0.01   (Eq 10)
with R2 = 0.99. In contrast, two distinct linear fitting slopes (slope ratio = 2.6) were observed between ΔR1CBF and (rBOLD−1), indicating that the SR-T1 MRI method determines ΔR1CBF and BOLD independently and simultaneously, and showing that these two physiological parameters became decoupled during the post-ischemia stages. Table 1 summarizes the results of the simultaneous rCBF (by LDF) and ΔR1CBF (by the SR-T1 MRI method) measurements during the ischemia (Stage 2) and the first post-ischemia period (Stage 3) for each rat, as well as the inter-subject averages. The ΔR1CBF value was used to calculate the CBF change (ΔCBF) according to (Eq 9), and rCBF and ΔCBF were then applied to estimate the reference control (or baseline) CBF (i.e., CBF_RC) according to the following relationship:

CBF_RC = ΔCBF/(rCBF − 1)   (Eq 11)

The estimated CBF_RC was 1.19±0.27 ml/g/min calculated from the ischemia stage (Stage 2) data and 1.24±0.31 ml/g/min calculated from the first post-ischemia stage (Stage 3) data, showing excellent consistency between them. The estimated baseline CBF values in this study are consistent with the values reported in the literature, ranging from 0.9 to 1.5 ml/g/min (1.29±0.05 ml/g/min), measured in the rat cortex under similar isoflurane anesthesia conditions and summarized in S1 Supporting Information. Furthermore, if we approximate the small intercept value of 0.01 in (Eq 10) to zero and replace the (rCBF−1) term in (Eq 11) with the approximated (Eq 10), we derive CBF_RC = (60 sec/min)·λ (ml/g)/45.9 (sec) = 1.18 ml/g/min using the relationship of (Eq 9). This value, based on the regressed slope of 45.9 sec in (Eq 10) using the data shown in Fig 4a, is again in good agreement with the averaged literature value of 1.29±0.05 ml/g/min. These comparison results provide ample evidence supporting the feasibility and reliability of the proposed SR-T1 MRI method in measuring and quantifying the CBF changes induced by physiological/pathological perturbations. Fig 5 shows the ΔCBF and BOLD images measured by the SR-T1 MRI method during and after the hypercapnia/ischemia perturbation in a representative rat in which the LDF probe was not used, to avoid susceptibility MRI artifacts. It illustrates that the SR-T1 MRI method is robust and sensitive for noninvasively imaging the CBF changes in response to a physiological (hypercapnia) or pathological (ischemia) challenge within a few minutes of image acquisition time. Moreover, the BOLD images can be obtained simultaneously.
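The two baseline-CBF estimates quoted above can be checked with a few lines of arithmetic; only λ = 0.9 ml/g and the regressed slope of 45.9 s come from the text, while the example ΔCBF and rCBF values passed to (Eq 11) are invented for illustration.

```python
# Worked check of the baseline CBF estimates (Eq 9-11)
LAMBDA = 0.9          # ml/g
slope = 45.9          # s, regressed slope of (rCBF - 1) vs delta_R1_CBF (Eq 10)

cbf_rc_from_slope = 60.0 * LAMBDA / slope               # Eq 9 + Eq 10 -> ml/g/min
print(f"CBF_RC from slope = {cbf_rc_from_slope:.2f} ml/g/min")   # ~1.18

def cbf_rc(delta_cbf_ml_g_min, r_cbf):
    """Eq 11: CBF_RC = dCBF / (rCBF - 1), with dCBF from MRI and rCBF from LDF."""
    return delta_cbf_ml_g_min / (r_cbf - 1.0)

# Illustrative ischemia numbers: CBF drops by ~0.9 ml/g/min while LDF reports rCBF ~0.25
print(f"CBF_RC from Eq 11 = {cbf_rc(-0.9, 0.25):.2f} ml/g/min")
```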
Discussion
Underlying mechanism for imaging CBF change in rat brain using the SR-T1 imaging method

Instead of focusing on the magnetization change produced by delivering tagged spins to the image slice(s) at a certain T_IR as in the ASL approach, the T1 perfusion model [2,5,6,10] views CBF circulation as an enhanced longitudinal relaxation through T1app as defined in (Eq 6). The rapid exchange between the saturated water protons in the image slice and the fully relaxed arterial blood water from outside of the saturation region during the SR-T1 MRI measurement enables a quantitative link between T1app and CBF. Based on the current MRI acquisition scheme used in this study, (Eq 5) (valid for Phase 1, when T_SR < t_tran) and (Eq 7) (valid for Phase 2, when T_SR ≥ t_tran) quantitatively describe the magnetization change as a function of T_SR for the SR-T1 MRI method with the two-phase perfusion model shown in Fig 1d. According to (Eq 5), the brain tissue relaxation depends on both T1app and T1a when T_SR < t_tran (Phase 1); however, the term B in (Eq 5), which contains T1a, accounts for only a few percent of the term A, and its contribution to the magnetization relaxation is insignificant. Therefore, a single exponential recovery function governed by the T1app relaxation time provides a good approximation. When T_SR ≥ t_tran (Phase 2), the brain tissue magnetization relaxation follows T1app alone according to (Eq 7). Therefore, T1app dominates the magnetization change throughout the entire T_SR range covering both Phase 1 and Phase 2, and it can be robustly regressed for determining CBF changes. The excellent T1app fitting curves (and excellent linearity in semi-log fitting, data not shown herein), as well as the high reproducibility and sensitivity to CBF alteration shown in Fig 2, demonstrate that the single exponential regression worked well in this study.

LDF measures a frequency shift in light reflected from moving red blood cells [30,31]. It enables a real-time, continuous recording of the relative (or percentage) CBF change in a focal region inside the brain and is regarded as a standard tool for dynamic CBF measurements [32]. Moreover, LDF-based CBF measurements have been reported to be in good agreement with radioactive microsphere CBF techniques [31,33] as well as hydrogen clearance CBF methods [34]. The excellent correlation between ΔR1CBF measured with the SR-T1 method and the relative CBF change recorded by the LDF technique (Figs 3 and 4) during the physiological/pathological perturbations, and the agreement between the calculated baseline CBF results and the CBF values reported in the literature, clearly suggest that ΔR1CBF imaged by the SR-T1 MRI method quantitatively reflects the CBF changes. Besides the contributions from intrinsic T1 and CBF, T1app can also be slightly influenced by other factors, for example, temperature. Although the temperature correction of T1app could improve the accuracy of the measurement, T1app without the correction still provides a good approximation for calculating the CBF change since the temperature-induced T1 change is small (see S1 Supporting Information).
Advantages, limitations and methodology aspects of the SR-T 1 method for imaging CBF change and BOLD
The SR-T1 MRI method has several unique merits compared with conventional ASL techniques using an inversion-recovery preparation. First, the modeling used to quantitatively link T1app and CBF in the SR-T1 MRI method is simple and requires far fewer physiological parameters to quantify the absolute ΔCBF according to (Eq 9). Second, the SR-T1 MRI method relies on parametric T1app mapping; thus, it does not require the paired control image used in most ASL methods for determining the CBF change. Third, the saturation-recovery preparation avoids the relatively long TR required by conventional T1 imaging methods based on an inversion-recovery preparation; this enables relatively rapid mapping of T1app and ΔR1app to generate the ΔCBF image, as illustrated in Fig 5. Although imaging T1app requires multiple measurements with varied T_SR values, its temporal resolution of 12 s per complete image set is comparable with that of other ASL methods (approximately 5-10 s). Fourth, the partial volume effect of cerebrospinal fluid (CSF) on ΔCBF is minimal because the undisturbed CSF-related R1 contribution is subtracted out under the various animal conditions, the movement of CSF water spins is slow (about 20 times slower than CBF [35-37]), and the exchange between CSF and brain tissue water spins is negligible. This study was based on a single-slice measurement to prove the concept and feasibility of the proposed SR-T1 MRI method; nevertheless, it should be readily extendable to multiple image slices covering a larger brain volume.
One technical limitation of the current SR-T1 MRI method is its inability to directly measure the baseline CBF value under a physiological condition of interest; thus, in this study the control value of CBF_RC was indirectly estimated from two independent measurements of ΔCBF and rCBF using the SR-T1 MRI method and LDF, respectively. This technique is therefore better suited for imaging CBF changes, which requires two measurement conditions, for instance, a reference control versus a perturbation (similar to the BOLD measurement), as presented in this study.
A surface RF coil generates an inhomogeneous distribution of B1 in space, resulting in a non-uniform RF pulse flip angle for saturating the water magnetization if a linear RF pulse waveform is used. In this study, we applied an adiabatic half-passage 90° RF pulse to achieve a relatively uniform 90° rotation of Mz into the transverse plane and to improve the saturation efficiency, in particular in the brain cortical region where B1 is strong. The ROI for data processing was chosen from this region (see Fig 2 for an example). In the deep brain region, distant from the surface coil, an adiabatic 90° rotation might not be achieved if the RF power is inadequate there. As a result, the RF saturation efficiency α (0 ≤ α ≤ 1) can drop in the deep brain region. However, this imperfection, if it exists, should not cause a significant error in determining T1app for the following two reasons. First, the term α was included in the least-squares regression used to calculate the T1app value through the formula

SI(T_SR) = k·(1 − α·exp(−T_SR/T1app))

in which the three constants k, α and T1app are determined by the regression. Though α can become less than 1 in the deep brain region if the B1 strength is inadequate, it can be treated as a constant. The T1app regression is then insensitive to the absolute value of α, and the regression outcome is mainly determined by the exponential rate of the SI recovery as a function of T_SR. Second, a nominal 90° excitation pulse and a very short TE were used in the GE-EPI sampling in this study. In addition, there was no extra delay between the EPI signal acquisition and the next RF saturation pulse. This configuration further reduces the residual Mz component (or suppresses the Mz recovery) before the next magnetization saturation preparation. It acts as an extra magnetization saturation, resulting in improved saturation efficiency and insensitivity of the regression to the value of α. These notions are supported by the ΔCBF images (Fig 5a and 5c), which show relatively uniform CBF changes across the entire image slice, including the deep brain region. In contrast, the BOLD images show 'hot' spots around the sinus vein, as pointed out by the arrows in Fig 5, because of the large BOLD effect near a large sinus vein [19]. Such 'hot' spots were not observed in the ΔCBF images, indicating again that the ΔCBF image is more specific to tissue perfusion and less susceptible to macrovessels. This differentiation between the simultaneously measured BOLD and ΔCBF images suggests that the SR-T1 MRI method is indeed able to measure two important physiological parameters, the CBF change and BOLD, independently but simultaneously. Although there is a similar trend between the measured changes of ΔCBF and BOLD during the ischemia or hypercapnia perturbation as shown in this study, there are clearly distinct characteristics in both the spatial distribution and the temporal behavior of the ΔCBF and BOLD images during the recovery periods after the perturbations, which provide complementary information about the brain hemodynamic changes in response to physiological/pathological perturbations.
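A sketch of this three-parameter regression is shown below, assuming the signal model SI(T_SR) = k·(1 − α·exp(−T_SR/T1app)) stated above; the synthetic α of 0.85 mimics a deep-brain voxel and is not a measured value.

```python
# Three-parameter saturation-recovery fit including the saturation efficiency alpha
import numpy as np
from scipy.optimize import curve_fit

def sr_signal_alpha(t_sr, k, alpha, t1_app):
    """SI(T_SR) = k * (1 - alpha * exp(-T_SR / T1app)); alpha = 1 means perfect saturation."""
    return k * (1.0 - alpha * np.exp(-t_sr / t1_app))

t_sr = np.array([0.004, 0.1, 0.2, 0.3, 0.4, 0.5, 10.0])
rng = np.random.default_rng(3)
si = sr_signal_alpha(t_sr, 1.0, 0.85, 2.3) + rng.normal(0, 0.005, t_sr.size)

(k, alpha, t1_app), _ = curve_fit(sr_signal_alpha, t_sr, si,
                                  p0=(si.max(), 0.9, 2.0),
                                  bounds=([0, 0, 0.1], [np.inf, 1.0, 10.0]))
print(f"k = {k:.2f}, alpha = {alpha:.2f}, T1app = {t1_app:.2f} s")
```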
The SR-T 1 MRI method is based on the exponential fitting of R 1 app using multiple EPI images with varied saturation-recovery times, and the R 1 app changes caused by either hypercapnia or ischemia were small (<5%). Therefore, the accuracy of fitting is susceptible to the EPI image noise level, in particular, in the brain regions with EPI susceptibility artifacts or weak B 1 of the surface coil.
Relationship between ΔR 1 CBF and the CBF change during ischemia and hypercapnia
It is known that the development of vasogenic edema usually occurs at a later phase, approximately 30 minutes after the induction of regional ischemia [38], and the water accumulation in the ischemic tissue owing to cellular swelling takes place hours after the onset of ischemia [39,40]. It is unlikely that vasogenic edema and water content change could be responsible for the measured ΔR1CBF change in the present study, since the global ischemia lasted only one minute and the SR-T1 MRI measurements were continued only within a relatively short period during the post-perturbation stages. Therefore, ΔR1CBF imaged by the SR-T1 MRI method could be fully quantified to determine and image ΔCBF. This notion is evident from the results of the ischemia study, and it also holds true for the hypercapnia study. However, it would be interesting to investigate the longitudinal T1app change in severely or chronically ischemic brain regions, which could be affected by the CBF change and possibly other physiopathological changes of brain tissue (infarction, edema, necrosis, etc.). Further study of their relative contributions to the T1app change would be helpful for understanding the evolution of the ischemic lesion and its relationship with the CBF change. A similar estimation using the relationship of rCBF and ΔCBF measured during the hypercapnia (Stage 2) resulted in a CBF_RC value of 1.36±0.35 ml/g/min (n = 3). This value is close to, although not exactly the same as, the values calculated with the data collected during the ischemic/post-ischemic stages (≈1.2 ml/g/min). This small discrepancy might be due to a slight drift of the basal CBF over the prolonged experiment, because the two experiments (hypercapnia and ischemia) were combined within the same MRI scanning session. It could also be related to the limited sample size of the hypercapnia experiments. Nevertheless, the ΔCBF increase induced by hypercapnia calculated from ΔR1CBF correlates well with rCBF measured by LDF.
Correlation of R1CBF, CBF and BOLD during perturbations
Besides the aforementioned distinction in the spatial responses between the ΔCBF and BOLD images, owing to their different specificities to the hemodynamic response with varied vessel size, it is interesting to note that there were significantly mismatched temporal responses between the measured ΔCBF and BOLD during both the post-ischemia and post-hypercapnia stages. The "overshoot" effect was much smaller for BOLD than for the ΔCBF and rCBF responses during the post-ischemia stage (Fig 3b), leading to two distinct slopes of the linear regression between (rBOLD−1) and ΔR1CBF (Fig 4b). There was also a substantial "undershoot" in the measured BOLD change during the later post-hypercapnia recovery stages (Fig 3a) compared to rCBF or rR1CBF.
One explanation for this observation is that the BOLD signal reflects a complex interplay among CBF, cerebral blood volume (CBV) and the oxygen consumption rate (CMRO2) [19,20]. Therefore, BOLD can become decoupled from the CBF change, and the degree of the mismatch between BOLD and ΔCBF depends on the fractional changes in CBF, CBV and CMRO2 in response to a particular perturbation. The quantitative interpretation of the mismatched rCBF-rBOLD behavior requires additional measurements of CBV and CMRO2, which is beyond the scope of this article. Nevertheless, this mismatch could be linked to the uncoupling between the metabolic and hemodynamic responses associated with a physiological or pathological perturbation, and it should be useful for indirectly estimating the CMRO2 time course during the perturbation if the CBV change can be measured independently or estimated using sophisticated BOLD modeling (e.g., [41]). In addition, the BOLD measured using the SR-T1 MRI method under the fully relaxed condition reflects the "true" BOLD without the CBF confounding effect, which further improves the outcome of the quantification [23]. A local RF coil, such as the surface coil used in the present study, has been commonly applied in most in vivo animal MRI/MRS brain studies. Its B1 field (or profile) induces regional longitudinal magnetization changes through the saturation (or inversion) preparation prior to the EPI acquisition. The combination of the regional Mz preparation and the relatively short arterial blood travel time from the non-saturated region into the EPI slice in animal brains is the underlying mechanism for a quantitative link between ΔCBF and ΔR1 that can be robustly imaged by the SR-T1 MRI method. Consequently, the T1 or T1-weighted MRI signal changes commonly observed in clinical imaging diagnosis, for instance in stroke patients, are at least partially attributable to impaired perfusion, i.e., ΔCBF. Finally, the SR-T1 MRI method can also be combined with a volume RF coil for imaging ΔCBF with the implementation of a slice-selective saturation preparation.
Conclusion
In summary, we have described the SR-T 1 MRI method for noninvasively and simultaneously imaging the absolute CBF change and BOLD in response to physiological/pathological perturbations. This imaging method was rigorously validated in the rat brain with simultaneous LDF measurements under global ischemia and hypercapnia conditions. It should provide a robust, quantitative MRI-based neuroimaging tool for simultaneously measuring the CBF change and BOLD contrast associated with physiological perturbations (e.g., brain activation) or pathological perturbations (e.g., stroke or pharmaceutical drug treatment).
Supporting Information

S1 Supporting Information. Detailed equation derivation for the two-phase arterial spin model of the SR-T1 method and simulation results. Supporting information regarding the confounding effect of brain temperature change on T1app. (DOCX)
Author Contributions
Conceived and designed the experiments: XW XHZ WC. Performed the experiments: XW XHZ YZ. Analyzed the data: XW XHZ WC. Wrote the paper: XW XHZ WC.
|
v3-fos-license
|
2024-02-07T06:17:19.765Z
|
2024-02-05T00:00:00.000
|
267497249
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1128/mbio.02376-23",
"pdf_hash": "bcd6c92df016d6f88c2cb346b9db0dece26075a5",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43066",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "67b92024a92ee8b410ebb43d9fb2327197738b4c",
"year": 2024
}
|
pes2o/s2orc
|
The end of the reign of a "master regulator"? A defect in function of the LasR quorum sensing regulator is a common feature of Pseudomonas aeruginosa isolates
ABSTRACT Pseudomonas aeruginosa, a bacterium causing infections in immunocompromised individuals, regulates several of its virulence functions using three interlinked quorum sensing (QS) systems (las, rhl, and pqs). Despite its presumed importance in regulating virulence, dysfunction of the las system regulator LasR occurs frequently in strains isolated from various environments, including clinical infections. This newfound abundance of LasR-defective strains calls into question existing hypotheses regarding their selection. Indeed, current assumptions concerning factors driving the emergence of LasR-deficient isolates and the role of LasR in the QS hierarchy must be reconsidered. Here, we propose that LasR is not the primary master regulator of QS in all P. aeruginosa genetic backgrounds, even though it remains ecologically significant. We also revisit and complement current knowledge on the ecology of LasR-dependent QS in P. aeruginosa, discuss the hypotheses explaining the putative adaptive benefits of selecting against LasR function, and consider the implications of this renewed understanding.
The interconnection of all three systems is widely recognized. LasR activation positively regulates the expressions of rhlR, mvfR, and pqsH, while RhlR activation represses pqs operon activity (10, 15, 17, 21–24). Additionally, PqsE, a thioesterase enzyme encoded by the last gene of the pqs operon, plays an important role in the regulation of P. aeruginosa QS (25). Indeed, this protein interacts with RhlR through an incompletely defined chaperone-like function to activate the rhl system and heighten the expression of some target genes (12, 26–29). These examples highlight the complex regulation that characterizes P. aeruginosa QS. However, while most of the research that informed this interconnected but hierarchical model of QS was performed in a few, well-defined laboratory strains, the frequent detection of LasR-defective strains still able to produce QS-regulated factors has revealed certain inconsistencies regarding the central role of the LasR regulator within the QS hierarchy. Here, we will propose an alternative perspective and discuss this issue based on current literature, unveiling new hypotheses regarding the relative positions of LasR, RhlR, and MvfR (PqsR) within the P. aeruginosa QS hierarchy.
LasR-defective strains are generally prevalent in both chronic and acute clinical environments
Over time, numerous studies have highlighted the natural occurrence of P. aeruginosa strains with impaired LasR activity. Initial reports of LasR-defective variants reflected their detection within microbial populations that had evolved for years within the chronically infected lungs of people with cystic fibrosis (CF), which was unexpected given the requirement of QS-controlled factors for full expression of bacterial virulence in vitro (30). However, multiple reports have identified LasR-defective strains in CF respiratory cultures, and the selection of these strains by the CF lung environment is now a well-accepted dogma. The emergence of lasR mutants has been associated with increased inflammatory markers, increased neutrophilic inflammation, and deteriorated pulmonary function (31–37). These variants have also been implicated in pathogenesis in corneal ulcers and in the airways of individuals with chronic obstructive pulmonary disease (COPD) and ventilator-associated pneumonia (38–40). Moreover, we recently identified a high prevalence of LasR-defective strains in non-clinical environments as well, such as sinks, hydrocarbon-contaminated soils, and animal products (4). While a defect in LasR activity has seldom been reported in strains isolated from acute infections, a recent study from O'Connor et al. (41) identified P. aeruginosa isolates carrying lasR mutations commonly present within many environments, although the impact of these mutations on LasR activity per se was not investigated (38, 41, 42).
Based on the emerging picture of a widespread prevalence of LasR-defective strains in various environmental niches, we hypothesized that defects in LasR activity could also be found in acute clinical contexts. To evaluate the prevalence of LasR-defective strains isolated from both chronic and acute infections, we assessed LasR activity in 92 P. aeruginosa strains from diverse sources, including burn wounds, keratitis, urinary tract infections, bronchitis, CF lungs, COPD, and others (Table S1) (43). LasR activity was evaluated using a previously published method relying on unbiased phenotypic profiling based on the quantification of QS-regulated metabolites such as HAQs and pyocyanin (4). Based on this model, we found that 41% of the 92 isolates from our panel of clinical strains exhibited impaired LasR activity (Fig. 1). These data are consistent with our previous findings, where 40% of isolates from a panel of 176 P. aeruginosa environmental strains were found to be LasR-defective (4). These results also support the conclusions of O'Connor et al., as well as our hypothesis that LasR-defective strains are common in all clinical contexts and are not restricted to CF clinical isolates, as previously assumed (41) due to a predominant focus of prior analyses on CF isolates. Here, we found the prevalence of defects in LasR function to be similar among isolates from various environments, including when comparing CF and non-CF sources.
In the literature, the prevalence of lasR mutants among P. aeruginosa isolates from CF patients has been estimated to be from 20% to 25%, as defined using genotypic techniques (35, 45, 46). However, different studies have used a variety of methods to estimate the proportion of LasR-defective strains, hampering direct comparison of results. Relying solely on the identification of mutations in the lasR coding sequence, as it is usually reported, could misidentify LasR-defective strains, given its complex regulation and the unpredictable relationship between coding sequence and protein function (4, 35, 47). Indeed, a great diversity of mutations in the lasR gene has been identified, not all of which completely abrogate function (35). Additionally, some mutations in regulatory elements could also modify LasR expression or activity (4, 35). Examining a single phenotypic trait could also bias LasR-defective strain identification given the complex regulation of many phenotypes. Our approach, based on phenotypic profiling rather than examining only one trait, allows for unbiased identification of all isolates with a defect in LasR activity (4). Therefore, we refer to LasR-defective strains rather than lasR mutants.
Taken together with previous studies, the data presented here further highlight that LasR-defective strains occur commonly, regardless of their clinical or environmental origin (4, 35). As a result, we must reconsider prior assumptions, including the notion that chronically colonized CF lungs distinctively provide a selective environment promoting the loss of LasR function (32, 33).
Factors driving the emergence of LasR-defective strains remain elusive
Despite decades of research, the precise drivers for the emergence of LasR-defective strains remain elusive. LasR function seems especially prone to be lost, and the lasR gene might even be considered a hotspot for various types of genetic variations (4, 35, 41). One particularly popular hypothesis suggests that LasR-defective cells arise as "cheaters," taking advantage of neighboring cells with a functional LasR that provide "public goods," such as exoproteases (e.g., LasB elastase), to the entire population. By exploiting this strategy, LasR-defective variants reduce the population metabolic cost associated with the expression of these proteins when they are essential for bacterial growth. Interestingly, this behavior has been mostly observed under specific in vitro conditions (39, 48–51), as well as suggested under in vivo conditions (52, 53). While such social cheating behavior could potentially extend to different contexts and environments beyond the CF lung, providing an interesting explanation for the emergence of these variants in multiple niches, evidence suggests that social conflict is not the sole source of LasR loss-of-function and that lasR mutants are not always "cheaters" (54). It is important to consider that LasR-defective strains can also arise in various conditions that do not implicate "public goods" (55). Therefore, other factors must contribute to the emergence of LasR variants.
As stated above, LasR-defective strains have been typically associated with the chronically infected lungs of people with CF (30, 32, 56). Extensive investigations have also explored the non-social benefits conferred by defective LasR function in this specific environment, revealing their relatively high fitness in the presence of specific amino acids, such as phenylalanine, which is especially abundant in CF secretions (32, 57). Relative to wild-type strains, these variants exhibit altered metabolism, including lower oxygen consumption and enhanced nitrogen utilization, providing them with a competitive edge over their wild-type counterpart (56, 58). Furthermore, LasR-defective strains exhibit relative resistance to specific antimicrobials and enhanced tolerance to alkaline stress, resulting in protection from cell lysis (32, 33, 59–61). These findings support the notion that diminished LasR activity confers substantial advantages in conditions known to be common in CF lungs. However, the growing evidence for the prevalence and abundance of LasR-defective variants in diverse infections and environmental contexts disproves any specificity for the CF lung and warrants expanding current models of their emergence. For instance, it was proposed that diversification of P. aeruginosa populations in the CF lung could favor long-term survival to multiple unpredictable stresses (62). This phenomenon could also apply outside of the CF lung. Accordingly, the effects of LasR impairment on growth on different carbon sources were suggested to explain the emergence of LasR-defective variants in the context of the CF lung, but such non-social mechanisms could likely arise in many other environmental settings, as growth conditions and carbon sources differ (63).
Alternatively, we might consider the emergence of LasR-defective variants as beneficial for a population that contains them (64). Supporting this model, controlled evolution experiments performed in vitro demonstrated the tendency of LasR-defective clones to rapidly emerge, with their proportion frequently stabilizing at about 50% of the total population (48, 50, 55, 59, 65). For instance, swarming colonies of P. aeruginosa with higher proportions of LasR-defective cells tend to have fitness advantages over those without LasR variants (55). The population dynamics of mixed populations of P. aeruginosa, comprising wild-type and LasR-defective strains, could partially explain the prevalence of the latter in diverse environments. In natural settings, P. aeruginosa forms biofilms, which are bacterial communities composed of diverse physicochemical niches. Jeske et al. showed that LasR function is especially prone to be lost in a biofilm context (42). Within these biofilms, factors produced by strains with a functional LasR could influence the behavior of surrounding LasR-defective strains within specific niches. Similarly, LasR-defective strains can modulate the behavior of LasR-functional strains by affecting, for instance, QS signaling and function (66, 67).
Hence, the concomitant mixed presence of LasR-functional and LasR-defective strains appears to be beneficial to the overall population. This observation suggests that LasR plays an important ecological role and remains essential for the viability of P. aeruginosa populations. There could be various reasons for the selection of such mixed populations, including both social and non-social drivers, considering the multitude of factors that influence the diversification of this protein.
LasR-defective clinical isolates could be acquired directly from surrounding environments
The same high prevalence of LasR-defective strains in both clinical and environmental contexts suggests potential adaptative benefits associated with modulating LasR activity in populations of P. aeruginosa, irrespective of the environment. Therefore, it is possible that LasR-defective strains isolated from infections originated from environmental sources, as previously proposed, rather than emerging in situ (4, 38). Two distinct studies provided evidence supporting this notion, as the lasR gene of some P. aeruginosa isolates already exhibited mutations at initial detection in CF respiratory samples, consistent with a direct acquisition from environmental sources (40, 68). While P. aeruginosa lineages have been observed to lose LasR function over time during an infection, the discussion above indicates that such events are not necessarily specifically selected by the CF lung environment (30, 69–72).
Presence of naturally occurring LasR-impaired function is not necessarily synonymous with loss of quorum sensing
A widely held notion is that LasR occupies the top position in the QS regulatory hierarchy (47, 73). An alternative hypothesis is that the presence of this hierarchy is not a universal feature of P. aeruginosa and reflects studies conducted primarily with one strain, PAO1, which could itself be considered as an outlier regarding its QS mechanisms (74). We now know that isolates with defective LasR activity can still exhibit robust QS-dependent regulation of virulence factors (4, 35, 75–77). Since LasR upregulates the rhl and the pqs QS systems in studied laboratory-adapted strains, it has been generally assumed that a defect in LasR activity should result in deficient QS regulation and thus reduced transcription of target genes and production of virulence factors. However, LasR-defective strains with a functional rhl QS system, referred to as "RAIL" (RhlR Active Independently of LasR) strains, have been identified. Such strains have been isolated from CF lungs, as well as from diverse environments (4, 35, 75–77). Using the same method as previously mentioned, we found that about half of the LasR-defective clinical strains from our panel (Fig. 1) possess LasR-independent RhlR activity (Fig. 2), in accordance with previously published findings (4). Hence, an impaired LasR protein does not necessarily lead to diminished production of virulence factors. Thus, the dogma based on the study of prototypical strains stating that LasR is the conserved "master regulator" of the QS regulatory cascade should be reconsidered. Besides regulating known RhlR-dependent virulence factors such as pyocyanin, RhlR can activate virulence factors typically regulated by LasR, such as various exoproteases (4, 35, 78, 79). Together, these observations and concepts highlight the crucial role of RhlR, accentuating its importance as a QS regulator in P. aeruginosa. However, the precise regulatory mechanisms involved in LasR-independent RhlR regulation remain elusive. One mechanism might involve PqsE, encoded by the last gene of the pqs operon. PqsE plays a major role in promoting RhlR activity, at least in prototypical strains (12, 26, 29, 80, 81), underscoring the importance of MvfR and the pqs system for maintaining the full suite of QS regulation. Since an important subset of naturally evolved P. aeruginosa strains activates QS and produces virulence factors without relying on LasR, we need to better understand the nature of the ecological pressure sustaining its regulatory activity in some lineages.
Improving quorum sensing inhibition by targeting RhlR-dependent factors
P. aeruginosa is naturally tolerant of, and resistant to, a wide range of antibiotics, limiting the efficacy of treating infections by this bacterium (83). To address this issue, anti-virulence therapies have emerged as a promising approach for the development of new drugs, since they offer distinct advantages. Unlike antibiotics that directly target the survival of the bacteria, anti-virulence therapies aim to inhibit non-essential virulence factors without affecting viability. This approach theoretically reduces the development of resistance since no selective pressure for survival is applied. In P. aeruginosa, QS represents an interesting target, since it modulates the expression of multiple virulence factors unrelated to bacterial survival (84–87). Rationally, many anti-virulence therapies target P. aeruginosa LasR or the production of 3-oxo-C12-HSL, since the las system has been considered to be on top of the QS regulation cascade (88–90). Unfortunately, the high prevalence of LasR-defective strains and the clear absence of QS hierarchy in some strains suggest that LasR is not an ideal target. Instead, we propose that researchers working on anti-QS therapies should consider focusing on other QS targets, such as the rhl or pqs system, which can still be active in the absence of LasR and seem better conserved. There have been a very few reports of strains lacking RhlR or MvfR/PqsR activity, perhaps because these regulators are indispensable for the proper functioning of QS (41, 91–93). Thus, targeting rhl-dependent QS, such as the production/function of C4-HSL or PqsE, could be an interesting approach in the control of P. aeruginosa infections and their clinical consequences. Studying the QS ecology in strains from a broader range of origins (clinical and environmental) should allow for better target selection.

FIG 2 Clustering analysis performed on a subset of LasR-defective isolates of Pseudomonas aeruginosa shown in Fig. 1. Clustering analysis was based on chosen variables from a previous study. Briefly, HAQ concentrations, pyocyanin production, and activity of a rhlA-gfp reporter were measured in King's A medium at two different time points using the same methods as previously described (4, 82). RhlR Active Independently of LasR (RAIL) strain E90 was included in the analysis as a reference. Raw data used to generate this analysis are presented in Table S2. Statistical analyses were made as previously described using R software (4, 44). Robustness (*) represents the proportion of clustering runs in which a pair of isolates appeared together in some clusters, given that they were clustered together in at least one run, averaged over all such pairs (****: >90%).
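The clustering itself was performed in R as described in the Fig 2 legend; purely to illustrate the general idea of grouping isolates by their QS phenotype profiles, a hypothetical Python sketch (with entirely invented isolate names and measurements) could look like the following.

```python
# Illustrative hierarchical clustering of QS phenotype profiles (not the study's R pipeline)
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

isolates = ["A1", "A2", "B1", "B2", "E90"]
# Columns: HAQ, pyocyanin, rhlA-gfp activity (two time points each, arbitrary units)
profiles = np.array([
    [5.1, 4.8, 12.0, 11.5, 0.9, 1.0],   # high QS output
    [5.3, 5.0, 11.8, 12.1, 1.1, 1.0],
    [1.2, 1.0,  0.3,  0.4, 0.1, 0.1],   # LasR-defective with low QS output
    [1.0, 0.9,  0.2,  0.3, 0.1, 0.2],
    [4.5, 4.2,  9.8, 10.2, 0.8, 0.9],   # RAIL-like reference: active despite LasR defect
])

z = zscore(profiles, axis=0)                       # standardize each phenotypic variable
tree = linkage(z, method="ward")                   # hierarchical clustering
print(fcluster(tree, t=2, criterion="maxclust"))   # assign isolates to two phenotype groups
```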
Outstanding questions and conclusion
Acknowledging that maybe 40% of all P. aeruginosa isolates are LasR-defective raises several important questions. One particularly intriguing observation is the prevalence of mutations in lasR compared to those in its cognate synthase gene, lasI (41). A possible explanation for this discrepancy could be the absence of social benefit for lasI mutations: the inactivation of LasI would not prevent an autologous response to neighboring strains producing the diffusible 3-oxo-C12-HSL signal in a mixed population (63). Another potential explanation is that 3-oxo-C12-HSL does not exclusively serve as a LasR autoinducer and contributes to alternative functions in P. aeruginosa. For example, 3-oxo-C12-HSL increases pyocyanin production in the absence of LasR (67). In fact, the maintenance of 3-oxo-C12-HSL production in LasR-defective strains is an intriguing question that warrants further investigation. Globally, there seems to be an advantage for P. aeruginosa to conserve the function of LasI, while this is often not the case for LasR, as previously discussed. Finally, LasR-defective strains seem to maintain most QS-regulated functions; how this works represents an important question to address in the coming years.
Loss of LasR function is common among P. aeruginosa isolated from any source, including both environmental and clinical settings, regardless of the acute or chronic nature of the infections. Furthermore, despite the absence of LasR activity, the rhl system is still functional in a subset of LasR-defective strains, ensuring QS functionality and the expression of survival and virulence factors. Nevertheless, it appears to be beneficial to have both LasR-defective and LasR-functional strains present in a population, as this combination of genotypes may play a beneficial role in maintaining P. aeruginosa's ecological balance.
Based on available knowledge, current hypotheses, and accumulating data, we propose that LasR-defective strains isolated from infection-related settings could be acquired directly from the environment, where genomic diversification occurs in P. aeruginosa populations based on differences in regulation in various niches. We also propose that LasR is not as essential for functional QS as it has been commonly believed and that there may be a larger variety of QS architectures in P. aeruginosa than previously thought. Based on the rarity of isolates with completely deficient RhlR activity, and the relatively high frequency of LasR deficiency, we suggest that RhlR plays a more central role in QS regulation than LasR. Additionally, RhlR activity depends on MvfR-dependent regulation through the expression of pqsE via the pqs operon. More investigation on the importance of this partnership, until now mainly investigated in prototypical strains, throughout a larger panel of strains is needed, given the possible importance of the pqs system in maintaining QS activity in P. aeruginosa. Accordingly, to better understand the diversity of QS, future studies should focus on P. aeruginosa obtained from a broader range of environments and consider the dynamics of cocultures with naturally co-isolated strains.
|
v3-fos-license
|
2021-05-23T13:28:45.211Z
|
2021-01-01T00:00:00.000
|
235097274
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.aclweb.org/anthology/2021.naacl-industry.37.pdf",
"pdf_hash": "71cac5ef629cebdbaecad10f6ef522d58787b682",
"pdf_src": "ACL",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43067",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "71cac5ef629cebdbaecad10f6ef522d58787b682",
"year": 2021
}
|
pes2o/s2orc
|
Label-Guided Learning for Item Categorization in e-Commerce
Item categorization is an important application of text classification in e-commerce due to its impact on the online shopping experience of users. One class of text classification techniques that has gained attention recently is using the semantic information of the labels to guide the classification task. We have conducted a systematic investigation of the potential benefits of these methods on a real data set from a major e-commerce company in Japan. Furthermore, using a hyperbolic space to embed product labels that are organized in a hierarchical structure led to better performance compared to using a conventional Euclidean space embedding. These findings demonstrate how label-guided learning can improve item categorization systems in the e-commerce domain.
Introduction
Natural language processing (NLP) techniques have been applied extensively to solve modern e-commerce challenges (Malmasi et al., 2020; Zhao et al., 2020). One major NLP challenge in e-commerce is item categorization (IC), which refers to classifying a product based on textual information, typically the product title, into one of numerous categories in the product category taxonomy tree of online stores. Although significant progress has been made in the area of text classification, many standard open-source data sets have limited numbers of classes, which are not representative of data in industry where there can be hundreds or even thousands of classes (Li and Roth, 2002; Pang and Lee, 2004; Socher et al., 2013). To cope with the large number of products and the complexity of the category taxonomy, an automated IC system is needed and its prediction quality needs to be high enough to provide positive shopping experiences for customers and consequently drive sales. Figure 1 shows an example diagram of the product category taxonomy tree for the IC task. In this example, a tin of Japanese tea 1 needs to be classified into the leaf level category label "Japanese tea." As reviewed in Section 2, significant progress has been made on IC as a deep learning text classification task. However, much of the progress in text classification does not make use of the semantic information contained in the labels. Recently there has been increasing interest in taking advantage of the semantic information in the labels to improve text classification performance (Wang et al., 2018; Liu et al., 2020; Du et al., 2019; Xiao et al., 2019; Chai et al., 2020). For the IC task, labels in a product taxonomy tree are actively maintained by human experts and these labels bring rich semantic information. For example, descriptive genre information like "clothes" and "electronics" is used rather than just a numeric index for the class labels. It is reasonable to surmise that leveraging the semantics of these category labels will improve the IC models.
Although label-guided learning has been shown to improve classification performance on several standard text classification data sets, its application to IC on real industry data has been missing thus far. Compared to standard data sets, e-commerce data typically contain more complicated label taxonomy tree structures, and product titles tend to be short and do not use standard grammar. Therefore, whether label-guided learning can help IC in industry or not is an open question worth investigating.
In this paper, we describe our investigation of applying label-guided learning to the IC task. Using real data from Rakuten, we tested two models: the Label Embedding Attentive Model (LEAM) (Wang et al., 2018) and the Label-Specific Attention Network (LSAN) (Xiao et al., 2019). In addition, to cope with the challenge that labels in an IC task tend to be similar to each other within one product genre, we utilized label embedding methods that can better distinguish labels, which led to performance gains. This included testing the use of hyperbolic embeddings, which can take into account the hierarchical nature of the taxonomy tree (Nickel and Kiela, 2017).
The paper is organized as follows: Section 2 reviews related research on IC using deep learning-based NLP and the emerging techniques of label-guided learning. Section 3 introduces the two label-guided learning models we examined, namely LEAM and LSAN, as well as hyperbolic embedding. Section 4 describes experimental results on a large-scale data set from a major e-commerce company in Japan. Section 5 summarizes our findings and discusses future research directions.
Related works
Deep learning-based methods have been widely used for the IC task. This includes the use of deep neural network models for item categorization in a hierarchical classifier structure which showed improved performance over conventional machine learning models (Cevahir and Murakami, 2016), as well as the use of an attention mechanism to identify words that are semantically highly correlated with the predicted categories and therefore can provide improved feature representations for a higher classification performance (Xia et al., 2017).
Recently, using semantic information carried by label names has received increasing attention in text classification research, and LEAM (Wang et al., 2018) is one of the earliest efforts in this direction that we are aware of. It uses a joint embedding of both words and class labels to obtain label-specific attention weights to modify the input features. On a set of benchmark text classification data sets, LEAM showed superior performance over models that did not use label semantics. An extension of LEAM called LguidedLearn (Liu et al., 2020) made further modifications by (a) encoding word inputs first and then using the encoded outputs to compute label attention weights, and (b) using a multihead attention mechanism (Vaswani et al., 2017) to make the attention-weighting mechanism have more representational power. In a related model, LSAN (Xiao et al., 2019) added a label-specific attention branch in addition to a self-attention branch and showed superior performance over models that did not use label semantics on a set of multi-label text classification tasks.
On the other hand, label names by themselves may not provide sufficient semantic information for accurate text classification. To address this potential shortcoming, longer text can be generated based on class labels to augment the original text input. Text generation methods such as using templates and reinforcement learning were compared, and their effectiveness was evaluated using the BERT model (Devlin et al., 2019) with both the text sentence and the label description as input (Chai et al., 2020).
Finally, word embeddings such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) are generated in Euclidean space. However, embeddings in non-Euclidean space called hyperbolic embeddings have been developed (Nickel and Kiela, 2017;Chen et al., 2020a,b) and have been shown to better represent the hierarchical relationship among labels.
Model
For a product title X consisting of L words, X = [w_1, ..., w_L], our goal is to predict one out of a set of K labels, y ∈ C = {c_1, ..., c_K}. In a neural network-based model, the mapping X → y generally consists of the following steps: (a) an encoding step (f_0), converting X into a numeric tensor representing the input, (b) a representation step (f_1), processing the input tensor into a fixed-dimension feature vector z, and (c) a classification step (f_2), mapping z to y using a feed-forward layer.
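As a minimal illustration of this decomposition, the mapping can be sketched in TensorFlow as follows; the dimensions, vocabulary size, and the plain average-pooling choice for the f_1 step are ours for illustration only (LEAM and LSAN replace that step with attention, as described below).

    import tensorflow as tf

    L, D, K = 60, 100, 286        # title length, embedding dim, number of leaf labels (example values)
    vocab_size = 50000            # hypothetical vocabulary size

    # f0: encode the title (a sequence of word ids) into a numeric tensor of embeddings
    inputs = tf.keras.Input(shape=(L,), dtype="int32")
    V = tf.keras.layers.Embedding(vocab_size, D)(inputs)          # (batch, L, D)

    # f1: reduce the sequence to a fixed-dimension feature vector z
    # (plain average pooling here; label-guided models replace this step with attention)
    z = tf.keras.layers.GlobalAveragePooling1D()(V)               # (batch, D)

    # f2: map z to the K labels with a feed-forward softmax layer
    outputs = tf.keras.layers.Dense(K, activation="softmax")(z)   # (batch, K)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")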
Among label-guided learning models, we chose both LEAM (Wang et al., 2018) and LSAN (Xiao et al., 2019). Table 1 shows a comparison between these models.

Table 1: Comparison of LEAM and LSAN in each modeling step.
Step | LEAM                          | LSAN
f_0  | Word embedding                | Word embedding + Bi-LSTM encoding
f_1  | Only label-specific attention | Both self- and label-specific attentions + adaptive interpolation
f_2  | Softmax with CE loss          | Softmax with CE loss
LEAM
The LEAM architecture is shown in Figure 2 (Wang et al., 2018). First, a product title of length L is encoded as V = [v_1, ..., v_L], where v_l ∈ R^D is determined through word embedding and V ∈ R^{D×L}. The class labels are also encoded via label embedding as C = [c_1, ..., c_K], where K is the total number of labels, c_k ∈ R^D and C ∈ R^{D×K}. The label embeddings are title-independent and are the same across all titles for a given set of labels. We can then compute the compatibility of each word-label pair based on their cosine similarity to obtain a compatibility tensor G ∈ R^{L×K}. The compatibility tensor is transformed into an attention vector through the following steps: (a) apply a 1D convolution to refine the compatibility scores by considering their context, (b) apply max pooling to keep the maximum score, and (c) apply a softmax operation to obtain the label-guided attention weights β. These attention weights, which contain the label semantic information, are used in the f_1 step to compute a new representation z = Σ_{l=1}^{L} β_l v_l. After obtaining z, we use a fully-connected layer with softmax to predict y ∈ C. The entire process f_2(f_1(f_0(X))) is optimized by minimizing the cross-entropy loss between y and f_2(z).
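The following TensorFlow sketch illustrates the label-guided attention just described. It follows the steps in the text (cosine compatibility, 1D convolution, max pooling, softmax, weighted sum), but details such as the ReLU activation inside the convolution are our assumptions rather than settings taken from the reference implementation.

    import tensorflow as tf

    def leam_attention(V, C, K, window=5):
        # V: (batch, L, D) word embeddings of the title.
        # C: (K, D) label embeddings, shared across all titles.
        # Returns z: (batch, D) label-attended title representation.
        V_n = tf.math.l2_normalize(V, axis=-1)
        C_n = tf.math.l2_normalize(C, axis=-1)
        # Compatibility of every word-label pair (cosine similarity): (batch, L, K)
        G = tf.einsum("bld,kd->blk", V_n, C_n)
        # (a) 1D convolution along the word dimension to refine scores with local context
        G = tf.keras.layers.Conv1D(filters=K, kernel_size=window,
                                   padding="same", activation="relu")(G)
        # (b) max pooling over labels keeps, for each word, its largest compatibility: (batch, L)
        m = tf.reduce_max(G, axis=-1)
        # (c) softmax over words yields the label-guided attention weights beta
        beta = tf.nn.softmax(m, axis=-1)                           # (batch, L)
        # Attention-weighted sum of the word embeddings gives the representation z
        z = tf.einsum("bl,bld->bd", beta, V)
        return z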
LSAN
The LSAN architecture is shown in Figure 3 (Xiao et al., 2019). As shown in Table 1, LSAN has a few modifications compared to LEAM. First, a bi-directional long short-term memory (Bi-LSTM) encoder is used to better capture contextual semantic cues in the representation. The resulting contextual encoding is the concatenation H = [H_fwd, H_bwd], where H_fwd and H_bwd represent the LSTM encoding outputs from the forward and backward directions, and H ∈ R^{L×2P}, where P is the dimension of the LSTM hidden state. For model consistency we typically set P = D.
Additional features of LSAN which extend LEAM include (a) using self-attention on the encoding H, (b) creating a label-attention component from H and C, and (c) adaptively merging the selfand label-attention components.
More specifically, the self-attention score is determined as A^(s) = softmax(W_2 tanh(W_1 H^T)), where W_1 ∈ R^{d_a×2P} and W_2 ∈ R^{K×d_a} are self-attention tensors to be trained, d_a is a hyperparameter, A^(s) ∈ R^{K×L}, and each row A^(s)_j· is an L-dimensional vector representing the contributions of all L words to label j. Therefore, M^(s) = A^(s) H is a representation of the input text weighted by self-attention, where M^(s) ∈ R^{K×2P}. From the title encoding H and the label embedding C, compatibility scores between class labels and title words can be computed as the product A^(l) = C H^T (with the label embeddings arranged here as a K×2P matrix), where A^(l) ∈ R^{K×L} and each row A^(l)_j· is an L-dimensional vector representing the contributions of all L words to label j. The product title can then be represented using label attention as M^(l) = A^(l) H, where M^(l) ∈ R^{K×2P}. The last procedure in the f_1 step of LSAN is to adaptively combine the self- and label-attention representations M^(s) and M^(l) as M = diag(α) M^(s) + diag(β) M^(l), where the two interpolation weight factors (α, β ∈ R^K) are computed as α = σ(M^(s) W_3) and β = σ(M^(l) W_4), with the constraint α_j + β_j = 1, W_3, W_4 ∈ R^{2P} are trainable parameters, σ(x) ≡ 1/(1+e^{−x}) is the element-wise sigmoid function, and M ∈ R^{K×2P}.
Although the original LSAN model proposed multiple additional layers in its f_2 step, in our implementation we performed average pooling along the label dimension and then fed the result to a fully-connected layer with softmax output, similar to LEAM. Finally, the cross-entropy loss is minimized.
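To make the f_1 step concrete, the sketch below implements the fusion just described in TensorFlow. The einsum-based label attention (a plain dot product between label embeddings and the encoding) and the explicit renormalization used to enforce α_j + β_j = 1 are our reading of the text, not necessarily the exact choices of the original LSAN implementation.

    import tensorflow as tf

    def lsan_fusion(H, C, W1, W2, w3, w4):
        # H: (batch, L, 2P) Bi-LSTM encoding of the title; C: (K, 2P) label embeddings.
        # W1: (d_a, 2P), W2: (K, d_a), w3, w4: (2P,) are trainable parameters.
        # Self-attention scores A_s: (batch, K, L)
        A_s = tf.nn.softmax(
            tf.einsum("kd,bdl->bkl", W2, tf.tanh(tf.einsum("dp,blp->bdl", W1, H))), axis=-1)
        M_s = tf.einsum("bkl,blp->bkp", A_s, H)                # (batch, K, 2P)
        # Label-word compatibility and label-attended representation
        A_l = tf.einsum("kp,blp->bkl", C, H)                   # (batch, K, L)
        M_l = tf.einsum("bkl,blp->bkp", A_l, H)                # (batch, K, 2P)
        # Adaptive interpolation weights, renormalized so that alpha_j + beta_j = 1
        alpha = tf.sigmoid(tf.einsum("bkp,p->bk", M_s, w3))
        beta = tf.sigmoid(tf.einsum("bkp,p->bk", M_l, w4))
        total = alpha + beta + 1e-9
        alpha, beta = alpha / total, beta / total
        M = alpha[..., None] * M_s + beta[..., None] * M_l     # (batch, K, 2P)
        # As in the text: average-pool over the label dimension before the softmax classifier
        return tf.reduce_mean(M, axis=1)                       # (batch, 2P)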
Hyperbolic Embedding
In e-commerce item categorization we tend to use a more complicated label structure with a large number of labels organized as a taxonomy tree compared to standard text classification data sets. One immediate issue is that hundreds of labels can exist at the leaf level, some with very similar labels like "Japanese tea" and "Chinese tea," and the difference in label embedding vectors in Euclidean space can be too small to be distinguished by machine learning models. Such issues become more severe with increasing taxonomy tree depth as well. Hyperbolic embedding is one technique that has been developed which can address these issues (Nickel and Kiela, 2017;Chen et al., 2020a,b).
Hyperbolic space differs from Euclidean space in having negative curvature. Consequently, the circumference and disc area of a circle grow exponentially with its radius, whereas in Euclidean space they grow only linearly and quadratically, respectively. For representing hierarchical structures like trees, this property can ensure that all leaf nodes, which lie closer to the edge of the circle, maintain large enough distances from each other.
As a specific application, Poincaré embedding uses the Poincaré ball model, which consists of points within the unit ball B^d, where the distance between two points u, v ∈ B^d is defined as

d(u, v) = arcosh( 1 + 2 ||u − v||^2 / ((1 − ||u||^2)(1 − ||v||^2)) ).   (10)

The Poincaré embedding is obtained by minimizing a loss function depending only on d(u, v) for all pairs of labels (u, v) using Riemannian optimization methods. Figure 4 illustrates the differences between using a Euclidean space and a Poincaré ball model when representing nodes organized in a tree. Using a hyperbolic embedding has the potential to maintain large enough distances when our models aim to distinguish subtle differences among these labels.
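A direct implementation of the distance in Eq. 10, together with a small check (our own example points) showing that two nearby points close to the boundary of the ball remain far apart in hyperbolic distance:

    import numpy as np

    def poincare_distance(u, v, eps=1e-9):
        # Distance between two points u, v inside the unit ball (Eq. 10).
        u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
        num = np.sum((u - v) ** 2)
        den = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
        return np.arccosh(1.0 + 2.0 * num / (den + eps))

    # Two leaf labels placed near the boundary of the ball stay far apart in
    # hyperbolic distance even when their Euclidean gap is small.
    a, b = np.array([0.90, 0.10]), np.array([0.88, 0.18])
    print(np.linalg.norm(a - b), poincare_distance(a, b))   # ~0.08 vs ~0.86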
Experimental Setup
Data set: Our data set consisted of more than one million products in aggregate from Rakuten, a large e-commerce platform in Japan, focusing on four major product categories which we call root-level genres. Our task, a multi-class classification problem, was to predict the leaf-level product categories from their Japanese titles. Further details of our data set are shown in Table 2.

Table 2: Data set details per root-level genre.
Root genre              | Class size | Train size | Dev size | Test size | Mean words/title
Catalog Gifts & Tickets | 29         | 11,369     | 1,281    | 559       | 31
Beverages               | 32         | 205,107    | 22,805   | 10,315    | 21
Appliances              | 286        | 399,584    | 44,529   | 18,478    | 20
Men's Fashion           | 71         | 593,126    | 65,939   | 43,243    | 23
Evaluation metric: We used the macro-averaged F-score F to evaluate model performance. This is defined in terms of the per-class F-score F_k as

F = (1/K) Σ_{k=1}^{K} F_k,  with  F_k = 2 P_k R_k / (P_k + R_k),

where K is the total number of classes, and P_k and R_k are the precision and recall for class k.
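For reference, a straightforward implementation of this metric (our own sketch, computing the unweighted mean of the per-class F-scores) is:

    import numpy as np

    def macro_f_score(y_true, y_pred, K):
        # Unweighted mean of the per-class F-scores F_k.
        f_scores = []
        for k in range(K):
            tp = np.sum((y_pred == k) & (y_true == k))
            fp = np.sum((y_pred == k) & (y_true != k))
            fn = np.sum((y_pred != k) & (y_true == k))
            p_k = tp / (tp + fp) if (tp + fp) > 0 else 0.0
            r_k = tp / (tp + fn) if (tp + fn) > 0 else 0.0
            f_k = 2 * p_k * r_k / (p_k + r_k) if (p_k + r_k) > 0 else 0.0
            f_scores.append(f_k)
        return np.mean(f_scores)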
Pre-trained embedding methods: We tested the following three methods: • All genre: Word embedding pre-trained on all of the data across different root-level genres; for the label embedding, the average of the word embedding from all word tokens in a label is used to initialize the label embedding C and this is further updated in the model training process.
• Genre specific: Word embedding pre-trained from data specific to each root-level genre; label embeddings were obtained similarly to the all-genre method.
• Poincaré: Label embedding pre-trained on the Poincaré ball taking into account the full hierarchical taxonomy tree.
Models: We compared a number of variants of LEAM and LSAN as described below.
• LSAN Poincaré : LSAN using genre-specific pre-trained word embeddings for the titles and pre-trained Poincaré embeddings for the labels.
Experimental parameters: Our models were implemented in TensorFlow 2.3 using a GPU for training and evaluation. Since Japanese text does not have spaces to indicate individual words, tokenization was performed with MeCab (https://taku910.github.io/mecab/), an open-source Japanese part-of-speech and morphological analyzer based on conditional random fields (CRF). Once the text was tokenized, we fixed our input length to L = 60 words by truncating the title if it was longer than L and zero-padding the title if it was shorter than L. If a word appeared less than three times, it was discarded and replaced with an out-of-vocabulary token. Pre-trained word embeddings of dimension D = 100 were obtained, using just the product titles, with fastText, which uses a skip-gram model with bag-of-character n-grams (Bojanowski et al., 2016). No external pre-trained embeddings were used. After initialization of word and label embeddings with pre-trained values, they were jointly trained with the remaining parameters of the model.
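A sketch of this preprocessing using the mecab-python3 and gensim (>= 4.0) packages; the file path is a placeholder, and the original work may have used the fastText toolkit directly rather than gensim:

    import MeCab                          # mecab-python3 wrapper around MeCab
    from gensim.models import FastText    # gensim >= 4.0

    tagger = MeCab.Tagger("-Owakati")     # "wakati" output: tokens separated by spaces

    def tokenize(title):
        return tagger.parse(title).strip().split()

    # Pre-train 100-dimensional skip-gram embeddings with character n-grams on the
    # tokenized product titles only ("titles.txt" is a placeholder path).
    sentences = [tokenize(line) for line in open("titles.txt", encoding="utf-8")]
    ft = FastText(sentences, vector_size=100, sg=1, min_count=3)

    # For model input, titles are then truncated or zero-padded to L = 60 tokens,
    # and words seen fewer than three times are mapped to an out-of-vocabulary token.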
For Poincaré embedding of labels, we used an embedding dimension of 300. Pre-trained Poincaré embeddings of labels were obtained by representing the genre taxonomy tree as (child, parent) pairs and minimizing a loss function which depends only on inter-genre distances as defined in Eq. 10 (Nickel and Kiela, 2017). These pre-trained Poincaré label embeddings were used to initialize the label embeddings in LSAN but during training were allowed to vary according to the standard loss optimization process in Euclidean space.
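One way to obtain such pre-trained label embeddings is gensim's PoincareModel, which consumes exactly this kind of (child, parent) relation list; the relations shown below are illustrative stand-ins for the actual Rakuten taxonomy:

    from gensim.models.poincare import PoincareModel

    # The taxonomy tree is expressed as (child, parent) label pairs; the relations
    # below are illustrative, not taken from the actual Rakuten taxonomy.
    relations = [("Japanese tea", "Tea"), ("Chinese tea", "Tea"), ("Tea", "Beverages")]

    model = PoincareModel(relations, size=300)   # 300-dimensional Poincaré ball
    model.train(epochs=50)                       # Riemannian optimization of a d(u, v)-based loss

    label_vector = model.kv["Japanese tea"]      # used to initialize the label embeddings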
For LEAM, we used a 1D convolution window size of 5. For LSAN, we set d_a = 50, and when we experimented with the Poincaré embedding we set the LSTM hidden state dimension P = 300 to match the Poincaré embedding dimension.
The models were trained by minimizing the cross-entropy loss function using the Adam optimizer with an initial learning rate of 0.001 (Kingma and Ba, 2015). We used early stopping with a patience of 10 to obtain the final models.
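A sketch of this training configuration in tf.keras, reusing the model from the earlier pipeline sketch and assuming train_ds and dev_ds are tf.data datasets of (token-id, label) batches; the monitored metric and the epoch cap are our choices, not stated in the paper:

    import tensorflow as tf

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                                  restore_best_weights=True)

    # The epoch cap is generous on purpose; early stopping decides when to stop.
    model.fit(train_ds, validation_data=dev_ds, epochs=200, callbacks=[early_stop])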
Results and Discussions
Impact of label attention: We examined the impact of label attention by comparing performance without and with label attention for LEAM and LSAN for each of the four root-level genres using all-genre pre-trained word embeddings. The result is shown in Table 3. For LEAM, we do not observe consistent improvements by including the label attention component, contrary to what was previously reported on standard text classification data sets (Wang et al., 2018). On the other hand for LSAN we do observe consistent improvements over all root-level genres by including the label attention component of the model. Since we did not observe a consistent improvement for LEAM in using label attention, for the remainder of this section we focus on variations of LSAN.
Impact of different pre-trained embeddings: We next evaluated the impact of using different pretrained embeddings for the title embeddings as well as the label embeddings for each of the four root-level genres. This is shown in Table 4. We observed that different pre-trained embeddings can consistently have a significant effect on model performance. In particular, using genre-specific embeddings outperformed all-genre embeddings for all genres. This is particularly notable for the smallest genre where we used more than 10 times the data to obtain the all-genre embeddings.
We believe this is because words that occur in the same root-level genre will tend to be embedded closer to each other in the full embedding space, which then makes it more difficult for the label attention to distinguish between different but similar labels such as "Japanese tea" and "Chinese tea." By using pre-trained embeddings obtained from specific genres, the embeddings become spaced farther apart and therefore the label attention is able to better distinguish labels with similar names. Poincaré embeddings take this further by requiring all leaf-genre labels to be spaced far apart from each other in the embedding space, and our results show that this leads to the best model performance. This supports our hypothesis that the distance between labels in the label embedding space is an important factor in ensuring that label attention improves model performance.
Compared to models using only the product titles, we see that models using label-guided learning can significantly improve the F-score. In particular, LSAN using a Poincaré label embedding shows the following F-score increases compared to LSAN base: 19.7% for "Catalog Gifts & Tickets," 3.0% for "Beverages," 3.4% for "Appliances," and 3.7% for "Men's Fashion." Note that the largest increase was achieved on the genre with the fewest training instances.
Conclusions
Since 2018, there has been increasing interest in the NLP field in using the semantic information of class labels to further improve text classification performance. For the item categorization task in e-commerce, a taxonomy organized in a hierarchical structure already contains rich meaning and provides an ideal opportunity to evaluate the impact of label-guided learning. In this paper, we used real industry data from Rakuten, a leading Japanese e-commerce platform, to evaluate the benefits of label-guided learning.
Our experiments showed that LSAN is superior to LEAM because of its use of context encoding and its adaptive combination of both self- and label-attention. We also found that using genre-specific pre-trained embeddings led to better model performance than pre-trained embeddings obtained from all product genres. This is likely because pre-training on specific genres allows the embedding to focus on differences between similar genres, and the label embeddings are able to take advantage of this. Finally, we showed that using hyperbolic embedding, more specifically Poincaré embedding, can improve model performance further by ensuring that all class labels are sufficiently separated to allow label-guided learning to work well.
One possible limitation of our current work is that although the label embedding is initialized using a hyperbolic embedding, the rest of the training process proceeds in Euclidean space. Future work could explore the possibility of training the entire model in hyperbolic space. Another direction is to incorporate the label-attention mechanism into the BERT model (Devlin et al., 2019), which has proven to be a powerful approach to text encoding. In addition, more advanced approaches to obtaining better representations of labels on top of our existing approach of using word tokens in labels could be explored.
|
v3-fos-license
|
2022-12-07T16:36:51.235Z
|
2023-01-01T00:00:00.000
|
254345790
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://ieeexplore.ieee.org/ielx7/16/10004030/09969996.pdf",
"pdf_hash": "a58e6ffc2534163b9317861cb97a8247c20e5057",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43069",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"sha1": "4f65750acef733add5974c2cf45e2836be6274c5",
"year": 2023
}
|
pes2o/s2orc
|
Impact of Inherent Design Limitations for Cu–Sn SLID Microbumps on Its Electromigration Reliability for 3D ICs
Continuous scaling of package architectures requires small-volume and high-density microbumps in 3D stacking, which often results in solders fully transforming to intermetallic compounds (IMCs). Cu–Sn solid–liquid interdiffusion (SLID) bonding is an attractive technology where the μbumps are fully composed of IMCs. In this work, test structures made up of Cu3Sn IMC μbumps with lateral dimensions of 25 μm × 25 μm and 50 μm × 50 μm, respectively, were manufactured on a pair of 4-inch Si wafers, demonstrating wafer-level bonding capability. Electromigration (EM) tests were performed under accelerated conditions at a temperature of 150 °C for various current densities ranging from ≈2 × 10^4 to 1 × 10^5 A/cm^2. Scanning electron microscopy (SEM) and elemental dispersive spectroscopy (EDS) were employed to characterize the as-fabricated test structures. Due to Sn squeeze-out, Cu3Sn was formed at an undesired location on the upper Cu trace. Both nondestructive [lock-in thermography (LiT)] and destructive techniques were employed to analyze the failure locations after the EM tests. It was observed that the likely failure spots are the current crowding zones along the interconnects in 3D architectures, which are aggravated by the formation of Cu3Sn in undesirable locations. Thermal runaway was observed even in Cu3Sn, which has been shown to be EM-resistant in the past, thus underlining inherent design issues of μbumps utilizing SLID technology.
I. INTRODUCTION
The 3D stacking of discrete chips with different functionalities is a key requirement for advanced packaging solutions for the realization of smart systems, high-performance computing systems, the internet of things (IoT), or "More than Moore" technologies [1], [2]. Often, the requirement is for high-density, fine-pitch, and small-volume interconnects to achieve power-efficient, high-bandwidth, low-latency, and low-system-cost 3D heterogeneous packaging technologies [2]. This places stringent reliability requirements on the interconnects. One of the main failure mechanisms of ultra-fine interconnects is electromigration (EM), which occurs due to momentum transfer from moving electrons to the metallic atoms under the influence of an applied electric field [1], [3]. Moreover, due to the inherent complexities in 3D architectures, such as current crowding at turns, the combined effect of Joule heating and EM has been identified as a dominant failure mechanism in 3D ICs [3].
Small-volume interconnections (diameter <100 μm) mostly rely on flip-chip (FC) bumping technology, also known as the "workhorse for advanced packaging solutions" [1]. FC bumps incorporate a solder layer, such as a SnAgCu (SAC) alloy, placed in between metallic contacts, Fig. 1(a). With continuous shrinking of the solder volume for high-density and fine-pitch interconnects, a large volume of the solder μbumps gets transformed into intermetallic compounds (IMCs), Fig. 1(b) [4]. These IMCs (hard, high Young's modulus) then not only dominate the mechanical properties of the solders (soft, low Young's modulus), but also degrade their EM reliability by offering additional flux divergence paths within the μbump at the solder/IMC interface [5]. Flux divergence in a layer occurs due to different mass diffusivities across layers with different material properties, and such interfaces are the sites of EM failure at high current densities. In contrast, in solid–liquid interdiffusion (SLID) bonding technology, the interconnect formation depends on the complete formation of IMCs, Fig. 1(c). These IMCs are metallic interconnects owing to their low resistivity. Moreover, it has been shown that full IMC μbumps offer better EM resistance compared with solder μbumps [4], [6]. The critical product, which is a measure of the resistance to EM, is reported to be larger for IMC-based μbumps than for soft Sn-based solders [7]. Although SLID utilizing various metal combinations has been demonstrated, such as Au–Sn, Ag–Sn, Cu–In, Cu–Sn, and Ni–Sn–Cu, the most extensively researched is the Cu–Sn SLID system [8], due to its low cost, simple processing steps, and easy integration with Cu through-silicon vias (TSVs) in the system. Here, the Sn layer is equivalent to the solder layer in FC bumps, sandwiched between Cu layers. In the Cu–Sn SLID bonding process, the low-melting-point (MP) metal (Sn, with MP T_L) is deposited on the higher-MP metal (Cu, with MP T_H). The bonding occurs at a temperature (≈300 °C) greater than T_L. As a result, Sn melts with subsequent Cu dissolution and formation of IMCs. At the typical bonding temperature of ≈300 °C, the IMCs that can form are Cu6Sn5 (MP 415 °C) and Cu3Sn (MP 676 °C) [8]. Cu3Sn is the thermodynamically stable phase, which eventually forms throughout the bond, after which no more Cu is consumed. Since reduction in the size of FC μbumps (diameter <30 μm) would often result in the complete formation of IMCs, such μbumps will ultimately resemble SLID μbumps, Fig. 1(c) [4]. Therefore, EM studies should be carried out on SLID IMC-based μbumps to examine not only the failures at lower current densities but also the catastrophic failures at higher current densities, to gain insights to avoid such failures in real devices.
Limited studies are present in the literature on the EM reliability of fully Cu3Sn IMC μbumps. A 20-nm Cu3Sn IMC layer on a Cu interconnect has been shown to block surface diffusion paths in Cu, thereby enhancing its EM reliability [9]. Moreover, a larger driving force is required to dissociate Cu or Sn atoms from the Cu3Sn intermetallic layer [9]. Labie et al. [10] demonstrated that fully formed IMC μbumps outperform standard solder bumps in reliability. In other work, Labie et al. [6] compared the EM reliability of two Cu/Sn/Cu samples manufactured with 3.5- and 8-μm thicknesses of Sn. The IMCs present were Cu6Sn5 (at the middle of the μbump) sandwiched between Cu3Sn, which was then connected to the Cu under-bump metallization (UBM). At the stringent test conditions of 1.1 × 10^5 A/cm^2 at 200 °C, the sample with the 8-μm Sn layer survived the tests for the initial 200 h, whereas the other one (3.5 μm of Sn) survived for more than 1000 h with no failures reported [6]. The failure in the thick Sn samples was attributed not to EM-induced damage, but to Kirkendall void formation at the Cu/Cu3Sn interface when the Cu UBM is fully consumed. Chen et al. [4] performed EM tests at a current density of 2.1 × 10^5 A/cm^2 at 180 °C on a full IMC microjoint, and no EM-induced damage was observed for 5000 h of stressing. A negligible resistance increase (4%) after the EM test in one of the samples was attributed to damage in the Al trace connecting the μbumps. Wang et al. [11] performed EM tests at 150 °C on Cu-Cu3Sn-Cu microbumps manufactured with a solid-state diffusion bonding process. For a current density of 5 × 10^4 A/cm^2, the resistance of the bumps was stable up to 140 h, but it showed an increase when the current density was increased to 1 × 10^5 A/cm^2. The resistance increase was attributed to the migration of the copper layer, but it was not clarified how that increases the resistance. In other work, IMC bumps were EM tested at a current density of 4 × 10^5 A/cm^2 at 200 °C, and a resistance increase of 20% was measured within 1000 h of stressing time [7]. However, the morphology of the IMC contact remained stable, and the failure was related to the formation of Cu-Al IMCs at the lower Al level. In all the above studies, the EM resistance of IMC μbumps was demonstrated to exceed that of solder μbumps. Also, the failures in the IMC μbumps were attributed to different mechanisms, mostly damage in the metallization contact layers, which are the current crowding locations [4], [6], [7].
Current crowding locations in 3D ICs have been shown to be major failure spots where thermal runaway problems typically arise as a combined effect of Joule heating and EM [3]. Thermal runaway resulting in local melting at current crowding locations has been reported earlier for eutectic PbSn-based solder [12]. In this work, test structures were fabricated by demonstrating wafer-level SLID bonding to study the reliability of Cu3Sn IMCs with Cu at both the top and bottom layers. The EM tests were conducted for various current densities at a temperature of 150 °C. Due to the combined effect of Joule heating and EM, thermal runaway was observed even in Cu3Sn, with its high MP of 676 °C, which had formed at the upper Cu trace because of squeeze-out and subsequent reaction of Sn during bonding. This work examines this new failure mechanism, not reported before in Cu3Sn, which forms at such undesired locations due to Sn squeeze-out and which could easily be aggravated in small-volume, fine-pitch SLID μbumps. This emphasizes the inherent limitations in SLID technology and the related design considerations, for which the risk is rather underestimated.
A. Design of Test Structures
Various test structures, such as two-bump, daisy-chain, and Kelvin structures, were incorporated in the 4-inch mask process. The μbumps were designed to be square in shape with lateral dimensions ranging from 10 to 100 μm. A two-bump test structure was employed for this study. The advantage of this test structure is that the current flow both from the top to the bottom chip and from the bottom to the top chip can be investigated with a focus on just two μbumps. Fig. 2(a) shows the mask layout of the two-bump test structure along with the dimensions of the top and bottom chips. The bottom chip dimensions were 6 mm × 6 mm, and the top chip dimensions were 4 mm × 4 mm. The scribe lines used to dice the bonded chips are shown in red (partial cut) and blue (through cut). The partial cuts denote the locations where only the top chip was diced to expose the Cu contact pads on the bottom chip for probing. The through cut denotes the locations where both top and bottom chips were diced. An array of support μbumps was also provided to mitigate the stress formed during the dicing process. Fig. 2(b) and (c) shows the 3D schematic of the bonded chip, highlighting the device μbumps, support μbumps, and top and bottom Cu traces. In this work, the two-bump test structures investigated were of two lateral dimensions, i.e., 25 and 50 μm.
B. Device Fabrication
The fabrication of the test structures is a four-mask process. It starts with a pair of 4-inch double-side polished (DSP) (100)-oriented Si wafers. In the first step, back-side patterning was carried out on the wafer pair, wherein the alignment marks and scribe lines were patterned using optical lithography and etched with a reactive ion etching (RIE) tool with SF6 as the etching gas for both the bottom and top wafers, Fig. 3(a). A 15-nm titanium tungsten (TiW) adhesion layer and a 100-nm Cu seed layer were then sputter deposited on the front side of both wafers, Fig. 3(b) and (c). The front side of the wafer pair was then patterned to form the traces, contact pads, and support μbumps using an AZ15nXT (450 CPS) negative photoresist. The patterned wafer pair was treated with an oxygen plasma with an O2 flow rate of 30 mL/min and an RF power of 100 W for 3 min to improve the wettability of the surface before electroplating. A 1-μm Cu layer was then electroplated at a current density of 15 mA/cm^2 with NB SEMIPLATE CU 100, after which the resist was stripped, Fig. 3(d). Then, the device μbumps and the support μbumps were subsequently patterned using the same photoresist and treated with the oxygen plasma as described before. Subsequently, 4 μm of Cu (NB SEMIPLATE CU 100) and 2.5 μm of Sn (NB SEMIPLATE SN 100) were electroplated at current densities of 15 and 10 mA/cm^2, respectively, Fig. 3(e) and (f). After every electroplating step, the thickness was confirmed by a contact profilometer at five different locations across the wafer. The final thickness of the electrodeposited Cu and Sn was measured to be Cu: 5.1 ± 0.3 μm and Sn: 2.3 ± 0.2 μm for the bottom wafer and Cu: 4.9 ± 0.2 μm and Sn: 2.5 ± 0.3 μm for the top wafer.
After stripping the resist, the final patterning was done with the photoresist to protect the Cu traces and device μbumps during the etching of the Cu seed layer and the TiW adhesion layer. Cu was etched in a commercial etchant (Cu etch 150, purchased from NB Technologies GmbH), and TiW was etched in an H2O2 solution heated to 60 °C, Fig. 3(g). The etching of the Cu and TiW layers was visually confirmed under an optical microscope before stripping the resist. Finally, the wafer bonding process was carried out in an Applied Microengineering Limited (AML) wafer bonder. The wafers were mounted on the top and bottom platens, and the chamber was pumped down to ≈1e−6 bar. The wafers were preheated to 150 °C before bringing them into contact. After alignment of the wafers, a force of 8 kN (≈20 MPa, total bonding area 4 cm^2) was applied on the platens, and the temperature was ramped up to 320 °C at a ramp-up rate of 10 °C/min. The bonding was then carried out for a 1-h duration, after which the temperature was ramped down. Finally, the bonded wafer pairs were diced along the through-cut and partial-cut scribe lines on a DAD3220 dicer tool from Disco, Fig. 3(h). Fig. 3(i) shows the schematic of the bond line, Cu-Cu3Sn-Cu. Fig. 3(j) shows the platen force and temperatures versus time for the wafer bonding process. A good match was observed between the temperatures of the upper and lower platens. Fig. 3(k) shows the scanning electron microscopy (SEM) image of the final fabricated chip.
C. Nondestructive Analysis
The nondestructive analysis of the chips was carried out with two techniques. A Phoenix GE Nanomex with a minimum detectability of 200 nm was employed for X-ray imaging of the chips. A Sentris system from Optotherm Inc. was employed for lock-in thermography (LiT) to locate the failure zones in the EM-stressed chips.
D. Destructive Analysis
As can be seen from Fig. 2, the device bumps are at the middle of the chip. To access them for cross-sectional imaging, the chips were first cut into two parts along a horizontal cutline in the vicinity of the device μbumps (≈300 μm away) with a femtosecond laser micromachining tool. Femtosecond lasers are now widely used for destructive analysis, as they generate negligible laser-induced damage to the samples [13]. The cut chips were then molded into epoxy and cured for the subsequent grinding and polishing steps, which were carried out using standard metallographic methods.
E. SEM Characterization
After sample preparation, the initial cross-sectional imaging of the support bumps and the elemental dispersive spectroscopy (EDS) point analysis were carried out with a JEOL JSM-6335F field-emission SEM (FESEM) equipped with an Oxford Instruments INCA X-sight EDS detector. After the EM tests, the cross-sectional imaging and in-depth analysis of the test structures were carried out with a dual-beam focused ion beam (FIB)-SEM JEOL JIB-4700F equipped with an Oxford Instruments Ultim Max 100 EDS detector.
F. EM Tests
Before the EM tests, a current-voltage (I-V) sweep analysis was performed on the as-fabricated chips to assess the linear resistance of the bumps. The I-V tests were conducted with an Agilent B1500 semiconductor parameter analyzer. A voltage sweep from −0.5 to 0.5 V was performed, and the corresponding current values were recorded. Subsequently, EM tests were performed on a thermal chuck at 150 °C with a Keithley 2231A-30-3 three-channel dc power supply, with nominal current densities across the μbumps ranging from 2 × 10^4 to 1 × 10^5 A/cm^2. The current densities were calculated based on the ideal lateral dimensions of the μbumps (25 μm × 25 μm and 50 μm × 50 μm).
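A quick check (ours, not from the paper) of how these nominal current densities follow from the ideal bump dimensions:

    def current_density(current_a, side_um):
        # Nominal current density j = I / A for a square bump of the given side length.
        area_cm2 = (side_um * 1e-4) ** 2       # side in cm, squared
        return current_a / area_cm2            # A/cm^2

    print(current_density(0.5, 25))   # ~8e4 A/cm^2: 25-um bump at 0.5 A
    print(current_density(0.5, 50))   # ~2e4 A/cm^2: 50-um bump at 0.5 A
    print(current_density(1.5, 50))   # ~6e4 A/cm^2: 50-um bump at 1.5 A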
III. RESULTS AND DISCUSSION
A. SEM Cross-Sectional Imaging

Fig. 4(a) and (b) shows the cross-sectional SEM images of the support μbumps of the 25- and 50-μm test structures, respectively. The EDS point analysis of the μbumps shows that Cu3Sn was formed in the bond line, with atomic percentages of Cu and Sn of ≈73.8% ± 0.3% and ≈26.2% ± 0.3%, respectively. Moreover, in the support μbumps from the 50-μm test structures, a small amount of Cu6Sn5 was observed, Fig. 4(b). This was also confirmed by the EDS point analysis, with atomic percentages of Cu and Sn of ≈58.0% ± 1.8% and ≈42.0% ± 1.8%, respectively. However, Cu6Sn5 was not observed in the support μbumps from the 25-μm test structures.
Fig. 4(c) and (e) shows the SEM images of the device μbumps from the 25- and 50-μm test structures, respectively. Cu3Sn was also observed to have formed on the top Cu trace, which is due to the squeeze-out of liquid Sn during the bonding process [shown in Fig. 4(c) (red circle)]. Ideally, the upper Cu trace should connect the Cu pads of the two μbumps, as shown in the schematic of Fig. 3(i). Fig. 4(d) and (f) shows the EDS elemental mapping of Cu, Sn, and Si on the test structures. Fig. 4(g) and (h) shows the atomic percentage plots across the cutline shown in the inset, respectively. As can be seen, full Cu3Sn was formed in the bond line, and the atomic percentages of Cu and Sn were found to be ≈71.1% ± 1.8% and ≈27.2% ± 1.9%, respectively. No Cu6Sn5 was observed in the device μbumps of the test structures under consideration. The height of all the μbumps was measured to be ≈11 μm. The total thickness of the electroplated Cu and Sn stack from the top and bottom wafers was ≈15 μm, so the reduction in thickness of the final bond line is attributed to the squeeze-out of liquid Sn. Moreover, the formation of voids could be seen in the test structures, concentrated mostly at the Cu/Cu3Sn interface, which is widely reported to result from the interplay of various parameters, such as Kirkendall voiding and the incorporation of impurities in the electroplated structures [14], [15].
B. I-V Sweep and EM Tests
Fig. 5(a) and (b) shows the linear I-V characteristics of the two-bump test structures measured before the EM tests. The resistance of the two-bump test structures varied from 6 to 7 Ω and from 3 to 4 Ω for the 25- and 50-μm test structures, respectively. Fig. 5(c) shows the EM test of a 25-μm test structure (25-μm TS1) conducted on a thermal chuck at 150 °C at a current density of ≈8 × 10^4 A/cm^2 (current value 0.5 A). The resistance was stable for ≈63 h [time to failure (TTF)], after which there was a sudden jump in the resistance value, which was then stable for a further 30 h. Fig. 5(d) shows the EM test result of another 25-μm test structure (25-μm TS2) conducted at 150 °C and at a current density of ≈1 × 10^5 A/cm^2. In this case, the resistance was stable for ≈105 h before the failure.
For the 50-μm test structure (50-μm TS), at a current level of 0.5 A, the current density corresponds to ≈2 × 10^4 A/cm^2. At this current density, the resistance was observed to be stable for ≈336 h (approximately two weeks), and no failure was recorded, Fig. 5(e) and (f). The current density was then increased to ≈4 × 10^4 A/cm^2 and monitored for ≈40 min, during which the resistance was again stable. The small increase in the base resistance with increasing current could be attributed to Joule heating, which increased the measured resistance value. Subsequently, the current density was increased to ≈6 × 10^4 A/cm^2 (current 1.5 A), and the resistance was observed to gradually increase, after which the test was terminated.
C. Failure Analysis

1) Nondestructive Failure Analysis: Two techniques were employed for the nondestructive analysis of the failure spots in the EM-tested samples: X-ray imaging and LiT. X-ray imaging of the chips is an effective way of examining the μbumps and checking the alignment of the chips. Fig. 6(a) shows the image of an entire chip, where the device μbumps and support μbumps are marked. The alignment marks at the top corners (left and right) demonstrate good alignment of the top and bottom chips. Fig. 6(b) shows the zoomed-in image of the device μbumps. Two limitations were recognized: 1) small size: the size of the bumps was ≈25 μm × 25 μm × 11 μm, which is too small for effective analysis; and 2) the device μbumps were surrounded by the support μbumps, which makes the full 3D image construction of the device μbumps challenging. On the other hand, LiT demonstrated effectiveness in locating the failure spots. The applied voltage during the LiT tests was set to 8.5 V at a frequency of 2.5 Hz, and the thermal image was acquired after 46 cycles. Fig. 6(c) shows the infrared (IR) thermal image of the EM-stressed sample for the 25-μm test structure (25-μm TS1), of which the EM test result is shown in Fig. 5(c). The hot spot corresponding to the probable failure location near the device μbumps is clearly visible.
2) Destructive Failure Analysis (FIB): A destructive analysis was carried out to explore the probable failure locations in detail. Fig. 7(a) and (b) shows the SEM image and elemental EDS mapping of the device μbumps from 25-μm TS1, which was EM tested [Fig. 5(c)] and on which the LiT analysis was performed [Fig. 6(c)]. The Cu traces were found to be delaminated near the location where the hot spot was observed. Due to the delamination, the dissipation of Joule heat is affected, which results in hot spot formation adjacent to those locations. On the other hand, accumulation of Cu was observed on the Cu trace near the right bump, Fig. 7(a). Fig. 7(c) shows the EDS line scan generated from the cutline shown in the inset. From a closer analysis of the line scans, diffusion of Sn into the Cu trace due to EM could be suspected. The diffusion of Sn in the direction of the electron current at higher temperatures has also been widely reported in the literature for solder bumps [16]. Fig. 7(d) and (e) shows the SEM image and elemental EDS mapping of the 25-μm TS2 sample, which was tested at a current density of ≈1 × 10^5 A/cm^2 [Fig. 5(d)]. As can be seen from the elemental maps, complete burnout and melting of the μbump and silicon were observed in the right bump. Also, delamination [similar to Fig. 7(a)] and cracking of the upper Cu trace were observed near the left bump. However, no detailed information could be extracted regarding the initiation and mechanism of failure, as the tests continued for almost ≈55 h after the failure [Fig. 5(d)].

Fig. 8(a) shows the SEM image of the 50-μm TS, for which the EM test is shown in Fig. 5(e) and (f). The EM tests were terminated when a sudden increase in the resistance was observed at a current density of ≈6 × 10^4 A/cm^2. Fig. 8(b) and (c) shows the zoomed-in SEM image of the right bump and the corresponding elemental EDS maps, respectively. No failure spots were observed at this location. In contrast to the 25-μm test structures [Fig. 7(a) and (d)], the right bump is fully intact. This could be due to the lower current density (even at the maximum of ≈6 × 10^4 A/cm^2) resulting from the 4× increase in cross-sectional area compared with the 25-μm test structures. Fig. 8(d) and (e) shows the SEM image and the elemental maps of the left bump, respectively. From the mapping result, a segregation of Cu and Sn could be observed in the Cu3Sn layer near the upper Cu trace, which indicates local melting, as no thermodynamically stable phase could be identified from the Cu-Sn phase diagram with respect to the measured Cu-Sn atomic percentages. Fig. 8(f) shows the atomic percentage plots versus distance across the yellow cutline shown in Fig. 8(d). At the middle of the bump, the composition indicates Cu3Sn, but it clearly deviates elsewhere, showing irregular Cu- and Sn-rich regions. Since the region that was originally Cu3Sn is supposed to be EM-resistant and thermally stable up to 676 °C, this kind of failure has not been reported before.
Then, FIB milling was carried out to examine the region underneath the surface of the failure location. Fig. 9(a)-(d) shows the SEM image after the FIB cut and the corresponding elemental map data for Cu, Sn, and Si. The Cu- and Sn-rich regions also penetrate beneath the surface. Interestingly, silicon was also found to be incorporated in traces at the failure location. Fig. 9(e) shows the EDS line scan across the yellow cutline shown in Fig. 9(a), which confirmed the presence of silicon. Although the current density across the 50-μm bumps is lower than that across the 25-μm bumps (4× cross-sectional area difference), the current density across the upper Cu trace of the 50-μm test structure is higher than that across the Cu trace of the 25-μm test structure (2× cross-sectional area difference) for the maximum current of 1.5 A. Furthermore, even though the EM tests were conducted at 150 °C, the actual temperature near the μbumps at the current crowding zone could be much higher due to Joule heating [3]. As a result, the resistance at the current crowding zone would further increase, ultimately resulting in thermal runaway. This would then result in the local melting of Cu3Sn and dissolution of Si in the melt, with subsequent solidification after the tests are terminated.
To assess the current density levels, the 50-μm two-bump test structure was constructed as a finite element (FE) model in COMSOL, accounting for the Cu3Sn formation due to Sn squeeze-out across the Cu bumps, Fig. 9(f). Here, the Cu3Sn formation fully encloses the Cu bump to mimic the experimental observation. The electrical conductivities of Cu and Cu3Sn were taken as 58.1 × 10^6 and 11.2 × 10^6 S/m, respectively. Fig. 9(g) shows the current density distribution for a current of 1 A across a cut plane through the center of the two-bump test structure. The current density is an order of magnitude higher near the location where the failures were observed as compared with the current density in the μbumps. Moreover, the resistivity of Cu3Sn is also higher than that of Cu, which further worsens the scenario due to Joule heating at the current crowding locations, resulting in thermal runaway. This shows that even though Cu3Sn is thermally stable up to 676 °C, such failures can still occur, underlining the importance of proper design of SLID μbumps in 3D ICs. Specifically, care must be taken to minimize Sn squeeze-out during the bonding process to ultimately prevent Cu3Sn formation in unwanted locations.
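A back-of-the-envelope comparison (ours, not the paper's COMSOL model) of how much more Joule heat is generated in Cu3Sn than in Cu for the same local current density, using the conductivities quoted above:

    sigma_cu, sigma_cu3sn = 58.1e6, 11.2e6     # S/m, conductivities used in the FE model

    j = 6e4 * 1e4                               # 6 x 10^4 A/cm^2 expressed in A/m^2
    q_cu = j ** 2 / sigma_cu                    # volumetric Joule heating in Cu, W/m^3
    q_cu3sn = j ** 2 / sigma_cu3sn              # volumetric Joule heating in Cu3Sn, W/m^3
    print(q_cu3sn / q_cu)                       # ~5.2x more heat generated in Cu3Sn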
IV. CONCLUSION
In this work, a wafer-level Cu-Sn SLID bonding process was demonstrated by incorporating various test structures. Due to Sn squeeze-out, Cu3Sn was observed to form at unwanted locations on the upper Cu trace connecting the two-bump test structure. The test structures were tested for their EM reliability to study different failure modes. For the 25-μm test structures, two types of failures were observed. At a current density of 8 × 10^4 A/cm^2, delamination of the Cu traces was observed, whereas catastrophic failure in addition to delamination of the Cu trace was observed at the higher current density of 1 × 10^5 A/cm^2. For the 50-μm test structure, a thermal-runaway-based failure was observed in Cu3Sn at the current crowding location. Therefore, although Cu3Sn has been shown to be EM-resistant with high thermal stability, thermal runaway and catastrophic failures cannot be ruled out in Cu3Sn in 3D architectures at high current densities due to its formation at unwanted locations. This emphasizes the need for proper design of SLID μbumps in heterogeneous integration to prevent Sn squeeze-out and the formation of Cu3Sn at undesired locations. This would include design considerations to minimize Sn squeeze-out, including the following: 1) engineering the lateral dimensions of the bumps in the top and bottom wafers; 2) optimal Sn thickness; and 3) optimal bonding force. The above aspects are easy to control in chip-level bonding, but will be difficult to control in the wafer-level bonding process, where nonuniformities in the thickness of the electroplating stacks could pose problems for the overall yield. Future work will incorporate these aspects to address the inherent limitations of SLID μbumps as interconnects for enhanced EM reliability.
Fig. 2. (a) Layout of the chip with a two-bump test structure. Violet traces are the contacts on the bottom wafer, and green traces are the contacts on the top wafer. Bottom and top wafers are bonded through the bumps (black). (b) 3D schematic of the bonded chip, and (c) zoomed-in 3D schematic of the two-bump test structure.
Fig. 3. (a) Back-side patterning for scribe lines, magenta for bottom wafer (through cut) and cyan for top wafer (partial cut, only top wafer), (b) front-side Si wafer, (c) TiW/Cu deposition, (d) patterning and electrodeposition of Cu electrodes on bottom and top wafers, (e) patterning and electrodeposition of Cu bumps, (f) electrodeposition of Sn bumps, (g) etching of Cu seed layer and TiW, (h) wafer bonding and dicing, (i) cut plane across the chip showing the Cu (brown)-Cu3Sn (blue)-Cu (brown) bond line, (j) wafer bonding process profile, and (k) SEM image of the as-fabricated chip.
Fig. 4. (a) SEM image of 25-μm support bumps, (b) SEM image of 50-μm support bumps, (c) SEM image of 25-μm two-bump test structure; Cu3Sn formed due to Sn squeeze-out at the upper Cu trace is shown in the red circle, (d) EDS elemental maps of the 25-μm test structure, (e) SEM image of 50-μm two-bump test structure, (f) EDS elemental maps of the 50-μm test structure, and atomic percentage plot versus distance across the cutline shown in the inset figure for (g) the 25-μm device μbump and (h) the 50-μm device μbump.
Fig. 6. (a) 2-D X-ray image of the 25-μm TS1 chip with the device μbumps and support μbumps marked, (b) zoomed-in image of the device bumps with tilt, (c) LiT thermal image of the chip showing the hot spot, which indicates the probable failure location, and (d) schematic showing the location of the hot spot in (c).
Fig. 7. (a) SEM image of EM-tested 25-μm TS1 for a current density of 8 × 10^4 A/cm^2 with the electron current direction shown, (b) corresponding EDS elemental maps, (c) atomic percentage plot versus distance across the cutline shown in the inset, (d) SEM image of EM-tested 25-μm TS2 for a current density of 10^5 A/cm^2 with the electron current direction shown, and (e) corresponding EDS elemental maps.
Fig. 8. (a) SEM image of the EM-tested 50-μm TS with the electron current direction shown, (b) zoomed-in SEM image of the right bump and (c) corresponding EDS elemental maps, (d) zoomed-in SEM image of the left bump and (e) corresponding EDS elemental maps, and (f) atomic percentage plot versus distance across the yellow cutline shown in (d).
Fig. 9. (a) SEM image after the FIB cut of the failure location, elemental maps for (b) Cu, (c) Sn, and (d) Si, (e) atomic percentage plots across the cutline shown in Fig. 9(a), (f) model schematic of the two-bump test structure, and (g) current density distribution across a cut plane midway through the test structure.
|
v3-fos-license
|
2021-06-24T04:50:53.394Z
|
2020-09-28T00:00:00.000
|
244384971
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.14502/tekstilec2020.63.166-184",
"pdf_hash": "296a3050e1b53e6edb36040d5ee6597c394cd055",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43070",
"s2fieldsofstudy": [
"Business"
],
"sha1": "9f4c60221af7fbbcdf81c4cd760b2cccf8b9dcb3",
"year": 2020
}
|
pes2o/s2orc
|
Assessment of the Factors Affecting Apparel Pattern Grading Accuracy: Problems Identification and Recommendations
Grading is an inseparable part of producing multiple sized patterns in clothing production. From the inception of apparel manufacturing, various methods have been developed for precision pattern grading. Nevertheless, most conventional grading systems have some flaws. The objectives of this study were to analyse traditional grading systems, identify the factors responsible for pattern grading deficiencies and, finally, provide recommendations to minimise grading problems related to the use of CAD software. For the experiments, three different measurement sheets of different buyers were collected and combined into a single specification for better comparison. All garment patterns were then drawn and graded with varying parameters. Later on, measurements of the graded patterns were analysed for grading accuracy. This study presents the factors responsible for grading deficiencies and how they can be minimised for higher precision grading, for the better fitting of clothing and the prevention of garment sample rejection before bulk production.
Introduction
Today's business policy for apparel manufacturers requires quick response systems that turn out a wide variety of products to meet customers' demand. In the apparel industry, in particular, stakeholders are trying to develop their current systems for new production techniques in order to keep pace with the rapid changes in the fashion and clothing industry [1]. The garment production process is separated into four main phases: designing and clothing pattern generation, fabric spreading and cutting, sewing, and ironing and packing [2]. In order to manufacture apparel, proper sizing information is mandatory. Sizing is the process used to create a size chart of key body measurements for a range of apparel sizes [3]. For the mass production of ready-to-wear clothing, it is necessary to create all garment sizes in the size range or sizes provided in the specification sheet. However, the creation of all size patterns is cumbersome and time-consuming. Pattern grading is traditionally used to create the various sizes. Grading is a complex process used to create a complete set of patterns of the different sizes contained in the size range. This is done by creating a pattern of a selected base size and then grading it up to create the larger sizes and down to create the smaller sizes. To grade a pattern, a set of grade rules is created or grading increment values are calculated. They are then applied at the grade or cardinal points. Grade points or cardinal points are points on the perimeter of the pattern that distribute the changes in body dimensions [4]. Generally, pattern grading is done to increase or decrease the dimensions of the pattern to reproduce a complete set of patterns of different sizes in the size range to fit a group of people [5−6]. At present, with the mass customisation of apparel sizing, advanced computer technology is being used widely [7]. Primarily for quick and precise production in apparel manufacturing, flexible computer-aided manufacturing systems are being applied to apparel manufacturing processes, such as apparel pattern making, grading, and marker making [8−9]. Computer-aided pattern making and grading are based on 2D and 3D CAD technologies. Individual patterns created using basic 2D pattern technologies apply grading and alteration rules [10]. With 3D CAD technology, in contrast, 2D patterns are flattened from a 3D body model so that they reflect the human body type. However, such systems have practical limitations, including the need to build a new 3D CAD system on top of the existing apparel manufacturing process [11−13]. For that reason, 2D CAD technology is currently used in the apparel industry primarily for mass customisation. Although the 2D CAD system provides time-saving solutions, these are not free from limitations. The grade rule creation or grading increment calculation, which is used by all types of 2D apparel CAD to complete the grading process, is based on manual calculation and inputs [14]. Computerised pattern grading is the most precise and expedient method, but only when an accurate value is entered into the computer [6]. Nevertheless, there are many factors that influence grading and lead to grading deficiencies. The objectives of this study were to identify and analyse the reasons behind the inaccuracy and associated problems, while maintaining the required level of precision in garment pattern grading.
Materials
For the experiments, three different specification sheets (hereinafter: spec sheets) of different buyers were collected, combined and drawn into a single sketch of a T-shirt (Figure 1 and Table 1), including all points of measure (POM), for the sake of easy comparison. For example, the shoulder point can be calculated using three POMs in combination, if any two of "S", "SD" and "AS" are given.
Table 1: Measurement points and descriptions of all three specification sheets
Methods
The patterns of T-shirts of specifications A, B and C were drawn and graded with varying parameters. The measurements of the graded patterns were then checked for grading accuracy. The conventional grading system is based on incrementing the given apparel measurements for the different sizes using the Cartesian coordinate values of the grading increment. For example, if the high point shoulder measurement is increased by 2 cm, points H and G should increase by 2 cm in the Y direction. For T-shirt Specs A, B and C, the cardinal points, represented by A, B, C, E, G, H for the front and back and A, B, C, D, E, F, G for the sleeve, and the Cartesian coordinate values of the grading increment as (X, Y) are shown in Figure 2. The body parts of the three specification sheets have the same grading increment values despite differences in measurement location.
[Table: measurement values of the points NW, AS, S, SD, AHS, ASD, HC, HPS, SL, SO, US and SW across the five sizes of specifications A, B and C.]
In the case of the sleeve, however, it is important to match the sleeve front and back curves with the armhole front and back curves. For both Spec A and B, armhole straight is given, which is a diagonal measurement. In the case of Spec C, however, there are no diagonal measurements. The impact of diagonal measurements is thus explained further in the following sections, "Presence of diagonal measurements" and "Maintaining accuracy and matching of curve lines".
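To make the conventional procedure concrete, the following is a minimal Python sketch of Cartesian grade-rule application: each cardinal point carries an (X, Y) increment that is multiplied by the number of size steps from the base size. The point labels, coordinates and increment values are illustrative assumptions, not the actual grade rules of Specs A, B or C.

```python
# Minimal sketch of conventional Cartesian grading: each cardinal point
# carries an (x, y) grade rule that is multiplied by the number of size
# steps away from the base size. Names and values are illustrative only.

BASE_PATTERN = {            # cardinal points of the base size (cm)
    "A": (0.0, 0.0),        # e.g. high point shoulder at the zero point
    "G": (23.0, -2.0),      # e.g. shoulder point
    "H": (25.5, -24.0),     # e.g. armpit point
}

GRADE_RULES = {             # assumed per-size increment (dx, dy) in cm
    "A": (0.0, 0.0),
    "G": (0.5, 0.0),
    "H": (0.75, -1.0),
}

def grade(pattern, rules, steps):
    """Return the pattern graded `steps` sizes up (+) or down (-)."""
    graded = {}
    for name, (x, y) in pattern.items():
        dx, dy = rules[name]
        graded[name] = (x + steps * dx, y + steps * dy)
    return graded

if __name__ == "__main__":
    for steps in (-2, -1, 0, 1, 2):      # e.g. S, M, L (base), XL, XXL
        print(steps, grade(BASE_PATTERN, GRADE_RULES, steps))
```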
Presence of diagonal measurements
Some inclined or diagonal POMs (points of measure) create measurement errors in the traditional X-Y Cartesian coordinate apparel pattern grading system. In every grading textbook, different authors mention different types of shoulder seam grading [6, 15−18]. There is no consistency in how the textbook authors grade the shoulder [19]. For shoulder seam grading in the conventional method, some assumptions have been used. If the across shoulder measurement and shoulder length are given (example: Reference Spec C), the X-axis increment is the change in half across shoulder and the Y-axis increment is the change in the shoulder length measurement plus the change in half neck width. However, if shoulder length and shoulder drop are given, the X-axis increment is the change in shoulder length plus the change in half neck width and the Y-axis increment is the change in the shoulder drop. It is thus assumed that the shoulder length will increase by the amount added along the X- or Y-axis. According to geometrical rules, however, a diagonal measurement does not increase by the amount added along the X- or Y-axis. An experiment was conducted to check the effect of a diagonal measurement (e.g. shoulder length). For this experiment, the patterns of Spec A were graded using conventional Cartesian coordinate grading with the L size taken as the base size. Bye et al. (2008) [20] confirmed that size 10 (medium size) was the optimum base size for grading patterns in the size range of 6-14. Size 10 was selected because a common practice in grading is to select a size approximately in the middle of the size range to be graded. It can be concluded from Table 2 that all the horizontal and vertical line lengths match the specification because they are plotted along the X- and Y-axis respectively, as computerised grading uses Cartesian coordinates. Variations are found only in the grading of diagonal lines. Thus, diagonal measurements should be avoided as much as possible in the spec sheet because they cause grading deficiencies.
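The geometric argument can be checked with a short calculation. Assuming an illustrative shoulder line (15 cm horizontal run, 5 cm drop) and a 1 cm per-size increment applied along X only, as a conventional Cartesian grade rule would do, the diagonal shoulder length grows by less than 1 cm per size; the numbers below are assumptions chosen only to show the effect.

```python
import math

# Illustrative base shoulder line: 15 cm run, 5 cm drop.
# A Cartesian grade rule that adds 1 cm per size along X only does NOT
# add 1 cm to the diagonal length -- the deficiency described above.
run, drop = 15.0, 5.0
x_increment = 1.0                      # assumed per-size grade rule (cm)

base_len = math.hypot(run, drop)
for step in range(1, 4):               # three sizes up from the base
    graded_len = math.hypot(run + step * x_increment, drop)
    expected_len = base_len + step * x_increment   # what the spec expects
    print(f"+{step} size: graded {graded_len:.2f} cm, "
          f"expected {expected_len:.2f} cm, "
          f"error {expected_len - graded_len:.2f} cm")
```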
Maintaining accuracy and matching of curve lines
The computer uses Cartesian coordinates in which every point has X and Y values. It is therefore always a challenge to decide how much the points should move in both directions to obtain an accurate curve length. The grading of a straight line is a simple process, as a straight line is defined by two endpoints in the computer's Cartesian coordinates, each with X and Y values. It is thus possible to change the grading values (X, Y) at one or both points to get the desired length. Curve line grading, however, is a complex process. Generally, a curve line is formed by connecting several points at Cartesian coordinate locations. When grade rules are applied to the endpoints of a curved edge, the program must mathematically determine how each internal curve and control point should move. The results can distort the curve. Again, in order to construct a well-made garment, the matching seam lines should be of the same length and the shape should not be distorted in the graded pattern pieces. During the grading of a curve line, the amount of change in the X and Y directions needed to achieve the desired curve length is unknown. The grading increment must be adjusted several times until the desired curve length is achieved. For this experiment, all three spec sheets (A, B and C) were selected and graded as specified, with the L size chosen as the base size. Curve measurements are shown in Table 3. From Table 3, it can be deduced that if horizontal and vertical measurements are given, the corresponding curves automatically match each other. If, however, diagonal measurements are given, for instance armhole straight, the pattern grader has to calibrate the measurements until the front and back armhole curve lengths match the front and back sleeve curve lengths. The measurements should be checked and the grading increment adjusted until the required curve lengths are achieved.
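A hedged sketch of this adjust-and-recheck loop is given below: the armhole curve is approximated as a polyline through its control points, the sleeve-cap target length is fixed, and the extra Y increment of the armpit point is bisected until the graded curve length matches the target within a tolerance. The curve points, the target length and the function names are invented for illustration; a real CAD system works with its own curve representation.

```python
import math

def polyline_length(points):
    """Approximate curve length as the sum of straight segments."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def graded_armhole(drop):
    """Toy armhole curve through shoulder point, across-chest point and
    armpit point; only the extra armpit drop is varied."""
    shoulder = (23.0, 0.0)
    across_chest = (25.0, -8.0)
    armpit = (27.0, -12.0 - drop)
    return [shoulder, across_chest, armpit]

def fit_drop(target_length, lo=0.0, hi=3.0, tol=0.01):
    """Bisect the extra armpit drop until the armhole length matches the
    target sleeve-cap length within `tol` cm; the length grows
    monotonically as the armpit is dropped further."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        length = polyline_length(graded_armhole(mid))
        if abs(length - target_length) < tol:
            return mid
        if length < target_length:
            lo = mid            # curve too short: drop the armpit further
        else:
            hi = mid
    return (lo + hi) / 2.0

if __name__ == "__main__":
    target = 13.5   # assumed sleeve-cap curve length for the next size (cm)
    drop = fit_drop(target)
    print(f"extra armpit drop ≈ {drop:.2f} cm gives curve length "
          f"{polyline_length(graded_armhole(drop)):.2f} cm")
```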
Selection of base size in grading
If sizes are graded in jumps rather than gradually from one size to the next, some measurements often exceed the tolerance limit. The selection of the base size also has an influence on pattern grading accuracy. Basically, there are three methods of recording the growth of the pattern:
• Method 1: progressive increment of the base size (from the smallest to the largest size);
• Method 2: progressive increment or decrement of the base size to acquire all the sizes from the smallest to the largest;
• Method 3: digressive decrement of the base size to the smallest size.
After evaluating the graded measurements in Table 4, it can be deduced that horizontal and vertical measurements do not change even if the base size changes. The reason is that they are plotted along the X and Y axes of the Cartesian coordinates. However, the inclined measurements of a graded pattern are inconsistent and sometimes exceed the tolerance limit when the base size changes. Additionally, greater variations are found when the smallest or the largest size is used as the base size. So, if the middle size of the provided size chart is considered as the base size (e.g. L as the base size if the size chart contains S, M, L, XL and XXL), the errors can be minimised, as the deviations can then be distributed in both the positive and negative directions within the given tolerance. The deficiencies of inclined measurement grading can therefore be minimised by selecting the middle size as the base size. Another consideration in the selection of the base size is the presence of a breakpoint. The breakpoint of a size chart is the measurement at which the grading increment changes. For instance, if the half-chest is 46, 48, 50, 52, 55 and 58 cm respectively for six sizes, the base size should be the size whose half-chest is 52 cm, so that the measurement differences on each side of the base size remain consistent. It is recommended to grade from the middle size to all sizes to reduce measurement errors if diagonal measurements are given.
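The breakpoint rule can be expressed as a small helper. Given the half-chest values quoted above (46, 48, 50, 52, 55, 58 cm), the increment changes from 2 cm to 3 cm at 52 cm, so the size with half-chest 52 is the natural base; this is only a sketch of the heuristic described in the text, and the fallback to the middle size for uniform increments is an assumption.

```python
def breakpoint_base_size(values):
    """Return the index of the size at which the grading increment
    changes (the 'breakpoint'); if the increments never change,
    fall back to the middle size."""
    increments = [b - a for a, b in zip(values, values[1:])]
    for i in range(1, len(increments)):
        if increments[i] != increments[i - 1]:
            return i          # size after which the increment differs
    return len(values) // 2   # uniform increments: take the middle size

half_chest = [46, 48, 50, 52, 55, 58]     # cm, from the example above
base = breakpoint_base_size(half_chest)
print(f"base size index {base}, half-chest {half_chest[base]} cm")
```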
Presence of higher number of sizes
Diagonal-measurement-related grading errors increase as the number of sizes in the spec sheet increases. If the grading is done to obtain the extreme sizes, the design, drape and fit of the garment are affected [6,21,22]. Moore et al. (2001) [23] recommend that no more than five sizes (two larger, two smaller and one base size) should be graded together from the base size using a simplified grading system; otherwise the average size range would require multiple base sizes. A pattern should not be graded more than two sizes from the base size, so that the visual appearance remains unaffected [21].
Note: * = base size, black = length required, blue = exactly the same, green = within tolerance, red = over the tolerance limit; units: cm.
Experts affirm that the base size should be graded no more than two sizes before another fit model is implemented, and the closer the individual is to the fit model standard, the fewer alterations are required. Taylor and Shoben (1990) [24] argue against the 2D system of grading, stating that "fitting and balance faults will automatically occur to the graded garment range" and also that "the 2D system can be safely used for very-loose-fitting garments over a very limited size range (three sizes)". For this experiment, two spec sheets with different numbers of sizes were selected (Table 5).
After comparing Table 2 with Table 6, it can be deduced that as the number of sizes increases, the grading error increases as well. If the spec sheet contains 5 different sizes, the middle size should be selected [20]. But if there are more than 7 sizes, additional errors are generated. This is well supported by previous studies: Bye and DeLong (1994) [21] demonstrate that garment appearance and proportion are also affected when the pattern is graded more than two sizes from the base size while using standard grading practices. Moore et al. [23] recommend that no more than five sizes (two larger and two smaller) are to be graded together; the average size range would then require more than one base size. They give examples of simplified systems that include grading information for nine sizes (three smaller and five larger than the base size), which is a common practice in the apparel industry. In accordance with the aforementioned studies, some CAD personnel in the industry generally use the following work-arounds to minimise grading errors rather than rectify them. If the number of sizes exceeds 7 (e.g. a spec containing 10 different sizes), the sizes are divided into two groups of 5 each, two patterns are drawn as base sizes and each group is then graded from its own base. If the number of sizes exceeds 15, the sizes are divided into three groups and three base sizes are selected; three patterns are then drawn from the selected base sizes and graded. It should also be noted that if it is possible to eliminate all the diagonal measurements from the spec sheet, then the number of sizes in a size range does not influence the grading. A few companies within the industry fit more than one sample size, which is common practice when there are more than five garment sizes, for example size 06 to size 18 with an increment of 2.
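The industry work-around described above (dividing the sizes into groups, each with its own base size) can be sketched as follows; the grouping thresholds simply follow the rule of thumb in the text, and the example size list is an assumption.

```python
def split_size_range(sizes):
    """Split a size list into 1, 2 or 3 grading groups, each with its own
    (middle) base size, following the rule of thumb above:
    up to 7 sizes -> one group, 8-15 -> two groups, more -> three."""
    n = len(sizes)
    groups = 1 if n <= 7 else 2 if n <= 15 else 3
    chunk = -(-n // groups)                 # ceiling division
    result = []
    for start in range(0, n, chunk):
        group = sizes[start:start + chunk]
        result.append({"sizes": group, "base": group[len(group) // 2]})
    return result

sizes = ["06", "08", "10", "12", "14", "16", "18", "20", "22", "24"]
for g in split_size_range(sizes):
    print(g["base"], "<- base size for", g["sizes"])
```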
Combination of measurement points
Some lines can be drawn using different measurement combinations. For example, the shoulder line can be drawn using any two of the three measurements "shoulder length, shoulder drop and across shoulder width". It must be noted that some cardinal points of the pattern (e.g. the shoulder point) can be created using different measurement combinations. For instance, a shoulder point can be created if the spec sheet contains a horizontal-inclined (e.g. AS and S), vertical-inclined (e.g. SD and S) or horizontal-vertical (e.g. AS and SD) measurement combination. Among the three options, however, the horizontal-vertical combination is preferable during pattern making, as the measurement changes during grading are plotted in Cartesian coordinates. For this experiment, the three spec sheets A, B and C were chosen and graded from base size L (the middle size). Table 7 clearly shows that the shoulder point grading increment can be calculated without any error if a horizontal and vertical POM combination is used, as these can be plotted in the X and Y directions respectively. The inclined graded measurement errors would not generally exceed the tolerance limit when a cardinal point of a pattern (e.g. the shoulder point) is created from a horizontal-inclined (e.g. AS and S) or vertical-inclined (e.g. SD and S) measurement combination; however, better accuracy is found in the case of a horizontal-vertical combination. Horizontal and vertical POMs should be used instead of diagonal or inclined POMs to obtain the desired shape of the pattern. During spec sheet creation, spec sheet creators should thus use horizontal and vertical measurements instead of inclined measurements wherever possible.
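The three combinations can be compared numerically. The sketch below derives the shoulder point for two adjacent sizes from each combination, using assumed values for neck width, across shoulder, shoulder length and shoulder drop; the values and function names are illustrative, not taken from Specs A, B or C. The horizontal-vertical combination (AS and SD) maps directly onto the X and Y increments, while the combinations that involve the inclined S measurement only approximate them.

```python
import math

# Illustrative spec values (cm) for two adjacent sizes.
SIZES = {
    "L":  {"NW": 18, "AS": 51, "S": 17, "SD": 5},
    "XL": {"NW": 19, "AS": 54, "S": 18, "SD": 5},
}

def from_as_sd(m):
    """Horizontal-vertical combination: exact in Cartesian grading."""
    return (m["AS"] / 2, -m["SD"])

def from_as_s(m):
    """Horizontal-inclined combination: drop recovered from S and the run."""
    run = (m["AS"] - m["NW"]) / 2
    return (m["AS"] / 2, -math.sqrt(max(m["S"] ** 2 - run ** 2, 0.0)))

def from_sd_s(m):
    """Vertical-inclined combination: run recovered from S and SD."""
    run = math.sqrt(max(m["S"] ** 2 - m["SD"] ** 2, 0.0))
    return (m["NW"] / 2 + run, -m["SD"])

for name, fn in [("AS+SD", from_as_sd), ("AS+S", from_as_s), ("SD+S", from_sd_s)]:
    (x0, y0), (x1, y1) = fn(SIZES["L"]), fn(SIZES["XL"])
    print(f"{name}: shoulder-point increment dx={x1 - x0:+.2f} cm, dy={y1 - y0:+.2f} cm")
```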
Selection of zero points
The selection of a zero point is required to calculate accurate grading increment values within a minimum amount of time. First, a zero point has to be selected in order to apply grade rules or grading increment values. The values are then calculated for the different grade or cardinal points. Each pattern grading starts by identifying the grainline, the zero point of reference, and the points where increases (or decreases for smaller sizes) are to be applied. It is necessary for any grading method to establish a point of reference for each pattern piece, known as the zero point [25]. Moore et al. (2001) [23] used the centre front (and back) at the waist as the point of reference throughout their book. Vong (2011) [4] states that "the location of the zero point on the pattern may change the grade of the pattern; additional study of whether the drape of the garment changes when the zero point is moved is needed". To check the impact of zero-point selection on grading, an experiment was conducted with spec sheet B by changing the zero point as shown in Table 6, as well as in Figure 3. Based on the experiment, it is evident that the graded patterns consistently have the same measurements. It can therefore be concluded that a change in zero-point location does not impact the fitting unless the pattern is wrongly drafted. The procedure was subsequently applied to the sleeve and the result remained the same. The presence of diagonal measurements produced some miscalculations; however, these were not due to the zero-point selection. If all the diagonal measurements are avoided, as for example in Spec C, the errors can be avoided as well. Any cardinal point can be selected as the zero point; however, the calculation becomes much easier if the starting point is selected as the zero point.
Angle of measurement
Criterion 1 in the book Sizing in Clothing by Ashdown [25] states that "the measurement must be either horizontal or vertical". However, even if a measurement is neither horizontal nor vertical, the Pythagorean theorem can be used to calculate the grading increment properly; the angle is not a mandatory factor. The same book also states that "the measurement must be either horizontal or vertical - shifting and edge-changes grading techniques use grading information that is either horizontal or vertical; angled measurements could be used for proportional grading or could be divided into horizontal and vertical components, but only if the angle is known." However, even if the angle is not given, it can be calculated from the horizontal and vertical components of the measurement. Knowing the angle is not mandatory; an example is shown in Figure 4.
The angle can be calculated using the following relation: angle = arctan(vertical component / horizontal component), and the graded length of the inclined line then follows from the Pythagorean theorem. After calculation, the data shown in Table 8 were obtained.
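A small numerical check of this point, with assumed shoulder components (the runs and drops below are illustrative, not values from the spec sheets): the angle follows from the arctangent of the vertical over the horizontal component, and the diagonal length from the Pythagorean theorem, so knowing the angle in advance is indeed unnecessary.

```python
import math

# Assumed horizontal and vertical components of a shoulder seam (cm);
# the angle is derived from them, not taken from the spec sheet.
runs  = [14.0, 14.5, 15.0, 15.5, 16.0]   # horizontal component per size
drops = [5.0, 5.0, 5.0, 5.0, 5.0]        # vertical component per size

for run, drop in zip(runs, drops):
    angle = math.degrees(math.atan2(drop, run))     # shoulder slope angle
    length = math.hypot(run, drop)                  # diagonal (Pythagoras)
    print(f"run {run:.1f} cm, drop {drop:.1f} cm -> "
          f"angle {angle:.2f} deg, shoulder length {length:.2f} cm")
```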
In this way, it is not only possible to calculate the angle but also to reduce the grading errors. It must be noted that the grading should then be done manually or with CAD software that supports an actual angle grading increment (e.g. Boke CAD), rather than with the alternative reference line employed by other software such as Optitex, TUKA CAD, etc.; this is elaborated further in section 2.2.9.
If diagonal measurements, such as shoulder length or armhole straight are given, then grading anomalies can be found. So, if diagonal measurements are given along with other horizontal or vertical components, then it is possible to calculate the angle and grade them to acquire more accurate graded measurements.
Alternative reference line
Some software uses an 'alternative reference line' for grading diagonal lines, but if the angle is not constant, it cannot grade the pattern accurately. Generally, the reference line for grading is parallel to the grainline, but sometimes an alternative reference line that is not parallel to the grainline is used. Taylor and Shoben (1984), Cooklin (1990) and Mullet et al. describe grading along an alternative reference line, which is actually a form of "angle grading", and note that it will distort the across shoulder or shoulder drop measurement. It is evident from the findings in Table 9 that alternative reference line grading cannot solve the grading problem. If the angle is constant, then the use of Optitex or TUKA CAD's alternative reference line grading is recommended.
Angle grading variation
Sometimes the shoulder slope angle is not constant across all the sizes, which results in grading errors if alternative reference line grading is used. The alternative reference line approach is actually known as 'angle grading' in apparel CAD software. Angle grading varies between software packages: TUKA CAD and Optitex use an alternative reference line in angle grading, whereas Boke CAD uses the actual angle increment. Examples are shown in Figure 6. From Table 10, it is clear that actual angle grading can solve the grading problem.
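The difference between the two approaches can be illustrated with a toy shoulder point, as in the sketch below. Grading the seam length along a fixed reference line (the base-size shoulder direction) keeps the base angle for every size, while actual angle grading recomputes the point from the graded horizontal and vertical components; the coordinates diverge whenever the slope is not constant. All numbers, including the per-size increments, are assumptions for illustration only.

```python
import math

# Toy shoulder seam: the run grows 1.5 cm per size, the drop stays at 5 cm,
# so the shoulder slope angle is NOT constant across sizes.
neck_point = (8.0, 0.0)
base_run, drop = 14.5, 5.0
seam_growth = 1.0            # assumed growth of the seam length per size (cm)

base_angle = math.atan2(-drop, base_run)      # fixed reference direction
base_length = math.hypot(base_run, drop)

for step in range(0, 3):
    # (a) alternative reference line: extend along the base-size direction
    length = base_length + step * seam_growth
    alt = (neck_point[0] + length * math.cos(base_angle),
           neck_point[1] + length * math.sin(base_angle))

    # (b) actual angle grading: rebuild the point from the graded
    #     horizontal and vertical components of that size
    run = base_run + step * 1.5               # assumed per-size run increment
    act = (neck_point[0] + run, neck_point[1] - drop)

    print(f"+{step}: reference-line point ({alt[0]:.2f}, {alt[1]:.2f})  "
          f"actual-angle point ({act[0]:.2f}, {act[1]:.2f})")
```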
If the angle is not constant, then the use of Boke CAD's angle grading is advised instead of the alternative reference line grading of Optitex, TUKA CAD, etc.
Selection of grade point or absence of certain measurements
Different shaping errors (e.g. in the armhole shape curve) occur due to the absence of some measurement points. Grade points or cardinal points are those points that are present on the perimeter of the pattern and distribute the changes in body dimension [4]. Grade points are also known as cardinal points [6]. Solinger (1988) [28] states that "when grading, the 'essence' of a garment should be maintained through all sizes". Doyle and Rodgers (2003) [17] state the importance of keeping the curves of the base pattern consistent: "If the grader changes the shape of the curve, the fit of the garment changes". Taylor and Shoben (2004) [18] state that while grading the armhole shape, "the angles at the cardinal point on the pattern must remain the same on all sizes". After grading, the seam lines of the
graded pattern should be checked to ensure that matching seam lines are of the same length for sewing. Some spec sheets provide measurements for across chest and across back; occasionally, such measurements are absent. In that case, pattern makers construct the front and back armhole curve lines from the shoulder point to the underarm point. Sometimes the shape of the armhole curves may be imperfect due to the absence of armhole curve depth, i.e. the absence of across chest and across back measurements. If these measurements are not given, the grading increment values for the middle points of the curves (e.g. the across chest and across back points) remain unknown. Different examples of armhole curve shapes are shown in Figure 7, indicated in red, green and blue. If the across chest and across back measurements are provided in the spec sheet, the curves become more precise. When the curves are drawn through the shoulder point, the across chest or across back point and the underarm point, the fitting problem is avoided and the curves do not require readjustment for adjacent sizes, as the grading increment values can then be calculated. In short, across chest and across back measurements are to be used for drawing armhole shape curves accurately. Most of the time, pattern shape related problems occur due to the absence of curve depth. So, if AC and AB are given, the armhole shape curves can be drawn through three points: the shoulder point, the across chest/across back point and the armpit point. Across chest and across back measurements should be used for drawing armhole shape curves. For a better armhole shape, the following can be done:
• manual drawing with a French curve [29];
• saving and selection of a curve (e.g. the Gemini CAD French curve tool).
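As a hedged geometric sketch of anchoring the armhole through three points, the snippet below runs a quadratic Bézier through the shoulder point, the across-chest point and the armpit point (the middle control point is derived so that the curve passes through the across-chest point at its midpoint). The coordinates and the helper name are invented for illustration; a production CAD system would use its own curve tools, such as a French-curve template.

```python
def bezier_through(p0, mid, p2, t):
    """Quadratic Bezier that passes through `mid` at t = 0.5:
    the control point is C = 2*mid - (p0 + p2)/2."""
    cx = 2 * mid[0] - (p0[0] + p2[0]) / 2
    cy = 2 * mid[1] - (p0[1] + p2[1]) / 2
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * cx + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * cy + t ** 2 * p2[1]
    return (x, y)

# Invented coordinates (cm): shoulder point, across-chest point, armpit point.
shoulder, across_chest, armpit = (23.0, 0.0), (24.5, -8.0), (27.0, -12.0)

curve = [bezier_through(shoulder, across_chest, armpit, i / 10) for i in range(11)]
for x, y in curve:
    print(f"({x:5.2f}, {y:6.2f})")
```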
Absence of measurement location
If some measurements are absent from the spec sheet (e.g. the across chest and across back positions) or even from the standard measurement chart, the shape of the pattern changes and fitting problems occur. Some spec sheets include across chest and across back but do not give their vertical position from the HPS. Sometimes, they are not properly clarified in standard measurement charts. Different pattern making books provide different guidelines on how to determine the vertical position of the across chest and across back measurements. Different armhole curves were therefore drawn in different colours in Figure 9 according to the different procedures, which are mentioned below.
In the developed method, the across chest position from the armpit point (X−Y in Figure 10) is one third of the armscye depth (W−X in Figure 10), and the across back position from the armpit point (XX−YY in Figure 10) is one third of the armscye depth (WW−XX in Figure 10). It can be concluded from Figure 9 and Figure 10 that the green and red curves give more accurate shapes. For a better armhole curve shape, the across chest and across back positions should therefore be placed at two thirds of the armscye depth from the neck point, if the across chest and across back positions are absent. On some other occasions, buyers provided a soft copy of the pattern along with the spec sheet but without any natural waist length (NWL) measurement (Figure 11). Different pattern makers use different techniques to derive the standard length of a given measurement if it is absent from the spec sheet. For instance, some pattern makers use "2/3 of the total body length from the high point of shoulder to the ½ waist position" to calculate the NWL if it is not provided in the spec sheet. According to the 8-head theory, the NWL position is the second head position from the neckline, and the hip position is the third head position (Figure 11A). Other pattern makers use half of the side seam measurement (Figure 11B). So, if any measurement or procedure is unknown to the grader, it becomes very difficult to grade the pattern accurately. It can be concluded from Figure 11 that if the procedure is unknown to the grader, it leads to grading errors, as the grading increment values depend on the pattern drafting procedure. When manufacturers only need to grade the pattern, the grader should be familiar with the procedure unless the grading increment values are provided in the Tech Pack.
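The positioning heuristics quoted above reduce to simple arithmetic. The sketch below assumes an armscye depth and a total body length purely for illustration and applies the one-third/two-thirds rule for the across chest/back level and the 2/3 rule of thumb for the NWL; these numbers are not taken from any of the spec sheets.

```python
# Positioning heuristics described above, with assumed inputs (cm).
armscye_depth = 24.0          # neck point down to the armpit level
body_length = 72.0            # high point shoulder to hem

# Across-chest / across-back level: one third of the armscye depth up from
# the armpit, i.e. two thirds of the depth down from the neck point.
across_level_from_neck = armscye_depth * 2 / 3
across_level_from_armpit = armscye_depth / 3

# Natural waist length when it is missing from the spec sheet,
# using the 2/3-of-total-length rule of thumb quoted in the text.
nwl_two_thirds_rule = body_length * 2 / 3

print(f"across chest/back level: {across_level_from_neck:.1f} cm from HPS "
      f"({across_level_from_armpit:.1f} cm above the armpit)")
print(f"NWL by the 2/3 rule: {nwl_two_thirds_rule:.1f} cm from HPS")
```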
Non-identifiable body landmarks or unusual measurements
Some measurements used in spec sheets do not relate to identifiable body landmarks. Furthermore, some measurements are unknown to the majority of pattern makers. Different pattern makers use different methods along with different measurements for the same design, but some measurements used in the body measurement chart are not related to identifiable body landmarks. For example, the world-famous pattern maker Helen Joseph Armstrong (2010) [30] uses a 'new strap measurement' (Figure 12), which is neither used by other pattern makers nor present in any body-measurement chart.
Though Helen Joseph Armstrong's (2010) [30] method gives the best fit thanks to this unconventional measurement, such a pattern would be difficult to grade. As seen in Figure 12, the measurement is neither a perfectly diagonal measurement nor a curve measurement that can be taken through definite points. In pattern making, measurements that do not impair the grading should be used; unusual measurements should therefore be avoided if they cause grading deficiencies.
Manual vs. computerised method of grading
Manual grading is a time-consuming and troublesome process, whereas computerised grading is much more convenient and precise. The accuracy of graded pattern pieces is often affected by the grader's skill [34]. The manual procedure of grading is exceptionally tedious, and grading efficiency is affected by the grader's experience [14]. Although the 2D CAD system provides time-saving solutions, these are not free from limitations. Grade rule creation or grading increment calculation is used by all types of 2D apparel CAD systems, but to complete the grading process, manual calculation and inputs are required [14]. Computerised pattern grading is the most precise and expedient method, but only when accurate values are entered into the computer [6]. It is evident that manual grading is less efficient than the computerised method, and the use of computerised grading is therefore recommended wherever possible.
Results and discussion
After conducting all the grading experiments, different problems were identified and, finally, recommendations are given for each problem. Different kinds of spec sheets are provided by different buyers with different POM variations. It is therefore necessary to learn the proper grading calculation method and how the patterns are actually made from different measurements. Grade rule calculation has to be done in such a way that measurement errors in the graded pattern pieces are minimised and style features are left intact. The recommendations are given so that pattern graders can use them as a reference or guideline to avoid unnecessary grading problems.
General recommendations
i. Presence of diagonal measurements. Diagonal measurements should be avoided as much as possible in the spec sheet because they cause grading deficiencies.
ii. Maintaining accuracy and matching of curve lines. The measurements should be checked and the grading increment optimised until the required curve lengths are achieved.
iii. Selection of base size. If diagonal measurements are provided, grading should be done from the middle size to all sizes in order to reduce measurement errors.
iv. Presence of a higher number of sizes. If the spec sheet contains 5 to 7 sizes, the middle size should be selected. If the number of sizes exceeds 7, the total number of sizes should be divided into two parts, two base sizes should be selected and grading should be done by drawing two separate patterns. If the number of sizes exceeds 15, the total sizes should be divided into three individual parts; three base sizes are then selected, three individual patterns drawn and later graded. It should also be noted that if it is possible to eliminate all the diagonal measurements from the spec sheet, then the number of sizes in a size range does not influence the grading.
v. Combination of measurements. Horizontal and vertical POMs should be used instead of diagonal or inclined POMs to achieve the desired pattern shape. During the creation of spec sheets, spec sheet creators should use horizontal and vertical measurements instead of inclined measurements wherever possible.
vi. Selection of zero point. Any cardinal point can be selected as the zero point, but if the starting point is selected as the zero point, the calculation becomes easier. The starting point should therefore be chosen as the zero point.
vii. Angle of measurement. If diagonal measurements, such as shoulder length or armhole straight, are given, grading anomalies are found. If diagonal measurements are provided along with other horizontal or vertical components, then it is possible to calculate the angle and grade them to obtain more accurate graded measurements.
viii. Alternative reference line. If the angle is constant, then the use of Optitex or TUKA CAD's alternative reference line grading is recommended.
ix. Angle grading variation. If the angle is not constant, then the use of Boke CAD's angle grading instead of the alternative reference line grading of Optitex, TUKA CAD, etc. is advised.
x. Selection of grade point or absence of certain measurements. Across chest and across back measurements are to be used for drawing armhole shape curves. For a better armhole shape, the following can be employed: A) manual drawing with a French curve; B) saving and selection of a curve (e.g. the Gemini CAD French curve tool).
xi. Absence of measurement location. For a better armhole curve shape, the across chest and across back positions should be drawn by dividing the armscye depth at 2/3 from the neck point if the across chest and across back positions are not given.
xii. Lack of proper drafting procedure. When manufacturers only need to grade the pattern, the procedure should be well known to the grader unless the grading increment values are provided in the Tech Pack.
xiii. Non-identifiable body landmarks or unusual measurements. Unusual measurements should be avoided if they cause grading deficiencies.
xiv. Manual vs. computerised method of grading.
It is evident that manual grading is less efficient than a computerised method, so it is recommended to use computerised grading if possible.
Conclusion
Pattern grading is the most popular method in the ready-made garment industry for the large-scale manufacturing of different sizes, even though the grading calculation can sometimes be complex. Grading remains popular because it is less time-consuming and more cost-efficient for producing different sized patterns during production. However, defective grading affects other computerised downstream operations, such as computerised marker making and computerised cutting. It is important to note that although computer-aided applications have contributed to minimising production costs and improving manufacturing efficiency, they cannot satisfy the customer's need for individualisation. Although the grading calculation is complex, patterns can be graded successfully without errors and without distortion of style features if the calculation is done properly. This will not only reduce the sample approval time, but will also help to create clothing that fits better on the wearer's body.
|
v3-fos-license
|
2023-01-18T15:07:20.880Z
|
2015-09-02T00:00:00.000
|
255949538
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1186/s12987-015-0017-7",
"pdf_hash": "7079e1c09139850d1413cc7bfa14698e78ced8f7",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43072",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "7079e1c09139850d1413cc7bfa14698e78ced8f7",
"year": 2015
}
|
pes2o/s2orc
|
Adenosine receptor signaling: a key to opening the blood–brain door
The aim of this review is to outline evidence that adenosine receptor (AR) activation can modulate blood–brain barrier (BBB) permeability and the implications for disease states and drug delivery. Barriers of the central nervous system (CNS) constitute a protective and regulatory interface between the CNS and the rest of the organism. Such barriers allow for the maintenance of the homeostasis of the CNS milieu. Among them, the BBB is a highly efficient permeability barrier that separates the brain micro-environment from the circulating blood. It is made up of tight junction-connected endothelial cells with specialized transporters to selectively control the passage of nutrients required for neural homeostasis and function, while preventing the entry of neurotoxic factors. The identification of cellular and molecular mechanisms involved in the development and function of CNS barriers is required for a better understanding of CNS homeostasis in both physiological and pathological settings. It has long been recognized that the endogenous purine nucleoside adenosine is a potent modulator of a large number of neurological functions. More recently, experimental studies conducted with human/mouse brain primary endothelial cells as well as with mouse models, indicate that adenosine markedly regulates BBB permeability. Extracellular adenosine, which is efficiently generated through the catabolism of ATP via the CD39/CD73 ecto-nucleotidase axis, promotes BBB permeability by signaling through A1 and A2A ARs expressed on BBB cells. In line with this hypothesis, induction of AR signaling by selective agonists efficiently augments BBB permeability in a transient manner and promotes the entry of macromolecules into the CNS. Conversely, antagonism of AR signaling blocks the entry of inflammatory cells and soluble factors into the brain. Thus, AR modulation of the BBB appears as a system susceptible to tighten as well as to permeabilize the BBB. Collectively, these findings point to AR manipulation as a pertinent avenue of research for novel strategies aiming at efficiently delivering therapeutic drugs/cells into the CNS, or at restricting the entry of inflammatory immune cells into the brain in some diseases such as multiple sclerosis.
Background
Neurons of the central nervous system (CNS) are separated from the lumen of blood vessels by physical barriers which ensure both protective and homeostatic functions [1,2]. The main barriers are the blood-brain barrier (BBB) and its spinal cord counterpart, the blood-spinal cord barrier. Such barriers are made of tightly connected endothelial cells that line the CNS microvasculature and form a more highly restrictive barrier than endothelial cells of the peripheral circulation. These cells are characterized by a markedly restricted pinocytosis and transcytosis potential, the expression of dedicated transporters that regulate the influx/efflux of nutritive/toxic compounds, a low expression of leukocyte adhesion molecules and the elaboration of specialized luminal structures involved in tight and adherens junctions that efficiently restrain passive diffusion of blood-borne molecules [3][4][5][6][7][8][9]. The BBB endothelium is surrounded by basement membrane, pericytes and processes from neighboring astrocytes that contribute to the so-called neurovascular unit (NVU) which regulates barrier functions, homeostasis and stability [10] (Fig. 1). Astrocytes provide nutrients that are important for endothelial cell activation/polarization, and they function as a scaffold, providing structural support for the vasculature. While astrocytic processes enwrap endothelial cells, they also interact with microglial cells and neurons [11,12]. Astrocytes regulate BBB tightness by providing soluble factors that aid in endothelial cell proliferation and growth or are involved in maintenance of BBB integrity [13,14]. While controlling the passage of molecules between the brain blood circulation and the brain microenvironment in the healthy brain, the BBB may also contribute to the pathogenesis of several neurological disorders such as neurodegenerative diseases, under conditions of abnormal functioning [1]. Therefore, dissecting the mechanisms underlying the properties of the BBB is necessary for understanding both the physiology of the healthy CNS as well as the development of some brain pathologies.
Adenosine is a nucleoside naturally produced by neurons and glial cells. Through a well characterized set of receptors called P1 purinergic receptors, adenosine has long been known to act as a potent modulator of various brain functions through the regulation of multiple neurotransmitters, receptors and signaling pathways [15]. Here, we review recent in vivo and in vitro studies that point to the adenosine-AR axis as an important regulatory pathway controlling BBB permeability to macromolecules and cells, and propose that manipulation of AR signaling might represent a new approach to achieve an efficient delivery of therapeutic agents into brain parenchyma.
Adenosine and ARs in CNS physiology
Adenosine is a purine nucleoside involved in a myriad of host functions. It is a potent immune regulator and, in addition, is notable for its role in regulating inflammation, wound healing, angiogenesis and myocardial contractility (Fig. 2). Within the CNS, adenosine is released by both neurons and glial cells. It regulates multiple physiological functions such as sleep, arousal, neuroprotection, learning and memory, and cerebral blood circulation, as well as pathological phenomena such as epilepsy. These effects involve adenosine modulation of neuronal excitability, vasodilatation, release of neurotransmitters, synaptic plasticity/function and local inflammatory processes [15][16][17] (Fig. 3).
Within cells, adenosine is an intermediate for the synthesis of nucleic acids and adenosine triphosphate (ATP). It is generated from 5′-adenosine monophosphate (AMP) by 5′-nucleotidase and can be converted back to AMP by adenosine kinase. Adenosine can also be derived from S-adenosylhomocysteine (SAH) due to the activity of SAH hydrolase. Intracellular adenosine is metabolized into inosine by adenosine deaminase (ADA) and into AMP by adenosine kinase. Inosine formed by deamination can exit the cell intact or can be degraded to hypoxanthine, xanthine and ultimately uric acid. A low level of cellular adenosine can be quickly released in the extracellular space via equilibrative nucleoside transporters (ENTs). This release increases when intracellular adenosine concentration is augmented (ischemia, hypoxia, seizures).
Importantly, adenosine can also be directly generated outside the cell through the breakdown of cell-released adenosine tri/diphosphate (ATP/ADP) by coupled cell surface molecules with catalytically active sites (ectonucleotidases) that are abundant in the brain. The ecto-nucleoside triphosphate diphosphohydrolase 1 (E-NTPDase1), or CD39, converts ATP/ADP into AMP, and the glycosyl phosphatidylinositol (GPI)-linked ecto-5′-nucleotidase (Ecto5′NTase), or CD73, converts AMP to adenosine by promoting the hydrolysis of phosphate esterified at carbon 5′ of nucleotides, with no activity for 2′- and 3′-monophosphates [18,19]. Human CD73 assembles as a dimer of GPI-anchored glycosylated mature molecules. Each monomer contains an N-terminal domain that binds divalent metal ions and a C-terminal domain that binds the nucleotide substrate. The ectonucleotidase-mediated generation of adenosine from adenine nucleotides is very rapid (about 1 ms). Adenosine half-life in the extracellular space is about 10 s. Under basal conditions, most extracellular adenosine appears to re-enter cells through equilibrative transporters. A small fraction can be irreversibly converted into inosine and its derivatives (hypoxanthine, xanthine, uric acid) by ADA and xanthine oxidase (Fig. 2). Such a fraction increases under conditions of hypoxia/ischemia [20,21]. Extracellular adenosine can also be targeted by ectokinases to regenerate AMP, ADP and ATP. The concentration of extracellular adenosine is maintained at low levels within the brain (ranging from 25 to 250 nM), which represents the balance between the export/generation of extracellular adenosine and its metabolism. Under pathophysiological circumstances, such as hypoxia or ischemia, extracellular adenosine concentrations can increase up to 100 fold [22,23]. Because of its rapid metabolism, adenosine acts locally rather than systemically [24].
Fig. 1 Schematic of blood-brain barrier (BBB) structure and the neurovascular unit (NVU). The brain vasculature is lined with a single layer of endothelial cells that is tightly sealed by tight and adherens junction molecules. It is further insulated by pericytes and astrocytic endfoot processes, which together are referred to as the NVU. Efflux and influx transporters expressed on BBB endothelial cells selectively allow the entry or exit of molecules into or out of the brain.
Extracellular adenosine exerts its action through seven-transmembrane domain, G-protein coupled receptors (GPCRs) that are connected to distinct transduction pathways. There are four different subtypes of ARs, A 1 , A 2A , A 2B and A 3 , with distinct expression profiles, pharmacological characteristics and associated signaling pathways [25,26]. A 1 and A 3 ARs are inhibitory and suppress adenylyl-cyclase, which produces cyclic-AMP (cAMP), while A 2A and A 2B ARs are stimulatory for adenylyl-cyclase [27]. In turn, A 2B -induced cAMP can upregulate CD73 [28]. A 1 and A 2A ARs have high affinity for adenosine (about 70 and 150 nM respectively) whereas A 2B and A 3 have a markedly lower affinity for adenosine (about 5100 and 6500 nM, respectively) [27]. This suggests that A 1 and A 2A may be the major ARs that are activated by physiological levels of extracellular adenosine within the CNS. Accordingly, unlike A 1 and A 2A receptors, A 2B receptor engagement in the brain is triggered by higher adenosine levels, such as levels associated with cell stress or tissue damage [25]. The expression level of ARs varies depending on the type of cells or organs where they are expressed [22].
Fig. 2 Adenosine is a purine nucleoside produced by many different organs throughout the body. Extracellular adenosine is a primordial molecule that is produced by many cell types in the body. These include heart, lung, gut, brain and immune cells. Adenosine produced by these cells can in turn act on the producing cells or on adjacent cells to modulate function. Extracellular adenosine is produced from ATP released in the extracellular environment upon cell damage and is converted to ADP and AMP by CD39. AMP is further converted to adenosine by CD73. Extracellular adenosine binds to its receptors expressed on the same cell or adjacent cells to mediate its function. Adenosine is rapidly degraded to inosine by adenosine deaminase.
Fig. 3 Cells of the central nervous system (CNS) not only produce adenosine but are also regulated by adenosine. Cells of the CNS, such as astrocytes, microglia, pericytes and neuronal cells, can produce adenosine or their activity/function is regulated by adenosine. Adenosine regulates blood-brain barrier permeability and is involved in neural transmission and in glial cell immune function and metabolism.
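As a rough way to see why A 1 and A 2A dominate at physiological adenosine levels, fractional receptor occupancy can be estimated with the simple single-site binding relation occupancy = [adenosine] / ([adenosine] + Kd), using the approximate affinity values quoted above. The sketch below ignores receptor reserve, desensitization and local gradients, and the two adenosine concentrations are illustrative, so it is only an order-of-magnitude illustration.

```python
# Rough single-site occupancy estimate, occupancy = C / (C + Kd),
# using the approximate affinities quoted in the text (nM).
KD_NM = {"A1": 70, "A2A": 150, "A2B": 5100, "A3": 6500}

def occupancy(concentration_nm, kd_nm):
    return concentration_nm / (concentration_nm + kd_nm)

for label, conc in [("basal (~100 nM)", 100), ("stress (~10 uM)", 10_000)]:
    fractions = {r: occupancy(conc, kd) for r, kd in KD_NM.items()}
    line = ", ".join(f"{r}: {f:.0%}" for r, f in fractions.items())
    print(f"{label}: {line}")
```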
The influence of adenosine in the CNS depends both on its local concentration and on the expression level of ARs. The A 1 receptor is highly expressed in the brain cortex, hippocampus, cerebellum and spinal cord [25,29,30] and at lower levels at other sites of the brain [31]. In multiple sclerosis (MS) patients, the A 1 receptor expression level appears to be decreased in CD45 positive glial cells of the brain [32]. A 2A receptor expression is high in the olfactory tubercle, dorsal and ventral striatum and throughout the choroid plexus, which forms the blood-cerebrospinal fluid (CSF) barrier [33][34][35][36], and more moderate in the meninges, cortex and hippocampus [29,33,36]. The steady state expression of the A 2A receptor permits, for example, the proper regulation of extracellular glutamate titer by adenosine, through modulation of glutamate release and control of glutamate transporter-1-mediated glutamate uptake [37][38][39]. A 2A receptors interact negatively with D2 dopamine receptors [40]. A 2A receptor expression in glial cells such as astrocytes is substantially upregulated by stress factors including pressure, pro-inflammatory factors (interleukin (IL)-1β, tumor necrosis factor (TNF)-α) or hypoxia. In contrast to A 1 and A 2A , A 2B and A 3 receptors are expressed at relatively low levels within the brain [31].
Expression of the CD73 ecto-enzyme on CNS barrier cells
Some studies have pointed to CD73 as a regulator of tissue barrier function [41]. Within the CNS, ATP can be released from neurons or other cells such as astrocytes. As mentioned above, CD39 catalyses the conversion of proinflammatory ATP/ADP into AMP and CD73 subsequently converts AMP into adenosine [42]. Thus, the proper functioning of CD39/CD73 ectonucleotidases concomitantly ensures the production of extracellular adenosine and the extinction of purinergic P2 receptor-dependent, ATP-induced signaling due to reduction of the ATP/ADP pool. Both of these effects contribute to the anti-inflammatory potential of the CD39/CD73 axis. Along with colon and kidney, the brain has particularly high levels of CD73 enzyme activity [41]. Similar to the A 2A receptor, CD73 shows its strongest expression level in the CNS within the choroid plexus epithelium and is also detected on glial cells of the submeningeal areas of the spinal cord [22,27,43]. CD73 can be expressed on many types of endothelial cells [44]. Its expression on BBB endothelial cells remains low under steady state conditions relative to peripheral endothelial cells (Fig. 4a). It is present on mouse (Bend.3) and human (hCMEC/D3) brain endothelial cell lines in vitro [27,45]. Unlike human brain endothelial cells [46,47], CD73 expression on primary mouse brain endothelial cells was very low and not detected in vivo [43] (Fig. 4a). However, CD73 expression can be detected in primary human brain endothelial cells (Fig. 4b) [45,48]. CD73 expression is sensitive to cyclic AMP (cAMP) and hypoxia-inducible factor (HIF)1 through its promoter [49]. Interferon (IFN)-β increases CD73 expression and adenosine concentration at the level of the CNS microvasculature, BBB and astrocytes [46,47] and, through enhanced adenosine production, may contribute to the anti-inflammatory effect of IFN-β in MS treatment.
Fig. 4 CD73 expression on primary brain endothelial cells (EC). a Histogram depicting CD73 expression on primary brain endothelial cells isolated from naïve, WT, C57BL/6 mice after staining with a monoclonal antibody to CD73 and analyzed by FACS (anti-CD73, isotype and unstained controls). b Expression of CD73 (green) on cultured primary human brain endothelial cells visualized by immunofluorescent microscopy. Cells were counterstained with F-actin (red). Scale bar is 50 μm.
AR signaling in the NVU
Functional A 1 , A 2A , A 2B and A 3 receptors are all expressed at moderate levels in glial cells under physiological conditions [50], and this level is upregulated under inflammatory conditions or brain injury. All P1 purinergic receptors appear to be present on cultured oligodendrocytes [51] and on microglial cells [42], and are functional on astrocytes [52][53][54][55][56] (Fig. 3). In astrocytes, AR engagement is not only important for glutamate uptake regulation (A 2A receptor) but also serves to maintain cellular integrity (A 1 receptor) [53,57,58], protect from hypoxia-related cell death (A 3 receptor) [57] and regulate CCL2 chemokine production (A 3 receptor) [59]. In microglial cells, A 2A receptor engagement inhibits process extension and migration, while A 1 and A 3 receptor engagement has the opposite effect [42]. A 1 and A 2A receptor transcripts are detectable in Z310 epithelial cells derived from mouse choroid plexus [43]. As to CNS endothelial cells, A 1 , A 2A and A 2B receptor transcripts and proteins were expressed in hCMEC/D3 human brain endothelial cells [45]. Also, A 1 and A 2A ARs are expressed in primary human brain endothelial cells (Fig. 5a). In Bend.3 mouse brain endothelial cells, transcripts and proteins for A 1 and A 2A receptors were detected [27]. Finally, both A 1 and A 2A receptor proteins were found expressed in primary mouse brain endothelial cells, and transcripts and proteins for both A 1 and A 2A receptors were present in brain endothelial cells in mice [27] (Fig. 5b).
AR signaling and CNS barrier permeability
The recent notion that adenosine could play a substantial regulatory role in CNS barrier permeability stems from the observation that extracellularly generated adenosine positively regulates the entry of lymphocytes into the brain and spinal cord during disease development in the experimental autoimmune encephalomyelitis (EAE) model [43] and the observation that irradiated A 2A AR deficient mice reconstituted with wild-type bone marrow cells developed only very mild signs of EAE with virtually no CD4 + T cell infiltration in the spinal cord [43]. In line with an important role for AR signaling in regulating the permeability of the BBB is the observation that inhibition of ARs by caffeine (a broad-spectrum AR antagonist) prevents the alteration of BBB function induced by cholesterol or 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) in animal models of neurodegenerative diseases [60,61]. Recent observations support the notion that engagement of ARs on brain endothelial cells modulates BBB permeability in vivo. Experimental recruitment of ARs either by the broad spectrum agonist NECA or by the engagement of both A 1 and A 2A receptors with selective agonists (CCPA and CGS21680) cumulatively and transiently augmented BBB permeability, facilitating the entry of intravenously infused macromolecules (including immunoglobulins such as the anti-β-amyloid 6E10 antibody) into the CNS [27]. Accordingly, the analysis of engineered mice lacking these receptors reveals a limited entry of macromolecules into the brain upon exposure to AR agonists. CNS entry of intravenously delivered macromolecules was also induced by the FDA-approved A 2A AR agonist Lexiscan: 10 kDa dextran was detectable within the CNS of mice as soon as 5 min after drug administration. The short half-life of Lexiscan is likely to account for the lower duration of BBB permeability relative to that induced by NECA (half-life: 5 h). Upon exposure to NECA or Lexiscan, monolayers of Bend.3 mouse brain endothelial cells (CD73 + , A 1 AR + , A 2A AR + ) lowered their transendothelial electrical resistance, a phenomenon known to be associated with increased paracellular space and augmented permeability [62,63]. AR activation by agonists was indeed associated with augmented actinomyosin stress fiber formation, indicating that AR signaling initiates changes in cytoskeletal organization and cell shape. These processes are reversed as the half-life of the AR agonist decreases. At the level of tight junctions, signaling induced by A 1 and A 2A receptor agonists altered the expression level of tight junction proteins such as claudin-5 and ZO-1, and particularly of occludin, in cultured brain endothelial cells [27]. The exact signaling circuits connecting AR engagement and cytoskeletal remodeling remain to be dissected. In agreement with these findings and with the observation that human brain endothelial cells do respond to adenosine in vitro, agonist-induced A 2A receptor signaling transiently permeabilized a primary human brain endothelial cell monolayer to the passage of both drugs and Jurkat human T cells in vitro [48]. Interestingly, transendothelial migration of Jurkat cells was primarily of the paracellular type.
The permeabilization process involved RhoA signaling-dependent morphological changes in actincytoskeletal organization, a reduced phosphorylation of factors involved in focal adhesion (namely Ezrin-Radixin-Moesin (ERM) and focal adhesion kinase (FAK)) as well as a marked downregulation of both claudin-5 and vascular endothelial (VE)-cadherin [48], two factors instrumental for the integrity of endothelial barriers. Hence, by regulating the expression level of factors crucially involved in tight junction integrity/function, signaling induced through receptors for adenosine acts as a potent, endogenous modulator of BBB permeability in mouse models as well as in human cellular models in vitro.
Some G proteins such as G α subunits can influence the activity of the small GTPases RhoA and Rac1, which are known modulators of cytoskeletal organization. RhoA and Rac1 are responsive to adenosine signaling and promote actin cytoskeleton remodeling [62,[64][65][66]. The precise molecular events linking A 1 /A 2A AR engagement to changes in the expression pattern of factors involved in tight junction functioning remain, however, to be analyzed in detail. In particular, whether both canonical (G protein-dependent) and non-canonical (e.g. G protein-independent, β-arrestin-related) signaling pathways contribute to such regulation is an open question. Another interesting issue relates to the capacity of A 1 and A 2A receptors to form heterodimers [67] and the possible impact of such oligomeric receptors on the regulation of CNS barriers by AR agonists.
CD73 and AR signalling in immune cell entry into the CNS
Besides its capacity to regulate the local inflammatory context through consumption of ATP and generation of adenosine, the expression of CD39/CD73 by endothelial cells can regulate homeostasis by preventing high local concentrations of ATP that promote thrombosis and generating adenosine which instead, contributes to an antithrombotic microenvironment [44]. The CD39/ CD73 axis also regulates leukocyte migration induced by chemokines [68,69] and immune cell adhesion to endothelial cells. Such adhesion is favored by high ATP concentrations and limited by adenosine, with mutant mice lacking CD39 or CD73 having augmented level of leukocyte adhesion to endothelial cells [70,71]. Thus, adenosine contributes to restraining leukocyte recruitment and platelet aggregation and might be important to control vascular inflammation.
We have observed that CD73-generated adenosine promotes the entry of inflammatory lymphocytes into the CNS during EAE development [43]. Genetically manipulated mice unable to generate extracellular adenosine due to deficiency in the ectonucleotidase CD73 (CD73 −/− ) are resistant to lymphocyte entry into the CNS and EAE development relative to wild type animals, and such a phenotype could be recapitulated in regular mice by using either the broad-spectrum AR antagonist caffeine or SCH58261, which selectively antagonizes the signaling induced by the adenosine-bound A 2A receptor [43,[72][73][74]. This effect was remarkable since auto-reactive lymphocytes from CD73 −/− mice indeed harbor an enhanced inflammatory potential. In addition, the expression of CD73 (and presumably its enzymatic activity) on either T cells or CNS cells was sufficient to support lymphocyte entry into the CNS, since CD4 T cells from wild type donors (i.e. CD73 + ) could mediate a milder yet substantial level of EAE pathogenesis in CD73 −/− recipient mice [43].
Since CD73 and the A 1 /A 2A receptors are expressed at the level of the choroid plexus, locally produced extracellular adenosine is likely to act in an autocrine manner. Given that A 1 and A 2A receptor recruitment are functionally opposed to each other and harbor some differences in their affinity for adenosine [26], the regional extracellular concentration of adenosine may strongly influence the response of neighboring cells expressing both receptors. A 1 receptor signaling may be involved at low adenosine concentrations while A 2A receptor signaling is likely to become prominent at elevated adenosine concentrations. Thus, CD73 enzymatic activity at the choroid plexus and the regional adenosine levels are likely to influence local inflammatory events. Interestingly, the choroid plexus is suspected to represent a primary entry site for immune cells during neuroinflammation [3][4][5] and for steady state immunosurveillance [6,75]. By combining the gene expression pattern of chemokines and chemokine receptors relevant to EAE in CD73 null mutant versus control mice developing EAE and the effect of the broad spectrum AR agonist NECA on the expression profile of these molecules in unmanipulated animals, it was possible to identify CX3CL1/fractalkine, a chemokine/ adhesion molecule [76], as the major factor induced by extracellular adenosine in the brain of mice developing EAE [77]. The cleavage of the cell surface-expressed form of CX3CL1 by ADAM-10 and −17 factors generates a local CX3CL1 gradient [78]. The selective A 2A AR agonist CGS21680 caused an increase in CX3CL1 level in the brain of treated mice. Conversely, the A 2A AR antagonist SCH58261 protected mice from CNS lymphocyte infiltration and EAE induction recapitulating the phenotype of CD73 null mutant mice. Thus, the augmented CX3CL1 expression level seen in the brain of EAE developing mice can be regulated by A 2A AR signaling. During EAE, the greatest increase in CX3CL1 occurred at the choroid plexus and returned to normal when mice recovered from disease. As choroid plexus cells express both CD73 and A 2A AR, they have the intrinsic capacity to generate and respond to extracellular adenosine. In vitro, A 2A AR engagement on the choroid plexus epithelial cell line CPLacZ-2 by CGS21680 induced CX3CL1 expression and promoted lymphocyte transmigration suggesting that CX3CL1 induction by extracellular adenosine contributes to lymphocyte migration into the brain parenchyma during EAE. In agreement with an important role for CX3CL1 in EAE pathogenesis is the fact that CX3CL1 blockade by neutralizing antibodies prevented lymphocyte entry into the CNS and EAE development [77]. This notion is in line with the elevated serum level of CX3CL1 which can be observed during CNS inflammation including MS patient brain lesions [79][80][81]. Importantly, there was a positive correlation between CX3CL1 expression levels and the relative frequency of lymphocytes present in the CSF of inflamed brains [80,82]. Moreover, relative to BBB endothelial cells, choroid plexus epithelial cells constitutively express high levels of CD73. Blockade of CD73 or A 2A AR inhibits inflammatory cells entry into the CNS [43,77]. Thus, at the level of the choroid plexus, induction of A 2A receptor signaling by elevated local adenosine concentrations is likely to contribute to immune cell entry into the brain parenchyma.
Among immune cells, the CX3CL1 receptor (CX3CR1) is detected on a sizable fraction of CD4 T cells, CD8 T cells, macrophages and NK cells in mice [33]. CNS CX3CL1 might also modulate neuroinflammation by recruiting a subset of CNS-resident NK cells able to attenuate the aggressiveness of autoreactive CD4 T cells of the Th17 effector type [83, 84]. Interestingly, while the frequency of inflammatory immune cells is significantly decreased in the CNS of A2A AR−/− mice, CD73−/− mice, or mice treated with an A2A AR antagonist, the numbers and frequency of CD4+CD25+ T regulatory cells in these mice are similar to wild type. This suggests that CD73/A2A AR signaling may preferentially regulate the entry of inflammatory immune cells into the CNS while imposing less restriction on these suppressor T cells.
Perspectives for improved delivery of therapeutic factors within the CNS
Although the BBB serves a protective role, it can complicate the treatment of CNS diseases by hindering the entry of therapeutic compounds into the brain [85]. Researchers have therefore focused on ways to manipulate the BBB to promote access to the CNS [86]. Determining how to do this safely and effectively could impact the treatment of various neurological diseases, ranging from neurodegenerative disorders to brain tumors, and implies simultaneous treatment with agents capable of increasing the permeability of CNS barriers. Current approaches involve barrier disruption induced by drugs such as mannitol or Cereport/RMP-7. Hypertonic mannitol acts by shrinking endothelial cells [87, 88] but can cause epileptic seizures and does not allow repeated use [89, 90]. The bradykinin analog Cereport/RMP-7 has shown some potential for transiently increasing normal BBB permeability [91] but did not give satisfactory results in clinical trials [92], despite some efficacy in rodent models of CNS pathologies [93-96].
CNS barriers can also be circumvented, for instance by direct injection of drugs into the ventricles [87, 97]. More recent approaches deliver drugs during compression waves induced by high-intensity focused ultrasound [98]. Both of these approaches are invasive and may lead to permanent brain damage. Another strategy involves chemical modification of compounds to confer on them some capacity to cross CNS barriers. For example, increasing the lipophilicity of a drug can enhance its capacity to cross the BBB, although this often requires an increase in size that limits cell penetration [99]. Alternatively, therapeutic compounds can be linked to factors that trigger receptor-mediated endocytosis: coupling a compound to an antibody directed against the transferrin receptor can promote delivery of proteins to the brain in rats [100, 101]. However, the endocytic activity of endothelial cells at the BBB is rather limited, and the expression level of the relevant receptor needs to be sufficient.
For adenosine to exert a biological effect, CD73 and ARs must be present on the same cell or on adjacent cells, because adenosine acts locally owing to its short half-life. CD73 and the A1 and A2A ARs are indeed expressed on BBB endothelial cells in mice and humans. While CD73 is highly and constitutively expressed on the choroid plexus epithelial cells that form the blood-CSF barrier, its expression on brain endothelial barrier cells is low under steady-state conditions but increases in neuroinflammatory disease or under conditions where adenosine is produced in response to cell stress, inflammation or tissue damage. In mice, pharmacological activation or inhibition of the A2A AR expressed on BBB cells respectively opens or tightens the BBB to the entry of macromolecules or cells. The observation that adenosine can modulate BBB permeability upon A2A receptor activation suggests that this pathway might represent a valuable strategy for modulating BBB permeability and promoting drug delivery within the CNS [27, 48]. Agonists such as the FDA-approved A2A AR agonist Lexiscan, or the broad-spectrum agonist NECA, increased BBB permeability and supported macromolecule delivery to the CNS in experimental settings [27]. Such exogenous agonists might represent a new avenue of research for therapeutic macromolecule delivery to the human CNS. Of note, the window of induced permeability correlated with the half-life of the agonist: BBB permeation induced by NECA treatment (half-life, 4 h) lasted significantly longer than that induced by Lexiscan treatment (half-life, 2.5 min) [27]. Interestingly, despite its short half-life, extracellular adenosine itself permeabilized the BBB to the entry of 10 kDa dextran (Fig. 6). Approaches based on such agonists might be useful for the delivery of therapeutic antibodies to the CNS, for which invasive delivery is currently a common and patient-unfriendly method [102].
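To see why the permeability window tracks the agonist half-life, the circulating agonist can be treated as a single-compartment first-order decay. The sketch below uses the half-lives quoted above (NECA about 4 h, Lexiscan about 2.5 min) and an arbitrary 10 % threshold as an illustrative cutoff, not a measured pharmacodynamic endpoint.

```python
import math

def time_above_fraction(half_life_min: float, fraction: float) -> float:
    """Time (minutes) until a first-order-decaying agonist falls to `fraction`
    of its initial concentration: C(t) = C0 * exp(-k t), with k = ln2 / t_half."""
    k = math.log(2) / half_life_min
    return -math.log(fraction) / k

# Half-lives quoted in the text; the 10% cutoff is purely illustrative
for name, t_half in [("Lexiscan", 2.5), ("NECA", 4 * 60)]:
    print(f"{name}: above 10% of initial level for ~{time_above_fraction(t_half, 0.10):.0f} min")
```

Under this simple model the agonist stays above any fixed fraction of its starting level for a time proportional to its half-life, consistent with the longer permeability window seen with NECA than with Lexiscan.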
Further studies are needed to better understand the mechanisms by which A1/A2A receptor signaling modulates BBB permeability, as well as the parameters that could optimize the timing of such modulation (Fig. 7). In particular, in vitro BBB models in which cerebral endothelial cells are co-cultured with other components of the NVU, such as pericytes or astrocytes (co-culture and triple co-culture systems) [103], should be considered for evaluation. Another important issue is the identification of the CNS areas in which the microvasculature is significantly permeabilized by A1/A2A receptor-induced signaling and, more generally, whether permeabilization within the CNS is restricted or global. An alternative strategy to be explored is the experimental manipulation of the regional level of endogenous adenosine or of the responsiveness/expression level of A1/A2A receptors. Such knowledge will be instrumental in designing novel approaches for the improved delivery of drugs, therapeutic monoclonal antibodies and, possibly, stem cells within the CNS.
Concluding remarks
Inhibiting AR signaling on BBB cells restricts the entry of macromolecules and inflammatory immune cells into the CNS with limited impact on anti-inflammatory T regulatory cells. Conversely, activation of ARs on BBB cells promotes entry of small molecules and macromolecules into the CNS in a time-dependent manner. The duration of BBB permeabilization depends on the half-life of the AR agonist, suggesting that AR modulation of the BBB is a tunable system. We conclude that: (1) adenosine-based control of the BBB is an endogenous mechanism able to regulate the entry of cells and molecules into the CNS under basal conditions and during the response to CNS stress or injury; (2) AR-induced opening of the BBB is time-dependent and reversible; (3) tight regulation of CD73 expression on BBB cells is crucial to restrict and regulate adenosine bioavailability and prevent promiscuous BBB permeability.
Consequently, the control of BBB permeability via modulation of AR signaling is pertinent for research on the delivery of therapeutics to the CNS: (1) AR signaling is an endogenous mechanism for BBB control. (2) It has the potential for precise, time-dependent control of BBB permeability. (3) It is reversible. (4) ARs are accessible directly on BBB endothelial cells. (5) Over 50 commercial reagents targeting ARs are available, some approved by the FDA for clinical use. (6) In vivo and in vitro model systems can help to gain a molecular, mechanistic understanding of how adenosine naturally regulates changes in BBB permeability. Therapies aimed at treating neuro-inflammatory diseases such as MS, in which penetration of inflammatory cells into the CNS causes irreparable damage to CNS tissue, would ideally include one that could inhibit the entry of inflammatory immune cells into the CNS parenchyma. Many other diseases associated with CNS inflammation, such as meningitis, encephalitis, and cerebritis, could also benefit from inhibiting immune cell entry into the CNS. The challenge is determining how to do this safely and effectively. We hypothesize that manipulating the adenosine-AR axis on CNS barrier cells may represent an efficient way to modulate the entry of immune cells into the CNS and to limit CNS inflammation and pathology.
Fig. 6
Adenosine increases the permeability of the blood brain barrier to 10 kDa FITC-dextran. Concomitant administration of adenosine and 10 kDa FITC-dextran in C57BL/6 mice induces significantly higher accumulation of FITC-dextran in the brain than in the PBS control treatment group (n = 2; asterisk indicates p < 0.01)
Fig. 7
A model: Adenosine modulation of blood brain barrier (BBB) permeability. Endothelial cells lining the brain vasculature express adenosine receptors (ARs), CD39 and CD73. In the presence of cell stress/inflammation or tissue damage (a), ATP is released and is rapidly converted to ADP and AMP by CD39 (b), and AMP is converted to adenosine by CD73 (c). Adenosine binds to its receptor(s) (A1 or A2A) on BBB endothelial cells (d), the activation of which induces reorganization of the actin cytoskeleton in BBB endothelial cells, resulting in tight and adherens junction disassembly (e) and increased paracellular permeability
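As a purely illustrative companion to the model in Fig. 7, the nucleotide cascade (a)-(c) can be caricatured as a chain of first-order conversions, ATP to AMP (CD39, with ADP lumped in), AMP to adenosine (CD73), followed by clearance. The rate constants below are arbitrary placeholder values, not measured enzyme kinetics.

```python
import numpy as np

# Illustrative first-order rate constants (per minute); purely hypothetical values
k_cd39, k_cd73, k_clear = 0.5, 0.3, 0.8

def simulate(t_end=30.0, dt=0.01, atp0=1.0):
    """Euler integration of ATP -> AMP (CD39) -> adenosine (CD73) -> clearance."""
    n = int(t_end / dt)
    atp, amp, ado = atp0, 0.0, 0.0
    trace = np.empty((n, 3))
    for i in range(n):
        d_atp = -k_cd39 * atp
        d_amp = k_cd39 * atp - k_cd73 * amp
        d_ado = k_cd73 * amp - k_clear * ado
        atp, amp, ado = atp + d_atp * dt, amp + d_amp * dt, ado + d_ado * dt
        trace[i] = (atp, amp, ado)
    return trace

peak_adenosine = simulate()[:, 2].max()
print(f"peak adenosine (arbitrary units): {peak_adenosine:.2f}")  # transient pulse, then decay
```

The point of the sketch is qualitative: a burst of released ATP produces a transient adenosine pulse whose height and duration depend on the relative rates of CD39/CD73 conversion and clearance, mirroring the locally and temporally restricted signal described in the model.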
|
v3-fos-license
|
2019-04-04T13:10:02.031Z
|
2015-10-14T00:00:00.000
|
55372369
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.5194/acpd-15-27539-2015",
"pdf_hash": "a164e047950f0b28d5bffb7940b7f5be39ae4105",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43073",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "7948a042565333b79a8eae89f3b516a31be3f050",
"year": 2015
}
|
pes2o/s2orc
|
Controlled meteorological (CMET) balloon profiling of the Arctic atmospheric boundary layer around Spitsbergen compared to a mesoscale model
Observations from CMET (Controlled Meteorological) balloons are analyzed in combination with mesoscale model simulations to provide insights into tropospheric meteorological conditions (temperature, humidity, wind-speed) around Svalbard, European High Arctic. Five Controlled Meteorological (CMET) balloons were launched from Ny-Ålesund in Svalbard over 5-12 May 2011, and measured vertical atmospheric profiles above Spitsbergen Island and over coastal areas to both the east and west. One notable CMET flight achieved a suite of 18 continuous soundings that probed the Arctic marine boundary layer over a period of more than 10 h. The CMET profiles are compared to simulations using the Weather Research and Forecasting (WRF) model with nested grids and three different boundary layer schemes. Variability between the three model schemes was typically smaller than the discrepancies between the model runs and the observations. Over Spitsbergen, the CMET flights identified temperature inversions and low-level jets (LLJ) that were not captured by the model. Nevertheless, the model largely reproduced time-series obtained from the Ny-Ålesund meteorological station, with the exception of surface winds during the LLJ. Over sea-ice east of Svalbard the model underestimated potential temperature and overestimated wind-speed compared to the CMET observations. This is most likely due to the full sea-ice coverage assumed by the model, and the consequent underestimation of ocean-atmosphere exchange in the presence of leads or fractional coverage. The suite of continuous CMET
Introduction
The polar regions provide a challenge to atmospheric numerical models. Firstly, model parameterisations are often adapted to and validated against lower latitudes and might not necessarily be applicable to high-latitude processes. Secondly, there exist only limited detailed in-situ observational data for model initialization and validation in remote polar regions. Accurate representation of polar meteorology and small-scale processes is, however, essential for meteorological forecast models, whose comparison to observations is particularly relevant for improving understanding of climate in the Arctic, a region undergoing rapid change (Vihma, 2014). A particular challenge is that the polar atmospheric boundary layer (ABL) is usually strongly stable during winter, and only weakly stable to neutral during summer (Persson et al., 2002). This stability acts to magnify the effects of flows over small-scale topography, such as channeling, katabatic flows and mountain waves, and can promote the formation of low-level jets. Further, in coastal areas, thermodynamic ice formation, growth and melt, and wind- and ocean-current-driven advection of sea ice can lead to highly variable surface conditions that control air-sea exchange of heat and momentum, and affect the radiative balance, e.g. through albedo. Snow layers deposited upon sea ice provide a further insulating layer that modifies heat exchange between the ocean and the overlying atmosphere. For example, for polar winter conditions at low atmospheric temperature (e.g. −40 °C), the surface temperature of open water areas is practically at the freezing point of sea water (−1.8 °C), while the surface temperature of thick snow-covered sea ice is substantially lower, being close to the atmospheric temperature (e.g. −40 °C). Hence, the heat and energy fluxes can vary by up to two orders of magnitude, depending on the surface state (Kilpeläinen et al., 2011).
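As a rough illustration of the surface-state contrast described above, a bulk-aerodynamic estimate of the sensible heat flux, H = rho cp CH U (Ts − Ta), can be compared for open water and for snow-covered sea ice at an air temperature of −40 °C. The transfer coefficient and wind speed below are typical illustrative values, not campaign data.

```python
rho_air = 1.4      # kg m^-3, approximate density of very cold near-surface air
cp = 1005.0        # J kg^-1 K^-1, specific heat of dry air
C_H = 1.5e-3       # bulk transfer coefficient for heat (illustrative)
U = 5.0            # m s^-1, wind speed (illustrative)
T_air = -40.0      # deg C, polar-winter example from the text

def sensible_heat_flux(T_surface):
    """Bulk-aerodynamic sensible heat flux (W m^-2), positive upward."""
    return rho_air * cp * C_H * U * (T_surface - T_air)

H_open_water = sensible_heat_flux(-1.8)    # water surface near the freezing point of sea water
H_snow_ice = sensible_heat_flux(-39.5)     # snow surface close to the air temperature
print(f"open water: {H_open_water:.0f} W m^-2, snow-covered ice: {H_snow_ice:.0f} W m^-2")
```

Even with these rough numbers the upward flux over open water exceeds that over snow-covered ice by well over an order of magnitude, which is the sensitivity to surface state that the model's sea-ice treatment must capture.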
Thus, significant uncertainties remain in modelling Arctic meteorological variables. For example, a comparison of eight different RCM (Regional Climate Model) simulations over the western Arctic to European Centre for Medium-Range Weather Forecasts (ECMWF) analyses over September 1997-September 1998 found general agreement with the model ensemble mean but large across-model variability, particularly in the lowest model levels (Rinke et al., 2006). Direct comparisons of Arctic ABL meteorology observations to mesoscale model simulations using the regional Weather Research and Forecasting (WRF) model (in standard or "polar" version) have also been performed. These include comparison to automatic weather stations (AWS) on the Greenland ice sheet in June 2001 and December 2002 (Hines and Bromwich, 2008); to drifting ice station SHEBA meteorological measurements over the Arctic Ocean in 1997-1998 (Bromwich et al., 2009); to tower observations and radio-sonde soundings in three Svalbard (Spitsbergen) fjords in winter and spring 2008 (Kilpeläinen et al., 2011); to AWS stations along Kongsfjorden in Svalbard in spring 2010 (Livik, 2011); to meteorological mast measurements in Wahlenbergfjorden, Svalbard in May 2006 and April 2007 (Makiranta et al., 2011); to tethered balloon soundings and mast observations in Advent- and Kongsfjorden in Svalbard in March-April 2009 (Kilpeläinen et al., 2012); and to a remotely controlled model aircraft equipped with meteorological sensors (the small unmanned meteorological observer, SUMO) over Iceland and Advent valley in Svalbard (Mayer et al., 2012a, b). These studies collectively found that (Polar) WRF was able to partially reproduce the meteorological observations, typically only when operated at higher model resolution (e.g. 1 km). Sea ice was found to be particularly important at high sea-air temperature differences, and low-level jets were observed yet not always reproduced by the model. Such comparisons between model and observations are, however, limited by the spatial scale of the field observations, typically only a few km.
To provide an in-situ meteorological ABL dataset covering a wider Arctic region, we deployed five Controlled METeorological (CMET) balloons, launched in May 2011 from Ny-Ålesund on Svalbard. CMET balloons are capable of performing sustained flights within the troposphere at designated altitudes, and can take vertical soundings at any time during the balloon flight on command via satellite link (Voss et al., 2013). The CMETs can also be configured for automated profiling of the atmospheric boundary layer during the flight, as we demonstrate in this study. The nested dual-balloon design ensures very little helium loss, enabling the balloons to make multi-day flights. This gives the opportunity to investigate areas far away from research bases, at greater spatial scales (many hundreds of kilometers from the launch point) than can be obtained by line-of-sight unmanned aerial vehicle (UAV) approaches, radio-sondes or tethered balloons. Previous CMET balloon applications include Riddle et al. (2006), Voss et al. (2010), Mentzoni (2011) and Stenmark et al. (2014). Voss et al. (2010) investigated the evolving vertical structure of the polluted Mexico City Area outflow by making repeated balloon profile measurements of temperature, humidity and wind in the advecting outflow. Riddle et al. (2006) and Mentzoni (2011) used the CMET balloons as a tool to verify atmospheric trajectory models, namely FlexTra (Stohl et al., 1995) and FlexPart (Stohl et al., 1998), in the United States and in the Arctic, respectively. Stenmark et al. (2014) combined data from CMETs, ground-based stations and a small model airplane with WRF simulations to highlight the role of nunatak-induced convection in Antarctica. Here we compare the soundings performed during the five Svalbard balloon flights of May 2011 to simulations made using the Weather Research and Forecasting (WRF) mesoscale model with three different boundary layer schemes, and thereby provide insights into key processes influencing meteorology of remote Arctic regions.
The five CMET balloons were launched from Ny-Ålesund during the period 5 to 12 May 2011. The CMET payload included meteorological sensors for temperature, relative humidity (RH) and pressure, as well as GPS and a satellite modem for in-flight control. The CMET balloon design and control algorithms are described in detail by Voss et al. (2013). Figure 1a and b shows the balloon flights of the May 2011 campaign as well as two meteorological sites providing additional ground-based data: the Ny-Ålesund AWIPEV station (from where the balloons were launched), and Verlegenhuken in north-east Spitsbergen. Balloons 1 and 2 had short flights due to technical issues encountered at the start of the campaign, and included only one vertical sounding each. Balloon 3 flew far north but did not perform soundings after leaving the coastal area of Spitsbergen, thus only the vertical sounding (ascent and descent) at the very beginning of the flight is used for this study. Balloon 4 flew eastwards but, despite strong balloon performance, needed to be terminated before encroaching on Russian airspace. In addition to its vertical sounding obtained shortly after launch it includes two closely spaced (ascent and descent) soundings over sea-ice east of Svalbard. Balloon 5 undertook a 24 h duration flight that first exited Kongsfjorden, then flew northwards along the coast and measured a much longer series of 18 consecutive profiles of the ABL in automatic sounding mode, before being raised to higher altitudes where winds advected it eastwards (Voss et al., 2013). To the best of our knowledge, this was the first automated sounding sequence made by a free balloon. Temperature and humidity profiles were extracted from the CMET flights for model comparison as indicated in Fig.
1a and b, in locations over Svalbard topography, over a sea-ice covered region east of Svalbard, and over a sea-ice free region west of Svalbard where continuous automated soundings were performed. The capacitance humidity sensor (G-TUCN.34 from UPSI, covering the 2 to 98 % RH range over −40 to +85 °C) generates a signal which is a function of the ambient relative humidity (RH) with respect to water. Humidity was therefore reported as RH over (supercooled liquid) water, which is standard procedure for atmospheric balloon-sonde measurements (even at sub-zero temperatures). Land- and/or sea-ice were, however, present at some of the campaign locations (although not during the automated soundings of flight 5 over ice-free ocean west of Svalbard). Where present, they could promote ice deposition, since the saturation vapour pressure over ice is lower than over water. In such conditions, RH calculated over water underestimates the RH with respect to ice. Nevertheless, for the relatively warm ambient surface temperatures encountered over ice during the campaign (typically only a few degrees below zero) such effects are modest. For consistency, RH over water is reported across the field campaign and is similarly illustrated for the model output.
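The water-to-ice RH conversion implied above follows from the ratio of saturation vapour pressures over supercooled water and over ice. The sketch below uses Magnus-type fits (coefficients vary slightly between references, so treat them as an assumption) to show that the correction is modest at temperatures only a few degrees below zero.

```python
import math

def e_sat_water(t_c):
    """Saturation vapour pressure over (supercooled) liquid water, hPa (Magnus fit)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def e_sat_ice(t_c):
    """Saturation vapour pressure over ice, hPa (Magnus fit)."""
    return 6.112 * math.exp(22.46 * t_c / (272.62 + t_c))

def rh_ice_from_rh_water(rh_water, t_c):
    """Convert RH reported over water to RH with respect to ice at temperature t_c (deg C)."""
    return rh_water * e_sat_water(t_c) / e_sat_ice(t_c)

for t in (-2.0, -10.0, -20.0):
    print(f"T = {t:5.1f} C: 80% RH over water = {rh_ice_from_rh_water(80.0, t):5.1f}% over ice")
```

At −2 °C the difference is only around 2 % RH, growing to roughly 20 % RH at −20 °C, which is why reporting RH over water is a small effect for the relatively warm surface conditions of this campaign.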
For comparison to the two WRF nested model runs (see details below), the balloon profiles were interpolated to 50 m height intervals and the measurements from paired ascent/descent soundings were averaged at each height. These ascent/descent profiles typically each required between 30 min and about one hour, depending on the altitude change. The averaged ascent/descent profiles were compared to WRF model output at the longitude and latitude of the balloon location at the maximum of its ascent/descent cycle, averaged over a full hour centred on the middle of the balloon profile. A more detailed analysis was made of the meteorological evolution observed during the consecutive automated soundings of flight 5, by comparing to WRF output at selected times along a transect line approximately following the CMET flight path, and geographically within the model layer corresponding to the average CMET flight altitude.
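A minimal sketch of the profile processing just described, interpolation onto a regular 50 m height grid followed by averaging of an ascent/descent pair. The sounding arrays below are placeholders, not actual CMET data.

```python
import numpy as np

def grid_profile(z, value, z_grid):
    """Interpolate an irregular sounding onto a regular height grid."""
    order = np.argsort(z)                       # np.interp requires increasing heights
    return np.interp(z_grid, z[order], value[order])

z_grid = np.arange(0.0, 1500.0 + 1.0, 50.0)     # 50 m height intervals

# Placeholder ascent/descent temperature soundings (height in m, T in deg C)
z_up, t_up = np.array([120.0, 400.0, 800.0, 1400.0]), np.array([-2.0, -4.5, -7.0, -11.0])
z_dn, t_dn = np.array([1350.0, 900.0, 450.0, 150.0]), np.array([-10.5, -7.5, -5.0, -2.5])

t_mean = 0.5 * (grid_profile(z_up, t_up, z_grid) + grid_profile(z_dn, t_dn, z_grid))
print(t_mean[:5])   # averaged ascent/descent profile, ready to compare with hourly WRF output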
Numerical model implementation
Regional model simulations were performed using the Weather Research and Forecasting (WRF) model, version 3.3.1. It is based on non-hydrostatic and fully compressible Euler equations that are integrated along terrain-following hydrostatic-pressure (sigma) coordinates; see Skamarock et al. (2008). The model was run for the simulation period of 3-12 May 2011; the nested domain configurations are shown in Fig. 2. The outer domain was centered at 78.9° N. The sea-ice field remained fixed during the whole simulation period, assuming full sea-ice coverage for any model grid-point with a positive sea-ice flag. This approach is justified by the good agreement between the ECMWF sea-ice flag and satellite images of sea-ice coverage on 5 May, both showing dense sea-ice east of Svalbard (Fig. 3). Conversely, to the west of Svalbard sea-ice is absent. Sea surface temperatures are, as usual, higher to the west than to the east of Svalbard. This is due to the northward-flowing warm and saline Atlantic Warm Current (AWC) or "Gulf Stream" that elevates temperatures along Svalbard's west coast (the AWC subsequently sinks below the cold polar waters further north).
For cloud microphysics the WRF single-moment 3-class simple ice scheme (Dudhia, 1989; Hong et al., 2004) was used. Radiation was parameterised with the Rapid Radiative Transfer Model (RRTM) longwave scheme (Mlawer et al., 1997) and the Dudhia shortwave scheme (Dudhia, 1989). Surface fluxes were provided by the Noah Land Surface Model (LSM), a four-layer soil temperature and moisture model with snow cover prediction (Chen and Dudhia, 2001). In the first and second domains, the Kain-Fritsch cumulus scheme (Kain, 2004) was applied in addition, whereas in the third domain cumulus convection was neglected.
Sensitivity tests were made with three different boundary layer parameterisation schemes, as follows. The Yonsei University (YSU) scheme (Hong et al., 2006) is a non-local first-order closure scheme that uses a counter-gradient term in the eddy diffusion equation, and is the default ABL scheme in WRF. The Mellor-Yamada-Janjic (MYJ) scheme (Janjic, 1990, 1996, 2002) uses the local 1.5-order (level 2.5) closure Mellor-Yamada model (Mellor and Yamada, 1982), in which the eddy diffusion coefficient is determined from the prognostically calculated turbulent kinetic energy (TKE). According to Mellor and Yamada (1982), it is an appropriate scheme for stable to slightly unstable flows, while errors might occur in the free-convection limit. The Quasi-Normal Scale Elimination (QNSE) scheme (Sukoriansky et al., 2006) is, like the MYJ scheme, a local 1.5-order closure scheme. In contrast to the MYJ scheme, it includes scale dependence by using only partial averaging instead of scale-independent Reynolds averaging, and is therefore able to take into account the spatial anisotropy of turbulent flows. It is thus considered especially suited to the stable ABL.
Meteorological conditions and ground-stations compared to the WRF simulation
The period of 3-12 May 2011 was characterized by rapidly changing meteorological conditions, reflected in the different CMET flight paths (Fig. 1a and b) and the 6-hourly averaged meteorological station surface observations shown in Fig. 4 (AWIPEV, Ny-Ålesund) and Supplement S1 (Verlegenhuken, N Svalbard). At first, northerly winds carried cold air to Ny-Ålesund, causing surface temperatures to decline, reaching an hourly minimum of −9.4 °C on 5 May. The wind direction then changed to southerly over not much more than one day, leading to increasing temperatures and an hourly maximum of 2.9 °C on 6 May. The wind direction subsequently became more westerly and then northerly, with high wind-speeds on 8 and 9 May, given the occurrence of a high-pressure system SW and a lower-pressure system NE of Svalbard; the AWIPEV station registered a maximum wind-speed of 17.4 m s−1 around noon on 9 May.
This was followed by a period of low wind-speed over 11-12 May, also reflected in the 24 h CMET flight to the east of Svalbard, with low temperatures recorded at the Spitsbergen meteorological stations. The WRF simulations show good general agreement with the 6-hourly averaged surface meteorological observations at Ny-Ålesund (Fig. 4) and at the Verlegenhuken station in N Svalbard (Fig. S1 in the Supplement), with similar results for all three ABL schemes. However, the high (> 10 m s−1) southerly surface winds predicted on 6-7 May for Ny-Ålesund were not observed. Outside of these dates, the model generally reproduced the winds, albeit at a wind direction 30° greater (clockwise) than typically observed in Ny-Ålesund (see wind-roses, Fig. S2), likely due to a wind channeling effect in Kongsfjorden that is not fully captured by the model. Temperature was well reproduced, although somewhat overestimated during cold periods (e.g. 5 and 11-12 May) at both surface stations.
Atmospheric boundary layer over Spitsbergen: topography, inversions and low level jets
The four CMET soundings over Svalbard topography are compared to WRF wind-speed, relative humidity and temperature profiles in Fig. 5, for the three different boundary layer schemes. Notably, the results from the three ABL schemes do not differ strongly from each other, but collectively show greater disagreement with the observed ABL profiles. WRF captures the profiles with weak winds (profiles 1 and 4) well, but not those on 5-6 May (profiles 2 and 3), where the CMET observations show the occurrence of a weak low-level jet (LLJ) with a wind-speed maximum at around 1200 m and lower wind-speeds above and below. WRF in contrast predicts the highest wind-speeds below 1000 m and also does not capture the observed inversion above 1300 m. Thus, the model's difficulty in predicting the lofted altitude of the LLJ appears connected to the model overestimation of surface wind-speed in Ny-Ålesund on 5-6 May (a model-observation discrepancy not found at Verlegenhuken further north in Svalbard). The occurrence of LLJs is likely promoted by the Svalbard topography in conjunction with a stable boundary layer. These model-observation discrepancies are consistent with previous studies: Molders and Kramm (2010) found that WRF had difficulties in capturing the full strength of the surface temperature inversion observed during a five-day cold weather period in Alaska. Kilpeläinen et al. (2012) found that WRF reproduced only half the observed inversions, often underestimated their depth and strength, and that the average modeled LLJ was deeper and stronger than that observed. An overestimation of surface wind-speeds by WRF, especially in the case of strong winds, has also been reported by Claremar et al. (2012), in comparison to AWS placed on three Svalbard glaciers, and by Kilpeläinen et al. (2011) and Kilpeläinen et al. (2012) in a study of Kongsfjorden. Since low wind-speeds are associated with inversion formation, WRF's overestimation of wind-speed might partly explain the difficulties in capturing (the strength of) inversions (Molders and Kramm, 2010). Consequently, since elevated inversions are often connected to low-level jets (Andreas et al., 2000), the difficulties in capturing inversions could help explain the model difficulties in predicting low-level jets.
A likely limitation to the WRF model capability over complex topography is its horizontal and vertical resolution. The model set-up used here includes 61 vertical layers, which Mayer et al. (2012b) suggest are necessary to resolve ABL phenomena such as low-level jets. However, Esau and Repina (2012) note that even a model resolution of 1 km in the horizontal does not properly represent the valley and steep surrounding mountains in Kongsfjorden, finding that even a fine-resolution model (56 × 61 m grid cell, 20 times finer than the 1 km grid cell used in this and other WRF studies) could not fully resolve near-surface small-scale turbulence in the strongly stratified Kongsfjorden atmosphere.
CMET atmospheric profiles east of Spitsbergen: the role of sea-ice
The two consecutive CMET profiles over sea-ice east of Svalbard are compared to WRF model run 2 in Fig. 6. All three schemes tend to overestimate wind-speed, especially at the low levels. Nevertheless, the slope of the wind profile corresponds approximately to the observations. Potential temperature is underestimated by around 2.5 K in all schemes. The largest difference between the observations and the model is found at the low levels, where it reaches up to 4 K. However, relative humidity is in better agreement, meaning that specific humidity must also be lower in the model than in the observations (e.g. a 4 K difference at 85 % RH corresponds to around 9 × 10−4 kg m−3 in absolute humidity, a difference of around one quarter to one third of ambient levels). The temperature and specific humidity bias is most probably due to an over-representation of sea ice in the WRF model setup, which exerts a strong control on surface conditions. Even though the sea-ice flag from the ECMWF data seems to agree fairly well with satellite sea-ice observations (Fig. 3), areas of polynyas and leads that can be recognized in the satellite picture were represented as homogeneous sea ice in the model. Further, the 100 % sea-ice coverage assumed in the model for grid cells with a positive sea-ice flag may not reflect reality: small patches of open water amongst very close (90-100 %) or close (80-90 %) drift ice would promote sea-air exchange, enhancing both temperature and specific humidity at the surface (Andreas et al., 2002). Inclusion of fractional sea-ice in WRF (available for WRF version 3.1.1 and higher) might rectify this problem, but is not straightforward to implement: the amount of sea ice in a grid cell varies with time through sea-ice formation, break-up and drifting, the latter typically a dominant control on ice presence during late spring east of Svalbard. However, the WRF meteorological model does not simulate surface oceanographic processes, thus predicted sea-ice presence depends only on whether the SST is above or below the freezing point of sea water. An option is to remove excessive sea ice manually, as, e.g., in Mayer et al. (2012b), or to update the sea-ice field and the SST at certain intervals (e.g. six hours) with data from observations or re-analyses, as in Kilpeläinen et al. (2012), but this becomes demanding over large regions. Nevertheless, given its strong control on ABL processes, a fractional sea-ice approach is recommended for future studies, particularly if a longer series of CMET soundings can be achieved, e.g. during balloon flights advected in a pole-ward direction rather than towards Russia, which necessitated the flight to be terminated on command after only two profiles in our study.
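The quoted absolute-humidity difference of roughly 9 × 10−4 kg m−3 can be checked with the ideal-gas relation rho_v = e/(Rv T) and a Magnus fit for the saturation vapour pressure. The near-surface temperature of about −3 °C assumed below is an illustrative value, not a reported measurement.

```python
import math

R_v = 461.5  # J kg^-1 K^-1, gas constant for water vapour

def e_sat_water_pa(t_c):
    """Saturation vapour pressure over water, Pa (Magnus fit)."""
    return 611.2 * math.exp(17.62 * t_c / (243.12 + t_c))

def abs_humidity(t_c, rh_percent):
    """Absolute humidity (kg m^-3) from temperature and relative humidity over water."""
    e = rh_percent / 100.0 * e_sat_water_pa(t_c)
    return e / (R_v * (t_c + 273.15))

t_obs, t_model = -3.0, -7.0            # assumed observed vs. model (4 K colder) temperatures
dq = abs_humidity(t_obs, 85.0) - abs_humidity(t_model, 85.0)
print(f"difference: {dq:.1e} kg m^-3")  # close to the ~9e-4 kg m^-3 quoted in the text
```

With these assumed values the difference comes out near 8 × 10−4 kg m−3, about a quarter of the ambient absolute humidity, consistent with the statement above.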
Automated CMET soundings during a 24 h flight west of Spitsbergen
Flight 5 provided a series of 18 boundary-layer profiles over a largely sea-ice free region west of Svalbard. With the low wind-speeds (< 5 m s−1), the 24 h balloon trajectory remained relatively close to the Svalbard coastline. Figure 7 shows the observed profiles of potential temperature, specific humidity, wind-speed and wind-direction, with interpolated data between the soundings. The soundings ranged from approximately 150 to 700 m during the first part of the flight (∼ 02:00-12:30 UTC, JD 131.08-131.52).
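The potential temperature and specific humidity shown in Fig. 7 are derived quantities, obtained from the measured pressure, temperature and RH. A minimal version of that conversion is sketched below using standard constants and a Magnus fit; the sample sounding point is a placeholder, not flight data.

```python
import math

def potential_temperature(t_c, p_hpa, p0_hpa=1000.0):
    """Potential temperature (K): theta = T * (p0 / p) ** (R_d / c_p)."""
    return (t_c + 273.15) * (p0_hpa / p_hpa) ** 0.2854   # R_d/c_p ~ 287/1004

def specific_humidity(t_c, rh_percent, p_hpa):
    """Specific humidity (kg/kg) from temperature, RH over water and pressure."""
    e = rh_percent / 100.0 * 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))  # vapour pressure, hPa
    return 0.622 * e / (p_hpa - 0.378 * e)

# Placeholder sounding point (not actual CMET data)
print(potential_temperature(-4.0, 960.0), specific_humidity(-4.0, 80.0, 960.0))
```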
Specific humidity is greatest and potential temperature lowest nearer the surface, as expected. Specific humidity tends to increase during the flight, particularly in the lower and middle levels, which can be interpreted as a diurnal enhancement from surface evaporation. However, beyond JD 131.40 (09:36 UTC) there is actually a decrease in humidity in the lowermost levels, with the maximum humidity in the sounding occurring around 350 m. Concurrent with this there is also a small increase in potential temperature at low altitudes. The wind-speed and direction plots indicate relatively calm conditions, with the greatest wind-speed in the lower levels, generally from a southerly direction. In contrast, at the top of the soundings, above 600 m, the balloon encountered winds from a northerly direction. From JD 131.35 onwards, the observed winds became broadly southerly also at 600 m. However, a band of rather more west-south-westerly winds developed at mid-altitudes (∼ 450 m), and low-level winds became (east-)south-easterly from JD 131.4 onwards. An important overall conclusion from these measurements is that the balloon was not sampling a uniform air mass during this flight; rather, it encountered a variety of air-mass properties and behaviours over the course of the soundings. While the complex flow in this case largely precludes a quasi-Lagrangian-type process study, the series of profiles nonetheless provides a nuanced understanding that is not possible with traditional rawinsondes or constant-altitude balloons.
The CMET observations appear consistent with the occurrence of a low-level flow that is decoupled from higher altitudes and, at least initially, a diurnal increase in surface humidity through enhanced ocean evaporation. The observed wind-shear is consistent with a tilted high-pressure system (that tilts with altitude towards the west of Svalbard, according to the WRF model), whilst surface winds may be further influenced by low-level channel flows. An outflow commonly exits from the nearby Kongsfjorden-Kongsvegen valley (e.g. Esau and Repina, 2012) but is hard to identify from the ground-station in Ny-Ålesund (south side of Kongsfjorden) given the rather low wind-speeds during this period. Winds that originate over land are likely colder, with lower humidity, than marine air masses. Thus, the CMET observations of lower specific humidity between JD 131.40-131.5 (09:36-12:00 UTC) might be explained by fumigation from, or simply sampling of, such a channel outflow. Alternatively, the CMET's location over the Kapp Mitra peninsula at this time may indicate an even more local source of dry air impacting low levels. A final possibility could be overturning of air masses in the vertical, bringing less humid air with higher potential temperature to lower altitudes. At mid-levels (∼ 450 m) a relatively humid air layer persists, properties which suggest it likely has origins at the surface. It appears to be advected north-eastwards, potentially replenishing air over Svalbard to replace that which may be lost via the channel outflow. Further discussion is provided in conjunction with the WRF model results.
The CMET observations are compared to WRF model output at two time-periods, 07:00 and 15:00 UTC on 11 May (JD 131.3 and 131.6, respectively). Model output (in 2-D) is presented in two ways: (i) cross-sections of relative humidity (RH) and potential temperature with altitude along a transect in the WRF model (QNSE, YSU and MYJ schemes) that lies in an approximately S-N direction and is reasonably close to (but not identical to) the balloon flight path, see Fig. 8; (ii) maps of temperature and absolute humidity (kg kg−1) at a constant model layer (equivalent to ∼ 300 m a.s.l. over the oceans, although reaching higher altitudes over the Svalbard terrain) that provide a geographic spatial context. For clarity, only output from the WRF MYJ BL scheme is illustrated (see Supplement for the QNSE and YSU schemes). For (i), the WRF model temperature and humidity cross-sections at 07:00 and 15:00 UTC are shown alongside the CMET observations along the whole balloon flight in Figs. 9 and 10, respectively, where the balloon locations at 07:00 and 15:00 UTC are denoted by a triangle and a cross, respectively. The model generally agrees with the balloon observations: potential temperature increases with altitude, and surface temperature decreases with increasing latitude in the 07:00 UTC cross-section. The boundary layer height is denoted by a sharp humidity decrease, at approximately 600 m (declining to 400 m at higher latitudes) in the 07:00 UTC WRF cross-section. For all the model schemes, a greater relative humidity and a higher boundary layer are predicted in the 15:00 UTC cross-section, as expected from the diurnal cycle, whereby solar heating increases evaporation to enhance RH, and increases thermal buoyancy to enhance ABL height. By 15:00 UTC the model potential temperature is also generally higher; however, surface temperature now increases with latitude. This may reflect the greater solar heating experienced at higher Arctic latitudes in the spring.
This overall RH trend of the model is in agreement with the observations: the CMET balloon data also exhibit a higher relative humidity at 15:00 UTC than at 07:00 UTC. There is also some variability between the different model boundary layer schemes: for the 15:00 UTC cross-section boundary layer height, YSU > QNSE > MYJ in terms of both relative humidity and ABL height. However, diurnal variability is not the only control on ABL humidity (as discussed above). The geographical influence is illustrated by (ii), spatial maps of absolute humidity across a model layer (corresponding to ∼ 300 m a.s.l. over oceans, somewhat higher over land) in Fig. 11. As expected, humidity in the marine air in the ice-free coastal region is greater than over Spitsbergen land, where temperatures were below freezing (see the AWIPEV station time-series, Fig. 4). Mixing or transfer between the marine- and land-influenced air masses can thus exert a significant influence on the observations, consistent with the findings from the CMET analysis above. The model results presented at 07:00 UTC and 15:00 UTC clarify this influence in a geographic context. Between launch and 07:00 UTC the CMET moved into a more marine environment, thus humidity increased. The balloon then moved northwards, perhaps drawn by a channel outflow from Kingsbay. Over this period humidity is constant or declining slightly, as the balloon passes across Kongsfjorden Bay and over the Kapp Mitra peninsula. From around midday to 15:00 UTC the humidity increases again as the balloon travels northwards (a temporary westerly diversion occurs following blocking of the low-level flow by the Svalbard terrain). This humidity enhancement appears mostly caused by the diurnal effect of enhanced evaporation. Alternatively, simple transport of the balloon into, or air-mass mixing with, moister marine air could play a role, but in any case the diurnal humidity signal appears strong across this NW region. After 15:00 UTC the balloon was raised to higher altitudes, hence the humidity decreased compared to that in the fixed model level (a similar decrease can be seen in the model altitude-transect plots, Fig. 10). Finally, we return to the subject of the quasi-Lagrangian nature of the CMET balloon flight. A detailed analysis is beyond the scope of this study; nevertheless, the wind, humidity and temperature observations indicate the presence of more than one air mass in this coastal region. Whilst CMETs have previously been used in Lagrangian-type experiments to track the evolution of an air mass (e.g. Voss et al., 2010), this case-study presents more complex atmospheric conditions. Both vertical winds and horizontal wind-shear can affect the Lagrangian nature of the CMET balloon experiment. Vertical air-mass movement is not measured by the CMET payload but is estimated by the WRF model to be sufficiently low (typically 0.01 m s−1) to be negligible in most cases, with the exception of localized areas in the QNSE scheme (see Supplement, Fig. S4). The CMET balloon movement was itself used to determine horizontal winds (Fig. 7), and showed decoupled air flows of near-opposite direction in the morning of 11 May (southerly winds at low altitudes, northerly winds at higher levels). Balloon soundings that traverse these layers will thus influence the overall trajectory. Trajectories were estimated from the observed winds at 50 m altitude intervals. This approximate technique assumes horizontally uniform flow (in the vicinity of the balloon and computed trajectories) during the 8 h period starting in the early morning of 11 May (Fig. 12). The lowermost layer exhibited the greatest wind-speed and thus has the longest (and least certain) trajectory, approximately double that of the balloon during the same period. The uppermost layer flows southwards before reversing direction, approximately returning to its initial position. The middle-layer trajectory is quite similar to that of the CMET balloon, but is transported initially somewhat more westwards, and later somewhat more eastwards, due to the ESE winds experienced in the late morning (see Fig. 7). It is worth noting that this final direction mirrors findings from two of the other CMET flights, whose initial paths out of Kongsfjorden deviated to the north-east into nearby Krossfjorden.
While the balloon-based trajectories and repeated profile measurements are not Lagrangian, they do provide insight into the complex dynamics of low-altitude circulation influenced by complex terrain. Furthermore, the trajectories and profile data can be computed and displayed in near-real time, allowing future experiments to be modified during flight (e.g. to track specific layers or events). Such experiments can provide observational insights that help constrain the complex meteorology.
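A minimal sketch of the simple layer-trajectory estimate described above: the (assumed horizontally uniform) wind in a given 50 m layer is integrated hourly over the 8 h window. The wind values and the approximate Ny-Ålesund start point used below are placeholders rather than the actual flight-5 winds.

```python
import math

def integrate_trajectory(lat0, lon0, winds, dt_s=3600.0):
    """Integrate a trajectory from hourly (speed in m/s, direction in deg FROM which the wind blows).
    Assumes horizontally uniform flow within the layer, as in the simple estimate of the text."""
    lat, lon, track = lat0, lon0, [(lat0, lon0)]
    for speed, wdir in winds:
        u = -speed * math.sin(math.radians(wdir))        # eastward wind component
        v = -speed * math.cos(math.radians(wdir))        # northward wind component
        lat += v * dt_s / 111.2e3                        # ~111.2 km per degree latitude
        lon += u * dt_s / (111.2e3 * math.cos(math.radians(lat)))
        track.append((lat, lon))
    return track

# Placeholder: 8 hours of light southerly winds (4 m/s from 180 deg) in one layer,
# starting near the approximate Ny-Ålesund launch location
track = integrate_trajectory(78.9, 11.9, [(4.0, 180.0)] * 8)
print(track[-1])   # end point after 8 h; repeat per 50 m layer to build Fig. 12-style tracks
```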
Conclusions
Five Controlled Meteorological (CMET) balloons were launched from Ny-Ålesund, Svalbard on 5-12 May 2011 to measure the meteorological conditions (RH, temperature, wind-speed) over Spitsbergen and in the surrounding Arctic region. Analysis of the meteorological data, in conjunction with simulations using the Weather Research and Forecasting (WRF) model at high (1 km) resolution, provides insight into processes governing the Arctic atmospheric boundary layer and its evolution. Three ABL parameterizations were investigated within the WRF model: YSU (Yonsei University), MYJ (Mellor-Yamada-Janjic) and QNSE (Quasi-Normal Scale Elimination). These schemes showed closer similarity to each other than between the model runs and the observations. This indicates more fundamental challenges to mesoscale modelling in the Arctic, identified from this study to include (i) the occurrence of inversions and low-level jets over Svalbard topography in association with stable boundary conditions, which likely can only be captured at greater model resolution, and (ii) the presence of (fractional) sea-ice that acts to modify sea-air exchange, but whose dynamical representation in the model is not straightforward to implement.
The WRF model simulations showed good general agreement with surface meteorological parameters (temperature, wind-speed, RH) at Ny-Ålesund and Verlegenhuken, N Svalbard, over 3-12 May 2011. However, temperatures were somewhat underestimated during colder periods, and surface winds were severely overestimated on 5-6 May in Ny-Ålesund. Comparison of four CMET profiles over Svalbard topography to the WRF model indicated model difficulties in capturing inversion layers and a low-level jet (LLJ). The CMET observations thereby provided a context for the predicted high surface wind-speeds in Ny-Ålesund, which were observed aloft but not at the surface during the campaign. A higher resolution is likely required to improve the model's ability to simulate the small-scale atmospheric dynamics, particularly for stable Arctic boundary layer conditions combined with Svalbard topography.
Two CMET soundings also probed the boundary layer over sea-ice to the east of Svalbard, during a balloon flight which, despite good performance, needed to be terminated to avoid encroaching on Russian territory. Model biases in wind-speed and surface-level temperature (and inferred for specific humidity) over this region are likely due to the representation of sea-ice in the model. Whilst the ECMWF-derived sea-ice flag used appears reasonable, the presence of fractional sea-ice east of Svalbard may have enabled greater air-sea exchange of heat and moisture than predicted by the model, which assumed 100 % sea-ice coverage for a positive sea-ice flag. Fractional representation of sea-ice in WRF is thus desirable, but is not straightforward to implement, as sea-ice coverage depends on both sea-surface-temperature-driven freezing/melting processes and ocean-current-driven advection, the latter being dominant east of Svalbard during spring. Improved sea-ice representation (e.g. applying a manual correction every 6 h) is recommended for future studies, especially if multiple soundings over sea-ice during longer duration CMET flights (i.e. northerly rather than easterly advected) can be achieved. A series of continuous automated soundings was performed during a CMET flight over a sea-ice free region west of Svalbard, tracing atmospheric boundary layer temperature and relative humidity profiles along the flight and with altitude. The meteorological conditions encountered were complex, including a low-level flow decoupled from the air mass at higher altitudes. An increase in low-level relative humidity was observed, consistent with the diurnal enhancement expected from evaporation. The WRF model predicted both an increase in RH and in ABL height over the diurnal cycle, concurrent with the CMET observations. The data-model interpretation also considers the influence of air masses of different origin which augment the diurnal trends: air masses originating over the warm saline ocean waters typically have greater humidity than those over the cold Svalbard topography.
Finally, the semi-Lagrangian nature of CMET flights is discussed. In this ABL study the balloon likely sampled different air masses through the vertical soundings undertaken during the flight, under conditions of strong vertical wind-shear. Analysis of the observed wind-fields provides an indication of the balloon trajectory in the context of surrounding wind trajectories at different altitudes.
In summary, CMET balloons provide a novel technological means to profile the remote Arctic boundary layer over multi-day flights, including the capacity to perform multiple automated soundings. CMET capabilities are thus highly complementary to other Arctic observational strategies including fixed stations, free and tethered balloons, and UAVs. Whilst UAVs offer full 3-D spatial control for obtaining meteorological observations, their investigation zone is generally limited to tens of kilometers based on both range and regulatory restrictions. CMET flights provide a relatively low-cost approach to observing the boundary layer at greater distances from the launch site (e.g. tens to hundreds of km), at altitudes potentially all the way down to the surface, and more remote from the disturbances of the Svalbard topography. Analysis of the CMET observations along with output from a regional model provides insights into the
Figure 1. Trajectories of five CMET balloons launched from Ny-Ålesund in May 2011. Soundings used for comparison to WRF are labelled P1si, P2si (over sea-ice east of Svalbard, for comparison to WRF model run 2), and P1, P2, P3, P4 (over Svalbard topography, for comparison to WRF model run 1). P5 and P6 denote balloon locations at 03:00 and 12:00 UTC during flight 5, whilst the balloon made automated continuous soundings to the west of Svalbard.
|
v3-fos-license
|
2023-01-20T15:03:17.715Z
|
2013-05-01T00:00:00.000
|
256011545
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP05(2013)050.pdf",
"pdf_hash": "c552f5d64f6f963f2cf5396b43c10e35441a830f",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43074",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "c552f5d64f6f963f2cf5396b43c10e35441a830f",
"year": 2013
}
|
pes2o/s2orc
|
Sterile neutrino oscillations: the global picture
Neutrino oscillations involving eV-scale neutrino mass states are investigated in the context of global neutrino oscillation data including short and long-baseline accelerator, reactor, and radioactive source experiments, as well as atmospheric and solar neutrinos. We consider sterile neutrino mass schemes involving one or two mass-squared differences at the eV² scale, denoted by 3+1, 3+2, and 1+3+1. We discuss the hints for eV-scale neutrinos from (-)νe disappearance (reactor and Gallium anomalies) and (-)νμ → (-)νe appearance (LSND and MiniBooNE) searches, and we present constraints on sterile neutrino mixing from (-)νμ and neutral-current disappearance data. An explanation of all hints in terms of oscillations suffers from severe tension between appearance and disappearance data. The best compatibility is obtained in the 1+3+1 scheme with a p-value of 0.2% and exceedingly worse compatibilities in the 3+1 and 3+2 schemes.
Introduction
Huge progress has been made in the study of neutrino oscillations [1-4], and with the recent determination of the last unknown mixing angle θ13 [5-10] a clear first-order picture of the three-flavor lepton mixing matrix has emerged, see e.g. [11]. Besides those achievements there are some anomalies which cannot be explained within the three-flavor framework and which might point towards the existence of additional neutrino flavors (so-called sterile neutrinos) with masses at the eV scale:
• The LSND experiment [12] reports evidence for ν̄μ → ν̄e transitions with E/L ∼ 1 eV², where E and L are the neutrino energy and the distance between source and detector, respectively.
• This effect is also searched for by the MiniBooNE experiment [13-17], which reports a yet unexplained event excess in the low-energy region of the electron neutrino and anti-neutrino event spectra. No significant excess is found at higher neutrino energies.
Interpreting the data in terms of oscillations, parameter values consistent with the ones from LSND are obtained.
• Radioactive source experiments at the Gallium solar neutrino experiments SAGE and GALLEX have obtained an event rate which is somewhat lower than expected. This effect can be explained by the hypothesis of νe disappearance due to oscillations with Δm² ≳ 1 eV² [18, 19] ("Gallium anomaly").
• A recent re-evaluation of the neutrino flux emitted by nuclear reactors [20, 21] has led to somewhat increased fluxes compared to previous calculations [22-25]. Based on the new flux calculation, the results of previous short-baseline (L ≲ 100 m) reactor experiments are in tension with the prediction, a result which can be explained by assuming ν̄e disappearance due to oscillations with Δm² ∼ 1 eV² [26] ("reactor anomaly").
Sterile neutrino oscillation schemes have been considered for a long time, see e.g. [27-30] for early references on four-neutrino scenarios. Effects of two sterile neutrinos at the eV scale were first considered in [31, 32], and oscillations with three sterile neutrinos have been investigated in [33, 34]. Thus, while the phenomenology of sterile neutrino models is well known, it has also been known for a long time that the LSND and MiniBooNE (-)νe appearance signals are in tension with bounds from disappearance experiments [35-37], challenging an interpretation in terms of sterile neutrino oscillations. This problem remains severe, and in the following we will give a detailed discussion of the status of the (-)νμ → (-)νe appearance hints from LSND and MiniBooNE in the light of recent global data. The situation is better for the hints for (-)νe disappearance from the reactor and Gallium anomalies, which are not in direct conflict with any other data. This somewhat ambiguous situation calls for an experimental answer, and indeed several projects are under preparation or under investigation, ranging from experiments with radioactive sources and short-baseline reactor experiments to new accelerator facilities. A recent review on light sterile neutrinos including an overview of possible experimental tests can be found in [38].
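In the 3+1 scheme the short-baseline appearance and disappearance channels reduce to effective two-flavour probabilities, P = sin² 2θ sin²(1.27 Δm²41 L/E), with sin² 2θμe = 4|Ue4|²|Uμ4|², sin² 2θee = 4|Ue4|²(1 − |Ue4|²) and sin² 2θμμ = 4|Uμ4|²(1 − |Uμ4|²). A minimal sketch of these relations is given below; the parameter values are purely illustrative, not fit results.

```python
import math

def sbl_prob(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Effective two-flavour short-baseline oscillation probability.
    1.267 converts Delta m^2 [eV^2] * L [km] / E [GeV] to the oscillation phase."""
    return sin2_2theta * math.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative 3+1 mixing parameters (not fit results): |Ue4|^2, |Umu4|^2, Delta m^2_41
Ue4_sq, Umu4_sq, dm2 = 0.03, 0.02, 1.0
sin2_2th_mue = 4 * Ue4_sq * Umu4_sq             # appearance amplitude
sin2_2th_ee = 4 * Ue4_sq * (1 - Ue4_sq)         # nu_e disappearance amplitude
sin2_2th_mumu = 4 * Umu4_sq * (1 - Umu4_sq)     # nu_mu disappearance amplitude

print(sbl_prob(sin2_2th_mue, dm2, L_km=0.03, E_GeV=0.04))   # LSND-like L/E ~ 1 km/GeV
```

Because the appearance amplitude is the product of the two small disappearance amplitudes, strong limits on νe and νμ disappearance translate into a tight bound on any appearance signal, which is the algebraic origin of the appearance-disappearance tension discussed throughout this paper.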
In this paper we provide an extensive analysis of the present situation of sterile neutrino scenarios. We discuss the possibility to explain the tentative positive signals from LSND and MiniBooNE, as well as the reactor and Gallium anomalies in terms of sterile neutrino oscillations in view of the global data. New ingredients with respect to our previous analysis [39] are the following.
• We use the latest data from the MiniBooNE (-)νμ → (-)νe appearance searches [15-17]. Our MiniBooNE appearance analysis is now based on Monte Carlo events provided by the collaboration, taking into account realistic event reconstruction, correlation matrices, as well as oscillations of various background components in a consistent way.
• We include the constraints on the appearance probability from E776 [40] and ICARUS [41].
• We include the Gallium anomaly in our fit.
• We take into account constraints from solar neutrinos, the KamLAND reactor experiment, and LSND and KARMEN measurements of the reaction ν e + 12 C → e − + 12 N.
• The treatment of the reactor anomaly is improved and updated by taking into account small changes in the predicted anti-neutrino fluxes as well as an improved consideration of systematic errors and their correlations.
• We take into account charged-current (CC) and neutral-current (NC) data from the MINOS long-baseline experiment [42,43].
• In our analysis of atmospheric neutrino data, we improve our formalism to fully take into account the mixing of ν e with other active or sterile neutrino states.
All the data used in this work are summarized in table 1. For other recent sterile neutrino global fits see [34,46,47]. We are restricting our analysis to neutrino oscillation data; implications for kinematic neutrino mass measurements and neutrino-less double beta-decay data have been discussed recently in [48][49][50].
Sterile neutrinos at the eV scale also have implications for cosmology. If thermalized in the early Universe they contribute to the number of relativistic degrees of freedom (effective number of neutrino species N_eff). A review with many references can be found in [38]. Indeed there might be some hints from cosmology for additional relativistic degrees of freedom (N_eff bigger than 3), coming mainly from CMB data, e.g. [47,[51][52][53][54][55]. Recently precise CMB data from the PLANCK satellite have been released [56]. Depending on which additional cosmological data are used, N_eff values ranging from 3.30 +0.54/−0.51 to 3.62 +0.50/−0.48 (uncertainties at 95% CL) are obtained [56]. Constraints from Big Bang Nucleosynthesis on N_eff have been considered recently in [57]. Apart from their contribution to N_eff, thermalized eV-scale neutrinos would also give a large contribution to the sum of neutrino masses, which is constrained to be below around 0.5 eV. The exact constraint depends on which cosmological data sets are used, but the most important observables are those related to galaxy clustering [51][52][53][54]. In the standard ΛCDM cosmology framework the bound on the sum of neutrino masses is in tension with the masses required to explain the aforementioned terrestrial hints [54]. The question to what extent such sterile neutrino scenarios are disfavored by cosmology and how far one would need to deviate from the ΛCDM model in order to accommodate them remains under discussion [47,58,59]. We will not include any information from cosmology explicitly in our numerical analysis. However, we will keep in mind that neutrino masses in excess of a few eV may become more and more difficult to reconcile with cosmological observations.

Table 1. Summary of the data used in this work divided into (−)ν_e disappearance, (−)ν_µ disappearance, and appearance data. The column "dof" gives the number of data points used in our analysis minus the number of free nuisance parameters for each experiment.

The outline of the paper is as follows. In section 2 we introduce the formalism of sterile neutrino oscillations and fix the parametrization of the mixing matrix. We then consider (−)ν_e disappearance data in section 3, discussing the reactor and Gallium anomalies. Constraints from (−)ν_µ disappearance as well as neutral-current data are discussed in section 4, and global (−)ν_µ → (−)ν_e appearance data including the LSND and MiniBooNE signals in section 5. The global fit of all these data combined is presented in section 6 for scenarios with one or two sterile neutrinos. We summarize our results and conclude in section 7. Supplementary material is provided in the appendices, including a discussion of complex phases in sterile neutrino oscillations, oscillation probabilities for solar and atmospheric neutrinos, as well as technical details of our experiment simulations.
2 Oscillation parameters in the presence of sterile neutrinos
In this work we consider the presence of s = 1 or 2 additional neutrino states with masses in the few eV range. When moving from 1 to 2 sterile neutrinos the qualitatively new feature is the possibility of CP violation already at short baseline [33,60].¹ The neutrino mass eigenstates ν_1, ..., ν_{3+s} are labeled such that ν_1, ν_2, ν_3 contribute mostly to the active flavor eigenstates and provide the mass-squared differences required for "standard" three-flavor oscillations, ∆m²_21 ≈ 7.5 × 10⁻⁵ eV² and |∆m²_31| ≈ 2.4 × 10⁻³ eV². The mass states ν_4, ν_5 are mostly sterile and provide mass-squared differences in the range 0.1 eV² ≲ |∆m²_41|, |∆m²_51| ≲ 10 eV². In the case of only one sterile neutrino, denoted by "3+1" in the following, we always assume ∆m²_41 > 0, but the oscillation phenomenology for ∆m²_41 < 0 would be the same. For two sterile neutrinos, we distinguish between a mass spectrum where ∆m²_41 and ∆m²_51 are both positive ("3+2") and where one of them is negative ("1+3+1"). The phenomenology is slightly different in the two cases [61]. We assume that the s linear combinations of mass states which are orthogonal to the three flavor states participating in weak interactions are true singlets and have no interaction with Standard Model particles. Oscillation physics is then described by a rectangular mixing matrix U_αi with α = e, µ, τ and i = 1, ..., 3+s, satisfying Σ_i U*_αi U_βi = δ_αβ.² We give here expressions for the oscillation probabilities in vacuum, focusing on the 3+2 case. It is trivial to recover the 3+1 formulas from them by simply dropping all terms involving the index "5". Formulas for the 1+3+1 scenario are obtained by taking either ∆m²_51 or ∆m²_41 negative. Oscillation probabilities relevant for solar and atmospheric neutrinos are given in appendices C and D, respectively.
First we consider the so-called "short-baseline" (SBL) limit, where the relevant range of neutrino energies and baselines is such that effects of ∆m²_21 and ∆m²_31 can be neglected. Then, oscillation probabilities depend only on ∆m²_i1 and U_αi with i ≥ 4. We obtain for the appearance probability

    P_SBL(ν_α → ν_β) = 4|U_α4|²|U_β4|² sin²φ_41 + 4|U_α5|²|U_β5|² sin²φ_51 + 8|U_α4 U_β4 U_α5 U_β5| sin φ_41 sin φ_51 cos(φ_54 − γ_αβ),   (2.1)

with φ_ij ≡ ∆m²_ij L/(4E) and γ_αβ ≡ arg(U_α4 U*_β4 U*_α5 U_β5). Eq. (2.1) holds for neutrinos; for anti-neutrinos one has to replace γ_αβ → −γ_αβ. Since eq. (2.1) is invariant under the transformation 4 ↔ 5 and γ_αβ → −γ_αβ, we can restrict the parameter range to ∆m²_54 ≥ 0, or equivalently ∆m²_51 ≥ ∆m²_41, without loss of generality. Note also that the probability eq. (2.1) depends only on the combinations |U_α4 U_β4| and |U_α5 U_β5|. The only SBL appearance experiments we are considering are in the (−)ν_µ → (−)ν_e channel. Therefore, the total number of independent parameters is 5 if only SBL appearance experiments are considered.

¹ Adding more than two sterile neutrinos does not lead to any qualitatively new physical effects and as shown in [33] the fit does not improve significantly. Therefore, we restrict the present analysis to s ≤ 2 sterile neutrinos.
² In this work we consider so-called phenomenological sterile neutrino models, where the 3+s neutrino mass eigenvalues and the mixing parameters U_αi are considered to be completely independent. In particular we do not assume a seesaw scenario, where the Dirac and Majorana mass matrices of the sterile neutrinos are the only source of neutrino mass and mixing. For such "minimal" sterile neutrino models see e.g. [62][63][64].
The 3+2 survival probability, on the other hand, is given in the SBL approximation by

    P_SBL(ν_α → ν_α) = 1 − 4(1 − |U_α4|² − |U_α5|²)(|U_α4|² sin²φ_41 + |U_α5|² sin²φ_51) − 4|U_α4|²|U_α5|² sin²φ_54.   (2.3)

In this work we include also experiments for which the SBL approximation cannot be adopted, in particular MINOS and ICARUS. For these experiments φ_31 is of order one. In the following we consider the relevant oscillation probabilities in the limit φ_41, φ_51, φ_54 → ∞ and φ_21 → 0. We call this the long-baseline (LBL) approximation. In this case we obtain the neutrino appearance probability (α ≠ β) given in eq. (2.4); the corresponding expression for anti-neutrinos is obtained by the replacement I_αβij → I*_αβij. The survival probability in the LBL limit is given in eq. (2.5). Note that in the numerical analysis of MINOS data neither the SBL nor the LBL approximation can be used, because φ_31, φ_41 and φ_51 can all become of order one either at the far detector or at the near detector [65]. Moreover, matter effects cannot be neglected in MINOS. All of these effects are properly included in our numerical analysis of the MINOS experiment.
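For concreteness, the SBL probabilities of eqs. (2.1) and (2.3) can be evaluated with a few lines of Python. This is only an illustrative sketch (the function names, the example mixing values, and the use of L/E in km/GeV are our choices, not part of the analysis code used in this work); the 3+1 limit is recovered by setting the index-5 elements to zero.

    import numpy as np

    def phase(dm2, L_over_E):
        """Oscillation phase phi_ij = Delta m^2_ij L/(4E) = 1.267 * dm2[eV^2] * (L/E)[km/GeV]."""
        return 1.267 * dm2 * L_over_E

    def P_app_SBL(Ua4, Ub4, Ua5, Ub5, dm41, dm51, gamma, L_over_E):
        """3+2 SBL appearance probability, eq. (2.1); the U's are the moduli |U_ai|.
        For anti-neutrinos replace gamma -> -gamma; for 3+1 set Ua5 = Ub5 = 0."""
        p41, p51 = phase(dm41, L_over_E), phase(dm51, L_over_E)
        p54 = phase(dm51 - dm41, L_over_E)
        return (4 * Ua4**2 * Ub4**2 * np.sin(p41)**2
                + 4 * Ua5**2 * Ub5**2 * np.sin(p51)**2
                + 8 * Ua4 * Ub4 * Ua5 * Ub5
                  * np.sin(p41) * np.sin(p51) * np.cos(p54 - gamma))

    def P_surv_SBL(Ua4, Ua5, dm41, dm51, L_over_E):
        """3+2 SBL survival probability, eq. (2.3)."""
        p41, p51 = phase(dm41, L_over_E), phase(dm51, L_over_E)
        p54 = phase(dm51 - dm41, L_over_E)
        return (1.0
                - 4 * (1 - Ua4**2 - Ua5**2) * (Ua4**2 * np.sin(p41)**2 + Ua5**2 * np.sin(p51)**2)
                - 4 * Ua4**2 * Ua5**2 * np.sin(p54)**2)

    # Example: nu_mu -> nu_e at L/E = 1 km/GeV for illustrative mixing values
    print(P_app_SBL(0.15, 0.12, 0.13, 0.11, dm41=0.9, dm51=6.0, gamma=1.0, L_over_E=1.0))
    print(P_surv_SBL(0.15, 0.13, dm41=0.9, dm51=6.0, L_over_E=1.0))

Note that, as stated above, only the moduli |U_α4|, |U_α5| and the phase combination γ_αβ enter these SBL expressions.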
Sometimes it is convenient to complete the 3 × (3+s) rectangular mixing matrix by s rows to an n × n unitary matrix, with n = 3+s. For n = 5 we use the parametrization for U given in eq. (2.6), in which U is written as an ordered product of rotation matrices: O_ij represents a real rotation matrix by an angle θ_ij in the ij plane, and V_ij represents a complex rotation by an angle θ_ij and a phase ϕ_ij. The particular ordering of the rotation matrices is an arbitrary convention which, however, turns out to be convenient for practical reasons.³ We have dropped the unobservable rotation matrix V_45, which just mixes sterile states. There is also some freedom regarding which phases are removed by field redefinitions and which ones are kept as physical phases. In appendix A we give a specific recipe for how to remove unphysical phases in a consistent way. In practical situations often one or more of the mass-squared differences can be considered to be zero, which again implies that some of the angles and phases will become unphysical. In table 2 we show the angle and phase counting for the SBL and LBL approximations for the 3+2 and 3+1 cases.

Table 2. Mixing angle and phase counting for s = 2 (3+2) and s = 1 (3+1) sterile neutrino schemes. The column "A/P" denotes the number of physical angles and phases, respectively. The column "LBL approx." ("SBL approx.") corresponds to the approximation ∆m²_21 → 0 (∆m²_21 → 0, ∆m²_31 → 0). We also give specific examples for which angles can be chosen real, by denoting with V_ij (O_ij) a complex (real) rotation.
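As an illustration of how such a product-of-rotations parametrization can be realized numerically, the sketch below builds a unitary 5 × 5 matrix from real rotations O_ij (ϕ = 0) and complex rotations V_ij(θ, ϕ). Since eq. (2.6) itself is not spelled out above, the ordering and angle values used here are placeholders; only the building blocks and the unitarity check are the point.

    import numpy as np

    def complex_rotation(n, i, j, theta, phi=0.0):
        """n x n complex rotation V_ij by angle theta and phase phi in the ij plane
        (phi = 0 gives the real rotation O_ij).  Indices i, j are 1-based as in the text."""
        V = np.eye(n, dtype=complex)
        c, s = np.cos(theta), np.sin(theta)
        V[i-1, i-1] = c
        V[j-1, j-1] = c
        V[i-1, j-1] = s * np.exp(-1j * phi)
        V[j-1, i-1] = -s * np.exp(1j * phi)
        return V

    # Placeholder ordering and angles for n = 5 (NOT the ordering of eq. (2.6)):
    n = 5
    U = np.eye(n, dtype=complex)
    for (i, j, theta, phi) in [(1, 4, 0.15, 0.0), (2, 4, 0.20, 1.0), (3, 4, 0.10, 0.0),
                               (1, 3, 0.15, 0.0), (2, 3, 0.70, 0.0), (1, 2, 0.59, 0.0)]:
        U = U @ complex_rotation(n, i, j, theta, phi)

    assert np.allclose(U @ U.conj().T, np.eye(n))   # unitarity check
    print(np.round(np.abs(U[:3, 3:]), 3))           # the |U_e4|, |U_e5|, |U_mu4|, ... block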
In the notation of eqs. (2.1), (2.3), (2.4), (2.5), it is explicit that only appearance experiments depend on complex phases in a parametrization independent way. However, in a particular parametrization such as eq. (2.6), also the moduli |U αi | may depend on cosines of the phase parameters ϕ ij , leading to some sensitivity of disappearance experiments to the ϕ ij in a CP-even fashion. Our parametrization eq. (2.6) guarantees that ( -) ν e disappearance experiments are independent of ϕ ij .
3 ν_e and ν̄_e disappearance searches

Disappearance experiments in the (−)ν_e sector probe the moduli of the entries in the first row of the neutrino mixing matrix, |U_ei|. In the short-baseline limit of the 3+1 scenario, the only relevant parameter is |U_e4|. For two sterile neutrinos, also |U_e5| is relevant. In this section we focus on 3+1 models, and comment only briefly on 3+2. For 3+1 oscillations in the SBL limit, the (−)ν_e survival probability takes an effective two-flavor form,

    P_SBL(ν_e → ν_e) = 1 − sin²2θ_ee sin²φ_41,   (3.1)

where we have defined an effective (−)ν_e-disappearance mixing angle by

    sin²2θ_ee ≡ 4|U_e4|²(1 − |U_e4|²).   (3.2)

This definition is parametrization independent. Using the specific parametrization of eq. (2.6) it turns out that θ_ee = θ_14.
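For orientation, a worked number (the value of |U_e4|² here is chosen purely for illustration): for |U_e4|² = 0.03, eq. (3.2) gives sin²2θ_ee = 4 × 0.03 × 0.97 ≈ 0.12, corresponding to a maximal (−)ν_e rate suppression of about 12%, and of about 6% once sin²φ_41 averages to 1/2.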
3.1 SBL reactor experiments
The data from reactor experiments used in our analysis are summarized in table 3. Our simulations make use of a dedicated reactor code based on previous publications, see e.g. [79,80]. We have updated the code to include the latest data and improved the treatment of uncertainties, see appendix B for details. The code used here is very similar to the one from ref. [11], extended to sterile neutrino oscillations.

Palo Verde [75]    820 m     1 rate
Chooz [76]         1050 m    14 bins
DoubleChooz [10]   1050 m    18 bins
DayaBay [77]                 6 rates − 1 norm
RENO [9]                     2 rates − 1 norm
KamLAND [78]                 17 bins

Table 3. Reactor data used in our analysis. The experiments in the upper part of the table have baselines L < 100 m and are referred to as SBL reactor experiments. For these experiments we list the baseline, the ratio of the observed and predicted rates (based on the flux predictions from [20,21]), the uncorrelated error, and the total experimental error (i.e., the square root of the diagonal entry of the correlation matrix). Uncertainties from the neutrino flux prediction are not included here, but are taken into account in our numerical analysis. For details on the correlations and flux errors see appendix B. In the lower part of the table, we list experiments with baselines of order 1 km (LBL reactors), and the KamLAND experiment with an average baseline of 180 km. For DayaBay, RENO, and KamLAND, we do not give a number for the baseline here because several baselines are involved in each of these experiments. The number of SBL data points is 19 or 76 and the total number of reactor data points is 75 or 132, depending on whether a total rates analysis (3 data points) or a spectral analysis (25+25+10 bins) is used for the Bugey3 experiment.

The reactor experiments listed in table 3 comprise SBL experiments with baselines below 100 m, LBL experiments with baselines of order 1 km, and KamLAND with an average baseline of 180 km. SBL experiments are not sensitive to standard three-flavor oscillations, but can observe oscillatory behavior for ∆m²_41, ∆m²_51 ∼ 1 eV². On the other hand, for long-baseline experiments, oscillations due to (∆m²_31, θ_13) are most relevant, and oscillations due to eV²-scale mass-squared differences are averaged out and lead only to a constant flux suppression. KamLAND is sensitive to oscillations driven by (∆m²_21, θ_12), whereas all θ_1k with k ≥ 3 lead only to a constant flux reduction.

Table 4. Best fit oscillation parameters and χ²_min values as well as ∆χ²_no-osc ≡ χ²_no-osc − χ²_min within a 3+1 framework. Except in the row labeled "SBL rates only", we always include spectral data from Bugey3. The row "global ν_e disapp." includes the data from reactor experiments (see table 3) as well as Gallium data, solar neutrinos and the LSND/KARMEN ν_e disappearance data from ν_e–¹²C scattering. The CL for the exclusion of the no-oscillation hypothesis is calculated assuming 2 degrees of freedom (|U_e4| and ∆m²_41).
For the SBL reactor experiments we show in table 3 also the ratio of the observed and predicted rate, where the latter is based on the flux calculations of [21] for neutrinos from ²³⁵U, ²³⁹Pu, ²⁴¹Pu fission and of [20] for ²³⁸U fission. The ratios are taken from [38] (which provides an update of [26]) and are based on the Particle Data Group's 2011 value for the neutron lifetime, τ_n = 881.5 s [81].⁴ We observe that most of the ratios are smaller than one. In order to assess the significance of this deviation, a careful error analysis is necessary. In the last column of table 3, we give the uncorrelated errors on the rates. They include statistical as well as uncorrelated experimental errors. In addition to these, there are also correlated experimental errors between various data points, which are described in detail in appendix B. Furthermore, we take into account the uncertainty on the neutrino flux prediction following the prescription given in [21]; see also appendix B for details.
Fitting the SBL data to the predicted rates we obtain χ²/dof = 23.0/19, which corresponds to a p-value of 2.4%. When expressed in terms of an energy-independent normalization factor f, the best fit is obtained at a value of f below one; the corresponding improvement in χ² compared to a fit with f = 1 is denoted by ∆χ²_{f=1}. Clearly the p-value increases drastically when f is allowed to float, leading to a preference for f ≠ 1 at the 2.7σ confidence level. This is our result for the significance of the reactor anomaly. Let us mention that (obviously) this result depends on the assumed systematic errors. While we have no particular reason to doubt any of the quoted errors, we have checked that when an ad hoc additional normalization uncertainty of 2% (3%) is added, the significance is reduced to 2.1σ (1.7σ). This shows that the reactor anomaly relies on the control of systematic errors at the percent level.
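The logic of such a normalization fit can be sketched in a few lines of Python; the rates, errors and correlation structure below are placeholders rather than the actual inputs of table 3 and appendix B, so the sketch only illustrates how f and ∆χ²_{f=1} are extracted.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Placeholder observed/predicted rate ratios and a toy covariance matrix
    ratios = np.array([0.94, 0.93, 0.97, 0.92, 0.95])
    errors = np.array([0.03, 0.03, 0.025, 0.04, 0.03])   # uncorrelated errors
    corr_norm = 0.02                                     # assumed common (fully correlated) error
    cov = np.diag(errors**2) + corr_norm**2              # correlated piece added to every entry
    cov_inv = np.linalg.inv(cov)

    def chi2(f):
        """Chi^2 of the rate ratios against a flat normalization factor f."""
        d = ratios - f
        return d @ cov_inv @ d

    res = minimize_scalar(chi2, bounds=(0.8, 1.2), method="bounded")
    f_best = res.x
    delta_chi2_f1 = chi2(1.0) - chi2(f_best)   # improvement with respect to f = 1
    print(f"best-fit f = {f_best:.3f}, sqrt(delta chi2) = {np.sqrt(delta_chi2_f1):.1f} sigma")

For a single fitted parameter f, the square root of ∆χ²_{f=1} gives the significance in standard deviations, which is how a number like the 2.7σ quoted above is obtained.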
The flux reduction suggested by the reactor anomaly can be explained by sterile neutrino oscillations. The results of the corresponding fits to SBL reactor data in a 3+1 framework are summarized in table 4. The allowed regions in ∆m²_41 and sin²2θ_14 are shown in figure 1 (left) for a rate-only analysis as well as for a fit including also Bugey3 spectral data. Both analyses give consistent results, with the main difference being that the spectral data disfavor certain values of ∆m²_41 around 0.6−0.7 eV² and 1.3 eV². The right panel of figure 1 shows the predicted rate suppression as a function of the baseline compared to the data. We show the prediction for the two best fit points from the left panel as well as for one point located in the island around ∆m²_41 ≈ 0.9 eV², which will be important in the combined fit with SBL appearance data. We observe that for the rate-only best fit point with ∆m²_41 = 0.44 eV² the prediction follows the tendency suggested by the ILL, Bugey4, and SRP (24 m) data points. This feature is no longer present for ∆m²_41 ≳ 1 eV², somewhat preferred by Bugey3 spectral data, where oscillations happen at even shorter baselines. However, from the GOF values given in table 4 we conclude that also those solutions provide a good fit to the data.
3.2 The Gallium anomaly
The response of Gallium solar neutrino experiments has been tested by deploying radioactive ⁵¹Cr or ³⁷Ar sources in the GALLEX [84,85] and SAGE [86,87] detectors. Results are reported as ratios of observed to expected rates, where the latter are traditionally computed using the best fit cross section from Bahcall [88], see e.g. [19]. The values for the cross sections weighted over the 4 (2) neutrino energy lines from Cr (Ar) from [88] are σ_B(Cr) = 58.1 × 10⁻⁴⁶ cm², σ_B(Ar) = 70.0 × 10⁻⁴⁶ cm². While the cross section for ⁷¹Ga → ⁷¹Ge into the ground state of ⁷¹Ge is well known from the inverse reaction, there are large uncertainties when the process proceeds via the excited states of ⁷¹Ge at 175 and 500 keV. Following [88], the total cross section can be written as in eq. (3.4), in terms of the Gamow-Teller strengths (BGT) of the transitions to the ground state and to the two excited states. In our analysis we use these strengths together with eq. (3.4) for the cross section. This means that the ratios of observed to expected rates based on the Bahcall prediction have to be rescaled by a factor 0.982 (0.977) for the Cr (Ar) experiments, so that we obtain the following updated numbers for our fits:

GALLEX: R₁(Cr) = 0.94 ± 0.11 [85], R₂(Cr) = 0.80 ± 0.10 [85],
SAGE: R₃(Cr) = 0.93 ± 0.12 [86], R₄(Ar) = 0.77 ± 0.08 [87].

Here, we have symmetrized the errors, and we have included only experimental errors, but not the uncertainty on the cross section (see below). We build a χ² out of the four data points from GALLEX and SAGE and introduce two pulls corresponding to the systematic uncertainty of the two transitions to excited states according to eq. (3.5). The determination of BGT_175 is relatively poor, with zero being allowed at 2σ. In order to avoid unphysical negative contributions from the 175 keV state, we restrict the domain of the corresponding pull parameter accordingly. Fitting the four data points with a constant neutrino flux normalization factor r, we find a best fit value of r below one. Because of the different cross sections used, this result differs from the one obtained in [19], where the best fit point is at r = 0.76, while the significance is comparable, around 3σ. An updated analysis including also a discussion of the implications of the measurement in [89] can be found in [90]. The event deficit in radioactive source experiments can be explained by assuming ν_e mixing with an eV-scale state, such that ν_e disappearance happens within the detector volume [18]. We fit the Gallium data in the 3+1 framework by averaging the oscillation probability over the detector volume using the geometries given in [18]. The resulting allowed region at 95% confidence level is shown in orange in figure 2. Consistent with the above discussion we find mixing angles somewhat smaller than those obtained by the authors of [19]. The best fit point from combined Gallium+SBL reactor data is given in table 4, and the no-oscillation hypothesis is disfavored at 99.9% CL (2 dof), or 3.3σ, compared to the 3+1 best fit point.
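A minimal sketch of such a pull-term χ² is given below, assuming (for illustration only) that each BGT uncertainty enters the predicted rate through a linear response coefficient and that the 175 keV pull is bounded from below to keep its contribution physical; the numerical coefficients are placeholders, not the values entering eqs. (3.4) and (3.5).

    import numpy as np
    from scipy.optimize import minimize

    R = np.array([0.94, 0.80, 0.93, 0.77])       # GALLEX/SAGE ratios used in the fit
    sigma = np.array([0.11, 0.10, 0.12, 0.08])   # symmetrized experimental errors

    # Placeholder linear response of the predicted rate to the two BGT pull parameters
    c175, c500 = 0.05, 0.10   # assumed fractional effect of a 1-sigma shift (not from eq. (3.5))

    def chi2(params):
        r, x175, x500 = params                   # flux normalization and two pulls
        pred = r * (1.0 + c175 * x175 + c500 * x500)
        return np.sum(((R - pred) / sigma) ** 2) + x175**2 + x500**2

    # Restrict the 175 keV pull from below so its contribution cannot turn negative (placeholder bound)
    res = minimize(chi2, x0=[0.9, 0.0, 0.0], bounds=[(0.5, 1.2), (-1.0, None), (None, None)])
    print("best-fit r:", round(res.x[0], 3))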
Let us consider now the Gallium and SBL reactor data in the framework of two sterile neutrinos, in particular in the 3+2 scheme. SBL ν_e and ν̄_e disappearance data depend on 4 parameters in this case, ∆m²_41, ∆m²_51, and the two mixing angles θ_14 and θ_15 (or, equivalently, the moduli of the two matrix elements U_e4 and U_e5). We report the best fit points from SBL reactor data and from SBL reactor data combined with the Gallium source data in table 5. For these two cases we find an improvement of 5.3 and 3.8 units in χ², respectively, when going from the 3+1 scenario to the 3+2 case. Considering that the 3+2 model has two additional parameters compared to 3+1, we conclude that there is no improvement of the fit beyond the one expected by increasing the number of parameters, and that SBL (−)ν_e data sets show no significant preference for 3+2 over 3+1. This is also visible from the fact that the confidence level at which the no-oscillation hypothesis is excluded does not increase for 3+2 compared to 3+1, see the last columns of tables 4 and 5. There the ∆χ² is translated into a confidence level by taking into account the number of parameters relevant in each model, i.e., 2 for 3+1 and 4 for 3+2.

Figure 2. Allowed regions at 95% CL (2 dof) for 3+1 oscillations. We show SBL reactor data (blue shaded), Gallium radioactive source data (orange shaded), ν_e disappearance constraints from ν_e–¹²C scattering data from LSND and KARMEN (dark red dotted), long-baseline reactor data from CHOOZ, Palo Verde, DoubleChooz, Daya Bay and RENO (blue short-dashed) and solar+KamLAND data (black long-dashed). The red shaded region is the combined region from all these ν_e and ν̄_e disappearance data sets.

Table 5. Best fit point of SBL reactor data and SBL reactor + Gallium data in a 3+2 oscillation scheme. We give the mass-squared differences in eV² and the mixing angles in radians. The relation to the mixing matrix elements is |U_e4| = cos θ_15 sin θ_14 and |U_e5| = sin θ_15. The ∆χ² relative to 3+1 oscillations is evaluated for 2 dof, corresponding to the two additional parameters, while for the ∆χ² relative to no oscillations we use 4 dof.
3.3 Global data on ν_e and ν̄_e disappearance
Let us now consider the global picture regarding (−)ν_e disappearance. In addition to the short-baseline reactor and Gallium data discussed above, we now add data from the following experiments:

• The remaining reactor experiments at a long baseline ("LBL reactors") and the very long-baseline reactor experiment KamLAND, see table 3.
• Global data on solar neutrinos, see appendix C for details.
• LSND and KARMEN measurements of the reaction ν e + 12 C → e − + 12 N [91,92]. The experiments have found agreement with the expected cross section [93], hence their measurements constrain the disappearance of ν e with eV-scale mass-squared differences [94,95]. Details on our analysis of the 12 C scattering data are given in appendix E.1.
So far the LBL experiments DayaBay and RENO have released only data on the relative comparison of near (L ∼ 400 m) and far (L ∼ 1.5 km) detectors, but no information on the absolute flux determination is available. Therefore, their published data are essentially insensitive to oscillations with eV-scale neutrinos and they contribute only indirectly via constraining θ 13 . In our analysis we include a free, independent flux normalization factor for each of those two experiments. Chooz and DoubleChooz both lack a near detector. Therefore, in the official analyses performed by the respective collaborations the Bugey4 measurement is used to normalize the flux. This makes the official Chooz and DoubleChooz results on θ 13 also largely independent of the presence of sterile neutrinos. However, the absolute rate of Bugey4 in terms of the flux predictions is published (see table 3) and we can use this number to obtain an absolute flux prediction for Chooz and DoubleChooz. Therefore, in our analysis Chooz and DoubleChooz (as well as Palo Verde) by themselves also show some sensitivity to sterile neutrino oscillations. In a combined analysis of Chooz and DoubleChooz with SBLR data the official analyses are recovered approximately. Previous considerations of LBL reactor experiments in the context of sterile neutrinos can be found in refs. [96][97][98][99].
We show in table 4 a combined analysis of the SBL and LBL reactor experiments (row denoted by "SBL+LBL"), where we minimize with respect to θ 13 . We find that the significance of the reactor anomaly is not affected by the inclusion of LBL experiments and finite θ 13 . The ∆χ 2 no-osc even slightly increases from 9.0 to 9.2 when adding LBL data to the SBL data ("no-osc" refers here to θ 14 = 0). Hence, we do not agree with the conclusions of ref. [100], which finds that the significance of the reactor anomaly is reduced to 1.4σ when LBL data and a finite value of θ 13 is taken into account.
Solar neutrinos are also sensitive to sterile neutrino mixing (see e.g. [101][102][103]). The main effect of the presence of ν_e mixing with eV states is an overall flux reduction. While this effect is largely degenerate with θ_13, a non-trivial bound is obtained in combination with DayaBay, RENO and KamLAND. KamLAND is sensitive to oscillations driven by ∆m²_21 and θ_12, whereas sterile neutrinos affect the overall normalization, degenerate with θ_13. The matter effect in the Sun as well as SNO NC data provide additional signatures of sterile neutrinos, beyond an overall normalization. As we will show in section 4, solar data depend also on the mixing angles θ_24 and θ_34, controlling the fraction of ν_e → ν_s transitions, see e.g. [101]. As discussed in appendix C, in the limit ∆m²_i1 = ∞ for i ≥ 3, solar data depend on 6 real mixing parameters, 1 complex phase and ∆m²_21. Hence, in a 3+1 scheme all six mixing angles are necessary to describe solar data in full generality. However, once other constraints on the mixing angles are taken into account, the effects of θ_24, θ_34, and the complex phase are tiny and numerically have a negligible impact on our results. Therefore we set θ_24 = θ_34 = 0 for the solar neutrino analysis in this section. In this limit solar data also become independent of the complex phase.
The results of our fit to global ( -) ν e disappearance data are shown in figure 2 and the best fit point is given in table 4. For this analysis the mass-squared differences ∆m 2 21 and ∆m 2 31 have been fixed, whereas we marginalize over the mixing angles θ 12 and θ 13 . We see from figure 2 that the parameter region favored by short-baseline reactor and Gallium data is well consistent with constraints from long-baseline reactors, KARMEN's and LSND's ν e rate, and with solar and KamLAND data.
Recently, data from the Mainz [104] and Troitsk [105] tritium beta-decay experiments have been re-analyzed to set limits on the mixing of ν e with new eV neutrino mass states. Taking the results of [105] at face value, the Troitsk limit would cut-off the high-mass region in figure 2 at around 100 eV 2 [106] (above the plot-range shown in the figure). The bounds obtained in [104] are somewhat weaker. The differences between the limits obtained in [104] and [105] depend on assumptions concerning systematic uncertainties and therefore we prefer not to explicitly include them in our fit. The sensitivity of future tritium decay data from the KATRIN experiment has been estimated in [107]. Implications of sterile neutrinos for neutrino-less double beta-decay have been discussed recently in [48][49][50].
Let us now address the question whether the presence of a sterile neutrino affects the determination of the mixing angle θ_13 (see also [99,100]). In figure 3 we show the combined determination of θ_13 and θ_14 for two fixed values of ∆m²_41. The left panel corresponds to a relatively large value of 10 eV², whereas for the right panel we have chosen the value favored by the global (−)ν_e disappearance best fit point, 1.78 eV². The mass-squared differences ∆m²_21 and ∆m²_31 have been fixed, whereas we marginalize over the mixing angle θ_12. We observe a clear complementarity of the different data sets: SBL reactor and Gallium data determine |U_e4|, since oscillations are possible only via ∆m²_41; all other mass-squared differences are effectively zero for them. For LBL reactors ∆m²_41 can be set to infinity, ∆m²_31 is finite, and ∆m²_21 is effectively zero; therefore they provide an unambiguous determination of θ_13 by comparing near and far detector data. The upper bound on |U_e4| from LBL reactors is provided by Chooz, Palo Verde, and DoubleChooz, since for those experiments also information on the absolute flux normalization can be used, as mentioned above. In contrast, for solar neutrinos and KamLAND, both ∆m²_41 and ∆m²_31 are effectively infinite, and θ_13 and θ_14 affect essentially the overall normalization and are largely degenerate, as visible in the figure.
In conclusion, the θ_13 determination is rather stable with respect to the presence of sterile neutrinos. We note, however, that its interpretation becomes slightly more complicated. For instance, in the 3+1 scheme using the parametrization from table 2, the relation between mixing matrix elements and mixing angles is |U_e3| = cos θ_14 sin θ_13 and |U_e4| = sin θ_14. Hence, the one-to-one correspondence between |U_e3| and θ_13 as in the three-flavor case is spoiled.

4 ν_µ, ν̄_µ, and neutral-current disappearance searches

In this section we discuss the constraints on the mixing of (−)ν_µ and (−)ν_τ with new eV-scale mass states. In the 3+1 scheme this is parametrized by |U_µ4| and |U_τ4|, respectively. In terms of the mixing angles as defined in eq. (2.6) we have |U_µ4| = cos θ_14 sin θ_24 and |U_τ4| = cos θ_14 cos θ_24 sin θ_34. In the present paper we include data sets from the following experiments to constrain (−)ν_µ and (−)ν_τ mixing with eV states:

• SBL ν_µ disappearance data from CDHS [108]. Details of our simulation are given in [79].
• Super-Kamiokande. It has been pointed out in [109] that atmospheric neutrino data from Super-Kamiokande provide a bound on the mixing of ν µ with eV-scale mass states, i.e., on the mixing matrix elements |U µ4 |, |U µ5 |. In addition, neutral-current matter effects provide a constraint on |U τ 4 |, |U τ 5 |. A discussion of the effect is given in the appendix of [33]. Details on our analysis and references are given in appendix D.
• MiniBooNE [44,45]. Apart from the (−)ν_e appearance search, MiniBooNE can also look for SBL (−)ν_µ disappearance. Details on our analysis are given in appendix E.4.

Figure 4. Left: constraints in the plane of |U_µ4|² and ∆m²_41 at 99% CL (2 dof) from CDHS, atmospheric neutrinos, MiniBooNE disappearance, MINOS CC and NC data, and the combination of them. We minimize with respect to |U_τ4| and the complex phase ϕ_24. In red we show the region preferred by LSND and MiniBooNE appearance data combined with reactor and Gallium data on (−)ν_e disappearance, where for fixed |U_µ4|² we minimize with respect to |U_e4|². Right: constraints in the plane of |U_τ4|² and ∆m²_41 at 99% CL (2 dof) from MINOS CC + NC data (green) and the combined global (−)ν_µ and NC disappearance data (blue region, black curves). We minimize with respect to |U_µ4| and we show the weakest ("best phase") and strongest ("worst phase") limits, depending on the choice of the complex phase ϕ_24. In both panels we minimize with respect to ∆m²_31, θ_23, and we fix sin²2θ_13 = 0.092 and θ_14 = 0 (except for the evidence regions in the left panel).
• MINOS [42,43]. The MINOS long-baseline experiment has published data on charged current (CC) ( -) ν µ disappearance as well as on the neutral current (NC) count rate. Both are based on a comparison of near and far detector measurements. In addition to providing the most precise determination of ∆m 2 31 (from CC data), those data can also be used to constrain sterile neutrino mixing, where CC (NC) data are mainly relevant for |U µ4 |, |U µ5 | (|U τ 4 |, |U τ 5 |). See appendix E.5 for details.
Limits on the |U_µi| row of the mixing matrix come from (−)ν_µ disappearance experiments. In a 3+1 scheme the (−)ν_µ SBL disappearance probability is given by

    P_SBL(ν_µ → ν_µ) = 1 − sin²2θ_µµ sin²φ_41,   (4.1)

where we have defined an effective (−)ν_µ disappearance mixing angle by

    sin²2θ_µµ ≡ 4|U_µ4|²(1 − |U_µ4|²),   (4.2)

i.e., in our parametrization (2.6) the effective mixing angle θ_µµ depends on both θ_24 and θ_14.
In contrast to the ν_e disappearance searches discussed in the previous section, experiments probing (−)ν_µ disappearance have not reported any hints for a positive signal. We show the limits from the data listed above in the left panel of figure 4. Note that the MINOS limit is based on the comparison of the data in near and far detectors. For ∆m²_41 ∼ 10 eV² oscillation effects become relevant at the near detector, explaining the corresponding features in the MINOS bound around that value of ∆m²_41, whereas the features around ∆m²_41 ∼ 0.1 eV² emerge from oscillation effects in the far detector. The roughly constant limit in the intermediate range 0.5 eV² ≲ ∆m²_41 ≲ 3 eV² corresponds to the limit ∆m²_41 ≈ 0 (∞) in the near (far) detector adopted in [42,43]. In that range the MINOS limit on |U_µ4| is comparable to the one from SuperK atmospheric data. For ∆m²_41 ≳ 1 eV² the limit is dominated by CDHS and MiniBooNE disappearance data.
In figure 4 (left) we show also the region preferred by the hints for eV-scale oscillations from LSND and MiniBooNE appearance data (see next section) combined with reactor and Gallium data on ( -) ν e disappearance. For fixed |U µ4 | 2 we minimize the corresponding χ 2 with respect to |U e4 | 2 to show the projection in the plane of |U µ4 | 2 and ∆m 2 41 . The tension between the hints in the ν µ data is clearly visible in this plot. We will discuss this conflict in detail in section 6.
Limits on the mixing of ν_τ with eV-scale states are obtained from data involving information from NC interactions, which allow one to distinguish between active and sterile neutrinos.⁵ The relevant data samples are atmospheric and solar neutrinos (via the NC matter effect) and MINOS NC data. Furthermore, the parameter |U_τ4| controls the relative weight of the oscillation modes ν_µ → ν_τ and ν_µ → ν_s at the "atmospheric" scale ∆m²_31: a large value of |U_τ4| implies a large fraction of ν_µ → ν_s oscillations at the ∆m²_31 scale. The limit in the plane of |U_τ4|² and ∆m²_41 is shown in the right panel of figure 4. As follows from eq. (2.4) (see also appendix A), in the LBL approximation relevant for MINOS NC data a complex phase enters the oscillation probabilities, corresponding to the combination arg(U*_µ4 U_τ4 U_µ3 U*_τ3). In our calculations we take the rotation matrix V_24 to be complex and use the phase ϕ_24 to parametrize this phase. In figure 4 we illustrate the impact of this phase by showing the strongest and weakest limits obtained when varying ϕ_24. We observe that the limit from MINOS depends quite significantly on this phase. The different shapes of the "best phase" and "worst phase" regions emerge from the different properties of CC and NC data. For the weakest limit ("best phase") the fit uses the freedom of the term including the complex phase, which implies that a finite value of θ_24 (or |U_µ4|) is adopted, subject to the constraint from MINOS CC data. Therefore the same structure as in the left panel of figure 4 becomes visible also in the limit on |U_τ4|. If we force the phase to take on a value not favored by the fit, a smaller χ² is obtained for θ_24 close to zero, which implies that the phase actually becomes unphysical. In this case CC data are not important for the limit on |U_τ4|, which is then dominated by NC data. Because of the much worse energy reconstruction for NC events compared to CC ones, the features induced by finite values of ∆m²_41 in either the far or near detector become to a large extent washed out.

Figure 5. Constraints in the plane of |U_µ4|² and |U_τ4|² for three fixed values of ∆m²_41 from MINOS CC + NC data (green), atmospheric neutrinos (orange), CDHS + MiniBooNE (−)ν_µ disappearance + LBL reactors (red), and the combination of those data (blue). The constraint from solar neutrinos is shown in magenta. Regions are shown at 90% and 99% CL (2 dof) with respect to the χ² minimum at the fixed ∆m²_41. We minimize with respect to complex phases and include effects of θ_13 and θ_14 where relevant. The gray region is excluded by the unitarity requirement |U_µ4|² + |U_τ4|² ≤ 1. Note the different scale on the axes.
The global limit on |U_τ4| is actually dominated by atmospheric neutrino data and shows only a very weak dependence on the complex phase. In our atmospheric neutrino analysis the information on |U_τ4| enters via the NC matter effect induced by the presence of sterile neutrinos. A large value of |U_τ4| would imply a significant matter effect in ∆m²_31-driven (−)ν_µ disappearance, which is not consistent with the zenith angle distribution observed in SuperK. We find the limit

    |U_τ4|² ≲ 0.2 at 2σ (1 dof)   (4.3)

from global data, largely independent of ∆m²_41 as well as of complex phases. Figure 5 shows the constraints in the plane of |U_µ4|² and |U_τ4|² for three fixed values of ∆m²_41. We observe comparable bounds on |U_µ4|² from MINOS (mainly CC data) and atmospheric data, which however are superseded by CDHS and MiniBooNE for ∆m²_41 ≳ 1 eV² (left and middle panels). Those latter data, however, do not provide any constraint on |U_τ4|², where the global bound is dominated by atmospheric neutrinos for all values of ∆m²_41 of interest. We also observe that solar neutrinos provide a bound on |U_τ4|² of similar strength as MINOS data, thanks to the NC matter effect and SNO NC data. No relevant limit can be set on |U_µ4|² from solar neutrinos.

5 ν_µ → ν_e and ν̄_µ → ν̄_e appearance searches

Now we move on to the discussion of appearance searches. In contrast to disappearance experiments, which probe only one row of the mixing matrix, i.e., only the elements |U_αi| for fixed α, an appearance experiment in the channel (−)ν_α → (−)ν_β is sensitive to two rows via combinations like |U_αi U_βi| and potentially to some complex phases. In the SBL approximation the 3+1 appearance probability in the phenomenologically most relevant channel (−)ν_µ → (−)ν_e is given by

    P_SBL(ν_µ → ν_e) = sin²2θ_µe sin²φ_41,   (5.1)

where we have defined an effective mixing angle by

    sin²2θ_µe ≡ 4|U_e4|²|U_µ4|².   (5.2)

In the parametrization from eq. (2.6) we obtain sin 2θ_µe = sin θ_24 sin 2θ_14. The oscillation probability in the 3+2 scheme is given in eq. (2.1). The 3+1 SBL appearance probability does not depend on complex phases, whereas in the 3+2 scheme CP violation via complex phases is possible at SBL [33,60].
Our analyses of LSND [12], KARMEN [118], and NOMAD (−)ν_e appearance data are based on [33,79,120], where references and technical details can be found. Our analyses of E776 [40] and ICARUS [41], used for the first time in the present paper, are described in appendices E.2 and E.3, respectively.⁶ In the case of LSND, we use only the decay-at-rest (DAR) data, which are most sensitive to oscillations. Decay-in-flight (DIF) data on ν_µ → ν_e are consistent with the signal seen in DAR data; however, the significance of the oscillation signal for DIF is much less than for DAR. A combined DAR-DIF analysis in a two-neutrino framework would shift the allowed region to somewhat smaller values of the mixing angle. A detailed discussion of LSND DAR versus DIF in the context of 3+1 neutrino oscillations can be found in [35].
In our analysis of the MiniBooNE ν_e and ν̄_e appearance search we use the latest data⁷ from [16], following closely the analysis instructions provided by the collaboration. Details are given in appendix E.4. Since their very first data release in 2007 [13], MiniBooNE observe an excess of events over the expected background in the low-energy (≲ 500 MeV) region of the event spectrum [122]. Since the spectral shape of the excess is difficult to explain in a two-flavor oscillation framework, historically the analysis window has been (somewhat artificially) divided into a low-energy region containing the excess events and a high-energy part with no excess.⁸ Preliminary results from anti-neutrinos also showed some indication for an event excess in the high-energy part of the spectrum [14], which indicated the need for CP violation in order to reconcile neutrino and anti-neutrino data. However, for the most recent data [15,16] the shapes of the neutrino and anti-neutrino spectra appear to be consistent with each other, showing excess events below around 500 MeV and data consistent with background in the high-energy region, see figure 6. In our work we always analyse the full energy spectrum for both neutrinos and anti-neutrinos. Contrary to the analysis of the MiniBooNE collaboration, we take into account oscillations of all background components in a consistent way, according to the particular oscillation framework to be tested, see appendix E.4 for details.

⁶ Recently also the OPERA experiment presented results from a ν_µ → ν_e appearance search [121]. The obtained limit is comparable to the one from ICARUS [41].
⁷ The recent updated analysis from MiniBooNE [17] is based on the same data as [16], corresponding to 6.46 × 10²⁰ protons on target in neutrino mode and 11.27 × 10²⁰ protons on target in anti-neutrino mode.
⁸ The importance of energy reconstruction effects for the low-energy excess has been pointed out in refs. [123,124], see also [17].

Figure 6. MiniBooNE neutrino (left) and anti-neutrino (right) data compared to the predicted spectra for the 3+1, 3+2, and 1+3+1 best fit points for the combined appearance data (the data set used in figure 7) and for global data including disappearance. Shaded histograms correspond to the unoscillated backgrounds. The predicted spectra include the effect of background oscillations. The corresponding χ² values (for combined neutrino and anti-neutrino data) are also given in the plot.
Figure 7. Allowed regions and upper bounds at 99% CL (2 dof) for SBL (−)ν_µ → (−)ν_e appearance experiments in the 3+1 scheme. We show the regions from LSND and MiniBooNE anti-neutrino data and the bounds from MiniBooNE neutrinos, KARMEN, NOMAD, ICARUS, and E776. The latter is combined with LBL reactor data in order to constrain the oscillations of the (−)ν_e backgrounds; this leads to a non-vanishing bound on sin²2θ_µe from E776 at low ∆m²_41. The red region corresponds to the combination of those data, with the star indicating the best fit point.
In figure 7 we show a summary of the (−)ν_µ → (−)ν_e data in the 3+1 scheme. We observe an allowed region from MiniBooNE anti-neutrino data that is driven by the event excess below around 800 MeV and has significant overlap with the parameter region preferred by LSND. At the 99% CL shown in the figure, MiniBooNE neutrino data give only an upper bound, although we find closed regions (again driven by the low-energy excess) at lower confidence levels. This is in qualitative agreement with the results obtained by the MiniBooNE collaboration, compare figure 4 of [16] or figure 3 of [17]. The different shape of our regions is due to the oscillations of the background components. Those can be relatively large in an appearance-only fit, since for fixed sin²2θ_µe we allow |U_µ4| and |U_e4| to vary freely, subject to the constraint eq. (5.2). We have checked that when we adopt the same assumptions as the MiniBooNE collaboration we recover their regions/bounds with good accuracy.
The recent constraint on ν µ → ν e appearance from ICARUS [41] at long-baseline leads to a bound on sin 2 2θ µe essentially independent of ∆m 2 41 in the range shown here. It excludes in particular the region of large mixing and low ∆m 2 41 that is otherwise unconstrained by appearance experiments. 9 An important background for the ∆m 2 41 driven ν µ → ν e search in ICARUS are ν e appearance events due to ∆m 2 31 and θ 13 . Furthermore, as discussed in section 2 and appendix A the long-baseline appearance probability in the 3+1 scheme depends on one complex phase. In deriving the ICARUS bound shown in figure 7 we fix the parameters sin 2 2θ 13 = 0.092 and ∆m 2 31 = 2.4 × 10 −3 eV 2 but marginalize over the relevant complex phase.
As visible in figure 7 there is a consistent overlap region for all ν e experiments and we can perform a combined analysis. The resulting region is shown in red in figure 7. The best fit point is at sin 2 2θ µe = 0.013, ∆m 2 41 = 0.42 eV 2 with χ 2 min /dof = 87.9/(68 − 2) dof (GOF = 3.7%). The no-oscillation hypothesis is excluded with respect to the best fit point with ∆χ 2 = 47.7. This large value is mostly driven by LSND. The relatively low GOF comes mainly from MiniBooNE neutrino data, as can be seen from table 6, where we list the individual contribution of the experiments to the total appearance χ 2 . This is also obvious from figure 6, showing that at the 3+1 appearance best fit point (black dotted histogram) the fit to the neutrino spectrum is not very good, predicting too much excess in the region 0.6 − 1 GeV and only partially explaining the excess in the data below 0.4 GeV.
6 Combined analysis of global data
We now address the question whether the hints for sterile neutrino oscillations discussed above can be reconciled with each other as well as with all existing bounds within a common sterile oscillation framework. In section 6.1 we discuss the 3+1 scenario, whereas in section 6.2 we investigate the 3+2 and 1+3+1 schemes.
6.1 3+1 global analysis
In the 3+1 scheme, SBL oscillations are described by effective two-flavor oscillation probabilities, involving effective mixing angles for each oscillation channel. The expressions for the effective angles θ_ee, θ_µµ, θ_µe governing the (−)ν_e disappearance, (−)ν_µ disappearance, and (−)ν_µ → (−)ν_e appearance probabilities are given in eqs. (3.2), (4.2), (5.2), respectively. From those definitions it is obvious that the three relevant oscillation amplitudes are not independent, since they depend only on two independent fundamental parameters, namely |U_e4| and |U_µ4|. Neglecting terms of order |U_α4|⁴ (α = e, µ) one finds

    sin²2θ_µe ≈ (1/4) sin²2θ_ee sin²2θ_µµ.   (6.1)

Hence, the appearance amplitude relevant for the LSND/MiniBooNE signals is quadratically suppressed by the disappearance amplitudes, which both are constrained to be small. This leads to the well-known tension between appearance signals and disappearance data in the 3+1 scheme, see e.g. [29,30] for early references. This tension is illustrated for the latest global data in the left panel of figure 8, where we show the allowed region for all appearance experiments (the same as the combined region from figure 7), compared to the limit from disappearance experiments in the plane of sin²2θ_µe and ∆m²_41. The preferred values of ∆m²_41 for disappearance data come from the reactor and Gallium anomalies. The regions for disappearance data, however, are not closed in this projection of the parameter space and include sin²2θ_µe = 4|U_e4 U_µ4|² = 0, which can always be achieved by letting U_µ4 → 0 because of the non-observation of any positive signal in SBL (−)ν_µ disappearance. The upper bound on sin²2θ_µe from disappearance emerges essentially as the product of the upper bounds on |U_e4| and |U_µ4| from (−)ν_e and (−)ν_µ disappearance according to eq. (6.1). We observe from the plot the clear tension between those data sets, with only marginal overlap regions above 99% CL around ∆m²_41 ≈ 0.9 eV² and at 3σ around ∆m²_41 ≈ 6 eV². The tension between disappearance and appearance experiments can be quantified by using the so-called parameter goodness of fit (PG) test [35,125]. It is based on the χ² definition

    χ²_PG ≡ χ²_min,glob − χ²_min,app − χ²_min,dis = ∆χ²_app + ∆χ²_dis,   with ∆χ²_x = χ²_x,glob − χ²_min,x for x = app, dis,

and χ²_x,glob evaluated at the best fit point of the global data. χ²_PG should be evaluated for the number of dof corresponding to the number of parameters in common between appearance and disappearance data (2 in the case of 3+1). From the numbers given in table 7 we observe that the global 3+1 fit leads to χ²_min/dof = 712/680 with a p-value of 19%, whereas the PG test indicates that appearance and disappearance data are consistent with each other only with a p-value of about 10⁻⁴. The strong tension in the fit is not reflected in the global χ² minimum, since there is a large number of data points not sensitive to the tension, which leads to the "dilution" of the GOF value in the global fit, see [125] for a discussion. In contrast, the PG test is designed to test the consistency of different parts of the global data.
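The PG test itself is a short computation once the three χ² minima are known. The sketch below, with placeholder values for the appearance-only and disappearance-only minima (only the global χ²_min = 712 is taken from the text), shows how χ²_PG is converted into a p-value using the χ² survival function; for χ²_PG = 18.0 and 2 common parameters this indeed gives a p-value of order 10⁻⁴.

    from scipy.stats import chi2

    def parameter_goodness_of_fit(chi2_glob, chi2_app, chi2_dis, n_common):
        """Parameter goodness of fit (PG) test of refs. [35,125]."""
        chi2_pg = chi2_glob - chi2_app - chi2_dis
        return chi2_pg, chi2.sf(chi2_pg, df=n_common)

    # Illustrative values only: global, appearance-only and disappearance-only minima
    chi2_pg, p_value = parameter_goodness_of_fit(712.0, 86.0, 608.0, n_common=2)
    print(f"chi2_PG = {chi2_pg:.1f}, p-value = {p_value:.1e}")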
The conflict between the hints for eV 2 -scale oscillations and null-result data is also illustrated in the right panel of figure 8. In red we show the parameter regions indicated by the combined hints for oscillations including SBL reactor, Gallium, LSND, and MiniBooNE appearance data. Those regions are compared to the constraint emerging from all other data. We find no overlap region at 99% CL. Hence, an explanation of all anomalies within the 3+1 scheme is in strong tension with constraints from various null-result experiments.
6.2 3+2 and 1+3+1 global analyses
Now we move to the global analysis within a two-sterile-neutrino scenario in order to investigate whether the additional freedom allows one to mitigate the tension in the fit. We give χ² and PG values for the 3+2 and 1+3+1 schemes in table 7 and the corresponding values of the parameters in table 8. We observe from the PG values that the tension between appearance and disappearance data remains severe, especially for the 3+2 case, with a PG value below 10⁻⁴, even less than for 3+1. For 1+3+1, consistency at the 2 per mille level can be achieved.

Figure 9. Allowed regions in the plane of |∆m²_41| and |∆m²_51| in 3+2 (upper-left part) and 1+3+1 (lower-right part) mass schemes. We minimize over all mixing angles and phases. We show the regions for appearance data (light blue) and disappearance data (light green) at 95% CL (2 dof), and global data (dark and light red) at 95% and 99% CL (2 dof).
Let us first discuss the 3+2 fit. We find a modest improvement of the total χ² in the global fit compared to 3+1; evaluated for the 4 additional parameters relevant for SBL data in 3+2 compared to 3+1, this corresponds to 96.9% CL. The origin of the very low parameter goodness of fit can be understood by looking at the contributions of appearance and disappearance data to χ²_PG. Table 7 shows that the χ² of appearance data at the global best fit point, χ²_app,glob, changes only by about 3 units between 3+1 and 3+2. However, if appearance data are fitted alone, an improvement of 15.2 units in χ² is obtained when going from 3+1 to 3+2, see eq. (5.3). The fact that appearance data by themselves are fitted much better in 3+2 than in 3+1 leads to the large value of χ²_PG = 25.8, with a contribution of 19.7 from appearance data. In other words: the fit to appearance data at the global 3+2 best fit point (χ²_app,glob = 92.4/68, p-value 2.6%) is much worse than at the appearance-only 3+2 best fit point (χ²_min,app/dof = 72.7/63, p-value 19%). This interpretation is also supported by figure 6, showing an equally bad fit to MiniBooNE neutrino data at the 3+1 and 3+2 global best fit points (black solid and red solid histograms, respectively).
We further investigate the origin of the tension in the 3+2 fit in figures 9 and 10. In figure 9 we show the allowed regions in the multi-dimensional parameter space projected onto the plane of the two mass-squared differences, for appearance and disappearance data separately, as well as the combined region. The 3+2 global best fit point lies close to an overlap region of appearance and disappearance data at 95% CL in that plot. However, an overlap in the projection does not imply that the multi-dimensional regions overlap. In the left panel of figure 10 we fix the mass-squared differences to values close to the global 3+2 best fit point and show allowed regions in the plane of |U_e4 U_µ4| and |U_e5 U_µ5|. These are the 5-neutrino analogs of the 4-neutrino SBL amplitude sin 2θ_µe. Similarly to the 3+1 case, we observe a tension between appearance and disappearance data, with no overlap at 99% CL. This explains the small PG probability at the 3+2 best fit point. The right panel of figure 10 corresponds to the local minimum of the combined fit visible in figure 9 around ∆m²_41 = 0.9 eV², ∆m²_51 = 6 eV². In this case no tension is visible in the mixing parameters shown in figure 10; however, from figure 9 we see that those values of the mass-squared differences are actually not preferred by appearance data, which again leads to a degraded GOF. We conclude that the tension between appearance and disappearance data cannot be resolved in the 3+2 scheme.

Figure 10. Allowed regions in the plane of |U_e4 U_µ4| and |U_e5 U_µ5| for fixed values of ∆m²_41 and ∆m²_51 at 90% and 99% CL (2 dof). We minimize over all undisplayed mixing parameters. We show the regions for appearance data (blue), disappearance data (green), and the global data (red).
For the 1+3+1 ordering of the 5-neutrino mass states a somewhat better fit can be obtained. We find

    χ²_{3+1,glob} − χ²_{1+3+1,glob} = 17.8,   (6.4)

corresponding to disfavoring 3+1 at the 99.9% CL (4 dof) compared to 1+3+1. We observe from table 7 that at the 1+3+1 global best fit point a much better fit to appearance data is obtained than at the 3+2 best fit point (χ²_app,glob = 82.4 compared to 92.4). As visible from the blue solid histogram in figure 6, the lack of an event excess in the MiniBooNE neutrino spectrum around 0.6 GeV is reasonably well reproduced at the 1+3+1 global best fit point, although the low-energy excess is still under-predicted. The χ²_PG for appearance versus disappearance for 1+3+1 is even slightly less than for 3+1 (16.8 versus 18.0). Because of the additional parameters relevant for the evaluation of χ²_PG, a p-value of 0.2% is obtained for 1+3+1, about one order of magnitude better than in 3+1. The projection of the allowed regions onto the plane of the mass-squared differences is shown in the lower-right part of figure 9. Note that the disappearance regions are to good accuracy symmetric for 3+2 and 1+3+1. This can be understood from eq. (2.3), where the difference between 3+2 and 1+3+1 appears only in the last term, which is suppressed by the 4th power of small matrix elements, compared to the leading terms at 2nd order. We observe in figure 9 that the appearance and disappearance regions for 1+3+1 both overlap with the combined best fit point. In figure 11 we show again a section through the parameter space at fixed values of the mass-squared differences close to the global best fit point. Although the tension between appearance and disappearance is still visible (no overlap of the 90% CL regions), the disagreement is clearly less severe than in the 3+2 situation shown in the left panel of figure 10, and in figure 11 we find significant overlap at 99% CL, in agreement with the somewhat improved PG p-value.
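As a quick arithmetic cross-check of the quoted confidence level (standard χ² statistics, our calculation): for 4 degrees of freedom the survival probability of a χ² difference x is exp(−x/2)(1 + x/2), which for x = 17.8 gives about 1.3 × 10⁻³, i.e. 3+1 is indeed disfavored at roughly the 99.9% CL relative to 1+3+1, as stated above.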
7 Summary and discussion
We have investigated in detail the status of hints for eV 2 -scale neutrino oscillations, namely the indications for ( -) ν e disappearance due to the reactor and Gallium anomalies, and the indications for ( -) ν µ → ( -) ν e appearance from LSND and MiniBooNE. Those hints have been analysed in the context of the global data on neutrino oscillations, including short and longbaseline accelerator and reactor experiments, as well as atmospheric and solar neutrinos. Our main findings can be summarized as follows.
1. For all fits a global χ 2 min /dof ≈ 1 is obtained in our analysis, involving 689 data points in total, see table 7.
2. However, a joint fit of all anomalies suffers from tension between appearance and disappearance data, mainly due to the strong constraints from ( -) ν µ disappearance data.
3. The tension in the fit is driven by the LSND and MiniBooNE appearance hints, since oscillations in the (−)ν_µ → (−)ν_e channel inevitably predict also a signal in (−)ν_µ disappearance, which is not observed at the relevant L/E scale.

4. In contrast, the reactor and Gallium anomalies are not in direct conflict with other data, since (−)ν_e and (−)ν_µ disappearance at the eV² scale are controlled by independent parameters.

5. In a 3+1 scheme the compatibility of appearance and disappearance data is at the level of 10⁻⁴. The individual allowed regions have marginal overlap at about 99% CL.
6. We do not find a very significant improvement of the fit in a 3+2 scheme compared to 3+1. Based on the relative χ 2 minima, 3+1 is disfavored with respect to 3+2 at 96.9% CL. The compatibility of appearance and disappearance data in 3+2 is even worse than in 3+1, because the fit of appearance data-only is significantly better in 3+2 than in 3+1, however, the appearance fit at the global best fit point is only marginally improved.
7. We find an improvement of the global fit in the 1+3+1 spectrum compared to 3+1, at the 99.9% CL. The compatibility of appearance and disappearance data is still low in 1+3+1, at the level of 0.2%.
Hence, in all cases we find significant tension in the fit, with the marginal exception of the 1+3+1 scheme. At our 1+3+1 best fit point the minimal value for the sum of all neutrino masses would be Σ ≈ 3√|∆m²₅₁| + √(|∆m²₄₁| + |∆m²₅₁|) ≈ 3.2 eV, where we took the values given in table 8 and assumed that the mass-squared difference with the smaller absolute value is negative, using the symmetry 4 ↔ 5 and γ_αβ → −γ_αβ of SBL data, see eqs. (2.1) and (2.3). It remains an interesting question whether such a large value of Σ is consistent with cosmology, see e.g. [47, 52, 53, 58, 59].
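As a rough cross-check of the quoted Σ ≈ 3.2 eV, the sketch below evaluates the expression above for illustrative mass-squared differences; the input values are assumptions of approximately the right size, not the actual table 8 best-fit numbers.

```python
import numpy as np

# Illustrative 1+3+1 splittings in eV^2 (assumed, NOT the table 8 values):
dm2_41 = 0.87   # larger splitting, taken positive
dm2_51 = 0.47   # smaller splitting, taken negative in the 1+3+1 ordering

# With the lightest (sterile) state taken massless, the three quasi-degenerate
# active states each weigh sqrt(|dm2_51|) and the heavier sterile state weighs
# sqrt(|dm2_41| + |dm2_51|).
sigma = 3 * np.sqrt(abs(dm2_51)) + np.sqrt(abs(dm2_41) + abs(dm2_51))
print(f"Sigma_min ~ {sigma:.1f} eV")   # ~ 3.2 eV for these inputs
```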
Let us briefly compare our results to two other recent global sterile neutrino fits, from refs. [34] and [47]. We are in good agreement with the results of [34]. For instance, in table 2 of [34] χ²_PG values for the consistency of appearance and disappearance data are given, 17.8 for 3+1 and 23.9 for 3+2, which compare well with our numbers from table 7, 18.0 and 25.8, respectively. There is some disagreement with the results of [47]. The corresponding χ²_PG values reported in their table I are 6.6 and 11.12, which lead to significantly better compatibility of appearance and disappearance data. Comparing figure 1 of [47] with our figure 8 (left) we observe that our disappearance limits are somewhat stronger and our appearance region is at somewhat larger mixing angles, both effects increasing the tension. Our appearance region is in good agreement with figure 6 (left) of [34]. There are some differences between our disappearance region and figure 6 (right) of [34], mainly at high ∆m²₄₁. Irrespective of the hints for ( -) ν e disappearance and ( -) ν µ → ( -) ν e appearance, we have derived constraints on the mixing of eV-scale states with the τ-neutrino flavor. Those are dominated by data involving information from neutral-current interactions, which are solar neutrino data (NC matter effect and SNO NC data), MINOS long-baseline NC data, and atmospheric neutrino data (NC matter effect). The global limit is dominated by the latter.
In conclusion, establishing sterile neutrinos at the eV-scale would be a major discovery of physics beyond the Standard Model. At present a consistent interpretation of all data indicating the possible presence of eV-scale neutrino mass states remains difficult. The global fit suffers from tension between different data sets. An unambiguous solution to this problem is urgently needed. We are looking forward to future data on oscillations at the eV 2 scale [38], as well as new input from cosmology.
Acknowledgments
Numerical results presented in this paper have been obtained on computing infrastructure provided by Fermi National Accelerator Laboratory and by Max Planck Institut für Kernphysik. The authors would like to thank the MINOS collaboration for their invaluable help in including their sterile neutrino search in this work. We are especially grateful to Alexan-
A Complex phases in sterile neutrino oscillations
In this appendix we discuss in some detail the phases for neutrino oscillations involving s extra sterile neutrino states. For definiteness, we will focus on s = 2; the special case of s = 1 can be easily obtained by dropping all terms containing a redundant "5" index. Let us order the flavor eigenstates as (ν_e, ν_µ, ν_τ, ν_{s1}, ν_{s2}) and write the n × n mixing matrix, with n = 3 + s, as a product of complex rotations, eq. (A.1), where V_ij represents a complex rotation by an angle θ_ij and a phase ϕ_ij in the ij plane. Note that rotations involving only sterile states (i.e., V_ij with both i, j ≥ 4) are unphysical, and therefore we have omitted them from eq. (A.1). Removing those unphysical angles, U contains n(n − 1)/2 − s(s − 1)/2 = 3(s + 1) physical angles. In eq. (A.1) we have chosen a priori all rotations to be complex. We present now a method which allows to remove unphysical phases from the mixing matrix in a consistent
way. First, we note that a complex rotation can be written as V_ij = D_k O_ij D_k^†, eq. (A.2), where O_ij is a real rotation matrix and D_k is a diagonal matrix with (D_k)_jj = e^{iϕ} for j = k and (D_k)_jj = 1 for j ≠ k. Depending on whether k = i or k = j, the phase in D_k is either ±ϕ_ij. Second, we note that phase matrices D_k at the very left or right of the matrix U drop out of oscillation probabilities and are therefore unphysical.^10 Hence, we have to represent all matrices V_ij in eq. (A.1) using eq. (A.2), and then try to commute as many phase matrices as possible to the left and the right. The matrix D_k commutes with a matrix O_ij if k ≠ i and k ≠ j. Furthermore, if k = i or k = j we can commute D_k with a complex matrix V_ij by re-defining the phase ϕ_ij. This leads to the following rule for removing phases. Let us start by removing one phase, for instance ϕ_12, obtaining a real V_12 → O_12. Then we can no longer use the matrices D_1 and D_2 to remove phases, since we cannot commute them with O_12 to the very left or right of U. But we can use for instance D_3 to remove one of the remaining phases ϕ_i3, and so forth. Hence, we can remove in total n − 1 phases. Starting with all 3(s + 1) physical angles complex, we obtain that there are 3(s + 1) − (n − 1) = 2s + 1 physical phases, i.e., 1 phase for no sterile neutrinos, 3 phases for the 3+1 spectrum, and 5 phases for the 3+2 spectrum. Those remaining phases cannot be associated arbitrarily to the V_ij but only in a way which is consistent with the above prescription to remove phases. In particular, it is not possible to make simultaneously three rotation matrices ij, ik, kj real. One possible choice is the one given in eq. (2.6). Using this recipe to remove phases it is also straightforward to obtain the physical phases in case of the SBL or LBL approximations according to table 2.
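The bookkeeping described above can be reproduced in a few lines; the sketch below simply counts the 3(s+1) physical angles, the n−1 removable phases, and the resulting 2s+1 physical phases for the cases discussed in this appendix.

```python
# Sketch of the angle/phase bookkeeping for n = 3 + s neutrino flavors.
def mixing_parameters(s):
    n = 3 + s
    angles = n * (n - 1) // 2 - s * (s - 1) // 2   # = 3(s + 1) physical angles
    removable = n - 1                              # phases pushed to the edges of U
    phases = angles - removable                    # = 2s + 1 physical phases
    return angles, phases

for s in (0, 1, 2):
    a, p = mixing_parameters(s)
    print(f"s = {s}: {a} angles, {p} phases")
# s = 0: 3 angles, 1 phase  (three-flavor case)
# s = 1: 6 angles, 3 phases (3+1)
# s = 2: 9 angles, 5 phases (3+2)
```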
In the SBL approximation for a 3+2 scheme, only two physical phases remain. In the parametrization invariant notation from eqs. (2.1) and (2.2), they are given by γ µe and γ µτ . Since the only SBL appearance experiments we consider are studying the ( -) ν µ → ( -) ν e oscillation channels only the phase γ µe is relevant for our analysis. In the specific parametrization from table 2, the physical phases have been chosen as ϕ 25 and ϕ 35 . Since ϕ 35 does not appear in the parametrization independent representation of γ µe according to eq. (2.2) we can remove it from our SBL analysis without loss of generality.
In the LBL limit, more phases are phenomenologically relevant. In particular, eq. (2.4) shows that the oscillation probabilities in the 3+2 case are sensitive to parametrization-independent phases involving the quantities I_αβij defined in eq. (2.2). The experiments for which the LBL approximation is relevant are ICARUS and MINOS. ICARUS searches for ν_µ → ν_e transitions, whereas the NC data in MINOS are sensitive to the combination Σ_{α=e,µ,τ} P_{ν_µ→ν_α}. Therefore, for our analyses the two appearance channels (αβ) = (µe) and (µτ) are relevant, leading, according to eq. (A.3), to four independent phases, in agreement with table 2.^11 The particular parametrization from the table implies that for the ν_µ → ν_e channel only the phases ϕ_13 and ϕ_25 are relevant, whereas the ν_µ → ν_τ channel is also sensitive to ϕ_35 and ϕ_34.

^10 In this work we focus on neutrino oscillations. In cases where lepton-number violating processes are relevant, such as neutrino-less double beta-decay, more phases will lead to physical consequences and our phase counting does not apply. In particular, in such a case the phases on the right of the mixing matrix U (these are the so-called Majorana phases) cannot be absorbed.

^11 In deriving eq. (2.4) we have assumed that ∆m²₄₁, ∆m²₅₁, ∆m²₅₄ are infinite. Note that this assumption does not reduce the number of physical phases further, since also the general procedure used in table 2 (assuming only ∆m²₂₁ = 0) leads to the same number of physical phases as eq. (2.4).
From the way we have chosen the complex rotations in table 2 the correct phases in the 3+1 case are automatically obtained by dropping all rotations including the index "5" in the 3+2 mixing matrix. We recover the well-known result that in the SBL approximation in a 3+1 scenario no complex phase appears. In the LBL approximation two phases remain, corresponding to the combinations arg(U * µ4 U e4 U µ3 U * e3 ) and arg(U * µ4 U τ 4 U µ3 U * τ 3 ), which can be parametrized by using the phases ϕ 34 and ϕ 13 , where for the ν µ → ν e channel only ϕ 13 is relevant.
Let us comment also on the role of phases in solar and atmospheric neutrinos. As shown in appendix C, solar neutrinos do depend on one effective complex phase. This is included in our analysis in full generality; however, the numerical impact of this phase dependence is small. It has been shown in [33] (appendix C) that the impact of complex phases on atmospheric neutrinos is very small, and we neglect their effect in the current analysis.
B Systematic uncertainties in the reactor analysis
The correlation of errors between SBL reactors is quite important in order to obtain the significance of the reactor anomaly. Here we describe our error prescription for the SBLR analysis. From the errors quoted in the original publications we extracted the following components. First we removed the uncertainty on the neutrino flux prediction, since we include this uncertainty in a correlated way for all reactor experiments based on the prescription given in [21] (see below). The remaining error is divided into uncorrelated errors (including statistical as well as experimental contributions) as well as errors correlated between some SBLR measurements. The total uncorrelated error is shown in the last column of table 3. Below we give details on our assumptions on correlations.
The total error on the measured cross section per fission in Bugey4 is 1.38% [66]. It receives contributions which are reactor/site specific (1.09%) as well as detector specific (0.84%). Rovno91 [67] used the same detector as Bugey4. The errors on its experimental cross section come from the reactor and geometry (2.1%) and from the detector (1.8%). The first one should therefore be uncorrelated whereas the second one should be correlated with the corresponding one from Bugey4. Hence we have σ_uncor(Bugey4) = 1.09%, σ_cor(Bugey4/Rovno91) = 0.84%, σ_uncor(Rovno91) = 2.1%, σ_cor(Rovno91/Bugey4) = 1.8%.

The Bugey3 measurement consists of 3 detectors at the distances 15 m, 40 m, 95 m. In table 9 of [68] systematic errors of 5% (absolute) and 2% (relative) are quoted. The uncorrelated errors given in our table 3 are obtained by adding the statistical error (table 10 of [68]) to the 2% relative systematic error. For the correlated error we remove the relative systematic error as well as 2.4% for the flux prediction and obtain σ_cor(Bugey3) = 3.9%, which we take fully correlated between the 3 rate measurements. In cases when we include the spectral data from Bugey3 we use 2% (3.9%) as uncorrelated (correlated) normalization errors for the three spectra. Details of our spectral analysis of Bugey3 can be found in [79].
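A minimal sketch of how such uncorrelated and correlated components can be assembled into a covariance matrix is given below, using the Bugey4/Rovno91 numbers above. Treating the shared-detector error as a single common systematic, so that it enters the off-diagonal element as the product of the two correlated components, is an assumption of the sketch; it does reproduce the quoted 1.38% total error for Bugey4.

```python
import numpy as np

# Bugey4 / Rovno91 error components (percent of the measured cross section):
s_unc = np.array([1.09, 2.1])   # uncorrelated (reactor/site, statistics, ...)
s_cor = np.array([0.84, 1.8])   # correlated through the shared detector

# Assumed construction: uncorrelated parts on the diagonal, shared detector
# systematic as a rank-one correlated block.
cov = np.diag(s_unc**2) + np.outer(s_cor, s_cor)

print(cov)
print(np.sqrt(np.diag(cov)))    # total errors: ~1.38% (Bugey4), ~2.77% (Rovno91)
```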
In Goesgen the same detector was used at three different distances. In table V of [69] the individual and correlated errors are given. The values for the uncorrelated errors used in our analysis (see table 3) are obtained by adding the statistical and uncorrelated systematic errors in quadrature and expressed in percentage of the ratio. Then [69] quotes a correlated error of 6%, which includes 3% from the neutrino spectrum, 2% from the cross section, 3.8% from efficiency, 2% from reactor power, and a few more < 1%. We remove the 3% neutrino spectrum, as well as the 2% from cross section (this seems way too large). This gives σ cor Goesgen = 4.8%. Part of this error is supposed to be correlated with ILL, since they used a "nearly identical" detector. Removing the reactor power of 2% we get σ cor Goesgen/ILL = 4.36%. In the ILL paper [70] errors of 3.66% statistical and 11.5% systematical are quoted. The contributions to the systematic error are given as 6.5% on the "intensity of the anti-neutrino energy spectrum", 8% detection efficiency, 1.2% neutron life time and some other smaller contributions. In the lack of detailed information we proceed as follows. We remove 3% for the flux uncertainties (the same as in Goesgen) and take 8% (the detection efficiency) to be correlated with Goesgen. This gives σ uncor ILL = 8.52% and σ cor ILL/Goesgen = 8%, where the uncorrelated error includes also the statistical one. We have checked that other "reasonable" assumptions on the ILL/Goesgen correlation do not change our results significantly.
From Krasnoyarsk [71,72] there are three data points based on a single detector, which records events from 2 "identical" reactors. In [71] from 1987, results at distances of 32.8 m and 92.3 m are reported. The statistical errors are 3.55% and 19.8%, respectively, and the systematical error are 4.84% and 4.76%, respectively, which include detector effects (∼ 3%), reactor power (∼ 3%) and the effective distance (∼ 1%). We take systematical errors fully correlated between those two data points. Then there is a measurement from 1994 [72] at 57 m. The errors include detector uncertainty (3.4%), reactor power (2.5%), and statistics (0.95%). We assume the detector error to be correlated with the 1987 data points but include the reactor power in the uncorrelated error.
For SRP [73] measurements at the distances of 18 m and 24 m are reported from the same detector, which has been moved between the two positions. The obtained ratios of data over expectation at the two distances are 98.7% ± 0.6%(stat.) ± 3.7%(syst.) and 105.5% ± 1.0%(stat.) ± 3.7%(syst.). The uncorrelated systematic error is derived from the ratio of the two spectra, 1.61 ± 0.02(stat) ± 0.03(syst), with an expectation of 1.73 [73]. Hence 1.86% = 0.03/1.61 is an uncorrelated systematic error. Then we remove the 2.5% from the neutrino spectrum from the systematical error and obtain σ uncor SRP1 = 1.95%, σ uncor SRP2 = 2.11%, and σ cor SRP = 2.0%. With this assumption on the uncorrelated errors the two data points are consistent at about 2.4σ.
Rovno88 [74] reports 5 measurements with two different detectors: 1I, 2I, 1S, 2S, 3S, where the "I" experiments use an integral neutron detector, whereas the "S" experiments use a scintillation detector measuring the positron spectrum. In table III of [74] for each measurement two systematical errors are given, 2.2% for "the uncertainty in the measured reactor power and the geometric uncertainty", and a second uncertainty due to "errors in the detector characteristics and fluctuations". From table II one finds that statistical errors are negligible. In the absence of detailed information we assume the 2.2% uncertainty fully correlated among all experiments. From the second error we assume that half of it is uncorrelated and the other half is correlated among detectors of the same type. We have checked that our results do not depend significantly on those assumptions.
Finally let us comment on the uncertainty on the neutrino flux predictions. As mentioned above this uncertainty has been removed from the SBLR experimental errors since they are treated in a correlated way for all reactor experiments. For the uncertainties of the fluxes from 235 U, 239 Pu, 241 Pu we use the information from tables provided in [21]. The uncertainty is provided as uncorrelated error in each bin of neutrino energy as well as fully correlated (between energy bins as well as the three isotopes) errors. For the uncorrelated errors we proceed as follows. We perform a fit of a polynomial of 2nd order to the numbers given in [21]. Then those coefficients are used as pulls in the χ 2 analysis constrained by the covariance matrix obtained from the polynomial fits. This allows us to take into account the fact that the bin-to-bin uncorrelated errors of the neutrino spectrum will lead to correlated effects in the observed positron spectra. Since the uncorrelated flux errors are sub-leading compared to the correlated ones the parametrization with a 2nd order polynomial is sufficiently accurate. To include the correlated errors we follow [21]: the various contributions to this error in each neutrino energy bin are symmetrized and added in quadrature. Then we obtain an energy dependent fully correlated error for the spectra from 235 U, 239 Pu, 241 Pu which is included as one common pull parameter in the global reactor χ 2 . For the neutrinos from 238 U we use the flux from [20] and include a global normalization error on the 238 U induced events of 8.15% [26].
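A sketch of the polynomial-pull treatment of the bin-to-bin uncorrelated flux errors is given below. The energies and error values are placeholders rather than the numbers of ref. [21]; the point is only to illustrate how fitted polynomial coefficients and their covariance can enter the χ² as constrained pulls.

```python
import numpy as np

# Placeholder per-bin uncorrelated flux errors (illustrative, NOT from ref. [21]).
E = np.linspace(2.0, 8.0, 13)                        # neutrino energy (MeV)
err = 0.02 + 0.004 * (E - 2.0) + 0.002 * np.sin(E)   # fractional error per bin

# Fit a 2nd-order polynomial; the coefficient covariance constrains the pulls.
coeff, coeff_cov = np.polyfit(E, err, deg=2, cov=True)

def flux_shift(energy, pulls):
    """Fractional shift of the predicted spectrum generated by the pulls."""
    return np.polyval(pulls, energy)

def pull_penalty(pulls):
    """chi^2 penalty for pulling the polynomial coefficients."""
    d = np.asarray(pulls) - coeff
    return float(d @ np.linalg.solve(coeff_cov, d))

print(pull_penalty(coeff))      # vanishes at the central values
```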
C Solar neutrino analysis
In the analysis of solar neutrino experiments we include the total rates from the radio chemical experiments Chlorine [126], GALLEX/GNO [85] and SAGE [127]. Regarding real-time experiments, we include the electron scattering energy-zenith angle spectrum data from all the Super-Kamiokande phases I-IV [128][129][130][131] and the data from the three phases of SNO [132][133][134], including the results on the low energy threshold analysis of the combined SNO phases I-III [135]. We also include the main set of the 740.7 days of Borexino data [136] as well as their high-energy spectrum from 246 live days [137]. In total the solar neutrino data used in our analysis consists of 261 data points.
Let us now focus on the probabilities relevant for the analysis of solar neutrino experiments. We will assume that only the first two mass eigenstates are dynamical, while the others are taken to be infinite. Since physical quantities have to be independent of the parameterization of the mixing matrix, we will use the freedom in choosing a parameteriza-
tion that makes analytical expressions particularly simple. We start from the Hamiltonian in the flavor basis, H = U ∆ U† + V, where ∆ = diag(0, ∆m²₂₁, ∆m²₃₁, . . .)/2E and V = √2 G_F diag(2N_e, 0, 0, N_n, . . .)/2. It is convenient to write U = Ũ U₁₂, where U₁₂ is a complex rotation by an angle θ₁₂ and a phase δ₁₂ which we will define later.^12 In the basis rotated by Ũ the Hamiltonian becomes H̃ = Ũ† H Ũ. In order to further simplify the analysis, let us now assume that all the mass-squared differences involving the "heavy" states ν_h with h ≥ 3 can be considered as infinite: ∆m²_hl → ∞ and ∆m²_hh′ → ∞ for any l = 1, 2 and h, h′ ≥ 3. In leading order, the matrix H̃ takes the effective block-diagonal form H̃ ≈ diag(H^(2), ∆^(s)), where H^(2) is the 2 × 2 sub-matrix of H̃ corresponding to the first and second neutrino states, and ∆^(s) = diag(∆m²₃₁, ∆m²₄₁, . . .)/2E is a diagonal (s + 1) × (s + 1) matrix (the matter terms in this block are negligible in the limit of very large ∆m²_hh). Consequently, the evolution matrix is S = Ũ diag(S^(2), e^{−i∆^(s)L}) Ũ†, with S^(2) = Evol(H^(2)). We are interested only in the elements S_αe. It is convenient to define θ₁₂ in such a way that Ũ_e2 = 0. Taking into account this block-diagonal structure, we obtain

S_αe = (Ũ_α1 S^(2)_11 + Ũ_α2 S^(2)_21) Ũ*_e1 + Σ_{h≥3} Ũ_αh Ũ*_eh e^{−i∆_hh L} ,

so that the probabilities, P_αe = |S_αe|², read

P_αe = |Ũ_α1 S^(2)_11 + Ũ_α2 S^(2)_21|² |Ũ_e1|² + Σ_{h≥3} |Ũ_αh|² |Ũ_eh|² . (C.6)

Here we have used the fact that the terms containing a factor e^{−i∆_hh L} oscillate very fast, and therefore vanish once the finite energy resolution of the detector is taken into account. For solar neutrino experiments we only need P_ee and P_ae ≡ P_ee + P_µe + P_τe. It is therefore convenient to define δ₁₂ in such a way that Σ_{σ=s₁,s₂} Ũ_σ1 Ũ_σ2 is a real number. Using the unitarity relations Σ_α |Ũ_αi|² = 1 and Σ_α Ũ*_α1 Ũ_α2 = 0, P_ee and P_ae can then be expressed in terms of two effective two-flavor quantities, P^(2)_osc and P^(2)_int, derived from the Hamiltonian H^(2), whose vacuum term includes the phase δ₁₂.
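The washing-out of the fast terms can be checked numerically in a few lines: averaging e^{−i∆m²L/2E} over a finite energy resolution drives these contributions to zero, leaving only the incoherent |Ũ|² terms in eq. (C.6). The numbers below (a 10 MeV neutrino with 1 MeV resolution over the Sun-Earth distance) are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

dm2_eV2 = 1.0                                   # eV^2-scale splitting
L_km = 1.5e8                                    # Sun-Earth distance
E_GeV = rng.normal(0.010, 0.001, 200_000)       # 10 MeV energy, 1 MeV resolution

# dm^2 L / 2E in radians, using the usual 1.267 * dm2[eV^2] * L[km] / E[GeV] for dm^2 L / 4E.
phase = 2.0 * 1.267 * dm2_eV2 * L_km / E_GeV

print(abs(np.mean(np.exp(-1j * phase))))        # ~ 0: the fast terms average away
```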
To perform the analysis we map the parameters which are used for the analysis into the effective parameters for solar neutrinos. In the case of 3+1 the number of real mixing parameters is actually the same as the number of mixing angles in the most general parametrization of the 4 × 4 mixing matrix: the six parameters in eq. (C.12) are a function of the six angles θ 12 , θ 13 , θ 14 , θ 23 , θ 24 , θ 34 . The dependence of solar neutrinos on θ 13 , θ 14 is shown in figure 3 and the one on θ 24 , θ 34 follows from figure 5. The dependence on θ 23 is important for the NC matter effect and SNO NC data. For s > 1 the number of effective mixing parameters of solar data is less than the number of angles in the general mixing matrix. The phase δ 12 is a complicated function of complex phases and angles. We have verified numerically for the 3+1 case that if all angles are non-zero (and θ 24 , θ 34 relatively large) the χ 2 from solar data varies by about 1 to 2 units as a function of the phase. Once all relevant constraints on the mixing angles are imposed the effect of the phase on solar data is negligible. See also [138] for a discussion of complex phases in solar neutrinos in the context of sterile neutrinos.
D Atmospheric neutrino analysis
The analysis of atmospheric data follows closely the one presented in refs. [139,140] and includes the Super-Kamiokande results from phases I, II and III [141] (80 data points in total). Technical details on our χ 2 fit can be found in the appendix of [142].
In order to derive suitable expressions for the relevant probabilities, we can follow the approach presented in appendix C for solar data. The Hamiltonian in the flavor basis is given by eq. (C.1), which in the mass basis becomes Ĥ = ∆ + U† V U. As before, we can simplify the analysis by assuming that all the mass-squared differences involving the "heavy" states ν_h with h ≥ 4 can be considered as infinite: ∆m²_hi → ∞ and ∆m²_hh′ → ∞ for any i = 1, 2, 3 and h, h′ ≥ 4. In leading order, the matrix Ĥ takes the effective block-diagonal form Ĥ ≈ diag(Ĥ^(3), ∆^(s)), where Ĥ^(3) is the 3×3 sub-matrix of Ĥ corresponding to the first, second and third neutrino states, and ∆^(s) = diag(∆m²₄₁, ∆m²₅₁, . . .)/2E_ν is a diagonal s×s matrix (the matter terms in this block are negligible in the limit of very large ∆m²_hh). We are interested only in the probabilities P_αβ with α, β ∈ {e, µ, τ}. Taking into account the block-diagonal form of Ĥ, we obtain them in terms of Ŝ^(3) = Evol(Ĥ^(3)) and U^(3), where U^(3) is the 3 × 3 sub-matrix corresponding to the first three lines and columns of U, and as such it is not a unitary matrix.
Let us now focus on the three-neutrino system described by Ĥ^(3) = ∆^(3) + V^(3). The matter term V^(3) contains both the "standard" contribution from ν_e charged-current interactions, and a "non-standard" part induced by the absence of neutral-current interactions for sterile neutrinos. In practical terms, this is just the same as the problem of neutrino propagation in the presence of non-standard neutrino-matter interactions (NSI) described in ref. [140]. Following the approach discussed there, we make a number of simplifying assumptions:

• we assume that the neutron-to-electron density ratio, R_ne, is constant all over the Earth. We set R_ne = 1.051 as inferred from the PREM model [143];

• we set ∆m²₂₁ = 0, thus forcing the vacuum term ∆^(3) to have two degenerate eigenvalues;

• we impose that the matter term V^(3) also has two degenerate eigenvalues.
The first two assumptions are very well known and have been discussed in detail in the literature. For example, in section 5.2 of ref. [142] it was noted that the different chemical composition of the Earth mantle and core has very little impact on NSI results. The last approximation is adopted here for purely practical reasons, since (together with the other two) it allows to greatly simplify the calculation of the neutrino evolution [140,144]. Unfortunately, in the present context the matter term V^(3) cannot be fixed a priori, but arises as an effective quantity determined by the angles and phases of the mixing matrix U. Finding the points in the general parameter space for which V^(3) has two degenerate eigenvalues (the only points for which our numerical analysis is technically feasible) is not an easy task. To solve this problem, we have considered here two alternative cases, both based on physically motivated scenarios: (a) decouple the electron flavor from the evolution and include the NC matter effect; (b) allow the electron flavor to participate in oscillations but neglect the NC matter effect.
Option (a) requires to set U_ei = 0 for i ≥ 3 (in addition to ∆m²₂₁ = 0), whereas option (b) is equivalent to setting R_ne = 0 ("hydrogen-Earth" model). Both approximations have been previously discussed in appendix C of ref. [33]. Although we know that none of these options corresponds to Nature, we can make a sensible choice of when it is safe to use each of them. It turns out that for constraining the mixing of the ν_µ with eV-scale neutrinos the NC matter effect plays no role at all (see discussion in appendix C2 of [33]), whereas the participation of the electron flavor may have some impact.^13 Therefore, whenever we are mainly interested in constraining |U_µh| (h ≥ 4), as in the case of the global analysis combined with SBL data, we adopt assumption (b). This is important since non-zero |U_ei| leads to slightly relaxed constraints on |U_µ4|, although the effect is small once external constraints on |U_ei| are taken into account. With respect to our former analysis presented in ref. [33], the explicit NSI formalism adopted here is more general since it allows to fully include |U_ei|-related effects without further approximations.
On the other hand, when exploring the sensitivity to the fraction of sterile neutrinos participating in atmospheric and long-baseline neutrino oscillations, the contribution of neutral-current neutrino-matter interactions is essential. Indeed, under approximation (b) no limit on |U τ i | (i ≥ 4) would be obtained. Therefore, when exploring constraints on |U τ i | we adopt assumption (a) above. This is relevant for figures 4 (right) and 5.
E Technical details on the simulation of SBL and LBL experiments
Here we provide technical details on the simulation of some of the experiments included in our fit. All the simulations described in this appendix make use of the GLoBES software package [145,146].
Both LSND and KARMEN have measured the reaction ν_e + ¹²C → e⁻ + ¹²N, where the ¹²N decays back to ¹²C + e⁺ + ν_e with a lifetime of 15.9 ms. By detecting the electron from the first reaction, which has a Q-value of 17.33 MeV, one can infer the neutrino energy.
Here we describe how we use these data to constrain ν e disappearance [94,95].
For KARMEN, we use the information from the thesis [94], which uses more exposure than the original publication [92]. The number of expected events can be calculated by multiplying the ¹²C cross section (Fukugita et al. [93]: (9.2 ± 1.1) × 10⁻⁴² cm²), the number of target nuclei (2.54 × 10³⁰), the absolute neutrino flux (5.23 × 10²¹), the efficiency (27.2%, flat in energy), and the inverse effective scaled area (1/[4π(17.72 m)²]). 846 neutrino candidates are observed, with an expected background of 13.9 ± 0.7 events, mainly accidentals and cosmic induced. The systematic errors are dominated by a 6.7% uncertainty in the absolute neutrino flux and 3% in the Monte Carlo efficiency. The total systematic error is 7.5% plus a 12% cross section error. We take the latter to be correlated between the KARMEN and LSND ν_e-carbon analyses. In figure 3.2 (upper panel) of [94], the data are shown as the energy distribution of the prompt e⁻ spectrum in 26 bins where the visible prompt electron energy is within the range 10 MeV < E_e < 36 MeV. Modulo energy reconstruction, the neutrino and electron energies would be related by the Q-value as E_ν = E_e + Q. For the simulation we assume 30 MeV < E_ν < 56 MeV. To properly fit the data, we assume σ_e = 25%/√(E/MeV) for the energy resolution. With the 26 data points we obtain a two-neutrino best fit with χ²_min/dof = 30/24.

For LSND [91] we compute the expected number of events by multiplying the ¹²C cross section (9.2 × 10⁻⁴² cm² [93]), the number of target nuclei (3.34 × 10³⁰), the neutrino flux at the detector (10.58 × 10¹³ cm⁻²), and the efficiency (23.2%, flat in energy). 733 neutrino candidates are observed, with a negligible expected background. The systematic error is dominated by a 7% uncertainty in the neutrino flux and a 6% uncertainty in the effective fiducial volume. The total systematic error, not including the theoretical cross section error, is 9.9%. The 12% cross section error is correlated between the LSND and KARMEN ν_e-¹²C analyses. In figure 6 of ref. [91], the data are shown as the energy distribution of the prompt e⁻ spectrum, where the visible prompt electron energy is within the range 18 MeV < E_e < 42 MeV and divided into 12 bins of width 2 MeV. In terms of the neutrino energy E_ν, this energy range corresponds to 35.3 MeV < E_ν < 59.3 MeV. In our analysis, we combine the 12 energy bins into only 6 bins. To properly fit the data, we assume σ_e = 2.7 MeV for the energy resolution. With the 6 data points we obtain a two-neutrino best fit with χ²_min/dof = 3.81/4. For the combined KARMEN+LSND ν_e-carbon fit, we have 32 bins and we find a two-neutrino best fit point with χ²_min/dof = 34.17/30.
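The no-oscillation event-rate estimate described above is just a product of cross section, target number, fluence and efficiency. The sketch below plugs in the quoted numbers; the only assumption is that the KARMEN flux of 5.23 × 10²¹ is the total number of emitted neutrinos, so that dividing by 4πL² gives the fluence at the detector.

```python
import numpy as np

# KARMEN: sigma * N_targets * (total flux / 4 pi L^2) * efficiency
sigma = 9.2e-42                       # nu_e - 12C cross section [cm^2]
karmen = sigma * 2.54e30 * 5.23e21 / (4 * np.pi * (17.72e2) ** 2) * 0.272

# LSND: the flux is already quoted as a fluence at the detector [cm^-2]
lsnd = sigma * 3.34e30 * 10.58e13 * 0.232

print(f"KARMEN expected ~ {karmen:.0f} events (846 candidates, ~14 background)")
print(f"LSND   expected ~ {lsnd:.0f} events (733 candidates, negligible background)")
```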
E.2 E776
The pion beam experiment E776 at Brookhaven [40] employed a 230 ton calorimeter detector located approximately 1 km from the end of the 50 m long pion decay pipe. The ( -) ν e energy of order GeV was measured with an energy resolution of 20%/√(E/GeV). E776 used ( -) ν µ disappearance data to obtain the overall normalization of the neutrino flux. In our fit, we do not explicitly include ( -) ν µ data, but instead use the normalization as an input. The main backgrounds in E776 came from intrinsic ( -) ν e contamination in the beam and from π⁰'s produced in neutral current interactions and misidentified as electrons. The systematic errors were 11% for the intrinsic background and 27% (39%) for the π⁰ background in neutrino (anti-neutrino) mode. In 1986, E776 collected 1.43 × 10¹⁹ (1.55 × 10¹⁹) protons on target, and a total of 136 (56) ν_e (ν̄_e) candidate events were observed with an expected background of 131 (62) events for neutrino (anti-neutrino) mode. E776 presents the observed and predicted electron energy spectra using 14 equidistant energy bins per polarity, covering the energy range from 0 GeV to 7 GeV. In our fit we omit the first bin and combine the second and third ones because modeling the detection efficiency at these low energies is very difficult. Hence, we have a total of 24 data points. We checked that we are able to reproduce well the exclusion curve shown in figure 4 of ref. [40] (if we also use a two-flavor oscillation model), and we obtain χ²_min/dof = 31.08/22 at the best fit point. In the combined analysis with other experiments we take into account oscillations of the ( -) ν e background.
E.3 ICARUS
The ICARUS experiment [147] is a neutrino beam experiment at Gran Sasso. The CNGS facility at CERN shoots 400 GeV protons at a graphite or beryllium target, producing a hadronic shower which is focused by a magnetic horn system. The resulting neutrino beam is mainly composed of ν_µ, having only a 2% ν̄_µ contamination and a < 1% intrinsic ν_e component. The neutrino spectrum ranges approximately from 0 to 50 GeV, with a wide peak at 10 − 30 GeV. After traveling 732 km, the neutrinos are detected in the ICARUS T600 detector, a 760 ton liquid argon time projection chamber. Between 2010 and 2012, the ICARUS detector observed 839 neutrino events with energy below 30 GeV, to be compared with the expectation of 627 ν_µ and 3 ν_τ charged current events, as well as 204 neutral current events. While a Monte Carlo simulation of the experiment predicts 3.7 ν_e background events, only two were identified [41]. For our ICARUS simulation, we took the ν_µ [...] and in SciBooNE. Since kaon-induced backgrounds make only a small contribution to MiniBooNE's total error budget, the effect of this rescaling on the fit results is, however, very small.
Due to the correlation with the muon-like events it is not straightforward to assign a number of degrees of freedom to the appearance search without double counting the muon data, which are used also in the separate disappearance analysis. We have adopted the following prescription. Eq. (E.1) can be split into two pieces, eq. (E.2): an e-like term involving the shift δ and an effective covariance built from the ee block, and a µ-like term C, where (S_µµ)⁻¹ is the inverse of the µµ sub-block of S.^14 Hence, we have block-diagonalized the covariance matrix. The shift δ corresponds to the impact of the µ-like data on the normalization of the e-like flux. The two terms in eq. (E.2) should be statistically independent and approximately χ² distributed. For the MiniBooNE appearance analysis we therefore use χ²_MB,app ≡ χ² − C, and assign 22 dof to it (for combined neutrino/anti-neutrino data). The last equality in eq. (E.3) shows explicitly that C does not depend on oscillation parameters, since we neglect the effect of oscillations on d_µ. With this method we obtain GOF values which are in reasonable agreement with the numbers obtained by the collaboration: our results for χ²_min/dof (GOF) for neutrino, anti-neutrino, and combined data are 14.2/9 (11%), 6.5/9 (69%), and 32.9/20 (3.5%), respectively, compared to the numbers obtained in [16], 13.2/6.8 (6.1%), 4.8/6.9 (67.5%), 24.7/15.6 (6.7%). Note that in [16] the number of dof and GOF have been determined by explicit Monte Carlo study and also a different energy range has been used to obtain those numbers.
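The block-diagonalization described here has the structure of the standard Schur-complement decomposition of a joint (e, µ) covariance matrix. The sketch below shows that decomposition for a generic covariance and data vector; it is only meant to illustrate the structure of eq. (E.2), with the shift δ and the oscillation-independent term C, and does not use the collaboration's actual matrices.

```python
import numpy as np

def split_chi2(d_e, p_e, d_mu, p_mu, S):
    """Sketch of the block-diagonalized chi^2 (structure of eq. (E.2)).

    S is the full covariance matrix, ordered as (e-bins, mu-bins)."""
    ne = len(d_e)
    S_ee, S_em = S[:ne, :ne], S[:ne, ne:]
    S_me, S_mm = S[ne:, :ne], S[ne:, ne:]

    S_mm_inv = np.linalg.inv(S_mm)
    delta = S_em @ S_mm_inv @ (d_mu - p_mu)        # mu-data shift of the e-like prediction
    S_eff = S_ee - S_em @ S_mm_inv @ S_me          # Schur complement

    r_e = d_e - p_e - delta
    chi2_app = r_e @ np.linalg.solve(S_eff, r_e)   # depends on oscillation parameters
    C = (d_mu - p_mu) @ S_mm_inv @ (d_mu - p_mu)   # independent of oscillations
    return chi2_app, C

# Tiny toy example (2 e-like bins, 2 mu-like bins):
S = np.array([[4.0, 0.5, 0.8, 0.2],
              [0.5, 3.0, 0.3, 0.6],
              [0.8, 0.3, 5.0, 1.0],
              [0.2, 0.6, 1.0, 6.0]])
print(split_chi2(np.array([10., 12.]), np.array([9., 11.]),
                 np.array([100., 95.]), np.array([98., 97.]), S))
```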
For our MiniBooNE ν µ disappearance analysis, we use the neutrino mode data from [44]. As in appearance mode, we compute the expected event spectra for each parameter point by using MiniBooNE's Monte Carlo events. Since backgrounds are very small for this analysis, we do not need to take them into account. For each set of oscillation parameters, we choose the overall normalization of the spectrum in such a way that the total predicted number of events matches the number of observed events in MiniBooNE, i.e., we fit only the event spectrum, not the normalization. The log-likelihood is obtained in analogy to eq. (E.1) and thus takes into account systematic uncertainties and correlations between different energy bins.
In the analysis of ν̄_µ disappearance data, we follow the combined MiniBooNE/SciBooNE analysis from [45]. As for the appearance and ν_µ disappearance analyses, we use public Monte Carlo data to compute the predicted event spectra. We take into account oscillations of both the signal and the background, and we compute the log-likelihood again in analogy to eq. (E.1).
E.5 MINOS
Our analysis of MINOS neutral current (NC) and charged current (CC) interactions is based on 7.2 × 10²⁰ protons on target of NC data presented by the collaboration in ref. [150] (see also [42,43]) and 7.25 × 10²⁰ protons on target of CC data published in [151]. All data was recorded in neutrino mode, i.e., the beam consists mostly of ν_µ with only small contaminations of ν̄_µ, ν_e and ν̄_e.
We have implemented the properties of the NuMI beam and the MINOS detector within the GLoBES framework [145,146], using results from the full MINOS Monte Carlo simulations as input wherever possible. In particular, we use tabulated Monte Carlo events [152] to construct the detector response functions R ND (E true , E rec ) and R FD (E true , E rec ) for neutral current events in the near detector (ND) and the far detector (FD), respectively. R ND and R FD describe the probability for a neutrino with true energy E true to yield an event with reconstructed energy E reco . We include the CC ν µ and CC ν e backgrounds to the NC event sample (beam intrinsic as well as oscillation induced), as well as the small NC background to the CC ν µ event sample. The number of charged current interactions is predicted using the simulated NuMI flux [153], the cross sections calculated in [154][155][156], and a Gaussian energy resolution function with width σ CC E /E true = 0.1/ E true /GeV for the CC event sample and σ CC-bg E /E true = 0.16 + 0.07/ E true /GeV for the CC background in the NC event sample. The parameters of the energy resolution function, as well as the efficiencies, have been tuned in order to optimally reproduce the unoscillated event rates predicted by the MINOS Monte Carlo. We emulate the baseline uncertainty due to the non-vanishing length of the MINOS decay pipe by smearing the oscillation probabilities with an additional Gaussian of widthσ 2 = (2.0 GeV − 3.0 × E true ) 2 and setting the effective distance between near detector and neutrino source to 700 m. This value as well as the parameters of the smearing function have been obtained numerically from the requirement that 2-flavor oscillation probabilities at the near detector computed with a more accurate treatment of the decay pipe are well reproduced for various mass squared differences of order eV.
We compute neutrino oscillations in MINOS numerically using a full 4-or 5-flavor code that includes all relevant mixing parameters as well as CC and NC matter effects. In the fit, we predict the expected number of events at the far detector for a given set of oscillation parameters by multiplying the observed number of near detector events in each energy bin with the simulated ratio of far and near detector events in that bin. Each event sample is divided into 20 energy bins with a width of 1 GeV each, covering the energy range from 0 to 20 GeV. As systematic uncertainties, we include in the analysis of the NC (CC) sample a 4% (10%) overall normalization uncertainty on the far-to-near ratio, separate 15% (20%) uncertainties in the background normalization at the far and near detectors, a 3% (5%) error on the energy calibration for signal events and a 1% (5%) error on the energy calibration for background events. For the NC analysis, the systematic uncertainties are based on the information given in [42], for the CC analysis they have been tuned in order to reproduce the collaboration's fit with reasonable accuracy, while still remaining very conservative.
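The far-over-near extrapolation described above is simple enough to sketch: the far-detector prediction in each energy bin is the observed near-detector rate multiplied by the simulated far/near ratio, weighted by the oscillation probability for the parameter point under test. Everything in the snippet below (bin values, ratios, probabilities) is an illustrative placeholder rather than MINOS Monte Carlo output, and applying the oscillation weight as a separate per-bin factor is a simplification of the full treatment.

```python
import numpy as np

# Illustrative inputs (placeholders, not MINOS Monte Carlo):
nd_observed = np.array([1200., 950., 640., 410.])          # near-detector events per bin
ratio_mc    = np.array([1.1e-3, 1.2e-3, 1.3e-3, 1.3e-3])   # simulated FD/ND event ratio
p_osc       = np.array([0.92, 0.95, 0.97, 0.98])           # oscillation weight at the far baseline

fd_prediction = nd_observed * ratio_mc * p_osc              # far-detector prediction per bin
print(fd_prediction)
```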
Open Access. This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Streamlining Sustainability: A Principal Component Reduction for Regionally Based African-Centric Indicators
Ecological indicators were created to measure human consumption of Earth's finite resources. Since 1992, hundreds of indicators have been created at the global scale. These indicators reveal that, while there might be similarities between regions of the world, each region has its own distinctive characteristics. This article concentrates on the forty-odd indicators created for the regions of Africa. The statistical outliers from twenty-plus ecological indicators were subjected to a Principal Component Analysis to reduce and create composite indicators that would better reflect the regional variability. The data reduction - or streamlining - resulted in the creation of three indicators per region (fifteen in all) that accounted for, on average, 77.6 percent of the variance in the ecological data. Out of the fifteen variables extracted, four from the original stock of indicators made it through the reduction process, indicating that those particular indicators measured exactly what they were supposed to measure.
Introduction
Over the course of millions of years, humankind has evolved from simplistic organisms growing as part of the environment to sophisticated organisms dominating the environment. This evolution resulted in humans becoming self-appointed caretakers of Earth as the dominant sentient species. Within a relatively short time span, geologically speaking, we discovered how to harness the Earth's natural resources to improve our quality of life. Unfortunately, when one considers the path human population growth has taken, it becomes obvious that the growth facilitated a greater demand on the planet's finite resources to maintain the progressive comfort that enhances our quality of life. The spreading of human habitats all over the Earth came with a "conquer nature" mentality that owed itself to new technologies and new materials; this maintenance of comfort, unfortunately, came at the expense of the natural environment [1]. Thus, the preeminent question remains, as it has for the past 40 years: how long can we continue our destructively unsustainable ways?
The expanding conceptualization of sustainability is only a recent phenomenon, but the underlying notion has been around for well over two centuries. Early recollections begin with the alarms set off by Malthus's An Essay on the Principle of Population from 1798 and John Stuart Mill's Principles of Political Economy with some of their Applications to Social Philosophy from 1848 [2]. In 1972, however, it was the book Limits to Growth that garnered worldwide attention, as the authors used computer modeling to predict the demise of our ever-increasing standard of living within 50 to 100 years [3].
Thus, barring behavioral intervention on a global scale, a weakened version of the Dark Ages would return between 2022 and 2072 as we fall back on the notion of survival of the fittest. Many believe it was the lead author, Donella H. Meadows, who first detailed the concept of sustainability and urged that a changing of paradigms or worldviews would be the only way to bring about a more sustainable society [3]. The very same year, out of happenstance, the United Nations (U.N.) called together the Human Environment Summit in Stockholm, which involved the major industrialized countries. The general assembly would create the United Nations Environment Program (UNEP), which would, in turn, establish the Brandt Commission, made up of politicians and scientists. The Brundtland Commission (or UN World Commission on Environment and Development) was formed in 1983 and was named after the Norwegian Prime Minister who was its chair. The mandate of the commission was to determine how humans could define and achieve sustainable development. Critics considered this task to be unattainable, but in 1987 the meaning behind the term 'sustainable development' was officially adopted from the Brundtland Commission's report [4].
In the following decade, the U.N. Conference on Environment and Development (UNCED), also known as the Earth Summit, was held in Rio de Janeiro. Here, a non-binding, voluntary action plan called Agenda 21 was formulated. All 172 countries in attendance approved an action plan to become more sustainable for the sake of future generations. Subsequently, there have been numerous other conferences concerning various aspects of sustainability, including the Millennium conference in New York City. The outcome of this gathering was the Millennium Development Goals (MDG). Out of the eight goals identified to be achieved by 2015, Goal 7 was to ensure environmental sustainability. What transpired from the various conferences was the creation of numerous measuring tools called ecological indicators. The purpose of the quantitative indicators is to measure the health of the environment; and, as a whole, the indicators are held as the key to establishing a more comprehensive knowledge base on Earth's ecology. The use of indicators is nothing new, but ecological indicators tend to be more inclusive than other performance measures and focus more on showing trends concerning critical environmental and social problems [3].
Objective
The objective of this paper is to develop composite ecological indicators that can accurately measure the environmental health of some of Africa's more diverse sub-regions. As the term 'composite' implies, we are looking to develop new African-specific indicators based on pre-existing indicators by way of a data reduction technique. Most African sub-regions have particular environmental deficiencies; indicators will be developed that pertain specifically to these region-based deficiencies. Hundreds of indicators have already been created, which might make this research redundant, but the long-range goal is to develop African-specific indicators that can accurately monitor African sustainability. This text will first explore the concept of sustainable development and environmental indicators. Then, we will explore the focus on Africa. Next, the methodology will be elaborated on, followed by the results and a conclusion.
Sustainability and Sustainable Development
The concept of sustainability emerged in response to the increased understanding that contemporary development practices were leading to crises in a social and environmental sense. The term "sustainable development" thus became the buzzword for alternative development strategies that could be "envisioned as continuing far into the future" [3]. There are several definitions of sustainability, but some are more inclusive than others. The official definition adopted by the United Nations (UN) came from the Brundtland report, which defined sustainable development as "development that meets the needs of the present generation without compromising the ability of future generations to meet their own needs" [5]. There are commonly three associated dimensions of sustainable development, as identified in Figure 1: the dimensions address economic, social and environmental aspects [6]. The social dimension has also been called 'equity', making the trio known as the Three E's. Within this text, we will briefly describe the economic and environmental dimensions, as it is said that the economic dimension tends to overshadow the equity and environmental dimensions [3]. Economic sustainability deals with capital, which means it is commonly measured by money. The four types of capital are: manufacturing, natural, social and human capital. Manufacturing capital is traditionally defined as assets used to make goods and services, like tools and machines. Natural capital deals with resources such as timber, water, fossil fuels, biodiversity, and ecological services. Social capital references human wellbeing at an organizational level; ideal examples include neighborhood associations, cooperatives and civic groups. The fourth type of capital, human capital, is similar to social capital but differs in that it deals with human welfare at an individual level. According to Ekins [7], this is where a person's health, education, job skills and motivations are measured. Taken into an African context, these dimensions
are bound to reveal contradictory results because of the extreme variety of not only the people but also the environment (Figure 1).
There are two measures of environmental sustainability: weak and strong. Weak sustainability is evident when natural resources can be substituted, whereas in strong sustainability there is no resource substitution [7]. Weak sustainability implies that the depletion of one form of capital can be offset by the surplus from another; conversely, strong sustainability suggests a complementary relationship between the various forms of capital, negating any possibility of substitution [8]. When considering a common focus, sustainability encompasses several common areas [9].
The concept is also grounded by a few basic principles. The first principle is to not exceed the carrying capacity of natural resources; for example, CO2 levels should not surpass natural carbon sequestration levels. The second principle focuses on increasing efficiency; in this vein, technology can be used as a substitute for a resource. The third principle states that, when using renewable resources, the extraction rate shall not exceed the replenishment rate. Finally, non-renewable resources should not be used at a rate greater than the resource's rate of creation [7]. Ecological indicators are the main tools in measuring sustainable development. The United Nations mandate, through Agenda 21, encouraged the creation of more indicators that can help countries progress towards sustainable development [10]. Consistent monitoring and evaluation of the progress is a necessity for two primary reasons: (1) to isolate emerging issues before they become costly problems; and (2) to assess plan implementation so that plans can be adjusted and improved [11].
Ecological Indicators
Sustainability measures are used in real-world situations where the need is critical. In order to measure and monitor the health of the environment effectively, the indicators need to be aligned with certain criteria. The first criterion stipulates that an indicator should be simple to measure and easy to understand [12]. The second criterion states that the indicator must be sensitive to environmental stress. The third criterion is for the indicator to be predictable and unambiguous. The remaining criteria are as such: the indicator should give an early warning of significant change in the ecosystem; the indicator should allow management to act by predicting change; the indicator must be comprehensive, covering all the key sections of an ecosystem; the indicator should have key responses to both natural and anthropogenic stresses on the ecosystem; and finally, the indicator should have a small range of variability [12]. Ideally, the indicators developed should satisfy all the criteria mentioned above; however, this is not always the case. Indicators have the power to demonstrate problems, motivate actions, and highlight the positive effects of sustainability policies, some of which are tied to state and national policies [3]. Sometimes, however, indicators have been developed for and used in regions for which they have no merit or relevance. The assumption within this text is that errant measures (or outliers) are a sign of inadequacy within the group of indicators for a particular region; thus, composite indicators can be extracted from the group to cover the inadequacies.
The African Focus
Africa, as the study area, was chosen for a variety of reasons. Africa is the second largest and second most populous continent in the world, with about 1.1 billion people, and the only continent to be represented in all four hemispheres. There are 53 countries and one disputed claim of sovereignty (Western Sahara). Many of these countries are underdeveloped economically (poor) but extremely rich in mineral resources (which presents a contradiction), as 30 percent of the world's minerals are found in Africa. The landscapes are vast and heavily influence how people live. Africa is the hottest continent on the planet, as 60 percent of its land surface is dominated by desert. Only 10 percent is considered to be prime agricultural land [14]. Additionally, Africa is losing 4 million hectares of forest each year. This is double the rate when compared to the rest of the world and is primarily due to logging, agriculture, building new houses, and road construction. Farmers are either forced to grow their crops on marginal lands that are not as productive, or quit and move to the cities, because 65 percent of agricultural land and 31 percent of the pasture land is degraded. Africa is the second driest continent in the world after Australia. Water scarcity is a major problem, and about 400 million people experience it due to natural conditions, desertification, land use change, and variable rainfall from 0 mm to 9500 mm [14]. Table 1 presents the major issues affecting the various countries in Africa, which are grouped by the established regions. The distribution of the regions as well as the accompanying land cover is represented by the map in Figure 2. The land cover clearly expresses the regional variability.
Africa as Place
Although Africa is urbanizing at an extremely fast rate (2.32% between 2000 and 2005), most Africans still live in rural areas and 56.6 percent work in the agricultural sector. At least 31 percent of Africa's population lives in urban areas, and 72 percent of urban residents live in slums. It is projected that by 2050 the population will reach between 1.9 and 2.5 billion people. By that time, over 60 percent of the population will live in urban areas. Only 2.7 percent live within 100 km of the coasts [15] (Table 1).
Africa is the poorest continent in the world. Despite its reference as the birthplace of man, it has fallen behind the rest of the world in every social and economic category. Most countries in Africa are last in basic human welfare. The AIDS epidemic is alarmingly high, average life expectancy is low, and most governments lack the ability to feed their people. Yet Africa is rich in mineral, natural, and energy resources [16]. Africa has amazing landscapes which are habitats for a multitude of large mammals. Unfortunately, in an effort to modernize, the one category that Africa has in abundance (the environment) is being degraded at the fastest rate in the world. Africans are now being challenged by the world [17] to find a way to develop sustainably without permanently damaging their best asset [18].
There are several theories as to why Africa is so poor. The first has to do with climate and weather. Africa is the second driest continent in the world and it is also the hottest: about 60 percent of the land is desert. Only ten percent of the land is suitable for growing crops [14]. Most of Africa suffers from long dry seasons and heavy rainy seasons which, again, are not suitable for many crops; thus, soil quality in general is poor [18]. Second, the geography of Africa's coastline is straight; this means that there are few natural inlets or natural areas to port along the coast, making trade difficult. The third reason, from a historical perspective, is the Berlin Conference of 1884. This conference is noted for allowing the European countries to divide up Africa amongst themselves. They created countries without counsel from the native population, resulting in the separation of once-cohesive tribes. The outcome has manifested itself in today's post-colonial world. For example, Nigeria and many of the West African countries contain two different cultural and physical regions: Muslims live in the northern savannahs while the Christians in the south live in tropical forests. The clash between these two cultures is just one example of the contributors to the slow growth of Africa, because conflict tends to cause destabilization when disputes evolve into long, protracted civil wars. The conflicts slow down the economy as trade and commerce are disrupted. Finally, colonization is implicated, as it served to strip the continent of the easily accessible resources that were available. Knowing the history of the study area tends to help researchers understand why some regions within Africa are suffering from environmental deficiencies. It is important that they know the social, economic, institutional, and environmental characteristics of a region.
Environmental Aspect
The ecological footprint (EF) is an indicator of the environmental burdens that we put on the planet, representing the area of land needed to provide the raw materials, energy and food we consume as individuals or as a community [19]. In Africa, the EF has doubled since 1961 and is now over the regenerative capacity by 50 percent. Between 1961 and 2008, the EF increased by 238 percent. The biocapacity increased by 30 percent during this same time due to the increase in agricultural production. Unfortunately, with the increase in demand for resources, the biocapacity available per person has decreased to 37 percent of its 1961 value. At least half of the countries in Africa are deficient in biocapacity. A combination of high rates of deforestation, a growing population, and continuous civil conflicts has impacted Africa's rich biodiversity negatively. Most of Africa's footprint is carbon based. Biomass, in the form of wood and charcoal, accounts for about 80 percent of the energy used in Africa, and deforestation rates are high because of this over-reliance on biomass. Electricity accounts for only about 3 percent of the energy Africans use in general. The U.N. projects electricity usage to increase six-fold, with 80 percent of demand coming from growing urban areas [15] (Table 2). A biannual report assesses the progress of African states in implementing the MDG and NEPAD commitments planned at the WSSD. Table 2 presents the actual indicators that were developed by the UNCSD for Africa. There were a total of some forty-odd indicators developed. Out of the fifty-four nations on the continent, only about ten were tested; thus, the indicators used only represent 20% of Africa [6]. With this notion in mind, a plausible focus of research could pertain to answering questions about the effectiveness of the indicators on the other 80 percent. The lack of testing for the appropriateness of the indicators could possibly account for the presence of significant outliers that would form the basis for this research.
Methodology
The goal of this research is to construct new, viable composite indicators for the various African regions that can measure their progress towards sustainability. This will be done via principal component analysis (PCA), a multivariate data analysis tool used for data reduction. This research seeks to reduce the set of more than twenty specially chosen environmental indicators (Table 3) to a smaller number that can handle the same task more efficiently. The principal component technique is preferred because it is more precise and stable than straightforward Factor Analysis in reducing variables into reliable dimensions (Table 3).
Ecological Wellbeing
The PCA concentrates on the shared variance between the variables and delves into their correlation structure to identify the hidden components. This tool allows for the retention of more variables without succumbing to correlation bias, also known as multicollinearity [20]. The PCA is optimal for dealing with multicollinearity because it transforms the set of correlated variables into a set of uncorrelated principal components [21]. A prime example of variables that could have contributed to such bias are the Health and Sufficient Food indicators (r = 0.895). The reduced orthogonal components, also known as dimensions or synthetic variables, reflect the underlying similarities of the initial variables [22]. This is aided by a Varimax rotation, which associates each variable with, at most, one component; the rotation maximizes the sum of the loading variances, simplifying the interpretation of the results. The general formula for computing the first component in a PCA is C1 = b11(X1) + b12(X2) + ... + b1P(XP), where C1 is the subject's score on component 1, b1p is the coefficient for observed variable p used in creating principal component 1, and Xp is the subject's score on observed variable p [23].
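To make this reduction step concrete, the following minimal sketch (in Python, using only NumPy) standardizes a stand-in indicator matrix, extracts principal components, and applies a varimax rotation to the retained loadings. The data are randomly generated placeholders rather than the study's actual regional indicators, and the choice of three retained components simply mirrors the number extracted in the regional analyses reported below.

import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    # Standard Kaiser varimax rotation of a p x k loading matrix.
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L * (np.sum(L**2, axis=0) / p))
        )
        R = u @ vt
        var_new = np.sum(s)
        if var_new < var_old * (1 + tol):
            break
        var_old = var_new
    return loadings @ R

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 6))            # placeholder: rows = countries, cols = indicators
X = (X - X.mean(axis=0)) / X.std(axis=0)    # z-score standardization

U, S, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via singular value decomposition
explained = S**2 / np.sum(S**2)
n_comp = 3                                         # retain three components, as in the study
loadings = Vt[:n_comp].T * (S[:n_comp] / np.sqrt(X.shape[0] - 1))

rotated = varimax(loadings)
print("variance explained by first three components:", np.round(explained[:3], 3))
print("rotated loadings (|value| > 0.5 read as a high loading):")
print(np.round(rotated, 2))

With real data, each row of the rotated loading matrix would be inspected to see which component, if any, an indicator loads on strongly, and the components would then be named on that basis.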
The data for this research have been compiled from various databases, including Nation Master [26], the Sustainable Society Index, and the World Bank Africa Development Indicators; the Africa Development Indicators are also available as time series from 1960 to 2012. The data are divided into five groups representing the five African regions, as previously seen in Figure 2, and then standardized. In this analysis, Z scores are used to determine how far a value is from the mean. If a variable's Z score falls within the normal range, that variable is not chosen for the PCA; the outliers are chosen. Any variable with a Z score above two or below negative two is considered an outlier, as it is too high or too low (Figure 3). However, not all outlier values are picked: only the values having a negative effect on people and the environment were chosen. The results are reported by region, along with the new composite variables and their subjectively derived names. As stated above, the PCA is a data reduction method; thus, the resulting components (or synthetic variables) represent the characteristics of the underlying variables and are subjectively named on that basis (Figure 3). North Africa is home to the largest desert on Earth, the Sahara Desert. The desert spans the entire width of this region and covers the majority of the land area of its countries (Figure 4), which results in the populations of the associated countries clustering along the semi-arid southern Mediterranean coast. All indicator scores were standardized to make measurements comparable. North Africa's HDI and HWI are above average when compared to the rest of Africa, yet, given the natural physical terrain, its EWI is below average. Since this research concerns data reduction, the goal is to limit the number of indicators while still accurately measuring the specific phenomenon in question. The first round of elimination was done by way of Z scores: indicators with Z scores close to zero were eliminated, ensuring that the phenomena they measure are not major issues compared to the others. For example, compared to the rest of the continent, Air Quality and CO2 are not major issues; as a result, these indicators were not included in the PCA data reduction analysis. In contrast, Environmental Wellbeing (EWI) was included because North Africa has the worst environmental levels. Results for North Africa are presented in Table 4. The indicators with extreme scores that are antithetical to sustainability are included, while the indicators with average scores are excluded. This preliminary round of data reduction was done for all five regions. A correlation matrix showed that the correlation between variables was generally weak, which explains the low KMO test readings; ideally, the KMO statistic should be close to 1. The numbering of the components indicates how many were extracted in the PCA process (for our regions, three components were extracted from each test) along with the high-loading variables (>0.5) under each component. Each extracted component represents a new (or synthetic) variable/indicator. Based on the analysis, it can be concluded that the three components (or indices) extracted explain a substantial proportion of the environmental problems of North Africa. Thus, the indicators have been reduced in this analysis from six to three (Table 4).
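The preliminary z-score screening described above can be expressed in a few lines of Python; the sketch below uses invented indicator values purely to show the mechanics of flagging indicators that sit more than two standard deviations from the continental mean.

import pandas as pd

# Hypothetical regional values and continental statistics; real values would come
# from the compiled databases cited above.
indicators = pd.DataFrame({
    "indicator":        ["Air Quality", "CO2", "EWI", "Arable Land", "Population", "Urbanization"],
    "region_value":     [52.0, 1.1, 38.0, 3.9, 210.0, 61.0],
    "continental_mean": [50.0, 1.0, 55.0, 7.5, 120.0, 40.0],
    "continental_std":  [ 8.0, 0.4,  6.0, 1.5,  35.0,  9.0],
})

indicators["z"] = (
    (indicators["region_value"] - indicators["continental_mean"])
    / indicators["continental_std"]
)

# Keep only the outliers (|z| > 2); in the study a further manual step retains only
# those outliers whose direction is detrimental to people or the environment.
selected = indicators[indicators["z"].abs() > 2]
print(selected[["indicator", "z"]])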
North Africa is believed to have a population density problem: the population is clustered along the coast, while most of the inland terrain is dry and harsh to live on. The first component illustrates this problem. The arable land and population variables are highly correlated, and based on those two high loadings we named Component 1 the Human Land Index (again, the naming convention is subjective because it is based on the researcher's interpretation of the high-loading variables under the component). Component 2 can be called the Unhappy Planet Index: Ecological Footprint (EF) and Urbanization are strongly correlated, whereas EWI has a strong negative correlation with both, and EF and Urbanization are detrimental to the North African environment. Component 3 can be called the Urban Resource Depletion Index, because this is where urbanization most likely causes depletion of natural land resources in North Africa.
East Africa
The landscape in East Africa is as diverse as its human population. Contained within its boundaries are natural wonders such as the Congo, Lake Victoria, the Serengeti plains, and Mt. Kilimanjaro. Unlike the northern lands, however, East Africa has vast regions of lawlessness and non-functional governments, which means infrastructure is almost nonexistent. The countries of this region include Eritrea, Somalia, and the recently created South Sudan. The region is infamously known for food shortages due to ongoing civil strife, which commonly results in famine. The desert is expanding southward, causing mass migration, and the region is geologically active, as the rift valley in Ethiopia is gradually pulling apart (Figure 5) (Table 5). With respect to the analysis, the KMO and Bartlett test results were mediocre but still registered as significant.
West Africa
This part of Africa's environment varies from semi-arid in the northern part to tropical in the southern section (Figure 6). The area is rich in oil and minerals and carries the largest population in Africa. Ironically, the Atlantic slave trade started in this region, and the countries here were controlled or influenced by Europeans; in contemporary times, the region has been plagued by seemingly perpetual civil conflicts that cause political instability and poor infrastructure. The arable land is used for cash crops for export rather than for feeding the population (Figure 6). From the start, there were six outlier variables. This test barely passed the KMO test, as the correlation was low; it did, however, pass the significance test, and the total variance accounted for was over 76 percent (Table 6). Once again, three components were produced. After rotation, the first component consisted of the Arable land and Total population variables (Table 6).
In similar fashion to the North Africa component, this one too is called the Human Land Index. Component 2 has two high-loading variables: although the correlation between the World Risk Index and the Environmental Performance Index is negative, the relationship is actually positive when referencing disaster preparedness. The World Risk Index (WRI) measures the potential risk of natural disaster and how prepared a country is to deal with it. Here the risk of disaster is low, which means the environment has a much better chance of remaining healthy; thus, this component was labeled the Environmental Disaster Response Index. Finally, the third component contained one high-loading variable, Homicide Rate; since this is the best descriptor, there was no need to change the name.
Central Africa
This region is the smallest of the five U.N.-designated regions in Africa. The environment includes tropical forest in the South, which gradually changes to semi-arid as you progress northward (Figure 7). All the countries in this region are poor, and it is said to be the birthplace of acquired immune deficiency syndrome (AIDS) in humans. Nine variables were found to be outliers; the region is prone to health, air quality, and natural resource loss issues. The variables passed the KMO and Bartlett test, and the three components extracted explain 77% of the variance. Component 1 reveals a multitude of issues with the quality of the atmosphere, both in urban and rural areas; therefore, an appropriate name for this component is the Total Atmospheric Quality Index. Component 2's major correlated variables were Health, Life Expectancy, Urbanization, and HPI. It can be deduced that, in Central Africa, the more urbanized areas have better health care systems. Unfortunately, there are not many urbanized areas in Central Africa; thus, a suitable name for this component was the Human Urbanized Health Index. Component 3's only viable variable was Natural Resource Depletion; thus, there was no need to attempt to interpret a name for the indicator (Table 7).
Southern Africa
This part of Africa is home to varied landscapes and climate zones, from the coastal temperate climate in the South, to semi-arid plateau and mountains, to very dry deserts in the Western section, to tropical forests in the Northern parts of this region (Figure 8). The climate in the Southern part attracted European settlers in the age of Exploration beginning in the 1600s. Additionally, the discovery of huge, valuable mineral deposits such as gold and diamonds, as well as vast fertile land, further encouraged more European settlers to migrate. Unlike the rest of Africa, the settlers called this area home.
It was not until recently that the true natives were allowed to re-govern their own homeland. The Europeans brought their form of government to the region, so the region developed the same way European countries did. As a result, the area became Africa's most modern, industrialized, and richest region. The entire region is politically stable, and its ecological problems now resemble those of European countries. The country of South Africa dominates the region. Six variables were found to be outliers in this region (Table 8). The KMO was moderate but significant, and the extracted components explained 79 percent of the variance in the data. Component 1 suggests that WRI increases when both GHG and Air Quality worsen. Curiously, CO2 emissions were negatively correlated, which could be indicative of a conscious effort to limit CO2 emissions. South Africa was able to lower its CO2 emissions before the 2002 U.N. conference on the environment; however, after the conference the restrictions were lifted. The readings could also be related to the fact that some of the countries in the region are under-populated. A suitable name for this component, based on the loading variables, was the Good Air Index. Components 2 and 3 each have only one suitable variable; thus, the names were not changed.
Conclusion
Africa is a woefully underdeveloped continent in regard to human development compared to the rest of the world, but from an environmental standpoint it is arguably the richest. The goal of this research was to create a reduced set of viable composite indicators that can measure sustainability within the confines of Africa's varied regions. First, it was necessary to divide Africa into five regions because each region has varying physical features as well as cultures, and each region has its own issues. For example, North Africa's landscape varies from semi-arid to arid; therefore, water and arable land availability were assumed to be the major issues for this region. However, the PCA revealed that, while water availability is a problem, it does not compare to the arable land availability, population density, and urbanization problems. The population is found largely on the Mediterranean coast, where there is limited land to grow food and live on, while the interior is barren and uninhabitable. Given the still-growing population, the aforementioned issues could easily become catastrophic. As another example, take East Africa. Here the major problems are urbanization and land degradation, while health, life expectancy, and SSI are high. While this is a good trend at present, the region's cities are growing, causing more interaction between wildlife and people. There are many subcategories when referring to environmental problems; thus, it is not just about land degradation or wildlife loss.
Overall, the application of the PCA for data reduction (or, in this case, indicator reduction) showed that the indicators currently used in the African regions can be reduced to new composite indicators that better reflect Africa's variability. The analysis reduced the regional indicators down to three per region, and the components extracted for each region accounted for between 75.4 and 81.4 percent of the variance in the data. It should also be noted that, of the fifteen indicators extracted for the five regions, four were from the original list of indicators, meaning that eleven truly new composite indicators were derived. Another way of looking at this at the regional level is that, where some twenty-odd indicators could have been applied, only three would have sufficed. While this is encouraging, it remains academic: the indicators will ultimately need to be tested in practice to determine their feasibility. This testing is in progress via cluster analysis, and the results will be reported at a future date.
It should be remembered that these African countries are relatively new in terms of governance, and historically, new countries have had to overcome internal conflict before prosperity. There are still vast tracts of untouched forest in Africa, and African governments are increasingly becoming aware of the value of these undisturbed places and are putting forward plans to protect them and limit human activity in these areas. Thus, many of Africa's problems are correctable, and it would be wise to have adequate indicators to truly chart the changes, for better or for worse.
|
v3-fos-license
|
2020-04-09T09:27:38.113Z
|
2020-04-06T00:00:00.000
|
216120794
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scholink.org/ojs/index.php/jrph/article/download/2740/2779",
"pdf_hash": "39eee9e204d75f2096ab936aad395873263b3759",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43077",
"s2fieldsofstudy": [
"Philosophy"
],
"sha1": "ae9e35704b0812bb91f3b18d8af58e4474658ef1",
"year": 2020
}
|
pes2o/s2orc
|
The Fundamental Nature of Time
The nature of time is intimately bound up with the nature of energy propagation, which has a long history of philosophical interpretation. Here I propose a new post-Einsteinian view of the nature of time, conceptualized as the outcome of the pure unidimensional rate of change of a process through the infinitesimal operator of differential equations. In this view, time is a local property that is generated by every individual process in the Universe rather than a fundamental dimension in which processes operate. The rate of change has an inherent “arrow of time” that does not depend on the ensemble properties of multiple processes, such as the laws of entropy, but is inherent to the function of each process, by virtue of its genesis in the Big Bang. The conventional view of time may be approximated either by aggregating the operations of large ensembles of diverse processes, or by choosing a process (such as the Atomic Clock) that has demonstrably stable temporal properties. For processes that are sufficiently nonlinear, their iterative progression may in principle lead to solutions describable as fractals, for which the integral derivation of the time variable would fractionate into a form of fractal time.
Time in Physics
Time has long been considered one of the fundamental quantities of Physics (https://en.wikipedia.org/wiki/Time_in_physics), and is treated as something that exists externally to the observer and that can be measured by a variety of devices called "clocks". The present analysis takes a radically different view, that time is a mental or conceptual construct derived from the energetic processes of the physical world, a construct that is specific to each individual process and that is only derivable from such processes. As far as can be determined, this is a novel viewpoint that has never been articulated in its full implications in the history of philosophy, particularly given the predominant view of Western Philosophy that time is a primary dimension of reality. It is also the only theory of time that is grounded in the Schrödinger Equation of Quantum Physics as the core definition of the nature of the energy process. Furthermore, as a through-and-through process theory, it is fully compatible with the Emergent Aspect Dualism view of the universe as an emergent hierarchy of processes (Tyler, 2014, 2019).
Philosophical Treatments
Remarkably, the closest philosophical treatment to that being proposed seems to be that of Aristotle, whose views on time are rarely discussed, and who differs from the predominant view in considering that change is primary to the nature of time per se. (The following quotes are from Aristotle's Physics, IV, 10-14.) He expresses this by saying that "time is most usually supposed to be motion and a kind of change". Much of his discussion can be interpreted as an analysis of whether time exists as a (Newtonian) dimension of reality. In this context, he explicitly rejects the dimensional concept of time: "One part of it has been and is not, while the other is going to be and is not yet." His conclusion is an equivocal "[time] either does not exist at all, or barely and in an obscure way." (I interpret this to be a reference to the infinitesimal rate of change within which the time denominator is either reduced to zero or to a negligibly small value; see below for details.) Nevertheless, Aristotle implicitly assumes that time is a unitary and universal entity. He concludes that time is not local movement, because then it would be the movement of many things (his treatment seems to consider only heavenly bodies), and "the movement of any of them equally would be time, so that there would be many times at the same time." He concludes that, paradoxically, "It is clear, then, that time is quantity of movement in respect of the before and after", and is continuous since it is an attribute of what is continuous. He goes on to qualify what he means by "quantity": "as the extremities of a line form a quantity", and "In respect of quantity the minimum is one (or two); in regard to extent there is no minimum." Aristotle seems to be trying to say that the instant constitutes the infinitesimal transition between the extended domains of the "before" and the "after".
After extended considerations, however, Aristotle accepts what would become the Newtonian position of time as an extended dimension through which change can occur: "The 'now' is the link of time, as has been said (for it connects past and future time), and it is a limit of time (for it is the beginning of the one and the end of the other)." Despite his initial denial of the existence of past and future, he here accepts them as entities, or domains, with an existence on a par with the "now", which acts as the link between the (extended domains of) past time and future time.
But Aristotle then goes further to implicate time in Boltzmannian decay processes: "A thing, then, will be affected by time, just as we are accustomed to say that time wastes things away, that all things grow old through time, and that there is oblivion owing to the lapse of time, but we do not say the same of getting to know or of becoming young or fair. For time is by its nature the cause rather of decay, since it is the quantity of change, and change removes what is." In the statement, "all things grow old through time", Aristotle is going beyond the simple property of movement through space to prefigure the Boltzmann concept of the universality of the Second Law of Thermodynamics. He even attributes time as the causal agent of the decay: "A thing, then, will be affected by time," and "For time is by its nature the cause … of decay." He does not elaborate this causal property in the further development of this treatise, but he seems to be endowing "Time" with the agency of determining the direction of its arrow to the downward tendency to disorder rather than order, which is the classic pre-scientific philosophical error of imputing agency to inanimate forces.
Thus, in an extended treatment of time over four chapters of his Physics, Aristotle manages to espouse in some way most of the diverse historical positions on the nature of time, although he is consistent in highlighting the special nature of the "now", or present instant, which has the unique role of forming the transition from the past to the future. In this sense, he treats change as a transition from a prior state to a following state, and implicitly accepts the "now" as being a compound concept. However, throughout the variants of time that he considers, he adheres to the concept of time as being a unitary and universal entity that underlies all reality as an indivisible, though inchoate, essence: "there is the same time everywhere at once". On this level, Aristotle is adhering to a Platonic view of time as an abstract essence underlying all aspects of the universe, though in emphasizing its Heraclitan nature of change rather than simple being, he is diametrically opposed to Plato's view of time as an eternal, immutable dimension of realization. On the other hand, his frequent invocation of "before" and "after" as extended domains comes close to this time-as-eternal position.
In summary, Aristotle takes so many mutually contradictory positions on the nature of time that his treatment amounts to an airing of all the inherent paradoxes of time without providing a convincing resolution for any of them.
A Formal Resolution of the Nature of Time
The concept of time that I will develop here, on the other hand, matches the simplicity and precision of Leonardo da Vinci in the header quote in treating the "instant", or the present moment, as primary and unitary, with time understood as the "flow" or dynamic evolution of this instant. To set the stage, we may consider Zeno's Paradox, which was developed as a refutation of the notion of time as process. In considering how long it would take a frog to jump to the edge of a pond, Zeno points out that there can be no change in an infinitesimally small subdivision of time, and therefore that time is an illusion because in the limit it is static and the domain of time is eternal. This was the view of Zeno's philosophical mentor, Parmenides, and in opposition to his main rival, Heraclitus, for whom the flow of time was, indeed, fundamental.
Zeno's logic can be inverted by considering not time per se but time in a compound, as the rate of change of space over a given period of time. Now the change is built into the ratio of space to time in the form of the irreducible concept of rate of change. If the rate of change is subdivided into its spatial and temporal components, it may be viewed as expressing the immutability of each of these domains, but if it is taken as a fundamental unit of a process as such, it becomes the essential element of the Heraclitan concept of flux as the core concept of the nature of reality.
If time is the simple entity of rate of change, how can this simple entity of infinitesimal duration have the form of agency, or an inbuilt dynamic? This is best understood through the differential calculus of Leibniz and Newton, who indeed start with the same concept of the discrete change, the binary compound discussed by Aristotle. In their mathematical treatment of the derivative operator, dt, both Leibniz and Newton go through the same process as Aristotle does, of considering the before and after states, S1 and S2, bounding the infinitesimal dt operator (although dissociating it from necessarily being the present moment, to being any moment under consideration). They then assume continuity of the state change dS from S1 to S2 through the interval between before and after, allowing them to shrink the interval to its limiting value of zero while retaining the ratio dS/dt as a defined quantity, which Newton symbolizes by the instantaneous dot notation Ṡ. That is, rather than being the ratio of two things, each of which is defined by the two ends of its range, the essence of the infinitesimal calculus is that, when the range is shrunk through the infinitesimal to zero, both the range and the ratio lose their differential qualities and become a unitary essence: an instantaneous rate of change that no longer has a defined time interval. Thus, in differential equations, time has been transcended by the concept of a pure unidimensional rate of change. In this way, the differential calculus is generally accepted as a valid procedure for defining the instantaneous derivative of a process, from which the process as a whole can be constructed by the inverse procedure of integration.
Expressed in mathematical notation, the instantaneous rate of change and the time it generates may be written as

Ṡ = dS/dt (in the limit as dt → 0)    (2a)
t = ∫ dt = ∫ dS/Ṡ    (2b)

Here it is proposed to base the analysis of time on the same concept, that the essence of time is, in fact, the instantaneous Newtonian derivative (or what Leonardo calls "the instant"). Rather than thinking of this derivative as a point in the predefined domain of (Platonic) time, the novelty is to consider the instantaneous derivative itself as the fundamental essence, and the concept of extended time as the outcome of the integration process operating through this instantaneous derivative. This is a more formal analysis of Aristotle's view that time either does not exist or only barely; it exists only as an aspect of a rate of change, but taken to the infinitesimal limit where it is a unitary concept in which the time aspect evaporates into the instantaneous rate of change.
Just as we can envisage the process S under the condition that the interval of time analysis shrinks to zero, we can equally express the instantaneous time in terms of the condition that the process increment shrinks to zero. The connection is that, since S is a variable function of t as normally conceived, Ṡ is also a variable function of S. Consequently, Leonardo's concept is that time is something generated by a process (just as sausages are generated by a sausage machine). If the generation process is regular, the outcome of that process will be regular, like a string of sausages of the same size. If, however, the process is subject to some form of variable boundary conditions, the resulting outcome will itself be variable (like sausages of varying length).
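A small numerical Python sketch may help fix this reading of process-generated time. Assuming, as in eq. 2b above, that elapsed time is recovered by integrating dS divided by the instantaneous rate Ṡ, the code below accumulates the "instants" produced by a purely hypothetical state-dependent rate function; a uniform rate would yield time proportional to the state traversed, while a variable rate stretches and compresses it locally (the sausages of varying length).

import numpy as np

def rate(S):
    # Hypothetical state-dependent rate of change, dS/dt = f(S); regular but non-uniform.
    return 1.0 + 0.5 * np.sin(S)

S_grid = np.linspace(0.0, 10.0, 10_001)   # states traversed by the process
dS = np.gradient(S_grid)                  # increments of the process itself
dt = dS / rate(S_grid)                    # each "instant" generated, dt = dS / (dS/dt)
process_time = np.cumsum(dt)              # extended time as the running integral of the instants

print("elapsed process time over the traverse:", round(float(process_time[-1]), 3))
# A uniform rate of 1 would give exactly 10.0; the variable rate gives a different total.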
Lagrangian Intrinsic Coordinates for Time
One form of dependence of the time dimension on the prevailing conditions was famously introduced by Lorentz in 1895 and incorporated by Einstein in his Theory of Relativity, in which the definition of time depends both on the velocity of travel and on the gravitational field. In the present view, the time dependence on the prevailing conditions is not restricted to the process of the propagation of electromagnetic radiation, as in the Theory of Relativity, but is applicable to all processes of any description, each proceeding at its own pace defined by the instantaneous derivative of the process, and each defining its own time in which its progression is expressed.
Of course, the Einsteinian view is that time, though uniform, is distorted by the velocity of a moving body according to the nonlinearity specified by the Lorentz equation,

t' = t / √(1 − v²/c²),    (3)

where v is the velocity of the moving body and c is the velocity of light, and this applies independently in every local motion frame. This formulation may be regarded as a particular case of the general dependence of time on the local process of motion, namely the particular case of uniform motion. In the general case of any kind of process, the equations are of similar form:

Ṡ = f(S, E),    (4)

where f( ) is any form of nonlinear function that characterizes the process and E is the environment of S.
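As a quick numerical illustration of the special case invoked here, the Python snippet below evaluates the standard special-relativistic dilation of a unit time interval at 0.8 c; the function name and the chosen velocity are merely illustrative, and the general process-dependent case would replace the fixed Lorentz nonlinearity with whatever f(S, E) characterizes the process at hand.

import math

C = 299_792_458.0  # speed of light, m/s

def dilated_interval(t_local, v):
    # Standard Lorentz time dilation: a local interval t observed from a frame
    # moving at velocity v appears stretched by the factor 1/sqrt(1 - v^2/c^2).
    return t_local / math.sqrt(1.0 - (v / C) ** 2)

print(round(dilated_interval(1.0, 0.8 * C), 3))   # prints 1.667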
To reiterate, eqs. 2b and 4 specify the concept of a function that can depend on its own derivative and can be expressed in the coordinates of the derived function itself. This concept is not novel, but was developed by Joseph Louis Lagrange in the 18th century. Lagrangian mechanics consists of Newtonian mechanics translated into the local spatial coordinate frame of the function being specified (i.e., the viewpoint of a traveler along the path of the function, not of some external coordinate frame).
This equation specifies that the Lagrangian derivative of the function S is some function of S itself and the environment with which it interacts, as seen from within the process itself (for example, from the crest of the wave). This conception takes the Lagrangian to the next level, freeing it not only from the spatial coordinate frame, but also from the linear domain of Newtonian/Einsteinian time, to generate its intrinsic function of both space and time as the instant unfolds.
Time as Process
We may extend the daVincian focus on time per se by realizing that the "instant" is still an abstract entity that retains a mystical aura of a generative power that rolls out the flow of time as we experience it. Just as Aristotle views time as an abstract essence that permeates the universe, so Leonardo in his capsule statement does not go beyond the abstract notion of time per se to probe its full essence. The same issue is, in fact, widespread in philosophy in general: by naming some concept, philosophy reifies that concept into a status corresponding to other named concepts, thus imbuing it with parallel qualities that may be inappropriate when considering its fundamental essence. A good example is Aristotle's arbitrary and unsupported assumption that time is both unitary and universal. Just as Einstein's Relativity revolutionized physics by recasting those assumptions for the nature of space (while retaining the dimensional notion of time), we may take the radical path of recasting those assumptions for the nature of time.
This viewpoint can be seen to derive from Heraclitus, whose philosophy was that the core of everything is "change" and that the fundamental element of nature is fire. While the other three elements of ancient philosophy (earth, water and air) are identifiable substances, fire is unique in being a dynamic process rather than any kind of substance per se. Heraclitus' view of nature has therefore been characterized by the phrase "All is flux (πάντα ῥεῖ)" (Simplicius, ~540). This view comes closer than that of any other philosopher to seeing the fundamental essence of the universe as energy, and the inherent flux of that energy as the defining process of our existence.
This reconceptualization is already formalized in eq. 2a,b, in which time t is derived from the concept of the process governed by Ṡ. Here, therefore, the nature of the process defines the nature of the time derived from it. If it is a rapid process, the time derived from it will pass rapidly, and conversely for a slow process. Importantly, if it is a variable process, the time derived from it will be variable in nature, as in the Lagrangian extension to the time domain, and conversely for an invariant process. The implication of this conceptualization is that time is entirely relative to each process in the universe.
That is, it is relative to the unfolding activity of each subatomic particle, atom, molecule, cell, organism, star, and coherent body of any kind.
Universal Time
Thus, time is the process of unfolding of the energetic process of any defined entity in the universe, as expressed in eq. 2a,b, which can be different for every different kind of energy in the context of its energy landscape. Nevertheless, in a universal sense this particularized process could be considered to be the universe as a whole. This is, in fact, a viewpoint considered by Aristotle, "Some assert that [time] is the movement of the whole", though he dismisses it: "This view is too naive for it to be worthwhile to consider the impossibilities implied in it." With the expansive logic of Physics, we can nevertheless resuscitate this view to define the S in eq. 2a,b as the universe as a whole, to give us a cosmic definition of time that could approximate the generic concept of time as employed throughout conventional physics. In other words, all the subprocesses in the universe would average out to an essentially uniform overall process, approximating the abstract concept of time on which conventional physics is based. In this way, we could return both Aristotle and contemporary physics to their core assumptions of a universal concept of time, but on the firm philosophical basis of their derivation from the daVincian framework of a generative conception of time as the "flow of the instant".
Relation to the Schrödinger Equation
The primary form of energetic process in the universe is light, which has a long history of philosophical interpretation. Its propagation is classically captured by the Hamilton-Jacobi Equation,

∂S/∂t + H(q, ∂S/∂q, t) = 0,    (6)

where S is the upper limit of the action integral of the system taken along the minimum action trajectory of the system, q is its coordinates, and the Lagrangian, L, is defined by

L = T − V,    (7)

with T being the total kinetic energy and V the total potential energy of the system.
The Hamilton-Jacobi Equation is a particular form of eq. 1a that specifies how the (instantaneous) rate of change of energy depends on the immediately preceding energy state. As reformulated in quantized form by Erwin Schrödinger in 1925, it forms the underlying basis of Quantum Physics, from which all energetic processes are derived.
From the present viewpoint, the key aspect of the equation is that it is recursive, in that the derivative of the energy function is defined in terms of the current net energy state, which in turn is derived from applying the derivative to determine the infinitesimal increment towards the subsequent energy state (as represented in the general form by eq. 1b). Thus, the whole process is fundamentally an evolution from any given initial landscape through the subsequent states defined by the recursive nature of the equation.
As an aside, it is important to realize that the quantized nature of the Schrödinger Equation is applicable only to the detection process of energy absorption by matter. As the basis for energy propagation and the standing-wave structure of the energy that constitutes matter, the universe would grind to a halt if these processes were governed by quantized energy packets (like a cart with square wheels!). It must use the continuous form of integration of the derivative, as in eq. 1b, in order to account for the fundamentally continuous oscillatory nature of atomic structure. Quantum physics thus has an inherent paradox in its defining equation, beyond the commonly expressed paradox of light being analyzable both as a particle and a wave. The wave nature itself is only possible in time if the wave function is a continuous rather than quantized energy function.
The Arrow of Time
The standard approach of physics is to maintain that there is no fundamental arrow of time. The equations of physics, such as General Relativity (Einstein, 1915) or Quantum Field Theory (Witten, 1988), treat time as an unsigned dimension and operate equivalently forwards and backwards in time.
The usual approach to the arrow of time is to consider that it is not a property of individual particles but of the organization of ensembles of particles, as governed by the Second Law of Thermodynamics (Lebowitz, 1993). This law implies that average disorder always increases, such that it is the ensemble processes that instantiate a directional arrow of time toward increasing entropy. This formalism is, however, inherently problematic in that, while overall disorder increases, it is subject to local fluctuations such that some regions of material configurations experience increasing order (i.e., decreasing entropy), at least for certain periods. An example is the organization of living organisms, which take advantage of the entropic metabolic processes to build cellular structures of increasing order, such as pumping hearts and thinking brains. Does this mean that the arrow of time reverses in those regions? This is an absurd notion in relation to the space-time continuum at the heart of contemporary physics, in which time is a uniform dimension independent of the processes taking place within it. Moreover, the boundary at which time-symmetric behavior gives way to the entropic arrow is fuzzy and indefinite. Can two particles have disorder? Three? At what point does the entropy concept kick in? Given these indeterminacies, the conventional view of the arrow of time is incoherent and self-contradictory.
In the present conceptualization of time, conversely, the arrow is provided by the process of the derivative operator operating within any system. While the differential equation specifying the behavior of any system is reversible in principle, each process gets started at some point and can only continue in that starting direction. Although each process may have evolved out of a prior process, the fundamental direction of the sequence of processes is set by the initial conditions of the whole sequence, namely the Big Bang. Thus, according to the present conceptualization, all subatomic particles, atoms, molecules, cells, organisms, astronomical bodies, galaxies, and superclusters each have an individual arrow of time inherited from their origin in the Big Bang, regardless of how they are related to an ensemble of increasing or decreasing disorder among its components (Note 1). It may be a long haul of billions of years back to the origin that determined the direction of the differential equation roll-out that defines the particular process, but that process is nevertheless ineluctably directional and cannot be reversed, once started. Thus, in this conception, time's arrow does not depend on some arbitrary construct of what elements to include in the ensemble, but is specific to each individual process.
This conceptualization helps to provide a formal basis for new views of the arrow of time promulgated by philosophers such as Maudlin (2012), that it is a fundamental asymmetry of the time dimension, distinguishing it from the symmetry and interchangeability of the spatial dimensions. His derivation of this directionality of time's arrow is essentially the Moorean position that it is the common-sense view that everyone would maintain from everyday experience (Moore, 1925). It is self-evident that time flows by its nature, but nothing more can be said to derive the essence of the directional flow. The concept proposed here, that all of reality consists of the flow of energy in its variety of forms, puts this evidence from human experience on a firm philosophical foundation: that reality is flow "from the ground up". In this view, the equations of physics are symmetric abstractions of an inherently asymmetric process, which devolve to a symmetric form for analysis in the simplest cases, but not in the general case.
The Multiplicity of Process Time
On the other hand, the process concept of the time derivative as the core generator of time, as developed herein, carries with it the negation of the concept of time as a unitary dimension so central to the conventions of contemporary physics. If time, as we understand it through the integral form of eq 4b, is derived from the processes generating the value of the instantaneous derivative, the relativity of time so derived applies to every process that generates it, and in particular to every consciousness; indeed, to every level of description of the process, since processes are complex and subject to multiple levels of description. Moreover, the time defined by each process will vary with the current rate of that process level, as it speeds up or slows down with the various influences at play in the process. Thus, the common understanding that subjective time as experienced by conscious humans passes more quickly or more slowly in particular circumstances represents the valid definition of time for that (subjectively accessed) process. This is the full relativity of the inherent concept of time. From the point of view of each process, time derives from the fluctuating activity of that process, as experienced by that process (if it is capable of internal experience, as are the processes of our brains).
Thus, time is not the uniform dimension of the physics abstraction, but the concrete playing-out of the vagaries of each local process throughout the Universe. As above, one can attempt to recover that universal abstraction by considering the net functioning of all the processes throughout the universe, but this is defeated in practice by the limitations of light transmission, and does not remove its inherently process-defined basis. As in the practice of physics, one can also attempt to "measure" time by focusing on highly regular processes that are subject to minimal perturbation by external influences, but this is merely imposing the Platonic concept of time as the regular basis domain of other physical processes on the empirical paradigm. It is not removing the fact that this is a theoretical choice of how to proceed in understanding the chaos of processes that constitute the Universe. Ultimately, Newtonian, or Einsteinian, time is no more real than any other abstract concept (such as God, or morality); the time specified by Physics is just a convenient metric in which to characterize processes, not an absolute external reality.
The unavoidable consequence of this derivation is that the entire corpus of large-scale General Relativity would have to be reinterpreted in these process terms. Indeed, the perturbation by the various forces would be a basis for some kind of relativistic distortion that could be developed into an alternative theory of relativity. But, rather than a distortion of some notional, and one could say implausible, concept of the "fabric" of space-time, it would now become a distortion of each kind of syncytium of local energy fields, giving a more concrete basis from which to develop a formal theory than the implausible basis of a distortable "fabric" of space-time.
The standard approach to the "measurement" of time (a phrasing that assumes it to be a physical property that can in principle be measured), is to choose some stable process such as the resonance of a cesium atom as an "Atomic Clock". It may seem that the concept of temporal stability implies an understanding of time to be the underlying variable within which to assess the stability over time.
However, this is not the case, because multiple processes may be compared with each other. If two people are asked to "keep time" independently of each other, their counts will soon drift apart, implying that at least one of them is an unstable process. If the procedure is repeated with two stable processes such as atomic resonators, however, they will be shown to remain in phase with each other over long intervals of time (as defined by the processes of the human measurers). Thus, their process stability can be established without reference to an extended time dimension (only to a local phase estimation), and we can choose to use their signature events (the number of phase peaks) as a yardstick for the stability of any other process (such as the human stream of consciousness) without recourse to the concept of an underlying domain of universal time.
Time as Fractal
This concept of process time leads to surprising structural implications of a fractal nature. The essence of fractals is that they are recursive, folding back on themselves in the domain that they inhabit. The classic example of a fractal is a 1D process that moves forward or backward in equal steps at random (or drawn from a symmetric statistical distribution of step sizes), known as a 1D random walk. The binary (1, -1) random walk has the classic property of a 1/p distribution of run-lengths p of going forward and backward, and the corresponding 1/f frequency distribution when transformed to the frequency domain (spatial frequency, or temporal frequency, according to whether the dimension is space or time). The process tends to drift away from its starting point with the square root of the number of steps but is certain to eventually return to its starting point, and to do so repeatedly as it progresses (Polya, 1921). Thus, the 1D random walk is essentially recursive in nature.
A 2D random walk is the classic case, and is still certain to ultimately return to its starting point.
However, the general case in nature as we experience it is for processes to proceed in three dimensions (such as atoms in space, or objects in our interactive environment). For a 3D random walk, the probability of returning to any given starting point falls to only 34%, with the complementary probability of 66% of the path drifting away from the starting point forever. And this is an upper bound for the probability of finding any other point in the process space. With increasing dimensionality of the process space for more complex processes, such as a brain, the probability of reaching any particular state progressively diminishes; this probability decreases (slowly) to 0.19, 0.13, 0.10, 0.09, and 0.07 as the dimensionality of the process space increases from 4 to 8 dimensions (Montroll, 1956).
In other words, if a turbulent process has more than five independent parameters, there is a greater-than-90% chance that its behavior will get lost in its parameter space and never exhibit any particular form of behavior.
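The recurrence claims above can be checked empirically with a short Monte Carlo sketch in Python. The finite horizon means the 1D and 2D fractions only creep toward their asymptotic value of 1 as the number of steps grows, while the 3D fraction should already sit near the ~34% Polya value quoted in the text; the walk counts and step limits below are arbitrary choices made to keep the run short.

import numpy as np

rng = np.random.default_rng(1)

def return_fraction(dim, n_walks=5_000, n_steps=2_000):
    # Fraction of axis-aligned unit-step random walks that revisit the origin
    # within n_steps (a finite-horizon proxy for the Polya return probability).
    returned = 0
    for _ in range(n_walks):
        axes = rng.integers(0, dim, size=n_steps)
        signs = rng.choice((-1, 1), size=n_steps)
        steps = np.zeros((n_steps, dim), dtype=np.int64)
        steps[np.arange(n_steps), axes] = signs
        positions = np.cumsum(steps, axis=0)
        if np.any(np.all(positions == 0, axis=1)):
            returned += 1
    return returned / n_walks

for d in (1, 2, 3):
    print(f"{d}D walk, return to origin within 2000 steps: {return_fraction(d):.2f}")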
Figure 2. A One-Dimensional Random-Walk Fractal
(Source: http://www.turingfinance.com/hacking-the-random-walk-hypothesis/)
To reiterate, the idea is that time is not an independent dimension but is an outcome of the evolution of a process specified by a differential equation from an infinitesimal change kernel, dt. Time is derived from the integral of dt. In the simplest form of process, the integral of dt is just t, so a simple process can generate a linear dimension of time as it evolves through the integral, in the classic form t = ∫ dt.
But the world is full of nonlinear processes, such as the Navier-Stokes equation for turbulence. As these processes roll out to generate their time "integral", they do all kinds of wild things such as generating vortices that wrap around themselves in self-similar spirals and fractal structures.
Figure 3. Simulation of the Fractal Nature of Turbulence in Three Dimensions (Projected to Two)
So rather than viewing time as a Cartesian dimensional basis in which things take place, the implication of the daVincian concept is to view time as the coordinate-free Lagrangian that tracks the evolution within the process and is defined by each process as it evolves. The Lagrangian is a key transformation to coordinate-free specification that operates "inside" the process rather than outside it, so that rather than just the process being fractal in a Cartesian space-time, the Lagrangian space-time becomes itself fractal through the process evolution.
The best-known fractal is the Mandelbrot set (Brooks & Matelski, 1981), generated by the limits of the recursive equation

z → z² + c,    (8)

with the arrow indicating the recursion relation embedded in the equation.
A sample from the resulting map is shown in Figure 4, illustrating that the hypercomplex form of the fractal solution to this extremely simple recursion equation has numerous forms of self-similarity over indefinitely many scales of comparison. Although examples may be generated of the solution structure for any choice of initial parameters, it is impossible to write the equation for the complete solution. It simply has to be generated through iteration to determine its properties. The key to its complexity is the nonlinear operation of the squaring term on the z variable. Without this nonlinearity, the solution space would be a simple second-order feedback system.
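For readers who want to see the recursion of eq. 8 in action, the short Python escape-time sketch below iterates z → z² + c over a coarse grid of the complex plane and prints a crude text rendering of the set; the grid bounds and iteration cap are arbitrary.

import numpy as np

def escape_time(c, max_iter=200):
    # Number of iterations of z -> z**2 + c before |z| exceeds 2; points that never
    # escape within max_iter are treated as members of the Mandelbrot set.
    z = 0.0 + 0.0j
    for n in range(max_iter):
        z = z * z + c        # the single squaring nonlinearity that drives the fractal
        if abs(z) > 2.0:
            return n
    return max_iter

rows = []
for im in np.linspace(1.2, -1.2, 24):
    rows.append("".join(
        "#" if escape_time(complex(re, im)) == 200 else "."
        for re in np.linspace(-2.0, 0.8, 60)
    ))
print("\n".join(rows))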
The fractal function is thus an object lesson as to the nature of the ramifications of a recursive equation such as the Schrödinger Equation, which has essentially the same feedback structure as shown for the Mandelbrot equation (eq. 8). The implication is that the iterative progression of the Hamilton-Jacobi/Schrödinger Equation, which is often considered to underlie the whole of physics (Feynman, 1985), is liable to generate fractal solutions under appropriate initial conditions (Domany et al., 1983; Rodnianski, 2000; Johnson & Ordonez, 2011; Chen & Olver, 2013). One example is reproduced in Figure 4.
Fractal Time in Context
There have been many speculations about the elaborated nature of time, that it may evaporate at the event horizon of a black hole, or be subject to wormhole shifts allowing the velocity of light to be transcended, or have higher-dimensional loops that allow cyclic return to the same point in time (the Ground-Hog-Day Effect). However, none of these time aberrations have the complexity of fractal time, since they all operate on time as a unitary, one-dimensional local entity (even though it may be a one-dimensional curved line). Processes in general are not so locally restricted. The quantum-physical process of a propagating wavefront, though generally analyzed in terms of a plane wave in a uniform medium, can readily encounter inhomogeneous media and split into multiple differential components that cross paths and intersect. Although the intersections would be purely linear (additive and non-interacting) in the vacuum, such intersections in nonlinear media can indeed interact and generate further wavefront subcomponents, reminiscent of the fractal behavior in Figure 4. Whereas a process consisting of a defined particle can only take a single path, however elaborate, a wavefront can exhibit true fractal behavior (reminiscent of the proliferating broomsticks in the Disney "Sorcerer's Apprentice", which multiply progressively with every blow of the axe). While this may not be the typical form of behavior of wavefronts in everyday experience, it is part of the vocabulary of their potential behavior under sufficiently nonlinear conditions, even when defined by as simple a nonlinearity as the Mandelbrot equation.
Again, since time is defined as the integral of any developing process over the space traversed by the wavefront (eq. 5), the time defined by such a fractal process will itself take on a three-dimensional fractal character, potentially splitting into multiple temporal subcomponents propagating through 3D space with independent individual behaviors. Thus, the elaboration of this view of time as deriving from the instantaneous derivative of each potentially complex, nonlinear process in the universe brings time itself into the fractal domain of three-dimensional turbulent dynamic processes.
Conclusion
Although time has long been considered one of the fundamental quantities of Physics and is treated as something that exists externally to the observer and that can be measured by a variety of devices called "clocks", the present analysis takes a radically different view that treats processes as fundamental, with time derivable from the integral of the differential equations specifying each process. In this view, time is a mental or conceptual construct derived from the energetic processes of the physical world, a construct that is specific to each individual process and that is only derivable from such processes. As far as can be determined, this is a novel viewpoint that has never been articulated within the Western Philosophical view of time as a primary dimension of reality. Furthermore, the present account treats time as inherently a process (rather than an extended dimension). In this respect, it is fully compatible with the Emergent Aspect Dualism view (Tyler, 2014, 2019) of the universe as constituted entirely of processes, in an emergent hierarchy from the subatomic level to the conscious processing of the human mind. It is also the only theory of time that is grounded in the Hamilton-Jacobi/Schrödinger Equation of Quantum Physics, which, when applied to a sufficiently nonlinear physical process, can lead to the fractionation of outcomes that is describable as "fractal time".
|
v3-fos-license
|
2020-11-05T09:10:46.717Z
|
2020-11-02T00:00:00.000
|
228832096
|
{
"extfieldsofstudy": [
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4433/11/11/1182/pdf",
"pdf_hash": "668cc5831a7eb24ba083efca0e912641b3f28c6f",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43080",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "45101996b48f37181975b888ee06b862496c8bec",
"year": 2020
}
|
pes2o/s2orc
|
The Role of Samalas Mega Volcanic Eruption in European Summer Hydroclimate Change
In this study, the role of the AD 1258 Samalas mega volcanic eruption in summer hydroclimate change over Europe and the corresponding mechanisms are investigated through multi-member ensemble climate simulation experiments based on the Community Earth System Model (CESM). The results show that the CESM simulations are consistent with the reconstructed Palmer Drought Severity Index (PDSI) and the historical records of European climate. Europe experiences significant summer cooling in the first three years after the Samalas mega volcanic eruption, peaking in year 1 at −3.61 °C, −4.02 °C, and −3.21 °C over the whole of Europe, Southern Europe, and Northern Europe, respectively. The summer surface air temperature (SAT, °C) changes over the European continent are mainly due to the direct weakening of shortwave solar radiation induced by volcanic aerosol. The summer precipitation over the European continent shows an obvious dipole distribution with a north-south reverse phase. Precipitation increases by up to 0.42 mm/d in year 1 over Southern Europe, while it decreases by 0.28 mm/d in year 1 over Northern Europe. Both simulations and reconstructions show that the centers with the strongest increase in precipitation are consistently located in the Balkan and Apennine peninsulas along the Mediterranean coast of Southern Europe, while the centers with the strongest precipitation reduction are mainly located in the British Isles and Scandinavia over northwestern Europe. A negative response of the North Atlantic Oscillation (NAO), with a significant positive sea level pressure (SLP) anomaly in the north and a negative SLP anomaly in the south, is excited in summer. The lower-tropospheric wind anomaly caused by the negative phase of the NAO in summer affects the water vapor transport to Europe, resulting in the distribution pattern of summer precipitation in Europe: drier in the north and wetter in the south. The knowledge gained from this study is crucial to better understand and predict the potential impacts of a single mega volcanic eruption on future summer hydroclimate change in Europe.
Introduction
Volcanic eruption is a major natural cause of interannual to multiannual climate change and has affected human society [1]. Understanding the climate response to volcanic eruptions is important for explaining past climate change [2] and predicting seasonal climate after future volcanic eruptions [3,4]. Especially, the mega volcanic eruption has potentially serious impacts on climate change and ecosystems, needing to attract more attention [5].
The AD 1258 Samalas mega volcanic eruption in Indonesia is the largest sulfur-rich volcanic eruption of the Common Era [5][6][7][8][9]. Sedimentological analyses of the deposits confirm the exceptional scale of this event, which had both an eruption magnitude and a volcanic explosion index of 7 [6]. Previous climate simulations suggested that the Samalas mega volcanic eruption and the following three large sulfur-rich volcanic eruptions in the 13th century triggered the Little Ice Age [10,11]. Recent research also showed that if other external forcings and new volcanic disturbances are excluded, it takes almost two decades for the global and hemispheric cooling caused by Samalas mega volcanic eruption to completely disappear [5]. Based on proxy records and historic evidence, previous studies suggested that the Samalas mega volcanic eruption has a great impact on climate and social economy in Europe [6,7,9,12,13]. Using historical documents, ice core data, and tree ring records, Guillet et al. [6] reconstructed the spatial and temporal climate response of Samalas volcanic eruption and found that Western Europe experienced strong cooling. The medieval chronicles highlight an unusually cold summer, with continuous rain, accompanied by devastating floods and poor harvests [12]. A large number of medieval European documents, including the 'Monumenta Germaniae Historica,' 'Rerum Britannicarum Medii Aevi Scriptores,' and 'Recueil des historiens des Gaules et de la France,' all reported that the cold, incessant rainfall and unusually high cloudiness in AD 1258 prevented crops and fruits from ripening [6]. The Norman 'Notes of Coutances' recorded: 'There was no summer during summer. The weather was very rainy and cold at harvest time, neither the crop harvest nor the grape harvest were good. Grapes could not reach maturity; they were green, altered and in poor health' [6]. The medieval chronicles in Northern Europe recorded the initial warming in the early winter of AD 1258 after the Samalas eruption, followed by extensive wet and cold climatic conditions in AD 1259, which may have affected crops and contributed to the beginning and severity of famines in some regions of the Northern Hemisphere at that time [7]. Archaeologists determined that the mass burial of thousands of medieval skeletons in London dates back to AD 1258 [14], which may be related to the disturbance in the Northern Hemisphere climate caused by the Samalas mega volcanic eruption [9]. The most serious socio-economic consequences reported at the time of the Samalas mega volcanic eruption came from England, where the famine caused by two consecutive years of poor harvests (AD 1256-1257), high prices, and speculation in AD 1258 may have killed about 15,000 people in London alone [14]. Several sources indicate that in AD 1258 and 1259, parts of Europe (Kingdom of France, Kingdom of England, Holy Roman Empire, Iberian Peninsula) experienced severe food shortages and survival crises [6]. However, the research about the Samalas eruption stated above is mainly based on proxy data and reconstructions. There is currently a lack of research on the response of the European summer climate to the Samalas mega volcanic eruption based on simulation. Further research is needed on the impact mechanisms of Samalas mega volcanic eruption on the European climate.
Large volcanic eruptions inject sulfur gases into the stratosphere, which are converted into sulfate aerosols and cool the surface by blocking the incoming solar radiation, thus affecting climate [15]. In Europe, the response of winter hydroclimate to volcanic eruptions resembles a positive phase of the North Atlantic Oscillation (NAO) [16-18]. However, the impact of volcanic eruptions on the summer hydrological climate over Europe is not fully understood, especially for mega volcanic eruptions such as Samalas. Reconstruction results show that larger volcanic eruptions are more likely to cause wetting and drying responses in Europe than smaller eruptions [19]. In addition, multi-proxy data show that the climate in Europe at the end of the 20th century and the early 21st century may be warmer than at any period over the past 500 years [20]. Therefore, it is very important to have an in-depth understanding of how stratospheric aerosols generated by the Samalas mega volcanic eruption change the hydrological conditions in Europe and potentially offset warming.
What role does the Samalas mega volcanic eruption play in the summer hydroclimate changes in Europe? What are the differences in the response of the summer climate to the Samalas event in different sub-regions of Europe? What are the mechanisms behind these influences? To address these questions, we carried out multi-member ensemble climate simulation experiments on the AD 1258 Samalas mega volcanic eruption based on Community Earth System Model (CESM). Therefore, we will also offer a more comprehensive understanding of the historical impacts and mechanisms of Samalas mega volcanic eruption on European summer hydroclimate changes from the perspective of simulation research.
The organizational structure of this paper is as follows: Section 2 introduces the model and data. Section 3 shows the detailed response and mechanisms of European summer hydroclimate to Samalas mega volcanic eruption. Section 4 presents the main conclusions and discussion of this study.
Firstly, we carried out a 2400-year control experiment (CTRL), which used fixed external forcing conditions of AD 1850, with the first 400 years treated as a spin-up run [27]. Then, the Samalas mega volcanic experiments (VOL), consisting of an 8-member ensemble of 20-year simulations, were performed using 8 different initial conditions adopted from the CTRL. The volcanic forcing was the only changing external forcing in the VOL. The Samalas volcanic forcing was added in the 11th model year of each VOL member; i.e., there was no volcanic disturbance in the first 10 model years of VOL, and the 11th model year in VOL was the Samalas mega volcanic eruption year. The volcanic forcing used to drive the VOL in this study was the reconstructed AD 1258 Samalas mega volcanic forcing based on the Ice-core Volcanic Index 2 [30].
In this study, the Samalas mega volcanic eruption was assumed to begin on April 1st. The eruption year and the first year following the Samalas mega volcanic eruption were named year 0 and year 1, respectively, with the same naming scheme for other years. The summer referred to June-August (JJA) in this study. That means the summer of year 0 referred to the JJA of the eruption year. The anomalies in each VOL experiment were calculated with respect to the 10 years before the Samalas mega volcanic eruption.
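As an illustration of this bookkeeping, the sketch below computes JJA means and anomalies relative to the 10 pre-eruption summers from a monthly array; the array layout and variable names are assumptions made for the example, not part of the paper or of CESM output conventions.

```python
import numpy as np

def jja_anomalies(monthly, eruption_year_index, baseline_years=10):
    """Compute JJA-mean anomalies relative to the pre-eruption baseline.

    monthly : array of shape (n_years, 12, nlat, nlon), monthly means.
    eruption_year_index : index of the eruption year ("year 0").
    Returns an array of shape (n_years, nlat, nlon) of JJA anomalies.
    """
    # June-August are months 6-8, i.e. indices 5:8 on a 0-based month axis.
    jja = monthly[:, 5:8].mean(axis=1)

    # Baseline: the `baseline_years` summers immediately before the eruption.
    start = eruption_year_index - baseline_years
    baseline = jja[start:eruption_year_index].mean(axis=0)

    return jja - baseline

# Example with synthetic data: 20 model years, eruption in model year 11 (index 10).
rng = np.random.default_rng(0)
fake = rng.normal(size=(20, 12, 4, 8))
anom = jja_anomalies(fake, eruption_year_index=10)
print(anom.shape)  # (20, 4, 8); index 10 corresponds to "year 0"
```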
Reconstruction Data
The 'Old World Drought Atlas' (OWDA), an annual tree-ring-based June-August reconstruction of self-calibrating Palmer Drought Severity Index (PDSI) over Europe and the Mediterranean Basin during the Common Era [31] was used as the hydrological index to validate the simulated summer hydroclimate changes in Europe. Instead of being based purely on precipitation, the PDSI was based upon a primitive water balance model. The basis of the index was the difference between the amount of precipitation required to retain a normal water-balance level and the amount of actual precipitation [32]. The OWDA data used the PDSI unit, which used positive values to indicate wet conditions and negative values to indicate dry conditions.
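The operational self-calibrating PDSI is far more elaborate than this, but the toy index below illustrates the stated principle of comparing actual precipitation with the precipitation needed to keep a simple water balance at its normal level; the bucket capacity, the "normal" fraction, and all inputs are invented for illustration.

```python
import numpy as np

def toy_drought_index(precip, pet, capacity=100.0, normal_fraction=0.7):
    """Very simplified, PDSI-like moisture index (illustration only).

    precip, pet : 1-D arrays of monthly precipitation and potential
                  evapotranspiration (same units, e.g. mm/month).
    Positive values indicate wetter-than-needed months, negative drier.
    """
    storage = capacity * normal_fraction
    index = np.zeros_like(precip, dtype=float)
    for t in range(len(precip)):
        # Precipitation that would keep storage at its "normal" level
        # after evaporative demand is met.
        required = pet[t] + max(capacity * normal_fraction - storage, 0.0)
        index[t] = precip[t] - required
        # Update the bucket, bounded by 0 and its capacity.
        storage = min(max(storage + precip[t] - pet[t], 0.0), capacity)
    # Standardize so the index is dimensionless, loosely like PDSI.
    return (index - index.mean()) / index.std()

p = np.array([60, 20, 10, 80, 90, 30, 15, 70, 65, 25, 55, 40], dtype=float)
e = np.full(12, 50.0)
print(np.round(toy_drought_index(p, e), 2))
```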
Comparison of Reconstruction and Simulation
In order to validate the summer hydroclimate change in the historical period simulated by the CESM, we compared the ensemble mean precipitation simulation results in VOL with the reconstructed OWDA PDSI index (Figure 1). We focused on the ensemble mean results of multi-member simulations when comparing with the reconstructed data. By averaging out the individual simulations, the internal variability was removed, and the impact of Samalas mega volcanic forcing was highlighted, which affects the comparison with reconstructions that are influenced by the combined effects of external forcing and internal variability. However, the magnitude of the Samalas mega volcanic eruption was extremely large, and its impact on the climate was also very strong. Compared with the internal variability, Samalas mega volcanic forcing was likely to dominate the summer hydroclimate changes in Europe after the eruption, at least in the short term. Furthermore, Zanchettin et al. [33] clarified the relative role of forcing uncertainties and initial-condition unknowns in spreading the climate response to volcanic eruptions, and they suggested that forcing uncertainties can overwhelm initial-condition spread in boreal summer. This means that it is relatively reasonable in this study to compare the dry and wet changes in European summer between ensemble mean simulation results and the reconstructed data. The reconstructed JJA OWDA PDSI reflects soil moisture conditions and primarily represents the warm-season hydroclimate over Europe [31]. As a relative index of drought, OWDA PDSI has a high degree of comparability across a broad range of precipitation climatologies [32]. Therefore, the OWDA provides a reconstruction of hydroclimatic variability that allows us to compare the simulated hydroclimate variability with the reconstructed dry and wet changes in Europe after the Samalas mega volcanic eruption. In this study, we used JJA precipitation changes to represent the simulated summer hydroclimate changes in Europe. Due to the different units of the simulated precipitation and the reconstructed PDSI, we standardized both datasets for the convenience of comparison. As can be seen from Figure 1, after the Samalas mega volcanic eruption, the fluctuations of precipitation in VOL and of the reconstructed PDSI index were relatively consistent. The simulation results were always within the range of two times the standard deviation of the PDSI. The correlation coefficient between the standardized JJA precipitation anomaly in VOL and the OWDA PDSI was 0.75 (at the 95% confidence level) over Europe (10° W-40° E, 35° N-70° N). Both reconstruction and simulations show that the hydrological changes in Europe were humid in the first two years after the Samalas mega volcanic eruption and then turned into drought.
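The core of this comparison is standardizing the two series and correlating them. A minimal sketch follows; the numerical values are placeholders rather than the paper's data, and the p-value comes from an ordinary Pearson test, which may differ from the authors' significance procedure.

```python
import numpy as np
from scipy import stats

def standardize(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# Placeholder series: European-mean JJA precipitation anomaly from the VOL
# ensemble mean and the OWDA PDSI over the same years (invented numbers).
vol_precip = np.array([0.11, 0.09, 0.01, -0.03, -0.05, -0.02, 0.00, 0.01])
owda_pdsi  = np.array([1.2,  0.9, -0.1, -0.6, -0.8, -0.3,  0.1,  0.0])

r, p_value = stats.pearsonr(standardize(vol_precip), standardize(owda_pdsi))
print(f"r = {r:.2f}, p = {p_value:.3f}")
```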
CESM has well simulated the dry and wet changes in Europe after the Samalas mega volcanic eruption.
Summer Precipitation Response to Samalas Mega Volcanic Eruption over Europe
Figure 2 shows the spatial response of European precipitation after the Samalas mega volcanic eruption in VOL from year 0 to year 2. As can be seen from the ensemble mean results in the VOL (Figure 2), after the Samalas mega volcanic eruption, the summer precipitation over the European continent shows an obvious north-south dipole distribution. There was a significant wetting response in year 0 and year 1 over Southern Europe (10° W-40° E, 35° N-50° N), especially along the Mediterranean coast, while a significant drying response was found over Northern Europe (10° W-40° E, 50° N-70° N). This spatial pattern of summer precipitation change still existed in year 2, but it was weakened. These summer precipitation spatial patterns were in agreement with Fischer et al. [34], Wegmann et al. [35], and Gao et al. [19], although their results also showed an averaged summer wetting response in northeast Europe after the tropical eruptions. Judging from the spatial distribution of the precipitation anomaly, the wetting in year 0 and year 1 in Southern Europe was significant, with precipitation increasing most in year 1. Although Northern Europe shows the characteristics of precipitation reduction during year 0-2, the Northern European land area only shows significant drought mainly in year 1. Interestingly, after the Samalas mega volcanic eruption, the simulation results of VOL show that the centers with the strongest increase in precipitation have always been located in the Balkans and Apennine peninsulas over Southern Europe, and the centers with the strongest precipitation reduction are mainly located in the British Isles and Scandinavia over northwestern Europe. The results drawn from the VOL simulations are in general agreement with proxy data and reconstructions. Based on seasonal paleoclimate reconstructions, Rao et al. [36] found that wet conditions occur in the eruption year and the following three years in the western Mediterranean. Conversely, northwestern Europe and the British Isles experienced dry conditions in response to volcanic eruptions. This good consistency once again indicates that the simulation results in VOL can well explain the characteristics and causes of summer hydroclimate changes in Europe after the Samalas event recorded in historical documents and reconstructions.
In order to better quantify the summer precipitation response in different European regions to the Samalas mega volcanic eruption, we divided Europe into two sub-regions, Southern Europe (10° W-40° E, 35° N-50° N) and Northern Europe (10° W-40° E, 50° N-70° N), according to the spatial distribution characteristics of precipitation changes after the Samalas event (Figure 2), and calculated the average precipitation responses of the whole of Europe and each sub-region (Figure 3). As shown in the time series in Figure 3, for the whole of Europe, the ensemble mean precipitation increased significantly in year 0 (peaking at 0.11 mm/d) and year 1, reaching the 95% confidence level, and the European mean precipitation anomaly returned to a normal state in year 2 (Figure 3a). Gao et al. [19] also found similar European precipitation changes after tropical volcanic eruptions using the reconstructed Old World Drought Atlas (OWDA) [31]. After the Samalas mega volcanic eruption, the precipitation in Southern Europe increased significantly (at the 95% confidence level) in the first three years (year 0-2) and peaked with a value of about 0.42 mm/d in year 1 (Figure 3b). A large number of medieval European chronicles and documents also reported that the incessant rainfall in AD 1258 prevented crops and fruits from ripening, resulting in severe food shortages and survival crises in AD 1258 and 1259 in parts of Europe such as the Holy Roman Empire and the Iberian Peninsula [6,7,12]. In contrast, the precipitation in Northern Europe decreased significantly (at the 95% confidence level); the drought was most severe in year 1, with the precipitation anomaly peaking at −0.28 mm/d (Figure 3c). The different precipitation responses of Northern and Southern Europe to the Samalas mega volcanic eruption can explain the precipitation changes in the whole of Europe to some extent. In year 0 and year 1, the overall wetting in the whole of Europe was due to the fact that the significant precipitation increase in Southern Europe was stronger than the precipitation decrease in Northern Europe, which dominated the increase of precipitation in the whole of Europe in the time series. However, the precipitation increase in Southern Europe was almost equivalent to the precipitation decrease in Northern Europe in year 2, which made the change of spatial average precipitation in the whole of Europe less obvious.
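The regional averages quoted here are simple area means over the two latitude-longitude boxes. A sketch of a latitude-weighted box mean is given below with invented grids and data; the paper does not describe its averaging code, so this is only one plausible implementation.

```python
import numpy as np

def box_mean(field, lats, lons, lat_range, lon_range):
    """Area-weighted mean of `field` (nlat, nlon) over a lat/lon box."""
    lat_mask = (lats >= lat_range[0]) & (lats <= lat_range[1])
    lon_mask = (lons >= lon_range[0]) & (lons <= lon_range[1])
    sub = field[np.ix_(lat_mask, lon_mask)]
    # Weight each grid row by cos(latitude) to account for converging meridians.
    weights = np.cos(np.deg2rad(lats[lat_mask]))[:, None]
    weights = np.broadcast_to(weights, sub.shape)
    return np.average(sub, weights=weights)

lats = np.arange(30.0, 75.0, 1.0)
lons = np.arange(-15.0, 45.0, 1.0)
precip_anom = np.random.default_rng(1).normal(size=(lats.size, lons.size))

south = box_mean(precip_anom, lats, lons, (35, 50), (-10, 40))
north = box_mean(precip_anom, lats, lons, (50, 70), (-10, 40))
print(f"Southern Europe: {south:+.2f}  Northern Europe: {north:+.2f}")
```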
Summer Temperature Response to Samalas Mega Volcanic Eruption over Europe
The ensemble mean changes of summer surface air temperature (SAT, °C) anomaly after the Samalas mega volcanic eruption in VOL are shown in Figure 4. Europe experienced significant cooling in the first three years (year 0-2) after the Samalas eruption (Figure 4a-c). These results were in agreement with Fischer et al. [34], who analyzed the European summer climatic signal following 15 major tropical volcanic eruptions over the last 500 years based on multi-proxy reconstructions, and suggested that the average influence of 15 major volcanic eruptions was a significant European continental scale summer cooling during the first and second post-eruption years. The strongest and highly significant cooling signal was found in the summer of year 1 after the eruption (Figure 4b). Spatially, the sharp cooling response was concentrated in Southern Europe, which was significantly stronger than in Northern Europe in year 0. Then, the entire European land area experienced a sharp drop in temperature in year 1. In the summer of year 2, the cooling over the European continent was still significant, but it was obviously weaker than before, and the areas with significant cooling retreated to the interior of Eastern Europe.
The time series of summer SAT anomaly over the whole of Europe, Southern Europe, and Northern Europe are shown in Figure 5. The whole of Europe and each sub-region show similar SAT variations, with significant cooling anomalies (at the 95% confidence level) in the first three years after the Samalas eruption (Figure 5a-c). For the whole of Europe (Figure 5a), the summer cooling peaks during the eruption year and the first year after the eruption, which was in agreement with the European summer temperature response to the strong tropical volcanic events over the last millennium reported by Luterbacher et al. [37] using the Paleo Model Intercomparison Project Phase 3 (PMIP3) climate model simulations. The largest significant SAT reduction appears in year 1 over the whole of Europe, Southern Europe, and Northern Europe, with values of −3.61 °C, −4.02 °C, and −3.21 °C, respectively. From year 0 to year 2, the cooling in Southern Europe has always been stronger than that in Northern Europe. Based on historical documents, ice core data, and tree ring records, previous studies also found that Europe experienced an unusually cold summer after the Samalas mega volcanic eruption [6,7,12].
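The 95% confidence statements rest on the eight-member ensemble. One simple way to test whether an ensemble-mean anomaly is distinguishable from zero is a one-sample t-test across members, sketched below with made-up member values; the paper does not state which significance test it actually used.

```python
import numpy as np
from scipy import stats

# Placeholder: European-mean JJA SAT anomaly (degC) in year 1 for each of
# the 8 VOL members, relative to their own 10-year pre-eruption baselines.
member_anomalies = np.array([-3.4, -3.9, -3.5, -3.8, -3.6, -3.3, -3.7, -3.7])

t_stat, p_value = stats.ttest_1samp(member_anomalies, popmean=0.0)
print(f"ensemble mean = {member_anomalies.mean():.2f} degC, "
      f"t = {t_stat:.1f}, p = {p_value:.1e}")
# p < 0.05 would mark the cooling as significant at the 95% level.
```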
Mechanisms of European Summer Hydroclimate Changes after the Samalas Mega Volcanic Eruption
The surface heat flux was analyzed to understand the mechanisms underlying the response of the European summer hydroclimate to the Samalas mega volcanic eruption. After the eruption, the surface net shortwave flux (FSNS) over the European continent decreased markedly (Figure 6a-c). The spatial pattern variation of shortwave radiation was similar to that of temperature. The FSNS decrease in Southern Europe was larger than that in Northern Europe in year 0, and it was significantly reduced in both Southern Europe and Northern Europe in year 1. Then, the FSNS decrease was weakened and retreated to Eastern Europe in year 2. The pattern correlation coefficients between the SAT (Figure 4) and FSNS (Figure 6a-c) are 0.94, 0.86, and 0.39, respectively, in year 0, year 1, and year 2 over the European land area. This suggests that the summer SAT changes over the European continent are largely due to the release of a large amount of sulfur dioxide into the stratosphere and its conversion into sulfate aerosol after the eruption. Volcanic aerosols weaken the shortwave solar radiation reaching the surface in a short period of time, resulting in a significant decrease in the net shortwave radiation at the surface of the European land area, which lasts for 2-3 years (Figure 6a-c). The decrease of solar radiation directly leads to the drop in temperature in year 0-2. Compared with shortwave radiation, the change of surface net longwave radiation (Figure 6d-f) over the European continent was much weaker, with values of −4.74 W/m², −2.56 W/m², and 1.30 W/m² in year 0, year 1, and year 2, respectively, after the Samalas mega volcanic eruption.
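The pattern correlation coefficients quoted above (0.94, 0.86, and 0.39) are spatial correlations between the SAT and FSNS anomaly maps. One common, area-weighted way to compute such a number is sketched below; this is an assumed formulation, not the authors' code.

```python
import numpy as np

def pattern_correlation(field_a, field_b, lats):
    """Area-weighted spatial correlation of two (nlat, nlon) anomaly maps."""
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(field_a)
    w = w / w.sum()
    a = field_a - np.sum(w * field_a)   # remove the area-weighted mean
    b = field_b - np.sum(w * field_b)
    cov = np.sum(w * a * b)
    return cov / np.sqrt(np.sum(w * a * a) * np.sum(w * b * b))

lats = np.arange(35.0, 71.0, 1.0)
rng = np.random.default_rng(2)
sat = rng.normal(size=(lats.size, 50))
fsns = 0.9 * sat + 0.3 * rng.normal(size=sat.shape)  # correlated by construction
print(f"pattern correlation: {pattern_correlation(sat, fsns, lats):.2f}")
```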
Following the mega volcanic eruption of Samalas, the sensible heat flux (Figure 6g-i) over the European land area mainly decreased, especially over Southern Europe, but did not decrease over the Atlantic Ocean, which indicates that the decrease of SAT on the land is larger than that over the Atlantic Ocean, thus leading to a decrease of the land-sea thermal contrast. From year 1 to year 2, SHFLX continued to decrease in Southern Europe, while it gradually showed a slight increase in Northern Europe (Figure 6h,i). This may be the potential reason for the spatial difference between the summer precipitation increase in Southern Europe and the precipitation decrease in Northern Europe after the Samalas volcanic eruption. Using a range of climate modeling results, Myhre et al. [38] also pointed out that over the historical period, the sensible heat at the surface was gradually reduced in the CMIP5 models, and hence contributes to an increase in precipitation. Furthermore, the increase in surface latent heat flux (Figure 6j-l) over the North Atlantic region along the European coast, especially in year 1 and year 2, indicated an increase in Atlantic evaporation, which may lead to an increase in precipitation in Southern Europe. Figure 7 shows the sea level pressure (SLP) anomalies in summer after the Samalas mega volcanic eruption. The ensemble mean result shows a dipole-type response, with a significant positive pressure anomaly in the north and a negative pressure anomaly in the south. This is an obvious negative phase of the NAO. These anomalies are similar to the negative tropospheric Arctic Oscillation pattern in late winter and early spring after the Laki eruption found in Zambri et al. [39], which was manifested as an equatorward shift of the mid-latitude tropospheric jet. The NAO is a large-scale meridional oscillation of atmospheric mass between the subtropical high-pressure system near the Azores and the subpolar low-pressure system near Iceland [40]. It is widely considered to be the most important mid-latitude source of temperature and precipitation changes over Europe [41,42]. Previous studies [16-18,34] have shown that a positive phase of the NAO in boreal winter will be excited after a large tropical volcanic eruption. Here, our results further show that the NAO's negative response to the large tropical eruption in summer is totally different from that in winter.
Based on extratropical North Atlantic-European mean sea level pressure anomalies for 1881-2003, Folland et al. [43] proposed the summertime parallel of the winter NAO, known as the summer North Atlantic Oscillation (SNAO). This SNAO is defined as the first empirical orthogonal function (EOF) of observed summertime extratropical North Atlantic pressure at mean sea level; it has a smaller spatial extent than the winter NAO and is located farther north [43].
Compared with Folland et al. [43], we found that the SLP anomalies in summer after the Samalas mega volcanic eruption were more like the typical NAO-like negative phase, which shows that the meridional pressure gradient between Iceland and the Azores was weakened in summer (Figure 7). In addition, during year 1-year 2, this summer NAO negative phase gradually coincided with the SNAO negative phase pattern in Folland et al. [43] (Figure 7). Although the low-pressure center of the summer NAO was located further south in our results (Figure 7), this may be due to the fact that the short-term impact intensity and spatial scale of the external forcing of the Samalas mega volcanic eruption on the summer NAO were much greater than the impact of the gradual increase of greenhouse gases. Combining reconstructions with ECHAM5.4 simulation results, Wegmann et al. [35] also found that volcanic-induced cooling leads the summer NAO toward a negative phase, which causes the southward movement of the storm track and enhances moisture advection toward southern Europe.
In addition, the model simulated summer NAO in Wegmann et al. [35] was also located further south than the SNAO suggested by Folland et al. [43].
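Since the SNAO is defined as the leading EOF of summertime SLP, it may help to see how such a pattern is typically extracted: a singular value decomposition of the (time x space) anomaly matrix, as sketched below with placeholder data. This is a generic recipe, not the analysis used in the paper or in Folland et al. [43].

```python
import numpy as np

def leading_eof(anomaly, lats):
    """Leading EOF and principal component of an anomaly array.

    anomaly : (ntime, nlat, nlon) SLP anomalies.
    Returns (eof_pattern with shape (nlat, nlon), pc with shape (ntime,)).
    """
    nt, ny, nx = anomaly.shape
    # Weight by sqrt(cos(lat)) so the EOF reflects area, not grid density.
    w = np.sqrt(np.cos(np.deg2rad(lats)))[:, None]
    x = (anomaly * w).reshape(nt, ny * nx)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    eof = vt[0].reshape(ny, nx) / w          # undo the latitude weighting
    pc = u[:, 0] * s[0]                      # leading principal component
    return eof, pc

lats = np.arange(25.0, 71.0, 2.0)
lons = np.arange(-70.0, 41.0, 2.0)
slp = np.random.default_rng(3).normal(size=(120, lats.size, lons.size))
eof1, pc1 = leading_eof(slp, lats)
print(eof1.shape, pc1.shape)
```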
To further investigate the relationship between the summer hydroclimate over Europe and the circulation changes, the 850-hPa wind patterns associated with the negative NAO phase are also analyzed, as shown in Figure 7 (vectors). In the eruption year (year 0), the 850-hPa wind anomalies are characterized by a strong cyclone over the Balkans and Apennine peninsulas near the Mediterranean Sea (Figure 7a). Southwesterly airflow in the lower troposphere transports a large amount of warm and moist air from the Mediterranean to the Balkans, which is conducive to precipitation and makes Southern Europe near the Balkans form a precipitation increase center in year 0. There is also a cyclonic circulation over the Atlantic Ocean, but its location is to the west, so it transports less warm and wet Atlantic air to the Iberian Peninsula in southwest Europe (Figure 7a). At the same time, Northern Europe is controlled by a high-pressure anticyclone, so that cold polar air is blocked by the high pressure and brought to northern and central Europe, while warm air from the south cannot be transported to Northern Europe, which is not conducive to precipitation (Figure 7a). In year 1, the surface latent heat flux over the Atlantic Ocean increases (Figure 6k), indicating an increase in evaporation. At the same time, a strong low-pressure cyclonic circulation over the Atlantic Ocean extends eastward to the land area of Southern Europe, and southwest wind anomalies prevail in Southern Europe (Figure 7b). This circulation change helps to transport a large amount of warm and moist air from the Atlantic Ocean to the Iberian Peninsula in southwest Europe and more moist air from the Mediterranean Sea to the Balkans and Apennine peninsulas, resulting in a significant increase in precipitation throughout Southern Europe. In addition, as can be seen from Figure 7b, the southwest wind anomaly in the Mediterranean is stronger than that over the Atlantic Ocean, indicating that the Mediterranean Sea may be a more important source of water vapor for the increase of summer precipitation in Southern Europe than the Atlantic Ocean after the Samalas mega volcanic eruption. During the same period, Northern Europe is controlled by a strong high-pressure anticyclone with anomalous strong northerly winds, and warm air cannot be transported northward to Northern Europe, resulting in a significant reduction in precipitation there. In year 2, Southern Europe is basically controlled by low pressure, and the high pressure over Northern Europe weakens (Figure 7c). The circulation anomalies in year 2 are similar to those in year 1; however, the northerly winds over Northern Europe and the southerly winds over Southern Europe are both weakened (Figure 7c), which contributes to the weakening of drought in Northern Europe and wetness in Southern Europe.
Sufficient water vapor supply provides favorable conditions for continuous precipitation. The spatial patterns of the ensemble mean vertically integrated whole-level water vapor transport and its divergence in summer are shown in Figure 8. After the Samalas mega volcanic eruption, the anomalous convergence center was mainly distributed in Southern Europe, especially over the Balkans and Apennine peninsulas (Figure 8). Anomalous southerlies to the east of this convergence center enhanced the northward transport of water vapor to Southern Europe, increasing precipitation there (Figure 8). Anomalous southerlies from the Mediterranean carry more water vapor into Southern Europe. Comparatively speaking, the water vapor transported from the Atlantic Ocean to the Iberian Peninsula is not as strong as that from the Mediterranean Sea (Figure 8). In addition, Northern Europe is dominated by easterlies from the inland, and the anomalous divergence center is mainly distributed over this region, which suppresses precipitation there (Figure 8).
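The vertically integrated moisture transport and its divergence in Figure 8 follow a standard recipe: integrate q·u and q·v over pressure, then take the horizontal divergence on the sphere. A rough sketch is given below; the grids, level set, and field names are invented, and the finite differencing is deliberately simple.

```python
import numpy as np

G = 9.81           # gravitational acceleration, m s-2
R_EARTH = 6.371e6  # Earth radius, m

def moisture_flux_divergence(q, u, v, plev, lats, lons):
    """Vertically integrated moisture flux (qu, qv) and its divergence.

    q, u, v : (nlev, nlat, nlon) specific humidity (kg/kg) and winds (m/s).
    plev    : pressure levels in Pa, ordered top to bottom.
    """
    # Column integrals: (1/g) * integral of q*u dp  [kg m-1 s-1]
    qu = np.trapz(q * u, plev, axis=0) / G
    qv = np.trapz(q * v, plev, axis=0) / G

    lat_r = np.deg2rad(lats)
    dlon = np.deg2rad(lons[1] - lons[0])
    dlat = lat_r[1] - lat_r[0]
    coslat = np.cos(lat_r)[:, None]

    # Spherical divergence: (1/(a cos(lat))) * (d(qu)/dlon + d(qv*cos(lat))/dlat)
    d_qu = np.gradient(qu, dlon, axis=1)
    d_qv = np.gradient(qv * coslat, dlat, axis=0)
    return qu, qv, (d_qu + d_qv) / (R_EARTH * coslat)

plev = np.linspace(10000.0, 100000.0, 10)
lats = np.arange(30.0, 71.0, 2.0)
lons = np.arange(-15.0, 46.0, 2.0)
shape = (plev.size, lats.size, lons.size)
rng = np.random.default_rng(4)
q = np.abs(rng.normal(5e-3, 1e-3, shape))
u = rng.normal(0, 5, shape)
v = rng.normal(0, 5, shape)
qu, qv, div = moisture_flux_divergence(q, u, v, plev, lats, lons)
print(div.shape)  # positive values: divergence (drying); negative: convergence
```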
Conclusions and Discussion
In this study, the CESM multi-member ensemble climate simulations are used to analyze the role of AD 1258 Samalas mega volcanic eruption in the summer hydroclimate change over Europe and understand the corresponding mechanisms behind that, which intends to provide a more comprehensive understanding of the historical impact of Samalas mega volcanic eruption on the summer hydroclimate change in Europe from the perspective of simulation research. The major findings are as follows.
Both the reconstructed OWDA PDSI index and our simulations show that the hydrological changes in Europe were humid in the first two years after the Samalas mega volcanic eruption and then turned into drought. CESM has well simulated the dry and wet changes in Europe after the Samalas eruption.
The whole of Europe and each sub-region experience significant summer cooling in the first three years (year 0-2) after the Samalas eruption. The largest significant SAT reduction appears in year 1 over the whole of Europe, Southern Europe, and Northern Europe, with values of −3.61 °C, −4.02 °C, and −3.21 °C, respectively. Previous reconstruction studies also found that Europe experienced an unusually cold summer after the Samalas mega volcanic eruption. The summer SAT changes over the European continent after the Samalas eruption are mainly due to the direct weakening of shortwave solar radiation induced by the Samalas volcanic aerosol.
After the Samalas mega volcanic eruption, the summer precipitation over the European continent shows an obvious north-south dipole distribution. The precipitation increases by up to 0.42 mm/d in year 1 over Southern Europe, while it decreases by 0.28 mm/d in year 1 over Northern Europe. Both simulations and reconstructions show that the centers with the strongest increase in precipitation have always been located in the Balkans and Apennine peninsulas along the Mediterranean coast over Southern Europe, and the centers with the strongest precipitation reduction are mainly located in the British Isles and Scandinavia over northwestern Europe. Medieval European chronicles and documents also reported that the incessant rainfall in AD 1258 prevented crops and fruits from ripening, resulting in severe food shortages and survival crises in AD 1258 and 1259 in parts of Southern Europe, such as the Holy Roman Empire and the Iberian Peninsula.
The negative response of the NAO, with a significant positive SLP anomaly in the north and a negative SLP anomaly in the south, is excited in summer after the Samalas mega volcanic eruption. The lower-tropospheric wind anomaly caused by the negative phase of the NAO in summer affects the warm and moist air transport to Europe, resulting in the distribution pattern of summer precipitation in Europe, which is drying in the north and wetting in the south. Although the overall precipitation in Southern Europe increases after the Samalas mega volcanic eruption, humid summers in the Balkans and Apennine peninsulas are easier to maintain than those in the Iberian Peninsula over southwest Europe. Precipitation in southwest and southeast Europe comes from different sources of water vapor: southwest Europe relies on anomalous southwesterlies that transport water vapor from the Atlantic Ocean to the land, while southeast Europe benefits from a steady supply of water vapor from the Mediterranean Sea. This implies that after the Samalas mega volcanic eruption, the Mediterranean Sea may be a more important source of water vapor for the increase of summer precipitation over Southern Europe than the Atlantic Ocean.
In addition to the influence of summer NAO changes after the Samalas mega volcanic eruption, the hydroclimate response in Europe may be further complicated by the other influences of various modes of climate variability, such as the East Atlantic Pattern (EAP) and winter NAO teleconnection. Rao et al. [36] suggested that the impacts of volcanic eruption on spring-summer European hydroclimate are likely modulated by predisposing the EAP toward its negative phase. While the mechanisms for this are still uncertain, one possibility examined in model simulations suggests that its causes may originate in the tropics [36]. The role of these other variability modes in the summer hydroclimate changes in Europe after a single mega volcanic eruption (such as Samalas magnitude) will require further study.
At present, little attention is paid to the summer hydroclimate change in Europe after a single mega volcanic eruption, especially the non-uniform precipitation changes between different sub-regions of Europe. Our results suggest that a single mega volcanic eruption (of Samalas magnitude) can significantly change the summer hydroclimate in Europe. The knowledge gained from this research is crucial to better understand and predict the potential impacts of a single mega volcanic eruption on future European summer hydroclimate changes. Our study is also of reference value for assessing the implementation of stratospheric geoengineering over Europe.
|
v3-fos-license
|
2023-08-03T15:36:54.359Z
|
2022-03-01T00:00:00.000
|
260398950
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.17977/um031v45i12022p50-56",
"pdf_hash": "453cda493c33434299cb1b58a30329853e063a7e",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43081",
"s2fieldsofstudy": [
"Education",
"Computer Science",
"Business"
],
"sha1": "fd82976982ae6cd1ff20c5f55023b92630fe2992",
"year": 2022
}
|
pes2o/s2orc
|
Students Perception of Virtual Class-Based E-Learning as a Medium for Financial Accounting Learning
The COVID-19 pandemic has affected the education system, especially since the government issued the lockdown policy. However, it does not suspend learning activities, which constitutes a formidable challenge for both educators and students. The purpose of this research was to determine the effectiveness of the distance learning process through the use of Google Classroom in the financial accounting class at Public Vocational High School 1 of Pagerwojo, Tulungagung. This was quantitative research using a descriptive method and focusing on the evaluation of the learning process through online media. The research population was the eleventh-grade students majoring in Accounting and Financial Institutions (AFI). The research involved 32 AFI 1 students as the research sample. The research data were collected using questionnaires and analyzed using descriptive statistics. The research found that distance learning using Google Classroom is effective enough. This is corroborated by the fact that students prefer Google Classroom to other online media, because they feel that the features of Google Classroom are easy to understand and use.
INTRODUCTION
The academic world after the outbreak of the COVID-19 pandemic has changed since the government established the lockdown policy. In order to break the chains of COVID-19 transmission, the government has stopped the traditional learning process, at least on a temporary basis, and replaced it with an online one, namely distance learning. Information and communication technology (ICT) facilitates not only educative inclusion but also social inclusion through learning (Requena, 2015) because it has a deep meaning for learning processes (Swain & Hammond, 2011; Wood & Cattell, 2014). The recent learning system has had to adapt immediately to the conditions of the pandemic. Mahalakshmi & Radha (2020) stated that the pandemic has given rise to the establishment of online learning because learning processes have reached the phase of involving students in internet-based learning. E-learning, which is an important part of the 21st century's educational system (Sangrà, Vlachopoulos, & Cabrera, 2021), can be seen as a natural evolution toward distance learning, which means adopting electronic educational technologies in teaching and learning processes (Clark & Mayer, 2011; Sangrà et al., 2021; Grabinski et al., 2020). In an integrated system, e-learning portals containing any learning objects are enriched by multimedia and combined with the academic information system, communication, discussion, and other educational tools (Back et al., 2019; Rachman et al., 2021). According to Back et al. (2019), e-learning is presenting subject matter through electronic media, internet, intranet/extranet, satellite broadcasts, audio/videotapes, interactive TV, CD-ROM, and computer-based training (CBT). Back et al. (2019) stated that e-learning is the process of delivering subject matter to anyone, anyplace, and anytime, which involves using open technologies and creating flexible and distributed learning environments. E-learning is an innovative approach to fulfilling students' needs (Morgan, 2015; Sohrabi, Vanani, & Iraj, 2019). Some recent research finds that traditional, online, and hybrid learning systems make no essential differences in the learning outcomes of accounting, and in fact, most of the research expresses a clear preference for online learning over face-to-face learning (Fortin et al., 2019; Mccarthy et al., 2018). Some research on e-learning shows statistically significant results. For instance, the research by Abbad & Jaber (2014), which evaluates the effectiveness of e-learning, shows that on the whole students have positive perceptions about the adoption of an e-learning system. Diab & Elgahsh (2020), who focused on the adoption of e-learning by nursing students during the COVID-19 pandemic, found a significant negative correlation between the obstacles they were facing and their attitudes toward e-learning. Mohammed et al. (2020) conducted research on the adoption of online learning using various digital platforms in an effort to ensure the continuation of learning processes during the COVID-19 pandemic. This research found that a minor part of the respondents felt satisfied with the adoption of e-learning because it offered them new experiences. In contrast, although 40% of the respondents liked the adoption of online learning, more than half of the total respondents felt that the adoption of e-learning fell short of their expectations.
The learning medium used during the COVID-19 pandemic is Google Classroom. Why should Google Classroom be used? Mahalakshmi & Radha (2020) mentioned the pre-eminence of Google Classroom: 1) Google Classroom is a display of an online learning platform; 2) it goes through an easy process of installation; 3) it offers easy access to subject matter; 4) it provides document storage on Google Drive; 5) it is money-saving; and 6) it allows teachers to monitor students' progress. Google Classroom is an effective alternative for those teachers who need a digital platform in which they can present subject matter and hand out assignments during their adoption of distance learning. The application can also be a medium in which students hand in the assignments their teachers gave. It facilitates an interactive process of learning, especially in the distance learning system.
The learning activities of Public Vocational High School 1 of Pagerwojo are conducted online during the pandemic, using Google Classroom as the learning medium. In the beginning, the use of Google Classroom as the learning medium posed many obstacles for both teachers and students. It greatly facilitated the process of teachers' giving assignments to their students, but thus far this has not received appropriate feedback from students. This concrete fact suggests a strong need for research on the effectiveness of Google Classroom as a medium for online learning.
As a matter of fact, the effectiveness of Google Classroom has been a topic of some previous research, one of which is the research by Sabran and Edy Sabara, students of the Faculty of Engineering of Makassar State University. That research shows that the use of Google Classroom is effective enough as a medium for learning (Sabran & Sabara, 2019). Although sharing the same topic as the previous research, this research has different research subjects: the subjects of the previous research were students familiar with digital media, while the subjects of this research are vocational students majoring in accounting and financial institutions at Public Vocational High School 1 of Pagerwojo, who are by definition less familiar with digital media than university students majoring in electronics engineering.
The quality of a product can be assessed according to some criteria, namely the product's being useful, efficient, effective, satisfying, learnable, and accessible (Asnawi, 2018). One of the criteria for quality mentioned above is the effectiveness of a product. For the purpose of assessing the quality of Google Classroom, the research aims to examine the effectiveness of Google Classroom as a learning medium, with the students majoring in accounting and financial institutions as the research subjects. Basically, the research tries to determine whether or not Google Classroom is an excellent choice as an effective medium for learning during the pandemic, in which learning activities are conducted online. In particular, this research aims to present an empirical finding about the effectiveness of the use of Google Classroom in the online course in financial accounting at Public Vocational High School 1 of Pagerwojo, Tulungagung.
METHODS
The method used in this research was the descriptive method. Hennink, Hutter, & Bailey (2020) stated that descriptive research methods are used to identify free variables, either in one or more variables (independent or free variables), without comparing those variables or discovering their correlation with other variables. The research subjects were the eleventh-grade students majoring in AFI. The research involved a sample of 32 eleventh-grade students of AFI 1 as the respondents. The data collected were analyzed using descriptive statistics with the help of the SPSS software.
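As a small illustration of the descriptive treatment (frequencies and percentages) applied to the questionnaire data, the sketch below uses an invented response distribution for a single item; the study itself ran its statistics in SPSS, so this is only an analogous computation.

```python
import pandas as pd

# Hypothetical responses from 32 students to one questionnaire item:
# "Which online medium do you prefer for the financial accounting class?"
responses = (
    ["Google Classroom"] * 22 + ["WhatsApp"] * 5 +
    ["School website"] * 3 + ["Zoom Meeting"] * 2
)

counts = pd.Series(responses).value_counts()
percentages = (counts / counts.sum() * 100).round(1)

summary = pd.DataFrame({"n": counts, "percent": percentages})
print(summary)
```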
RESULTS AND DISCUSSIONS
1. Online Learning Using Google Classroom Media
Learning activities for the accounting subject in the expertise program of accounting and financial institutions at Public Vocational High School 1 of Pagerwojo were conducted online during the pandemic using Google Classroom. The data collected from respondents who filled in the Google Form showed that the learning process makes 50% of the students more comfortable, as the subject matter can be accessed everywhere. Meanwhile, 24.4% of them felt that there was nothing special in the use of Google Classroom, 23.3% felt it was interesting, and 2.3% of them considered it highly interesting. The data also showed that of all the online media used in distance learning, Google Classroom was the most preferred medium (68.8%), consecutively followed by WhatsApp (16.3%), school websites (8.1%), and Zoom Meeting (5.8%). However, 1.2% of the respondents preferred face-to-face learning to online learning. In case learning activities are still conducted online in the near future, Google Classroom will still be in use. The data showed that 62.1% of the respondents wanted Google Classroom to still be used in learning activities; 27.6% of them wanted the use of this medium to be interspersed with the use of other media; 8% of them wanted Google Classroom not to be used anymore; 1.1% of them wanted learning activities to be conducted offline (face-to-face learning); and another 1.1% of the respondents hoped for no more distance or online learning, for they considered face-to-face learning to be more effective.
The Readiness of Teachers and Students
The effective use of online learning media like Google Classroom requires not only an internet connection but also the readiness of both teachers and students. Their active participation during the learning process also determines whether the media they are using are effective or not. The data showed that 94.3% of the respondents were ready to undergo the learning process; they were well prepared to go through the online learning process using Google Classroom. Only a few of them (5.7%) had not prepared themselves to do so. Additionally, online learning also required teachers to be more active and innovative in presenting their lessons because in this learning system students received the subject matter without teachers' further explanation. This was highly important for the achievement of the learning outcomes, especially for such subjects as financial accounting. For more satisfactory outcomes, the learning process using Google Classroom needs to be supported by the use of other media. For example, teachers can use WhatsApp to give online student counseling in addition to the course contents they deliver on Google Classroom. The data showed that 79.5% of the teachers provided positive motivation for their students through WhatsApp; 18.2% of them offered students a chance to ask questions about the lesson; and only 1.1% of them offered nothing other than course contents on Google Classroom. The research found that online learning could produce significant results, in line with the previous studies by Abbad & Jaber (2014); Bahasoan et al. (2020); Besser et al. (2020); Diab & Elgahsh (2020); and Kusumaningrum et al. (2020).
Figure 6. The percentage of teachers giving motivation during online learning.
However, unfortunately, the establishment of online learning did not attain the optimal level since most of the students lived in the neighboring villages, which are mountainous regions with limited access to internet connection. Besides, some students were not ready for online learning because their handphones were not compatible with online learning media. Many of the respondents hoped that learning activities would be conducted face-to-face, but both teachers and students had to accept the fact that the learning activities would still be conducted online as a consequence of the work-from-home policy. This was in line with the findings of Ebaid (2020) and Kamalia (2021) about obstacles to the optimal level of online learning.
CONCLUSION
Despite its weaknesses, online learning is considered the available alternative during the COVID-19 pandemic. Online learning involves using online learning media. In this case, Google Classroom is preferred over other media such as Zoom Meeting or school websites. Online learning requires both teachers and students to acquire technological familiarity and literacy, although some of them are not yet ready for online learning.
In particular, online learning using Google Classroom is still an effective alternative for the delivery of the subject of financial accounting in Public Vocational High School 1 of Pagerwojo, provided that maximum use is made of other digital media to maintain and strengthen the interpersonal communication between teachers and students, so that students' learning interest and motivation remain intact during the pandemic.
|
v3-fos-license
|
2021-10-21T15:05:39.914Z
|
2021-10-18T00:00:00.000
|
239261344
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.researchsquare.com/article/rs-964991/latest.pdf",
"pdf_hash": "66d9004f3e199e28352b22da6100c460394ad35b",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43083",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "51d381087554f280e0fd8b9a8ea78e0dd22388c7",
"year": 2021
}
|
pes2o/s2orc
|
Long Noncoding RNA AL161729.4 Acts as an miR-760 Sponge to Enhance Colon Adenocarcinoma Proliferation via Activating PI3K/Akt Signaling
Background: Colon adenocarcinoma is one of the most common gastrointestinal malignancies, with poor prognosis and high mortality. The mRNA-miRNA-lncRNA regulatory network mediated by m6A methylation plays an important role in a variety of cancers, including colon adenocarcinoma. Methods: We integrated and analyzed the gene expression data and clinical information of 473 patients with colon adenocarcinoma and 41 normal samples in The Cancer Genome Atlas database. Luciferase reporter gene experiments were used to detect the targeted binding among the gene, miRNA, and lncRNA. Real-time PCR and proliferation assays were performed to examine the biological functions of the gene, miRNA, and lncRNA. Results: A risk model and nomogram that could accurately predict the survival time of patients were constructed through bioinformatics analysis. HYOU1, AL161729.4, and miR-760 are differentially expressed between COAD patients and normal samples, are significantly related to survival, and show targeted binding among the three. Conclusions: The HYOU1-AL161729.4-miR-760 ceRNA regulatory network could regulate the proliferation of SW620 cells through the PI3K/Akt signaling pathway.
Background
Colorectal cancer (CRC) is one of the most common gastrointestinal malignancies with poor prognosis and high mortality; it is also the second leading cause of cancer-related deaths worldwide [1]. More than 2.2 million new cases and 1.1 million deaths are predicted by 2030, and the global burden caused by this is also increasing year by year [2]. Approximately 20%-25% of patients with CRC have metastases to distant organs at the time of initial diagnosis [3]. As a key tool for early detection, biomarkers have made great progress in the last few decades and have had a positive impact on the treatment of patients with CRC [4]. Colon adenocarcinoma (COAD) is the most common histological subtype of CRC [5].
N6-methyladenosine (m6A) methylation is one of the most common RNA modifications and plays an important role in various life activities and diseases [6][7][8][9]. The m6A modifications mainly include methyltransferase (m6A "writer"), demethylase (m6A "eraser"), and m6A "reader" protein [10,11]. Recent studies have shown that m6A methylation plays a key role in cancer through various mechanisms and can be used as a marker for early cancer diagnosis [12,13]. m6A-related lncRNA and miRNA play an important role in a variety of diseases, including cancer [14,15]. MicroRNA (miRNA) is a small RNA that regulates the expression of complementary messenger RNA. It plays an important role in developmental timing, cell death, cell proliferation, hematopoiesis, and patterning of the nervous system [16]. As RNAs that are longer than 200 nucleotides and do not encode proteins, lncRNAs also play an important role in a variety of life activities and diseases [17]. Recent studies have shown that competing endogenous RNA (ceRNA) networks based on mRNA-miRNA-lncRNA play an essential role in various diseases, including cancer, and are expected to become new early cancer diagnostic markers and therapeutic targets [18][19][20]. Taking m6A-related genes and lncRNA as the entry point, constructing and verifying a new mRNA-miRNA-lncRNA regulatory network is urgent for the early diagnosis and treatment of COAD.
In this study, we downloaded and compiled the gene, miRNA, and lncRNA expression data of patients with COAD and their corresponding clinical information from The Cancer Genome Atlas (TCGA) database. A new COAD prognostic risk nomogram and mRNA-miRNA-lncRNA regulatory network were constructed through bioinformatics analysis. The experiments verified that HYOU1-AL161729.4-miR-760 acts through the PI3K/Akt signaling pathway to mediate the occurrence of COAD.
Patients and dataset
The gene expression profiles and clinical data of 473 patients with COAD and 41 normal samples were obtained at TCGA. Then, the gene transfer format file with re-annotated gene expression data was used and integrated with clinical information. Gene expression profiles mainly included gene, lncRNA, and miRNA. Clinical information mainly included survival time, survival status, age, sex, and tumor stage.
Identification of prognosis-related lncRNAs
We first analyzed the co-expression of m6A-related genes and lncRNAs through R packages of "limma" to find the lncRNAs related to m6A and used R packages of "igraph" to draw the co-expression network.
Then, we used R packages of "survival" to analyze the survival of m6A-lncRNA and obtained m6A-lncRNA significantly related to the survival of patients with COAD. Finally, the difference in the expression of survival-related m6A-lncRNA was analyzed using the R packages of "limma" in patients with COAD and normal samples.
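For readers who wish to reproduce this screening step outside R, a minimal Python sketch is given below; it assumes a patient-by-feature table with hypothetical column names ("time", "event", and one column per lncRNA) and uses the lifelines library as an analogue of the R "survival" package, so it is an illustration rather than the exact pipeline used in this study.

```python
# Minimal sketch: univariate Cox screening of m6A-related lncRNAs.
# Assumes a pandas DataFrame `df` with columns "time" (overall survival),
# "event" (1 = death, 0 = censored), and one expression column per lncRNA.
import pandas as pd
from lifelines import CoxPHFitter

def screen_lncrnas(df, lncrna_cols, p_cutoff=0.001):
    """Fit one Cox model per lncRNA and keep those below the p-value cutoff."""
    records = []
    for lnc in lncrna_cols:
        cph = CoxPHFitter()
        cph.fit(df[["time", "event", lnc]], duration_col="time", event_col="event")
        row = cph.summary.loc[lnc]
        records.append({"lncRNA": lnc, "HR": row["exp(coef)"], "p": row["p"]})
    results = pd.DataFrame(records)
    return results[results["p"] < p_cutoff].sort_values("p")
```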
Construction and verification of the prognostic risk model
We used the least absolute shrinkage and selection operator (LASSO) method to analyze survival-related m6A-lncRNA and selected key lncRNAs to construct a prognostic risk model for patients with COAD. The formula for predicting the risk score of patients using the prognostic risk model was as follows: risk score = Σ (Coef_i × x_i), summed over the selected lncRNAs, where Coef denotes the coefficient of this gene and x denotes the expression level of this gene.
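The risk-score formula can be illustrated with the short sketch below, which multiplies each selected lncRNA's expression by its coefficient and sums the products per patient, then stratifies patients by the training-group median as described. The coefficient values and the use of pandas here are illustrative assumptions, not the fitted values or code of this study.

```python
# Sketch: per-patient risk score = sum of (coefficient x expression).
import pandas as pd

# Hypothetical LASSO coefficients for the selected lncRNAs (placeholders only).
coefs = {"AL161729.4": 0.42, "AP006621.2": 0.31, "AC156455.1": 0.18}

def risk_score(expr: pd.DataFrame, coefs: dict) -> pd.Series:
    """`expr` is a patients-by-lncRNA expression matrix; returns one score per patient."""
    return (expr[list(coefs)] * pd.Series(coefs)).sum(axis=1)

# Stratification by the median risk score of the training group:
# cutoff = risk_score(train_expr, coefs).median()
# group = (risk_score(test_expr, coefs) > cutoff).map({True: "high-risk", False: "low-risk"})
```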
All patients with COAD were randomly divided into training and test groups. In each group, patients with COAD were divided into high-risk and low-risk groups according to the median value of the risk score of the training group. The receiver-operating characteristic (ROC) curve of 1-year, 3-year, and 5-year survival curves, single-factor independent prognostic analysis, multi-factor independent prognosis, and correlation analysis of the survival time and risk score were used to evaluate the constructed prognostic risk model.
An area under the curve (AUC) value of 0.6 or higher in the ROC curve was taken to indicate that the model had good prediction accuracy.
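As an illustration of how the high-risk/low-risk comparison can be checked, the sketch below runs a Kaplan-Meier comparison and a log-rank test between the two groups; the data frame layout and column names are assumptions for the example and do not reproduce the study's exact evaluation, which also used time-dependent ROC curves.

```python
# Sketch: Kaplan-Meier curves and a log-rank test for the two risk groups.
# Assumes `df` has columns "time", "event", and "risk_group".
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_risk_groups(df):
    high = df[df["risk_group"] == "high-risk"]
    low = df[df["risk_group"] == "low-risk"]
    result = logrank_test(high["time"], low["time"],
                          event_observed_A=high["event"],
                          event_observed_B=low["event"])
    km = KaplanMeierFitter()
    for label, grp in (("high-risk", high), ("low-risk", low)):
        km.fit(grp["time"], grp["event"], label=label)
        print(label, "median survival:", km.median_survival_time_)
    print("log-rank p-value:", result.p_value)
```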
Nomogram
We performed Cox proportional-hazards analysis on the gene expression data and patient information such as age, sex, cancer stage, and risk score to construct a nomogram, so as to more conveniently and accurately predict the prognostic survival time of patients. At the same time, the ROC curve and calibration curve were drawn to evaluate the accuracy of the analysis.
Functional analysis of the prognostic risk model lncRNA
We used the R packages of "limma" to analyze the correlation between the expression of lncRNA and the risk score of the risk model so as to explore the relationship between the two, and performed survival analysis on these lncRNAs one by one to explore the relationship between these lncRNAs and overall survival. We found the lncRNA that was positively correlated with the risk score and negatively correlated with the overall survival as the follow-up research object (interest lncRNA). Finally, we used the lncATLAS (https://lncatlas.crg.eu/) and lncLocator (http://www.csbio.sjtu.edu.cn/bioinf/lncLocator/) databases to perform subcellular localization of the interest lncRNA so as to evaluate whether lncRNA gene knockdown experiments could be performed in the future.
Prediction and functional analysis of lncRNA target miRNA and gene

First, we used The Encyclopedia of RNA Interactomes (ENCORI) (http://starbase.sysu.edu.cn/) online database to predict the target miRNA of the interest lncRNA, and crossed it with the low-expressed miRNA in patients with COAD to obtain the target miRNA of the interest lncRNA. Second, we predicted the target gene of the target miRNA through the miRDB (http://mirdb.org/), miRTarBase (http://starbase.sysu.edu.cn/starbase2/index.php), and TargetScan (http://www.targetscan.org/vert_71/) databases. Then, the Gene Expression Profiling Interactive Analysis (GEPIA, http://gepia.cancer-pku.cn/) database was used to analyze the expression difference of the target genes so as to find the target genes highly expressed in patients with COAD. Finally, an lncRNA-miRNA-gene ceRNA regulatory network significantly related to the survival of patients was constructed based on the principle of base complementary pairing.
Gene expression vector construction
The base sequence of the CDS region of HYOU1 [full length 3000 base pairs (bp)] was chemically synthesized using whole-gene synthesis technology and then constructed into the pIRES2-EGFP vector (Tsingke, Beijing, China) to obtain the overexpression vector of HYOU1. The binding site of miR-760 in the 3′-untranslated region (UTR) of HYOU1 was found, 500 bp upstream and downstream of the binding site were chemically synthesized, and the site was constructed into the pSi-Check2 vector to construct the HYOU1 wild-type 3′-UTR vector. In the same way, a wild-type vector of AL161729.4 (length 476 bp) was constructed. Using the point mutation method, the HYOU1 mut-type 3′-UTR vector and the AL161729.4 mut-type vector were constructed. HYOU1 siRNA1#, HYOU1 siRNA2#, AL161729.4 siRNA1#, AL161729.4 siRNA2#, siRNA negative control, miR-760 mimics, mimics negative control, miR-760 inhibitors, and inhibitor negative control were chemically synthesized (Gene Pharma, Shanghai, China). The sequences are shown in Table 1.
Cell culture and luciferase reporter gene experiment

HT29 and SW620 cells were purchased from the National Collection of Authenticated Cell Cultures (Shanghai, China) and cultured in a 37 °C cell incubator containing 5% CO2. The medium of HT29 cells was McCoy's 5A medium (HyClone, Waltham, MA, USA), and the medium of SW620 was Leibovitz's L-15 (HyClone). Media were supplemented with 10% fetal bovine serum (Invitrogen, Carlsbad, CA, USA), 100 U/ml penicillin, and 100 µg/ml streptomycin (Invitrogen). Lipofectamine 3000 transfection reagent (Invitrogen) was used to transfect the corresponding gene expression vector and oligonucleotides into the cells, the cell culture was continued for 48 h, and the cells were then lysed. The Dual-Luciferase Reporter Assay System (Promega, WI, USA) was used to detect the fluorescence level so as to detect the targeted binding between lncRNA-miRNA-gene.
RNA extraction and quantitative polymerase chain reaction
The gene expression vector and oligonucleotides were transfected into the HT29 and SW620 cells and cultured for 48 h. TRIzol reagent (Invitrogen) was used to lyse the cells and extract total RNA. Then, a RevertAid First-Strand cDNA Synthesis Kit (Thermo Fisher Scientific, MA, USA) was used for reverse transcription to obtain cDNA. Finally, SYBR Green Polymerase Chain Reaction (PCR) Master Mix (Thermo Fisher Scientific) was used to perform fluorescence quantitative PCR, and the relative expression of genes was determined by the 2^-ΔΔCt method. All primer sequences are listed in Table 2.
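As a brief illustration of the 2^-ΔΔCt calculation, the following sketch converts raw Ct values into a fold change relative to the control; the Ct numbers and the choice of reference gene in the example are made up for illustration.

```python
# Relative quantification by the 2^(-ddCt) method.
# dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control).
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example with made-up Ct values (GAPDH assumed as the reference gene):
fold_change = relative_expression(ct_target_treated=24.1, ct_ref_treated=17.9,
                                  ct_target_control=26.3, ct_ref_control=18.0)
print(f"fold change vs. control: {fold_change:.2f}")  # roughly 4.3
```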
Cell proliferation assay
The HT29 and SW620 cells were seeded into 96-well plates at a density of 2 × 10^4 cells per well, and the culture was continued for 24 h. Then, we changed the fresh medium and transfected the gene expression vector and oligonucleotides into the cells. These cells were mixed with 10 µL of MTT staining solution and incubated for 4 h on days 1, 3, 5, and 7 after transfection. Then, the MTT cell proliferation assay (Solarbio, Beijing, China) was used to detect cell proliferation. The absorbance was measured at 490 nm.
All experiments were performed in triplicate.
Statistical analysis
In the co-expression analysis of m6A-related genes and lncRNA, a Pearson correlation coefficient ≥0.4 and P < 0.001 were the criteria for judging whether they were related. In the analysis of survival difference and expression difference, P < 0.001 indicated a significant difference. All experiments were performed independently at least three times with similar results, and representative experiments are shown. P < 0.05 was considered statistically significant (NS > 0.05; *P < 0.05; **P < 0.01; ***P < 0.001).
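The co-expression criterion stated above (Pearson correlation coefficient ≥ 0.4 and P < 0.001) can be written as a small filter; the sketch below operates on hypothetical expression matrices and is only an analogue of the "limma"/"igraph" workflow actually used.

```python
# Sketch: keep m6A gene-lncRNA pairs with Pearson r >= 0.4 and P < 0.001.
import pandas as pd
from scipy.stats import pearsonr

def coexpressed_pairs(m6a_expr, lnc_expr, r_cutoff=0.4, p_cutoff=0.001):
    """Both inputs are samples-by-genes DataFrames with the same sample order."""
    pairs = []
    for gene in m6a_expr.columns:
        for lnc in lnc_expr.columns:
            r, p = pearsonr(m6a_expr[gene], lnc_expr[lnc])
            if r >= r_cutoff and p < p_cutoff:
                pairs.append((gene, lnc, r, p))
    return pd.DataFrame(pairs, columns=["m6A_gene", "lncRNA", "r", "p"])
```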
A significant difference in survival was found between the high-risk and the low-risk groups in the training group; the survival rate in the low-risk group was significantly higher than that in the high-risk group (Fig. 2D). The AUC value of the 1-year, 3-year, and 5-year survival curves was 0.766, 0.785, and 0.726, respectively, which showed that the survival curve was credible (Fig. 2E). The calibration curve of the survival curve further verified its credibility (Fig. 2F). The trend in the test group was the same as that in the train group (Fig. 2G-2I). The survival status results showed that the higher the risk score of patients, the greater the number of deaths (Fig. 2M and 2N). The results of correlation analysis showed that the overall survival rate of patients was negatively correlated with the risk score, although the statistical difference was not significant (Fig. 2O and 2P). Single-factor independent prognostic analysis and multivariate independent prognostic analysis showed that whether in the train group or in the test group, the risk score could be used as a key factor to predict the prognostic survival rate of COAD (Fig. 2Q-2T).
Nomogram
By integrating factors such as age, sex, cancer stage, and risk score of patients, we constructed a nomogram that could predict the 1-year, 3-year, and 5-year survival rates of patients (Fig. 3A). For each patient, the overall score could be calculated according to the scores of the four factors of age, sex, cancer stage, and risk score. Finally, the survival rate could be obtained according to the score and the horizontal axis of the survival rate. The AUC values of the ROC curve of 1 year, 3 years, and 5 years were all greater than 0.8, indicating that the constructed nomogram had good prediction accuracy. The calibration curve of the ROC curve also illustrated this point.
Functional analysis of the prognostic risk model lncRNA
The analysis between the risk score and the expression of lncRNA showed a positive correlation between AC003101.2, LINC02657, AL161729.4, AP006621.2, AC156455.1, ZKSCAN2-DT, AC245041.1, and the risk score. The higher the level of expression, the higher the risk score. This implied that these lncRNAs might be involved in the occurrence and development of COAD (Fig. 4A-4G). The results of survival analysis showed that AL161729.4, AP006621.2, and AC156455.1 were significantly related to the survival of patients, and the higher the expression of the lncRNA, the lower the survival rate of COAD (Fig. 4H-4N). The full length of AL161729.4 was only 476 bp with only one transcript; therefore, we determined it as the target of follow-up research. The results of subcellular localization showed that AL161729.4 mainly existed in the cytoplasm in GM12878, MCF7, and other cells (Fig. 4O). The subcellular localization results of the prediction model showed that about 40% of lncRNA AL161729.4 was located in the cytoplasm (Fig. 4P). The subcellular localization results indicated that a knockdown vector designed and constructed for this lncRNA could decrease the content of AL161729.4 in vivo. Then, the function of lncRNA AL161729.4 was examined.
AL161729.4-miR-760-HYOU1 was involved in regulating the PI3K/Akt signaling pathway
In the detection of the knockdown efficiency of the knockdown vector, it was found that the knockdown effect of AL161729.4 siRNA 1# was significantly better than that of siRNA 2# (Fig. 7A), and the knockdown effect of HYOU1 siRNA 2# was better than that of siRNA 1# (Fig. 7B). In subsequent studies, the knockdown vectors with better effects were used for experiments. MiR-760 mimics could significantly reduce the mRNA levels of AL161729.4, HYOU1, PI3K, and Akt (Fig. 7C), and miR-760 inhibitors could increase their mRNA levels (Fig. 7D).
Discussion
The poor prognosis caused by the inability to diagnose COAD early has always been a medical challenge. Hence, more comprehensive and accurate diagnostic markers and therapeutic targets are urgently needed [21][22][23]. m6A modifications regulate the occurrence and development of a variety of cancers through lncRNAs and miRNAs [24][25][26][27]. For example, the m6A writer METTL14 suppresses the proliferation and metastasis of colorectal cancer by downregulating the oncogenic long noncoding RNA XIST [28]. The m6A writer METTL3 and the eraser ALKBH5 promoted the invasion and metastasis of cancer cells [29,30]. However, the current research on the occurrence and regulation of COAD mediated by the mRNA-miRNA-lncRNA regulatory network based on m6A modifications has not been in depth. In this study, we used bioinformatics methods to construct a new mRNA-miRNA-lncRNA regulatory network based on m6A modifications, and experimentally verified how it regulated the occurrence and development of COAD.
Through single-factor Cox regression analysis, we determined that 11 m6A-related lncRNAs were significantly related to the prognosis of COAD, and further used LASSO regression analysis to find seven most critical lncRNAs. The prognostic risk model and the nomogram constructed based on this were also verified by survival analysis, ROC curve, and calibration curve. We constructed the HYOU1-AL161729.4-miR-760 regulatory network through expression difference analysis, survival analysis, and target combination prediction. HYOU1, AL161729.4, and miR-760 were all significantly related to the survival of COAD or showed a significant difference in expression between patients with COAD and normal samples.
The protein encoded by HYOU1 belongs to the heat shock protein 70 family. It is found to be highly expressed in a variety of tumors and is associated with tumor aggressiveness and poor prognosis [31][32][33][34][35]. AL161729.4 has 476 nucleotides [36], which is suitable for constructing an overexpression vector. It is mainly located in the cytoplasm [37,38], and hence it is also convenient for knockdown vectors to knock it down. Currently, no report exists on the function of AL161729.4. It is a brand new lncRNA with unknown function. Studies have shown that microRNA-760 can inhibit the proliferation and invasion of colorectal cancer cells through the PTEN/Akt signaling pathway [39]. Studies have also shown that HYOU1 promotes cell growth and metastasis by activating PI3K/Akt signals and leads to poor prognosis [40]. Combined with the targeting relationship among HYOU1, AL161729.4, and miR-760, we predicted that the HYOU1-AL161729.4-miR-760 regulatory network would regulate the occurrence of COAD through the PI3K/Akt signaling pathway.
We constructed overexpression vectors and knockdown vectors of HYOU1, AL161729.4, and miR-760 through genetic engineering and chemical synthesis. The experimental results of the luciferase reporter gene proved a targeted binding effect among the three. The results of qPCR experiments further confirmed the reliability of the HYOU1-AL161729.4-miR-760 regulatory network and its relationship with the PI3K/Akt signaling pathway. Finally, the cell proliferation experiment confirmed that the HYOU1-AL161729.4-miR-760 regulatory network was involved in the proliferation regulation of SW620 cells.
Our study involved bioinformatics analysis. We constructed a COAD prognostic risk model and nomogram based on m6A-related lncRNA and experimentally verified the new mRNA-miRNA-lncRNA regulatory network. However, the study still has limitations. First, because of the research conditions, we only explored the effect of the HYOU1-AL161729.4-miR-760 regulatory network on the proliferation of SW620 cells, and did not conduct research on invasion and infiltration. In addition, the effect of the HYOU1-AL161729.4-miR-760 regulatory network on the occurrence and development of COAD has not been investigated in vivo.
Conclusions
In this study, we integrated the COAD gene, lncRNAs, miRNAs, and clinical information to construct a risk model and nomogram that could accurately predict the prognosis of patients with COAD. The HYOU1-AL161729.4-miR-760 regulatory network was constructed, and experiments proved that it regulated the proliferation of SW620 cells by mediating the PI3K/Akt signaling pathway.
Our study provided new biomarkers for the early diagnosis of COAD and also a new target for further in-depth study of the occurrence and development of COAD.

Table 1: Sequences of the primers used for this experiment.
Table 2: Sequences of the primers used for this experiment.

References (cited above):
37. Mas-Ponte D, Carlevaro-Fita J, Palumbo E, Hermoso Pulido T, Guigo R, Johnson R. LncATLAS database for subcellular localization of long noncoding RNAs. RNA. 2017;23(7):1080-7.
38. Cao Z, Pan X, Yang Y, Huang Y, Shen HB. The lncLocator: a subcellular localization predictor for long non-coding RNAs based on a stacked ensemble classifier. Bioinformatics. 2018;34(13):2185-94.
39. Li X, Ding Y, Liu N, Sun Q, Zhang J. MicroRNA-760 inhibits cell proliferation and invasion of colorectal cancer by targeting the SP1-mediated PTEN/AKT signalling pathway. Mol Med Rep.
|
v3-fos-license
|
2020-07-09T09:04:28.779Z
|
2020-07-01T00:00:00.000
|
220436439
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2077-0375/10/7/142/pdf",
"pdf_hash": "89324509e8975d047255a2e73904306f4dd33a63",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43085",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Chemistry"
],
"sha1": "bf464e4e98c9d8fdef5f1b60ade811fd4fc926d7",
"year": 2020
}
|
pes2o/s2orc
|
Study on the Concentration of Acrylic Acid and Acetic Acid by Reverse Osmosis
In the production of acrylic acid, the concentration of acrylic acid solution from the adsorption tower was low, which would lead to significant energy consumption in the distillation process to purify acrylic acid, along with the production of a large amount of wastewater. Reverse osmosis (RO) was proposed to concentrate the acrylic acid aqueous solution taken from a specific tray in the absorption tower. The effects of operating conditions on the permeate flux and acid retention were studied with two commercial RO membranes (SWC5 and SWC6). When the operating pressure was 4 MPa and the temperature was 25 °C, the permeate fluxes of two membranes were about 20 L·m−2·h−1. The acrylic acid and acetic acid retentions were about 80% and 78%, respectively. After being immersed in the acid solutions for several months, the characteristics of the two membranes were tested to evaluate their acid resistance. After six months of exposure to the acid solution containing 2.5% acrylic acid and 2.5% acetic acid, the retentions of acrylic acid and acetic acid were decreased by 5.7% and 4.1% for SWC5 and 4.9% and 2.2% for SWC6, respectively. The changes of membrane surface morphology and chemical composition showed the hydrolysis of some amide bonds.
Introduction
Acrylic acid (AA) is a versatile monomer which is widely used in the synthesis of plastics, synthetic rubbers, superabsorbent polymers, coatings, detergents, fibers, and specialty resins [1][2][3][4][5]. Currently, the main production technology of acrylic acid is the two-step gas-phase oxidation of propylene [1][2][3]. The mixed gases produced in the oxidation process are introduced into the bottom of the absorption tower. Acrylic acid and multiple by-products, including acetic acid (HAc), formic acid, maleic acid, acrolein, acetaldehyde, acetone, etc. [6], are absorbed by the water sprayed down from the top of the absorption tower. Then, the acrylic acid aqueous solution can be obtained at the bottom. Limited by the high concentration of water vapor at the inlet of the oxidation reactor and the absorption method, the concentration of acrylic acid in the bottom of the absorption tower is relatively low, which results in a large amount of energy consumption and wastewater production in the downstream distillation process [7]. Some methods were proposed to improve the concentration of the acrylic acid solution before the distillation process. Briegel et al. [8] used a condenser tower equipped with multiple external heat exchangers instead of the conventional absorption tower to indirectly cool and recover a higher concentration of (meth)acrylic acid solution from the gaseous stream. Min et al. [9] used an extraction solvent to extract (meth)acrylic acid from the side stream of the absorption tower. Although the concentration of (meth)acrylic acid solution entering the distillation tower was increased, a large amount of extraction solvent was unavoidable.
In recent years, membrane technologies, including microfiltration (MF), ultrafiltration (UF), nanofiltration (NF), and reverse osmosis (RO), have attracted great attention during the industrial production process [10][11][12][13]. With the development of a variety of high-performance RO membranes, such as antifouling membranes, acid resistance membranes, ultra-low pressure membranes, and high retention membranes, the application of RO has gradually expanded from seawater desalination to the fields of wastewater treatment [14], food processing [15], petrochemical industry [16], pharmaceutical industry [17], and acid concentration and separation [14,18]. Ricci et al. [19] integrated NF and RO to separate noble metal ions and concentrate sulfuric acid from the gold mining effluent. The rejection of metal ions by the NF membrane was above 90%, and the sulfuric acid could permeate through the membrane. With a recovery of 50%, an increase of 99% in sulfuric acid concentration compared to the feed was achieved. González et al. [18] determined the feasibility of purifying industrial phosphoric acid solution by RO. Moreover, the retention of cationic impurities was 99.3%, and 46.3% acid permeation was achieved. Zhou et al. [20] separated acetic acid from model lignocellulosic hydrolysates by RO. It was shown that the separation factor of acetic acid over sugars was above 200. Chen et al. [21] developed an ethanol promotion method to facilitate the removal of furfural and acetic acid from hydrolysate. When the ratio of ethanol/acetic acid concentration was 5.80, the acetic acid retention of the PA2-4040 RO membrane was only 10%. Ahsan [22] reported that multi-stage RO could recover about 70% of acetic acid from kraft prehydrolysis liquor. Tan et al. [23] immersed the SWC5 and SWC6 membranes in a hydrochloric acid solution with a pH value of 1 for 2 h. Although the hydrolysis of some amide bonds of the treated membranes was observed, the salt retention of the two membranes was decreased by less than 1%. These studies indicated that RO had great potential in the concentration and separation of acids. However, the above studies did not assess the long-term stability of the membranes in the specific acid solutions.
In this study, RO was proposed to concentrate the acrylic acid aqueous solution taken from a specific tray in the absorption tower. The concentrated acrylic acid solution was returned to the absorption tower for continuous absorption. The permeate from RO was reused as absorbent. The utilization of the process would decrease the amount of fresh absorbent and increase the concentration of acrylic acid in the bottom of the absorption tower, thereby decreasing the amount of wastewater production and energy consumption in the distillation process.
The feasibility of using RO to concentrate the synthetic solution containing acrylic acid and acetic acid was evaluated in this paper. The effects of operating pressure, temperature and feed concentration on permeate flux and acid retention were investigated. The stability of RO membrane was also tested by detecting the membrane characteristics after continuous exposure to the acid solution.
Filtration Experiment
All filtration experiments were carried out by a lab-made cross-flow RO filtration apparatus, as illustrated in Figure 1. The apparatus mainly consisted of feed tank, heat exchanger, high-pressure pump, membrane cells, pressure gauges, and rotameter. The effective area of the membrane cell was 31.16 cm 2 .
The permeate flux and acid retention of the RO membranes were tested under different operating conditions (pressure, temperature, and feed concentration). The operating pressure was adjusted by the valve on the outlet of the concentrate. The heat exchanger was used to control the solution temperature. The system was operated under a recirculating flow rate of 1.5 L·min −1 . The permeate flux was obtained by measuring the volume of permeate over a period of time, and the acid retention was determined through analyzing the acid concentrations of permeate and feed. These measurements were repeated three times under the same experimental conditions. The permeate flux and acid retention were calculated by Equations (1) and (2), respectively.
J = V / (A·∆t) (1)

Racid = (1 − Cp/Cf) × 100% (2)

where J is the permeate flux (L·m−2·h−1), V is the permeate volume (L), A is the effective membrane area (m2), and ∆t is the measuring time (h); Racid is the acid retention, and Cp and Cf are the acid concentrations of the permeate and feed (g·L−1), respectively.

Acetic acid (≥99.5%) was purchased from Tianjin Kemel Co., Ltd, Tianjin, China. Acrylic acid (≥99.5%) and sodium chloride (≥99.5%) were purchased from Tianjin Damao Co., Ltd, Tianjin, China. Anhydrous grade ethanol was obtained from Aladdin Reagent Co., Ltd, Shanghai, China.
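Equations (1) and (2) translate directly into a few lines of code. The sketch below uses the effective membrane area given above (31.16 cm²); the permeate volume, collection time, and concentrations in the example are invented for illustration.

```python
# Permeate flux (Eq. 1) and acid retention (Eq. 2).
MEMBRANE_AREA_M2 = 31.16e-4  # effective membrane area: 31.16 cm^2 expressed in m^2

def permeate_flux(volume_l, time_h, area_m2=MEMBRANE_AREA_M2):
    """J = V / (A * dt), in L m^-2 h^-1."""
    return volume_l / (area_m2 * time_h)

def acid_retention(c_permeate, c_feed):
    """R = (1 - Cp/Cf) * 100, in percent."""
    return (1.0 - c_permeate / c_feed) * 100.0

# Invented example: 31 mL of permeate collected in 0.5 h;
# feed 25 g/L acrylic acid, permeate 5 g/L.
print(permeate_flux(volume_l=0.031, time_h=0.5))    # about 19.9 L m^-2 h^-1
print(acid_retention(c_permeate=5.0, c_feed=25.0))  # 80.0 %
```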
Immersion Experiment
To investigate the effects of continuous exposure to the acid solutions on the membrane characteristics, the two RO membranes (C5 and C6) were immersed in the acid solutions (Table 2) for 6 months at room temperature. At two-month intervals, the membrane samples were taken and washed with deionized (DI) water. Subsequently, they were used for acid filtration experiments (Section 2.2), and their morphological and chemical characteristics were also tested (Section 2.4).
Analytical Methods
The pH value of the solution was tested by a pH-meter (FE28-Standard, METTLER TOLEDO, Switzerland). The concentrations of acrylic acid and acetic acid were measured by gas chromatography (GC, SP-2100A, Beifen, Beijing, China) with a FID detector and a capillary column (KB-FFAP, 30 m × 0.32 mm × 0.5 µm) using ethanol as an internal standard [24,25]. The temperatures of the injector, detector, and column were 200 °C, 220 °C, and 150 °C, respectively. The flow rates of N2, air and H2 were 20 mL/min, 300 mL/min and 30 mL/min, respectively.
The membrane surface morphology was analyzed by scanning electron microscopy (SEM, S4800, Hitachi, Tokyo, Japan). The roughness of the membrane surface was tested by atomic force microscopy (AFM, Dimension icon, Bruker, Karlsruhe, Germany) using tapping mode. The membrane surface chemical composition was characterized by Fourier transform infrared spectroscopy (FT-IR, 6700, Nicolet, Madison, WI, USA) and X-ray photoelectron spectroscopy (XPS, ESCALAB-250Xi, ThermoFisher, Waltham, MA, USA).
Effect of Pressure
The effect of pressure on permeate flux and acid retention for C5 and C6 at 25 °C was shown in Figure 2. The feed solution contained 2.5% acrylic acid and 1.5% acetic acid, based on the concentration of a specific tray in the absorption tower of the acrylic acid production process. The pH value of the feed solution was 2.39.
As shown in Figure 2a, the permeate fluxes of both membranes were linearly dependent on the pressure. When the pressure varied from 2.0 MPa to 4.0 MPa, the permeate fluxes were increased from 7.89 L·m −2 ·h −1 to 19.69 L·m −2 ·h −1 for C5, and 8.36 L·m −2 ·h −1 to 21.12 L·m −2 ·h −1 for C6, respectively. Meanwhile, the retentions of acrylic acid ( Figure 2b) were significantly increased from 67.40% to 81.92% for C5, and 66.73% to 81.32% for C6, respectively. The retentions of acetic acid were slightly lower than those of acrylic acid. The similar trend was also reported by Zhou et al. [20,26] during the separation acetic acid from monosaccharides by RO. This phenomenon was induced by solution-diffusion theory [27]. With the increase of pressure, the water flux increased faster than solute flux, so the solute retention was increased. In addition, the solution-diffusion model was used to calculate the water permeability and the transport coefficients of two acids [28], as shown in Supplementary Figures S1 and Figure S2. For the acid solution containing 2.5% acrylic acid and 1.5% acetic acid, the water permeability coefficients of C5 and C6 were 7.195 L·m −2 ·h −1 ·MPa −1 and 7.694 L·m −2 ·h −1 ·MPa −1 , respectively. The transport coefficients of acetic acid and acrylic acid were 4.275 L·m −2 ·h −1 and 3.921 L·m −2 ·h −1 for C5, and 4.669 L·m −2 ·h −1 and 4.272 L·m −2 ·h −1 for C6, respectively. As shown in Figure 2b, the retentions of acrylic acid and acetic acid were significantly lower than that of NaCl ( Table 1). The similar results were also reported in the previous study [20,27,29,30], as shown in Supplementary Table S1. The lower acid retention may be attributed to the following two factors. According to the surface absorption theory, the surface tension of the solution containing acrylic acid and acetic acid was lower than that of NaCl solution [31,32], which would cause more acid absorption by the membrane. On the other hand, the hydrogen bonding between the acid (AA and HAc) and the PA membrane would also increase the acid absorption [33]. NaCl was a strong electrolyte and existed as Na + and Cl − in the solution. The electrostatic repulsion between the PA membrane and the charged ions would increase the retention of NaCl [34]. Thus, NaCl could be better retained than acetic acid and acrylic acid by the membrane. The difference in hydrophilicity may be one of the reasons for the different retention of the two acids. The hydrophilicity of a compound was often described by the octanol/water partitioning coefficient (log(Kow)), where a lower log(Kow) was corresponding to a more hydrophilic compound [35,36]. So, the acetic acid (log(Kow) = −0.17) [37]with a lower log(Kow) was more permeable than acrylic acid (log(Kow) = 0.36) at the same condition [35].
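As a rough illustration of how the water permeability and acid transport coefficients can be extracted from such a pressure series, the sketch below fits J_w ≈ A·ΔP and uses the simplified solution-diffusion relation R ≈ J_w/(J_w + B), neglecting osmotic pressure and concentration polarization; the numerical arrays are placeholders, not the measured data of this work.

```python
import numpy as np

# Placeholder pressure series (MPa), permeate fluxes (L m^-2 h^-1), retentions (-).
pressure = np.array([2.0, 2.5, 3.0, 3.5, 4.0])
flux = np.array([7.9, 10.8, 13.7, 16.7, 19.7])
retention = np.array([0.67, 0.72, 0.76, 0.79, 0.82])

# Water permeability A from a linear fit of flux vs. pressure, with osmotic
# pressure neglected for simplicity: J_w ~ A * dP.
A = np.polyfit(pressure, flux, 1)[0]

# Solute transport coefficient B from R ~ J_w / (J_w + B)  =>  B = J_w * (1 - R) / R,
# averaged over the pressure points.
B = np.mean(flux * (1.0 - retention) / retention)

print(f"A ~ {A:.2f} L m^-2 h^-1 MPa^-1, B ~ {B:.2f} L m^-2 h^-1")
```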
Effect of Temperature
The effect of temperature on permeate flux and acid retention was investigated from 20 °C to 35 °C at 3.5 MPa for C5 and C6. The feed concentrations of acrylic acid and acetic acid were 2.5% and 1.5%, respectively. The permeate flux (Figure 3a) increased almost linearly as temperature increased. However, the retentions of acrylic acid and acetic acid (Figure 3b) declined significantly. As the temperature increased from 20 °C to 35 °C, the retentions of acrylic acid and acetic acid (Figure 3b) were decreased by 9.6% and 9.5% for C5, and 12.4% and 11.5% for C6, respectively. According to the Arrhenius relation, increasing temperature would promote the transport of water and solute through the membrane due to the increase of diffusion coefficient [38]. Moreover, it was reported that the increase in the mass transfer of solute was more significant than that of water with the increase of temperature [39], which would increase the permeate flux and reduce acid retention. The decrease of acid retention may be also attributed to the increase in membrane pore size caused by the thermal dilation of the polymer in the active layer at higher temperature [38,40].
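The Arrhenius-type temperature dependence mentioned above can be sketched as follows; the activation energy used is a generic placeholder rather than a value determined in this study.

```python
import math

R_GAS = 8.314        # J mol^-1 K^-1
EA_WATER = 20_000.0  # J mol^-1, hypothetical activation energy for water transport

def flux_at_temperature(flux_ref, t_ref_c, t_c, ea=EA_WATER):
    """Arrhenius scaling: J(T) = J_ref * exp(-Ea/R * (1/T - 1/T_ref)), T in kelvin."""
    t_ref_k, t_k = t_ref_c + 273.15, t_c + 273.15
    return flux_ref * math.exp(-ea / R_GAS * (1.0 / t_k - 1.0 / t_ref_k))

# Example: a flux of 16.7 L m^-2 h^-1 measured at 20 C extrapolated to 35 C.
print(flux_at_temperature(16.7, t_ref_c=20.0, t_c=35.0))  # roughly 1.5x higher
```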
In the absorption process of acrylic acid production, the temperatures in the top and bottom of the absorption tower are about 43 °C and 70 °C, respectively. The temperature of the solution taken from the absorption tower was about 45 °C. According to the operation conditions of the membrane recommended by the manufacturer, the maximum operating temperature of the two RO membranes is 45 °C. Therefore, the solution taken from the absorption tower was required to cool down before entering into the RO unit.
Effect of Feed Concentration
The effect of feed concentration on the RO membrane performance was assessed at 3.5 MPa, 25 °C with two groups of acid solutions. In group A, the concentration of acrylic acid was 1.5%, 2%, 2.5%, and 3%, respectively, with the 1.5% acetic acid. In group B, the concentration of acetic acid was 1%, 1.5%, 2%, and 2.5%, respectively, with the 2.5% acrylic acid. Figure 4a,c showed that the permeate fluxes of the two membranes were gradually decreased with the increase of the concentration of acrylic acid or acetic acid. The decrease in permeate flux was mainly due to the reduction in effective pressure caused by the increase of osmosis pressure [14,41]. There was no discernable difference in the retentions of acrylic acid and acetic acid in both the two membranes (Figure 4b,d). The similar results were also reported by other researchers [20,29]. For both the two membranes, the retentions of acrylic acid and acetic acid were about 80% and 78%, respectively.
Permeate Flux and Acid Retention
After the membranes had been immersed in the acidic solutions for several months, their permeate flux and acid retention were tested to evaluate their acid resistance. The filtration experiments were performed at 3.5 MPa and 25 °C with an acid solution containing 2.5% acrylic acid and 1.5% acetic acid.
As shown in Figure 5, the permeate flux was increased, and the acid retention was decreased with the increase of the exposure time for all samples. The influences were more evident with the higher concentration of acid solution. The permeate fluxes of the samples C5-2.5 and C6-2.5 were increased by 33.1% and 15.3% after six months, respectively. The acrylic acid retentions of those were decreased by 5.7% and 4.9%, respectively. Moreover, the acetic acid retentions of the two samples were decreased by 4.1% and 2.2%, respectively. At the end of six months of exposure, the performance of samples C5-7.5 and C6-7.5 was degraded significantly. Compared with the virgin membranes, the retentions of acrylic acid and acetic acid were decreased by 14.4% and 9.6% for sample C5-7.5, and 11.7% and 6.4% for sample C6-7.5, respectively.
Membrane Surface Morphology

The surface morphology of C5-2.5, C5-7.5, C6-2.5 and C6-7.5 before and after being immersed for six months was investigated by SEM. As shown in Figure 6, all samples exhibited a typical ridge-and-valley structure of the PA membrane. There was no apparent difference between the ridge structures of the treated samples C5-2.5 and C6-2.5 and the untreated ones (samples C5-Virgin and C6-Virgin). The ridge structures of samples C5-7.5 and C6-7.5 gradually spread out, which was attributed to the swelling of the membranes in the acid solutions [42]. In addition, as the acid concentration increased, the number of ridge structures became smaller, and the area of each ridge structure became larger.

The roughness of samples C5-2.5, C5-7.5, C6-2.5 and C6-7.5 after exposure for 0, 4, and 6 months was tested by AFM, as shown in Table 3. The AFM images of the RO membranes are presented in Supplementary Figure S3. It was observed that the average roughness of all samples decreased gradually with increasing exposure time. For example, at the end of six months, the roughness of samples C5-7.5 and C6-7.5 was decreased by 24.8% and 12.6%, respectively. The results indicated that the membrane surface became smoother, which was in accordance with the SEM micrographs.

Table 3. The roughness of RO membranes before and after exposure to the acid solutions (columns: Sample; Average Roughness (nm)).

The effect of continuous exposure to the acid solutions on the membrane surface chemical composition was characterized by FT-IR and XPS. The FT-IR spectra detected in the range of 800-3500 cm−1 are shown in Figure 7, which contained both the bands of the active layer (PA) and the support layer (polysulfone, PSU) [43]. Two additional peaks were observed at 3300 cm−1 (NH+ and OH− groups) [44] and 1723 cm−1 (C=O of carboxylic acid) [45] for the samples C5-2.5 and C6-2.5 after exposure for six months, which may be attributed to the following two factors. Firstly, oxygen is more electronegative than carbon, so the electron cloud density around oxygen was higher than that around carbon in the acid solutions [46]; the carbon was therefore more vulnerable to nucleophilic attack when H+ attacked the amide bond, leading to the hydrolysis of the amide bond to -NH2 and -COOH [47]. Secondly, the acrylic acid and acetic acid in the acid solutions were easily absorbed onto the RO membranes, which could also result in the appearance of the peak at 1723 cm−1. However, no significant disappearance of the peaks was observed after six months of exposure to the acid solutions.

As shown in Table 4, the atomic percentages of carbon and oxygen were increased while that of nitrogen was decreased, which was attributed to the absorption of acrylic acid and acetic acid and the hydrolysis of the amide bond [44]. The atomic ratio of C/O was reduced and that of O/N was increased. The atomic ratio of O/N is 2 for fully linear PA, while O/N is 1 for fully cross-linked PA [43]. The increase of O/N suggested that there were more additional oxygen atoms without being bonded to nitrogen atoms. Consistent with the FT-IR results, the lower atomic ratio of C/O and the higher atomic ratio of O/N indicated more free carboxylic groups due to the hydrolysis of the amide bond [44].

Furthermore, the narrow scans of carbon and nitrogen were detected, as shown in Table 5. The corresponding C1s spectra and N1s spectra of the membranes before and after exposure are presented in Supplementary Figures S4 and S5. There were four peaks of C1s in the results of all samples, which were observed at 284.3 eV (C-C, C-H), 285.1 eV (C-O, C-N), 287.7 eV (C=O, O=C-N), and 290.4 eV (π-π bonds) [44,48]. For nitrogen, the peak at 399.8 eV represented C-N and O=C-N [49]. Compared to the virgin membranes (samples C5-Virgin and C6-Virgin), an additional peak of the membranes exposed to the acid solutions (samples C5-2.5, C5-7.5, C6-2.5 and C6-7.5) appeared in the N1s scan at 401.4 eV (-NH3+, -NH2R+). The component at 401.4 eV increased with the increase of the acid solution concentration, indicating that a higher concentration of acid solution could result in more severe degradation of the membranes. The results also verified the hydrolysis of the amide bond caused by acrylic acid and acetic acid.
Conclusions
In this work, two RO membranes were used to concentrate the solution containing acrylic acid and acetic acid under different operational conditions. With a pressure of 4 MPa and a temperature of 25 °C, the permeate fluxes of SWC5 and SWC6 were 19.69 L·m−2·h−1 and 21.12 L·m−2·h−1, respectively. For both membranes, the retentions of acrylic acid and acetic acid were around 80% and 78%, respectively. The stability of the membranes in the acid solutions was also assessed. Longer exposure time and higher acid concentration degraded the membrane performance. After six months of exposure to the acid solution containing 2.5% acrylic acid and 2.5% acetic acid, the acrylic acid retentions of SWC5 and SWC6 were decreased by 5.7% and 4.9%, and the acetic acid retentions were decreased by 4.1% and 2.2%, respectively. With the acid solution containing 7.5% acrylic acid and 7.5% acetic acid, the retentions of acrylic acid and acetic acid were decreased by 14.4% and 9.6% for SWC5, and 11.7% and 6.4% for SWC6, respectively. The changes in membrane surface roughness and the hydrolysis of some amide bonds were consistent with the changes of membrane performance. The results showed that it is possible to concentrate a lower concentration side stream of the absorption tower by RO.
Supplementary Materials:
The following are available online at http://www.mdpi.com/2077-0375/10/7/142/s1, Figure S1: Determination of the water permeability of the membranes for the acid solution, Figure S2: Determination of the acid transport coefficients of the C5 (a) and C6 (b) by fitting the acid retention and water flux at different pressures, Figure S3: AFM images of RO membranes: (a,d) prior to exposure and (b,c,e,f) after 6 months of exposure to the acid solutions, Figure S4: The C1s spectra and N1s spectra of C5 membranes: (a,b) prior to exposure and (c-f) after 6 months of exposure to the acid solutions, Figure S5: The C1s spectra and N1s spectra of C6 membranes: (a,b) prior to exposure and (c-f) after 6 months of exposure to the acid solutions, Table S1: Comparison of the acetic retention with those in the literature.
Assessment of Acupoint Therapy of Traditional Chinese Medicine on Cough Variant Asthma: A Meta-analysis
Acupoint application has been used in China to treat various illnesses for ages. In cough variant asthma (CVA), the main clinical sign is episodic night cough. Acupoint application therapy of traditional Chinese medicine is an effective procedure to treat cough variant asthma. The current study is designed to systematically assess the effectiveness of acupoint application therapy in traditional medicine for patients with cough variant asthma. A comprehensive computer retrieval of studies comparing acupoint application with nonacupoint application therapy for cough variant asthma was carried out in eight databases from database establishment until July 4, 2021. Both English and Chinese articles about original investigations in humans were searched. Two independent authors extracted the data, and disagreements were resolved by discussion. ReviewManager 5.3 software provided by Cochrane was used for a meta-analysis of the selected randomized controlled trials (RCTs). The quality of the studies and the risk of bias were analyzed with the Cochrane Handbook tool. A total of thirteen randomized controlled clinical articles with 1237 patients were included in the study. The findings of the meta-analysis showed that, compared with nonacupoint application treatment, the total effective rate of acupoint application treatment is higher (RD = 0.13, 95% CI (0.09, 0.17), Z = 6.70, P < 0.00001). Besides, acupoint application can improve patients' lung function: the lung function indices FVC (MD = 0.55, 95% CI (0.42, 0.68), Z = 8.40, P < 0.00001), FEV1 (MD = 0.35, 95% CI (0.23, 0.47), Z = 5.86, P < 0.00001), FEV1/FVC (%) (MD = 12.68, 95% CI (4.32, 21.03), Z = 2.97, P = 0.003), FEV1 (%) (MD = 8.63, 95% CI (8.01, 9.25), Z = 27.44, P < 0.00001), and PEF (day) (MD = 0.62, 95% CI (0.52, 0.71), Z = 12.40, P < 0.00001) of patients treated by acupoint application therapy were increased. Moreover, acupoint application might lower the levels of immunoglobulin E (MD = −54.58, 95% CI (−63.54, −45.61), Z = 11.93, P < 0.00001) and EOS (MD = −0.21, 95% CI (−0.35, −0.06), Z = 2.77, P = 0.006). The LCQ (Leicester cough questionnaire) total score of CVA patients was also increased (MD = 2.30, 95% CI (1.55, 3.06), Z = 5.98, P < 0.00001). Acupoint application therapy is effective in controlling symptoms of CVA. It also has a positive effect in improving the lung function and life quality of patients. It can reduce the eosinophil levels and peripheral blood IgE levels of patients as well.
Introduction
Cough variant asthma (CVA) is a particular form of asthma presenting with typical common-cold symptoms and a dry, nonproductive cough. Other manifestations, such as dyspnea or gasping, are not generally observed; however, there can be episodic night cough, which can be alleviated by a bronchodilator [1]. The chronic cough in CVA may result in physiological disorders, psychological nervousness, and disruption of the socialization process. Previous findings from five regions of China identified CVA as the most common cause (32.6%) of persistent cough [2].
In addition, the prevalence of cough variant asthma remains high because of air pollution [3], smoking, allergens, and other factors.
Current treatments for CVA are generally the same as for ordinary respiratory illness and include bronchodilators, glucocorticoid drugs, antihistamines, and leukotriene receptor antagonists [4]. Although these medications effectively control CVA symptoms and regulate inflammatory responses, the course of treatment is often long, and patients' long-term compliance with medication cannot be guaranteed. At the same time, some drugs may also cause osteoporosis [5], tissue degeneration, and other adverse reactions.
Acupoint application (AP) is a traditional Chinese medicine (TCM) method with a long history. In this treatment, the herbs are ground into powder and made into herbal patches, which are stuck directly onto acupoints or affected areas to treat chronic cough. Studies have shown that acupoint application can affect the levels of immunoglobulin and eosinophils in patients with CVA, regulate the proportion of lymphocytes, and influence the proportions of some cytokines, such as TGF, TNF, and IF, thereby controlling the symptoms of CVA and achieving long-term relief [6][7][8].
Recently, many studies have demonstrated positive results of AP for treating CVA [9]. However, those studies are limited to a few parameters and have small sample sizes. Similarly, few researchers have conducted a systematic literature review of acupoint application, and the existing reviews have limited findings and mainly focus on children [10]. We therefore systematically searched and analyzed the evidence on stimulating acupoints for the treatment of cough variant asthma in the available literature from several databases. Randomized controlled trials comparing AP-based treatment with non-AP-based treatments were included. Results were assessed for quality and risk of bias following the Cochrane Handbook [11].
Methodology
2.1. Search Strategy. We systematically searched the literature from the establishment of each database to July 4, 2021. The databases included PubMed (https://pubmed.ncbi.nlm.nih.gov/), EMBASE (https://www.embase.com/landing?status=grey), Web of Science (https://mjl.clarivate.com/search-results), the Cochrane Library (https://www.cochranelibrary.com/), the Chinese Journal Full-Text Database (CNKI) (http://kns55.en.eastview.com/kns55/brief/result.aspx?dbPrefix=CJFD), the Database of Chinese Sci-Tech Periodicals (VIP) (http://www.nlc.cn/newen/periodicals/), the Wanfang Database (http://www.wanfangdata.com/), and the China Biology Medicine Disc (CBM) (http://allie.dbcls.jp/pair/CBM;Chinese+BioMedical+Disc.html). The following keywords were used: "stimulating acupoints," "acupoint sticking," "traditional Chinese medicine," "TCM," "acupoint," "CVA," "cough variant asthma," "cough type asthma," "cough-variant asthma," "randomized controlled trial," "random," "control and trial," and "RCT." The search methodology for PubMed is mentioned below. Studies were excluded for the following reasons: (4) data about trials on animals; (5) trials that were not properly controlled and lacked clinical manifestations; (6) incomplete literature data; and (7) obvious errors such as self-contradiction and fabricated data.

2.3. Data Extraction and Management. From the published articles, the study design information was screened in the following manner: time of research, methodology, and blinding (including allocation concealment, blinding of research volunteers and health professionals, and blinding of outcome assessment). These parameters were studied and included in the analysis. The following features of the study participants were also extracted: age range, gender, disease diagnosis, other signs, the numbers of treatment and control cases, key features of the treatment and control groups, and the numbers of completed, incomplete, or withdrawn cases.
For the interventions, the sites of acupoint application, the timing of the intervention, and the non-intervention (control) condition were the focus.
From the results sections of the literature, the total effective rate, forced vital capacity (FVC), forced expiratory volume in one second (FEV1), FEV1/FVC (%), FEV1/prediction (FEV1/pre), peak expiratory flow (PEF), EOS count per milliliter of peripheral blood, IgE level per milliliter of peripheral blood, and the LCQ total score were taken for further study and analysis.
2.4. Quality Assessment. The quality of the included studies was evaluated with the risk-of-bias assessment tool in ReviewManager 5.3 software from the Cochrane Collaboration, covering: (a) random sequence generation, (b) allocation concealment, (c) blinding of participants, (d) blinding of outcome assessment, (e) incomplete outcome data, (f) selective reporting of results, and (g) other biases. Each domain was rated as "high risk of bias," "low risk of bias," or "unclear risk of bias."

2.5. Statistical Analysis. We used the Cochrane ReviewManager 5.3 software for the meta-analysis and assessment of the reviewed data. Dichotomous data were displayed as odds ratios/risk ratios with 95% confidence intervals. Continuous data were assessed with mean differences (MD) and 95% confidence intervals. The studies were assessed for clinical heterogeneity (demographic features, features of ailments, and therapies), methodological diversity (planning, execution, and risk of bias), and statistical heterogeneity. The chi-square test was applied, and a P value of less than 0.10 indicated statistically significant heterogeneity. The I-square (I²) statistic was applied, as guided by the Cochrane Handbook for Systematic Reviews of Interventions, to assess data heterogeneity: a value of I² < 40% indicated low heterogeneity, whereas a value above 75% indicated substantial heterogeneity. A funnel plot was used to visually assess the risk of reporting bias.

Sensitivity analysis was done as follows: the outcomes of two statistical models, the random-effects model (REM) and the fixed-effect model, were compared. If I² > 50%, the random-effects model was applied for the assessment.
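For readers who want to reproduce the heterogeneity assessment outside ReviewManager, the following sketch computes Cochran's Q, I², and an inverse-variance fixed-effect pooled mean difference from per-study summary statistics. The effect sizes and standard errors below are placeholders rather than data from the included trials.

```python
import numpy as np

def pooled_fixed_effect(md, se):
    """Inverse-variance fixed-effect pooled mean difference, Cochran's Q and I^2.

    md, se: per-study mean differences and their standard errors.
    """
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                      # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)  # fixed-effect estimate
    se_pooled = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (md - pooled) ** 2)   # Cochran's Q statistic
    df = len(md) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se_pooled, q, i2

# Placeholder effect sizes (not taken from the included RCTs):
md = [0.50, 0.58, 0.61]
se = [0.10, 0.08, 0.12]
est, se_est, q, i2 = pooled_fixed_effect(md, se)
print(f"pooled MD = {est:.2f} +/- {1.96 * se_est:.2f} (95% CI half-width), "
      f"Q = {q:.2f}, I^2 = {i2:.0f}%")
```

If the computed I² exceeds 50%, a random-effects model (for example DerSimonian-Laird) would be substituted for the fixed-effect pooling, mirroring the rule stated above.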
Literature Survey.
A total of 534 records remained for analysis after duplicates were excluded. These records were further screened for the meta-analysis, and the information on literature retrieval and study selection is presented in Figure 1.
Key Features of the Research.
Of the screened articles, thirteen on AP for CVA, published between 2014 and 2020 and originating from nine provinces in China, were included. A total of 1237 volunteers aged 4-65 years participated in these studies. In the included trials, the test groups consisted of patients who underwent AP along with other treatments. In the control groups, no AP was applied, and only Western medicine or traditional Chinese medicine was used for the treatment of CVA.
The acupoint application (test) and control groups had 621 and 616 cases, respectively. The key features of all selected studies are listed in Table 1.

3.3. Risk of Bias. The assessment of risk of bias is presented in Figure 2. Seven studies were judged to have a low risk of bias in random sequence generation, while the others were rated as unclear. Six of the screened articles were found to carry a low risk of bias in allocation concealment, and the others had an unclear risk of bias. The blinding-related domains were categorized as high risk in all articles. The selected studies showed a low risk of bias with respect to incomplete outcome data, selective outcome reporting (SOR), and other biases.
3.5. Analysis of Lung Function Indices in Acupoint Application Treatment of CVA

3.5.1. Analysis of Lung Function Index FVC in Acupoint Application Therapy of Cough Variant Asthma. Three articles [14,16,18] reported the lung function index FVC (Table 3). A random-effects analysis of the pooled studies indicated that acupoint application treatment was more effective than the control treatment in improving the lung function index FVC (MD = 0.55, 95% CI (0.42, 0.68), Z = 8.40, P < 0.00001).

Five articles [12,14,16,18,22], covering 597 cases, described the pulmonary function measure FEV1 with acupoint application therapy (Table 4). The random-effects analysis model was applied (MD = 0.35, 95% CI (0.23, 0.47), Z = 5.86, P < 0.00001). These findings suggest that AP improves the pulmonary function index FEV1 more than the control treatment (Figure 6). A test for heterogeneity among the studies (P = 0.003, I² = 76%) prompted a subgroup analysis. In the subgroup of patients younger than 7 years, the evaluation of four studies reported a smaller but statistically significant difference (MD = 0.30, 95% CI (0.22, 0.37), Z = 8.00, P < 0.00001). In the subgroup of patients older than 7 years, the analysis also showed a statistically significant difference (MD = 0.57, 95% CI (0.41, 0.73), Z = 6.93, P < 0.00001).
Analysis of the Pulmonary Function Index PEF (day) for Acupoint Application Therapy of Cough Variant Asthma. Three screened reports [16,18,22] reported the lung function index PEF (day) (Table 7). The pooled analysis showed that PEF (day) was higher with acupoint application treatment than with the control treatment (MD = 0.62, 95% CI (0.52, 0.71), Z = 12.40, P < 0.00001).

Five studies [13,16,18,19,22] reported the peripheral blood IgE level as a measure of cough variant asthma for acupoint application therapy (Table 8), covering a total of 522 cases. The fixed-effect analysis model was applied to the two groups of samples (MD = −54.58, 95% CI (−63.54, −45.61), Z = 11.93, P < 0.00001), which indicated that acupoint application treatment could better decrease the peripheral blood IgE level of CVA patients compared with the control samples (Figure 10).
The pooled analysis of the peripheral blood EOS count (MD = −0.21, 95% CI (−0.35, −0.06), Z = 2.77, P = 0.006) indicated that acupoint application treatment could better decrease the peripheral blood EOS count of CVA patients compared with the control samples (Figure 11).
3.7. Analysis of the LCQ Score in Acupoint Application Treatment of CVA. Two studies [20,23] reported LCQ scores, with a total of 152 cases. The heterogeneity analysis showed low heterogeneity among the studies (I² = 0%) (Table 10). The fixed-effect analysis model was applied to the two groups of samples (MD = 2.30, 95% CI (1.55, 3.06), Z = 5.98, P < 0.00001), indicating that acupoint application treatment led to a greater increase in the LCQ score of CVA patients compared with the control group (Figure 12).
Discussion
It is widely believed that CVA is a particular form of respiratory illness with a histopathological process of inflammation, indirectly showing characteristic BHR, cough receptor hypersensitivity, inflammatory cell infiltration, and cells expressing inflammation-related genes [25]. This histopathological process leads to chronic cough, which is common in clinical practice. CVA may also transform into typical asthma.
Asthma can be treated with inhaled glucocorticoids, leukotriene modulator drugs, and so on. These drugs are categorized as controller medications and need to be taken for a long period for the therapy to be effective, whereas the reliever (palliative) drugs include short-acting β2 receptor agonists, inhaled anticholinergic drugs, and short-acting theophylline. These drugs are very effective in alleviating symptoms, reducing airway inflammation, and improving quality of life. However, adverse reactions to these drugs have also been observed. Glucocorticoids may cause hoarseness of voice and oral candida infection. The use of β receptor agonists may lead to sympathetic nerve excitation and an accelerated heart rate, resulting in palpitations, chest pain, and other symptoms. It is therefore advised to use these drugs only in emergencies and for short periods of time. Acupoint application is an attractive alternative to these drugs [26].
Acupoint sticking therapy has a long history in traditional Chinese medicine. The main steps of this treatment are as follows. First, a variety of herbs are ground into a powder. Second, an adhesive material such as ginger water is prepared. Next, the powder is mixed with the adhesive to make a paste-like salve resembling a "caking agent," which is placed on certain acupuncture points of the body. For treating cough variant asthma, the "TianTu" (RN22), "DaZhui" (DU14), "FeiShu" (BL13), and "ShanZhong" (RN17) acupoints are often chosen. Acupoint application allows drugs to be absorbed directly through the skin into the capillaries without the need for liver metabolism, which preserves the biological activity of some drugs [7,8].
Acupoint application has an advantage over other therapies in treating asthma, as acupoint stimulation promotes blood flow to dispel pathogenic factors. This can stimulate the body's immunity and reduce allergic states [27]. Still, the mechanism of acupoint application in the treatment of cough variant asthma has not been fully revealed. IgE engages in complex immune interactions with various inflammatory factors such as IL-4, IgA, and IgG, and regulation of these factors can alleviate the symptoms of cough variant asthma [28]. It is therefore suggested that the acupoint procedure may treat CVA by regulating inflammatory mediators [29].
The findings of this meta-analysis revealed that the total effective rate and the lung function indices (FVC, FEV1, FEV1/FVC (%), FEV1 (%), and PEF (day)) of the acupoint application group were significantly better than those of the control group, whereas the IgE level and the peripheral blood EOS count were lower than in the control group. This suggests that AP has better efficacy for the treatment of CVA than the other drug treatments.
The main advantage of this study is that we conducted a meta-analysis of 13 RCTs involving 1237 participants. Compared with previous systematic reviews of acupoint application for CVA [30], this study included a larger sample size and covered a wider range of age groups, including infants, children, and the elderly. In addition, differences in the clinical response rate, lung function, LCQ scores, and several blood biochemical indicators were investigated.
However, this study still has some limitations. The foremost limitation is that, although acupoint application is frequently used to treat CVA, the randomized controlled studies are usually single-center studies with small sample sizes, and there are problems such as the lack of a recognized standard for efficacy evaluation and clinical heterogeneity. These problems point to the need for high-quality clinical research methods in studying acupoint application for CVA, including correct randomization, double-blinding, and allocation concealment, as well as large-scale multicenter studies. Second, since acupoint application requires an operator applying the patches to the patient, and the herbs have a special smell, it was impossible to blind the patients during the procedure in any of the included studies. Therefore, based on the risk-of-bias assessment tool provided by the Cochrane Organization, "blinding of participants and personnel" was rated as "high risk" in all reports. Third, the retrieval languages in the present research were Chinese and English, and the literature came from only 8 databases. Besides, all the reports included in the meta-analysis were in Chinese, and all the trials were conducted in China, which limits the generalizability of the present results because of the sample characteristics. Fourth, owing to the complexity of acupoint application, this study mainly focused on the acupoint application treatment method itself and did not explore the influence of different acupoint selections or types of herbal medicine on treatment effectiveness.
Conclusion
The current study concluded that acupoint application is more effective for CVA treatment than the control treatments, in which other conventional or traditional Chinese medicines were used. Moreover, it was observed that AP improved respiration and chronic airway inflammation by reducing eosinophil levels and peripheral blood IgE levels.
Data Availability
The data are available from the corresponding author upon reasonable request.
Prospect of undoped inorganic crystals at 40 Kelvin for low-mass dark matter search at Spallation Neutron Source
A light yield of 26.0 ± 0.4 photoelectrons per keV electron-equivalent was achieved with a cylindrical 1 kg undoped CsI crystal coupled directly to two photomultiplier tubes at 80 K, which eliminates the concern of self light absorption in large crystals raised in some of the early studies. Also discussed are the sensitivities of a 10-kg prototype detector with SiPM arrays as light sensors operated at 40 K for the detection of low-mass dark matter particles produced at the Spallation Neutron Source at the Oak Ridge National Laboratory after two years of data taking.
Introduction
The DAMA/LIBRA experiment observed an annual modulation signal with a very high significance in the 2 to 6 keV electron-equivalent (keVee) region in their thallium-doped sodium iodide, NaI(Tl), scintillation crystals [1]. If it is interpreted with the standard dark matter theory, the observation conflicts with results from experiments using different target materials [2][3][4][5]. Many experiments around the world, including PICO-LON [6], DM-Ice [7], ANAIS [8], COSINE [9] and SABRE [10], have been built or are under construction to verify the DAMA result using the same target material. No annual modulation signal similar to that of DAMA has been observed in these experiments yet, mainly because of two difficulties: lowering the energy threshold, and reducing the radioactive contamination inside or on the surface of the scintillation crystals.
The second task has been the focus of all the NaI(Tl)-based experiments mentioned so far, due to the simple fact that the energy threshold of the DAMA experiment is not extremely low, while the radio-purity of the DAMA crystals is still the best to date. To verify the DAMA results in the same energy region with the same crystal is certainly important. However, equally important is to explore the potential of alkali halide scintillation crystals in detecting other possible dark matter candidates, such as the ones that are light enough to be produced in particle accelerators [11]. The single requirement that signals must appear in the time window when a short pulsed particle beam hits another beam or a fixed target reduces, by orders of magnitude, the background events from radioactive contamination, which appear randomly in time. The stringent requirement on the radio-purity of scintillation crystals can be loosened to some degree in such an experimental setup.
A striking proof of the concept is the observation of coherent elastic neutrino-nucleus scattering (CEvNS) in a ∼14 kg CsI(Na) crystal by the COHERENT experiment in 2017, utilizing neutrinos produced at the Spallation Neutron Source (SNS) at the Oak Ridge National Laboratory (ORNL), TN, USA [12]. The SNS is the world's premier neutron-scattering research facility. At full beam power, about 1.5 × 10^14 1-GeV protons bombard a liquid mercury target in 600 ns bursts at a rate of 60 Hz. Neutrons are produced in spallation reactions in the mercury target. Interactions of the proton beam in the mercury target also produce π+ and π− in addition to neutrons. These pions quickly stop inside the dense mercury target. Most of the π− are absorbed. In contrast, the subsequent π+ decay-at-rest (DAR) produces neutrinos of three flavors. The sharp SNS beam timing structure is highly beneficial for background rejection and for precise characterization of those backgrounds not associated with the beam [13], such as those from radioactive impurities in the crystal. Looking for beam-related signals only in the 10 µs window after a beam spill imposes a factor-of-2000 reduction in those backgrounds.
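The factor-of-2000 background suppression quoted above follows from the beam duty factor alone. A back-of-the-envelope check, assuming only the 60 Hz repetition rate and the 10 µs acceptance window stated in the text, and steady-state backgrounds that are uniform in time:

```python
# Fraction of live time covered by the beam-coincident acceptance window
rep_rate_hz = 60          # SNS pulse repetition rate
window_s = 10e-6          # acceptance window after each beam spill

duty_factor = rep_rate_hz * window_s   # fraction of live time that is accepted
rejection = 1.0 / duty_factor          # suppression of time-random backgrounds

print(f"duty factor = {duty_factor:.2e}, steady-state rejection ~ {rejection:.0f}x")
# -> duty factor = 6.00e-04, rejection ~ 1667x, i.e. of order the quoted factor of 2000
```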
In addition to neutrons and neutrinos, the so-called dark portal particles, V , could also be produced at the SNS in the decay of mesons produced by the interactions between the 1 GeV proton beam and the mercury target. Those portal particles are predicted in sub-GeV dark matter models to mediate interactions between the relic dark matter candidates and the Standard Model particles in order to satisfy the Lee-Weinberg bound for the WIMP mass [14]. Close to the beam direction, the dominant production channel is the decay of π 0 /η 0 particles on the fly, while the nuclear absorption of π − particles may produce portal particles isotropically. The portal particle would subsequently decay to a pair of light dark matter particles, χ † χ, either of which may interact with a detector located near the SNS target.
The benefit of a less stringent requirement on the radio-purity of scintillation crystals in such a setup is accompanied by a desire for a lower energy threshold, since lighter dark matter particles are less efficient than heavier ones in transferring momentum to nuclei, resulting in less energetic nuclear recoils. Lowering the energy threshold involves reducing radioactive backgrounds and instrumental noise from light sensors and crystals near the threshold, as well as increasing the detection efficiency of the system and the light yield of the crystal. In this work, we focus on the possibility of increasing the light yields of NaI and CsI crystals at cryogenic temperatures.
The light yields of undoped NaI and CsI crystals increase rapidly as the temperature goes down and reach their highest point around 40 K [15][16][17]. The light yields at liquid nitrogen temperature (77 K at one atmospheric pressure) are slightly lower, but for convenience most experiments were done at about 77 K. The observed number of photons varied with the purity of the crystals and the light readout methods [11,. Nevertheless, all measurements gave similar or higher yields than those of Tl-doped NaI and CsI crystals at room temperature. The highest ones [11,18,21,33,35] almost reached the theoretical limit deduced from the band gap energy.
In 2016, one of the authors of this work measured the light yield of a small undoped CsI crystal directly coupled to a 2-inch Hamamatsu PMT R8778MODAY(AR) [41] at 80 K and achieved a yield of 20.4 ± 0.8 photoelectrons (PE) per keVee [40]. The cylindrical crystal used in that study had a diameter of 2 inches and a thickness of 1 cm, corresponding to a mass of only 91.4 gram. As mentioned in the literature [15], there was a concern about strong self-absorption of the intrinsic scintillation light in undoped crystals, which might prevent crystals thicker than 1/2 inch from practical use. However, later investigations of the scintillation mechanism of undoped crystals [16,39] indicated that they should be transparent to their own scintillation light. The strong absorption mentioned in the early literature may have been due to impurities in those crystals.
A cylindrical undoped CsI crystal of more than 1 kg was used to test whether the light yield would decrease as the size of the crystal increases. The experimental setup is described first. The light yield achieved with two Hamamatsu R11065 PMTs is reported second. After that, the scintillation mechanism of undoped crystals is summarized briefly to support the high light yield observed with the large crystal. The sensitivity of a 10-kg cryogenic detector located 19.3 meters away from the SNS target to low-mass dark matter particles is discussed last.
Due to mechanical difficulties in operating NaI crystals in cryogenic environment, the experimental investigation was done using only undoped CsI. The discussion, however, was kept generic, involving both CsI and NaI given similar scintillation properties of the two from 4 to 300 K [15][16][17].
Experimental setup
The right picture in Fig. 1 shows an open liquid nitrogen (LN2) dewar used to cool a 50 cm long stainless steel tube placed inside. The inner diameter of the tube was ∼ 10 cm. The tube was vacuum sealed on both ends by two 6-inch ConFlat (CF) flanges. The bottom flange was blank and attached to the tube with a copper gasket in between. The top flange was attached to the tube with a fluorocarbon CF gasket in between for multiple operations. Vacuum welded to the top flange were five BNC, two SHV, one 19-pin electronic feedthroughs and two 1/4-inch VCR connectors. The left sketch in Fig. 1 shows the internal structure of the experimental setup. The undoped cylindrical CsI crystal was purchased from the Shanghai Institute of Ceramics, Chinese Academy of Sciences. It had a diameter of 3 inches, a height of 5 cm and a mass of 1.028 kg. All surfaces were mirror polished. The side surface was wrapped with multiple layers of Teflon tapes. Two 3-inch Hamamatsu R11065-ASSY PMTs were attached to the two end surfaces without optical grease. To ensure a good optical contact, the PMTs were pushed against the crystal by springs, as shown in Fig. 2. The assembly was done in a glove bag flushed with dry nitrogen gas to minimize exposure of the crystal to atmospheric moisture. The relative humidity was kept below 5% at 22 • C during the assembly process. The assembled crystal and PMTs were lowered into the stainless steel chamber from the top. After all cables were fixed beneath it, the top flange was closed. The chamber was then pumped with a Pfeiffer Vacuum HiCube 80 Eco to ∼ 1 × 10 -4 mbar. Afterward, it was refilled with dry nitrogen gas to 0.17 MPa above the atmospheric pressure and placed inside the open dewar. Finally, the chamber was cooled by filling the dewar with LN2. After cooling, the chamber pressure was reduced to slightly above the atmospheric pressure.
A few Heraeus C 220 platinum resistance temperature sensors were used to monitor the cooling process. They were attached to the side surface of the crystal, the PMTs, and the top flange to obtain the temperature profile of the long chamber. A Raspberry Pi 2 computer with custom software [42] was used to read out the sensors. The cooling process could be done within about 30 minutes. Most measurements, however, were done after about an hour of waiting to let the system reach thermal equilibrium. The temperature of the crystal during measurements was about 3 K higher than the LN2 temperature.
The PMTs were powered by a 2-channel CAEN N1470A high voltage power supply NIM module. Their signals were fed into a 4-channel CAEN DT5751 waveform digitizer, which had a 1 GHz sampling rate, a 1 V dynamic range and a 10-bit resolution. Custom-developed software was used for data recording [43]. The recorded binary data files were converted to CERN ROOT files for analysis [44].
Single-photoelectron response of PMTs
The single-photoelectron response of PMTs was measured using light pulses from an ultraviolet LED from Thorlabs, LED370E. Its output spectrum peaked at 375 nm with a width of 10 nm, which was within the 200-650 nm spectral response range of the PMTs. Light pulses with a ∼50 ns duration and a rate of 10 kHz were generated using an RIGOL DG1022 arbitrary function generator. The intensity of light pulses was tuned by varying the output voltage of the function generator so that only one or zero photon hit one of the PMTs during the LED lit window most of the time. A TTL trigger signal was emitted from the function generator simultaneously together with each output pulse. It was used to trigger the digitizer to record the PMT response. The trigger logic is shown in the left flow chart in Fig. 3.
A typical single-photoelectron (PE) pulse from an R11065 working at its recommended operational voltage, 1500 V, is well above the pedestal noise. However, the two PMTs were operated at about 1300 V to avoid saturation of electronic signals induced by 2.6 MeV γ-rays from environmental 208Tl. The consequently small single-PE pulses hence had to be amplified by a factor of ten using a Phillips Scientific Quad Bipolar Amplifier Model 771 before being fed into the digitizer in order to separate them from the pedestal noise. Fig. 4 shows two hundred consecutive waveforms from the bottom PMT randomly chosen from a data file taken during a single-PE response measurement. About 20 of them contain a single-PE pulse within 120 to 160 ns. An integration in this time window was performed for each waveform in the data file whether it contained a pulse or not. The resulting single-PE spectra for the top and bottom PMTs are presented in Fig. 5 and Fig. 6, respectively.
The spectra were fitted in the same way as described in Ref. [45] with the function

f(x) = H Σ_n P(n, λ) f_n(x),  (1)

where H is a constant matching the fit function to the spectrum counting rate, P(n, λ) is a Poisson distribution with mean λ, which represents the average number of PE in the time window, and f_n(x) represents the n-PE response, which can be expressed as

f_n(x) = f_0(x) * f_{n*1}(x),  (2)

where f_0(x) is a Gaussian function representing the pedestal noise distribution, * denotes a mathematical convolution of two functions, and f_{n*1}(x) is the n-fold convolution of the PMT single-PE response function, f_1(x), with itself. The single-PE response function f_1(x) was modeled as

f_1(x) = R Exp(x; x_0) + (1 − R) G(x; x̄, σ),  (3)

where R is the ratio between an exponential decay Exp(x; x_0) with a decay constant x_0 and a Gaussian distribution G(x; x̄, σ) with a mean of x̄ and a width of σ. The former corresponds to the incomplete dynode multiplication of secondary electrons in a PMT. The latter corresponds to the full charge collection in a PMT. The fitting result for the top PMT is shown in Fig. 5. The fitting function has eight free parameters as shown in the top-right statistics box in Fig. 5, where "height" corresponds to H in Eq. 1, "lambda" corresponds to λ in Eq. 1, "mean" and "sigma" with a subscript "PED" represent the mean and the sigma of the Gaussian pedestal noise distribution, those with a subscript "SPE" represent x̄ and σ in Eq. 3, respectively, and "ratio" corresponds to R in Eq. 3. Due to technical difficulties in realizing multiple function convolutions in the fitting ROOT script, the three-PE distribution, f_{3*1}(x), was approximated by a Gaussian function with its mean and variance three times those of the single-PE response. Table 1 lists the means of the single-PE distributions for both PMTs measured before and after the energy calibration mentioned in the next section to check the stability of the PMT gains. The average mean for the top and bottom PMT is 28.58 ± 0.51 and 33.08 ± 0.47 ADC counts·ns, respectively.
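For illustration, the charge-spectrum model of Eqs. 1-3 can be evaluated numerically as sketched below. This is not the ROOT fit used in the analysis; the grid, the parameter values, and the unit-area normalisation of the exponential term are assumptions made for the sketch, and the n-fold convolutions are built iteratively rather than approximating the three-PE term by a Gaussian as done in the actual fit.

```python
import numpy as np
from math import factorial

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def spe_response(q, mean_spe, sigma_spe, ratio, x0):
    """Single-PE charge response: exponential tail for incomplete dynode
    multiplication plus a Gaussian full-charge peak (Eq. 3)."""
    expo = np.where(q >= 0.0, np.exp(-q / x0) / x0, 0.0)
    return ratio * expo + (1.0 - ratio) * gaussian(q, mean_spe, sigma_spe)

def charge_spectrum(x, height, lam, ped_mean, ped_sigma,
                    mean_spe, sigma_spe, ratio, x0, n_max=3):
    """Poisson-weighted sum of the pedestal convolved with n-fold
    self-convolutions of the single-PE response (Eqs. 1-2)."""
    dx = x[1] - x[0]
    kernel = spe_response(x - x[0], mean_spe, sigma_spe, ratio, x0)  # support starts at 0
    fn = gaussian(x, ped_mean, ped_sigma)                            # 0-PE term: pedestal only
    total = np.exp(-lam) * fn
    for n in range(1, n_max + 1):
        fn = np.convolve(fn, kernel, mode="full")[:len(x)] * dx      # add one more PE
        total += np.exp(-lam) * lam ** n / factorial(n) * fn
    return height * total

# Evaluate the model on an ADC-counts*ns grid with purely illustrative parameters
x = np.linspace(-20.0, 200.0, 2201)
y = charge_spectrum(x, height=1e4, lam=0.2, ped_mean=0.0, ped_sigma=2.0,
                    mean_spe=30.0, sigma_spe=10.0, ratio=0.2, x0=10.0)
```

In practice this model (or its ROOT equivalent) would be fitted to the measured spectra of Fig. 5 and 6 to extract x̄, the single-PE mean used for the calibration below.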
Energy calibration
The energy calibration was performed using γ-rays from a 137Cs and a 60Co radioactive source, as well as 40K within the crystal and 208Tl from the environment. The sources were sequentially attached to the outer wall of the dewar as shown in Fig. 1. Background data taking was done before those with a source attached. The digitizer was triggered when both PMTs recorded a pulse above a certain threshold within a time window of 16 ns. The trigger logic is shown in the right flow chart in Fig. 3. The trigger rate for the background, 137Cs and 60Co data taking was 100 Hz, 410 Hz and 520 Hz, respectively, when the threshold was set to 10 ADC counts above the pedestal level. Each recorded waveform was 8008 ns long. The rising edge of the pulse that triggered the digitizer was set to start at around 1602 ns so that there were enough samples before the pulse to extract the pedestal level of the waveform. After the pedestal level was adjusted to zero, the pulse was integrated until its tail fell back to zero. The integration had a unit of ADC counts·ns. It was converted to a number of PE using the formula (number of PE) = (ADC counts·ns)/x̄, where x̄ is the mean of the single-PE Gaussian distribution mentioned in Eq. 3; its unit is also ADC counts·ns, and its value was obtained from the fits shown in Fig. 5 and 6. The resulting spectra, normalized by their event rates, recorded by the bottom PMT are shown in Fig. 7. The spectra from the top PMT are very similar. The γ-ray peaks were fitted using one or two Gaussian distributions on top of a 2nd-order polynomial. A simultaneous fit of the 1.17 MeV and 1.33 MeV peaks from 60Co is shown in Fig. 8 as an example. The peaks are clearly separated, indicating an energy resolution much better than that of a regular NaI(Tl) detector running at room temperature. The means and sigmas of the fitted Gaussian functions are listed in Table 2 together with those from other γ-ray peaks.
Light yield
The light yield was calculated for each PMT using the data in Table 2. The obtained light yield at each energy point is shown in Fig. 9. The light yield of the whole system was calculated as the sum of those of the top and bottom PMTs. The uncertainties of the light yields are mainly due to the uncertainties of the mean values of the single-PE responses used to convert the x-axes of the energy spectra from ADC counts·ns to the number of PE. The data points in each category were fitted by a straight line to get an average light yield, which was 15.38 ± 0.34 PE/keVee for the top PMT, 10.60 ± 0.24 PE/keVee for the bottom one, and 25.99 ± 0.42 PE/keVee for the system.
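The light-yield arithmetic itself is straightforward: each fitted photopeak position in ADC counts·ns is divided by the single-PE mean to obtain the number of PE, and then by the known γ-ray energy. A minimal sketch follows; the single-PE mean of 28.58 ADC counts·ns is the value quoted above for the top PMT, whereas the photopeak positions are placeholders rather than the entries of Table 2, and a simple mean replaces the straight-line fit used in the paper.

```python
import numpy as np

def light_yield(peak_adc_ns, spe_mean_adc_ns, energy_kevee):
    """Photopeak position in ADC counts*ns -> number of PE -> PE per keVee."""
    n_pe = peak_adc_ns / spe_mean_adc_ns
    return n_pe / energy_kevee

energies = np.array([661.7, 1173.2, 1332.5])   # keVee: 137Cs and the two 60Co lines
peaks_top = np.array([2.9e5, 5.2e5, 5.9e5])    # placeholder photopeak positions (ADC counts*ns)
spe_top = 28.58                                # ADC counts*ns per PE for the top PMT

yields = light_yield(peaks_top, spe_top, energies)
print("per-line yields (PE/keVee):", np.round(yields, 2))
print("average yield:", round(float(yields.mean()), 2), "PE/keVee")
```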
To understand the origin of the significant light yield difference between the two PMTs, additional measurements were performed. First, the PMT-crystal assembly was pulled from the chamber and reinserted upside down without any other change. The PMTs kept their yields unchanged. Second, the PMT with the lower yield was replaced by another R11065. No significant change could be observed. Last, the crystal was flipped while the PMTs were kept in their original locations. Again, no significant change could be observed. Therefore, the difference in the light yields between the two PMTs was most probably due to the difference in the

There seems to be a systematic increase of the light yield as the energy increases, as shown in Fig. 9. This may indicate a slight non-linearity in the energy response of the undoped CsI crystal at 80 K. However, limited by the large uncertainty of each data point, no quantitative conclusion can be drawn. Additional studies with low-energy sources will be performed in the future.
Scintillation mechanism
The light yield achieved with this ∼1 kg undoped CsI is even higher than that achieved with the 91.4 gram crystal, which proves that the undoped CsI is at least transparent to its own scintillation light up to a few tens of centimeters. The scintillation mechanism of undoped crystals is summarized here to back up this conclusion.
A scintillation photon must have less energy than the width of the band gap of the host crystal. Otherwise, it can excite an electron from the valence band to the conduction band and be absorbed by the host crystal. This demands the existence of energy levels within the band gap. Recombinations of electrons and holes in these levels create photons not energetic enough to re-excite electrons up to the conduction band, which hence cannot be re-absorbed. In Tl-doped crystals, these energy levels exist around the doped ions, which are called scintillation centers. Scintillation centers in undoped crystals are understood to be self-trapped excitons instead of excitons trapped by doped impurities [46]. Two types of excitons were observed in undoped CsI [16], as demonstrated in Fig. 10. In both cases, a hole is trapped by two negatively charged iodine ions; it can then catch an excited electron and form a so-called exciton that resembles a hydrogen atom. These excitons have less energy than the width of the band gap, and the photons emitted by their de-excitation are not energetic enough to be re-absorbed by the host crystal. The energy dispersion among phonons and the two types of excitons dictates the temperature dependence of the light yield of undoped crystals (full and empty circles in Fig. 11), which was experimentally verified [16,39]. It is worth noting that if the operation temperature can be lowered from 80 K to 40 K, the light yield can be further increased.

Fig. 11: Relative scintillation yields and afterglow rates of various crystals as a function of temperature. The scintillation yield of undoped CsI is taken from Ref. [16]; the yields of undoped NaI and NaI(Tl) are from Ref. [17]. The afterglow rates are from Ref. [47]. The best operating temperature is around 40 K.
Due to completely different scintillation mechanisms, the scintillation wavelengths and decay times of undoped NaI/CsI are quite different from those of NaI/CsI(Tl), as summarized in Table 3 and Table 4 for room and liquid nitrogen temperatures, respectively. Undoped NaI is a much faster scintillator than NaI(Tl). It permits a narrower coincidence time window that can further suppress steady-state backgrounds. This allows for precise searches for physics beyond the Standard Model, such as low-mass dark matter particles or non-standard neutrino interactions, depending on their timing relative to the beam.

Table 4. Decay times (ns) and emission wavelengths (nm) at liquid nitrogen temperature: NaI(Tl): no data, no data; undoped NaI: 30 [15,22], 303 [21,38]; undoped CsI: 1000 [16,32,40], 340 [16,29,32].

Compared to deep-underground dark matter experiments, detectors located at the SNS are much shallower. Afterglows of the crystal induced by energetic cosmic muon events may be a serious concern. As shown in Fig. 11, undoped CsI and NaI suffer from afterglow above ∼60 K [47]. However, one of the authors of Ref. [47] suggests through private communication that at 40 K (near the maximal light yield for undoped CsI and NaI), the afterglow rate at the single-photon level is reduced by a Boltzmann factor to a level that is probably much lower than the dark noise of the light sensors. One can thus maximize the light yield and minimize the afterglow of undoped CsI and NaI by operating them near 40 K. One can also require the coincident observation of light signals in at least two light sensors to suppress both the afterglow from the crystal and the dark noise from light sensors at the single-photon level.
Energy threshold
According to Ref. [40], the quantum efficiency of R11065 at 80 K near 300 nm is about 27%, while the photon detection efficiency of some silicon photomultipliers (SiPM) can already reach 56% at around 420 nm [53]. By replacing PMTs with SiPM arrays coated with some wavelength shifting material that shifts 313 nm [38]/340 nm [16,29,32] scintillation light from undoped NaI/CsI to ∼430 nm, it is possible to double the light yield from 25.99 ± 0.42 PE/keVee to about 50 PE/keVee. Such a high yield has recently been almost achieved using a combination of a small undoped CsI and a few large-area avalanche photodiodes (LAAPD) after wavelength shifting [11]. Compared to a SiPM, a LAAPD has generally even higher light detection efficiency (about 90%), but its output signals are too small to be triggered at single-PE level.
To estimate the trigger efficiency of a detector module, as shown in the inset of Fig. 12, that has a light yield of 50 PE/keVee, a toy Monte Carlo simulation was performed as follows: n photons were generated.
- 10% of them were thrown away randomly, mimicking a 90% light collection efficiency.
- The remaining photons had an even chance to reach either SiPM array and a 56% chance to be detected.
- If both arrays recorded at least one PE, the simulated event was regarded as triggered.
The value of n changed from 0 to 40. For each value, 10,000 events were simulated. The trigger efficiency was calculated as the number of triggered events divided by 10,000. Fig. 12 shows the simulated 2-PE coincidence trigger efficiency as a function of the number of generated photons. An exponential function (purple curve) with three free parameters was fitted to the simulated results (blue dots). The fitted function was used to convert energy spectra to PE spectra, which is described in detail in the next section. Assuming a constant quenching factor of 0.08 for NaI and 0.05 for CsI in such a low energy region, the threshold is translated to 1 keV for Na recoils, and 1.6 keV for Cs recoils.
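The toy Monte Carlo described above is simple enough to be reproduced in a few lines. The sketch below follows the same steps (90% light collection, an even split between the two SiPM arrays, 56% photon detection efficiency, and a 2-PE coincidence requirement); the random seed is arbitrary, and the efficiencies are the assumed values quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def trigger_efficiency(n_photons, n_events=10_000, collection=0.90, pde=0.56):
    """Toy MC of the 2-PE coincidence trigger: each photon survives light
    collection with probability `collection`, lands on either SiPM array with
    equal probability, and is detected with probability `pde`; an event
    triggers if both arrays record at least one PE."""
    triggered = 0
    for _ in range(n_events):
        detected = rng.random(n_photons) < collection * pde   # photon detected at all
        side = rng.random(n_photons) < 0.5                     # which array it hits
        top = np.count_nonzero(detected & side)
        bottom = np.count_nonzero(detected & ~side)
        if top >= 1 and bottom >= 1:
            triggered += 1
    return triggered / n_events

for n in (2, 5, 10, 20, 40):
    print(n, "generated photons -> efficiency", trigger_efficiency(n))
```

The resulting efficiency-versus-photon-number curve can then be fitted with a simple empirical function, as done with the three-parameter exponential fit to the blue points in Fig. 12.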
Sensitivity to low-mass dark matter produced at SNS

Given the simulated trigger efficiency near the energy threshold, the sensitivity of cryogenic crystals placed 19.3 meters away from the SNS target for low-mass dark matter detection was estimated.
Two classes of dark matter portal particles can be constrained by such an experiment: a vector portal particle kinetically mixing with the photon, and a leptophobic portal particle coupling to any Standard Model baryon. In addition to the portal and dark matter particle masses, m_V and m_χ, the vector portal model has two coupling constants as free parameters, ε and α, while the leptophobic model depends on a single coupling, α_B. The parameters of the vector portal model can be conveniently compared to the cosmological relic density of dark matter through the dimensionless quantity Y = ε^2 α (m_χ/m_V)^4 [54], which can easily be compared to results from direct detection experiments. The sensitivity of the assumed detector to the leptophobic portal is of great interest compared to beam dump experiments, which are frequently most sensitive to ν-e elastic scattering [55,56] and are incapable of testing this model.
The BdNMC event generator [57] was used to determine the energy spectra of Na and I recoils in the assumed detector, parameterized by the dark matter and portal particle masses [58]. Assuming a constant nuclear recoil quenching factor of 0.08, the generated Na and I recoil energy spectra were converted to visible energy spectra in keVee. The 50 PE/keVee system light yield was translated to the crystal's intrinsic light yield of 50/56%/90% ≈ 100 photons/keVee, which was used to convert the visible energy spectra to number-of-photon spectra. A simple Poisson smearing of the number of photons was applied to the latter. At last, the trigger efficiency function fitted to Fig. 12 was applied to convert the number-of-photon spectra to PE spectra, which were summed and shown as the blue histogram stacked on top of the others in Fig. 13, labeled as "LDM Signal". The total number of LDM events integrated over the whole spectrum at Y = 2.6 × 10^−11 and m_χ = 10 MeV is about 44. The largest component in Fig. 13, colored in orange and labeled as "Neutrino Signal", is the calculated CEvNS spectrum with the detector responses folded in. The total number of events is about 218 in the 0 to 0.8 µs prompt neutrino window. An additional 663 CEvNS events can be detected in the delayed window (0.8 to 6 µs), which were used to constrain the uncertainty of the orange spectrum in Fig. 13. The bottom two histograms, labeled "Beam Neutrons" and "Steady-State bkg", are the SNS beam-related and beam-unrelated background spectra measured by the COHERENT CsI(Na) detector [12]. Since the proposed detector has a much lower threshold, there are no measurements of the two backgrounds below 40 PE; the rates of both were assumed to be flat below 40 PE.
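The chain used to turn simulated nuclear recoils into PE spectra can be summarized in a short sketch. The quenching factor, intrinsic light yield, photon detection efficiency, and light collection efficiency below are the values assumed in the text; modelling the detection stage as a binomial draw is a simplification of this sketch and stands in for the fitted trigger-efficiency curve applied in the actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def recoil_to_pe(recoil_kev, quenching=0.08, photons_per_kevee=100.0,
                 pde=0.56, collection=0.90):
    """Convert nuclear-recoil energies (keV) to detected PE counts:
    quenching -> keVee -> scintillation photons (Poisson) -> detected PE
    (binomial, a simplifying assumption of this sketch)."""
    kevee = np.asarray(recoil_kev, float) * quenching
    n_photons = rng.poisson(kevee * photons_per_kevee)
    n_pe = rng.binomial(n_photons, pde * collection)
    return n_pe

# Illustrative recoil energies in keV (not BdNMC output)
recoils = np.array([1.0, 2.0, 5.0, 10.0])
print(recoil_to_pe(recoils))
```

Histogramming the resulting PE counts for a full set of generated recoils, and applying the 2-PE coincidence requirement, reproduces the kind of signal spectrum stacked in Fig. 13.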
For each m_χ and m_V, the minimum dark matter coupling constants that are inconsistent with the Asimov prediction [59] were calculated, taking into account systematic uncertainties as described in detail in Ref. [58]. The results are shown in Fig. 14 and 15 for an exposure of a 10 kg crystal and 2 years of data taking. The thermal target line indicates the model parameters where dark matter interactions with visible matter in the hot early universe explain the dark matter abundance today.
The nuclear quenching factor of undoped NaI has not been measured. Small or no quenching was observed in undoped CsI for α radiation compared to γ radiation [20,52]. A very preliminary measurement of the nuclear quenching factor of an undoped CsI gives a value of 0.1 [11]. A detailed measurement of the nuclear quenching factors for both undoped NaI and CsI is planned. For the purpose of sensitivity estimation, two extreme cases are considered. The red curves in Fig. 14 and 15 correspond to a constant quenching factor of 0.08. The blue ones assume no quenching at all. The real sensitivity curve should lie in between.
Conclusions
A light yield of 26.0 ± 0.4 PE/keVee was achieved with an undoped CsI crystal directly coupled to two PMTs at 80 K. The cylindrical crystal has a diameter of 3 inches, a height of 5 cm, and a mass of 1.028 kg, which can work as a module of a 10 kg detector for the detection of low-mass dark matter particles produced at the Spallation Neutron Source at the Oak Ridge National Laboratory. The sensitivity of such a detection was investigated assuming a similar setup as the CsI(Na) detector used in the COHERENT experiment where the coherent elastic neutrino-nuclear scattering was first observed. With such a detector, a large amount of phase space that has not been covered by current experiments can be explored given a 20 kg·year exposure.
Low-Dose Aronia melanocarpa Concentrate Attenuates Paraquat-Induced Neurotoxicity
Herbicides containing paraquat may contribute to the pathogenesis of neurodegenerative disorders such as Parkinson's disease. Paraquat induces reactive oxygen species-mediated apoptosis in neurons, which is a primary mechanism behind its toxicity. We sought to test the effectiveness of a commercially available polyphenol-rich Aronia melanocarpa (aronia berry) concentrate in the amelioration of paraquat-induced neurotoxicity. Considering the abundance of antioxidants in aronia berries, we hypothesized that aronia berry concentrate attenuates the paraquat-induced increase in reactive oxygen species and protects against paraquat-mediated neuronal cell death. Using a neuronal cell culture model, we observed that low doses of aronia berry concentrate protected against paraquat-mediated neurotoxicity. Additionally, low doses of the concentrate attenuated the paraquat-induced increase in superoxide, hydrogen peroxide, and oxidized glutathione levels. Interestingly, high doses of aronia berry concentrate increased neuronal superoxide levels independent of paraquat, while at the same time decreasing hydrogen peroxide. Moreover, high-dose aronia berry concentrate potentiated paraquat-induced superoxide production and neuronal cell death. In summary, aronia berry concentrate at low doses restores the homeostatic redox environment of neurons treated with paraquat, while high doses exacerbate the imbalance leading to further cell death. Our findings support that moderate levels of aronia berry concentrate may prevent reactive oxygen species-mediated neurotoxicity.
Introduction
Neurodegeneration is a hallmark of numerous neurological disorders such as age-related dementia, Alzheimer's disease, and Parkinson's disease [1]. While several etiologies have been identified leading to the loss of neurons, one possible contributing factor is contact with environmental toxins [2]. A major source of these poisons in rural farming areas is insecticides and herbicides, and exposure to these has been suggested as a major risk factor for neurological diseases such as Parkinson's disease [3,4]. One commonly used compound in herbicides is paraquat (PQ), and extensive research has demonstrated a direct link between neurotoxicity and PQ contact [5][6][7]. PQ is a known redox cycling agent that impacts complex I activity of the mitochondria, increases superoxide (O2•−) production, and decreases endogenous antioxidant capacity, leading to increased neurotoxicity through apoptosis [8,9]. Several studies have examined the effects of single-antioxidant supplementation in the amelioration of PQ-induced neurotoxicity [10][11][12], but to date it remains unclear how combinations of small-molecule antioxidants gained through dietary or nutritional means affect this toxin-mediated neuron loss.
Aronia melanocarpa, also known as black chokeberries or simply aronia berries, are small, dark, cherry-like berries belonging to the plant family Rosaceae [13]. Aronia berries are native to Eastern Europe and the Eastern United States but have recently become cultivated in large quantities by Midwest farmers. The berries have garnered much attention by the general public due to their significantly high quantity of polyphenols, in particular anthocyanins and flavonoids, which are estimated at 2-3 times greater amounts than in comparable berries [14,15]. Polyphenols, such as resveratrol and quercetin, have been shown to possess significant antioxidant properties by both directly scavenging reactive oxygen species (ROS) and inducing cellular antioxidant systems to help combat oxidative environments [15]. Aronia berries are no exception, and a widespread literature exists examining the potential beneficial effect of aronia berries on diseases including hypercholesterolemia, cancer, diabetes, and inflammation [16][17][18][19]. However, the vast majority of these studies only examine enriched extracts of the polyphenols from aronia berries and not the effects of the whole berry or berry concentrate in the disease models. Moreover, a dearth of studies exists examining the potential beneficial effects of aronia berries on diseases affecting the nervous system.
Herein, we tested the hypothesis that polyphenolic-rich aronia berry concentrate (AB) has antioxidant protective effects against ROS-induced neurotoxicity by PQ. Utilizing a neuronal cell culture model, we indeed demonstrate AB protects against PQ-induced cellular toxicity and an increase in ROS. However, we show that only low doses of AB demonstrate this protective effect, while high doses potentiate the negative effects elicited by PQ. Overall, this work suggests that a proper balance of prooxidants and antioxidants is required for normal neuronal homeostasis, and moderate levels of AB shift the balance in favor of neuronal survival following PQ exposure.
Cell Culture and Reagents. NG108-15 neuroblastoma cells (ATCC #HB-12317) were cultured and maintained in RPMI 1640 (Gibco #11875-093, Grand Island, NY) supplemented with 10% fetal bovine serum (Atlanta Biologicals #S11150, Lawrenceville, GA) and 1% penicillin/streptomycin (Gibco #15140-122, Grand Island, NY). As per the manufacturer's instructions for human consumption, the aronia berry concentrate (Superberries/Mae's Health and Wellness-Westin Foods, Omaha, NE) was diluted to the drinking concentration (1 : 16 in culture media) prior to making serial working dilutions. Paraquat (Sigma-Aldrich #36541, St. Louis, MO) was diluted in double-distilled water and filter-sterilized prior to use. Cells were plated (200,000 cells/60 mm dish) 24 hours prior to counting or treatment at 0 hours. For AB + PQ experiments, AB was started at 0 hours and PQ was started at 24 hours; this pretreatment was performed to examine the protective effects of AB against PQ toxicity. Media were made fresh and changed daily.
Growth Curves and Apoptosis Assays. For growth curve analyses, cells were washed twice to remove unattached dead cells. The remaining live, attached cells were scrape-harvested, isolated by centrifugation, and counted using size exclusion on a Beckman Coulter counter [20]. The apoptotic fraction of live cells was determined on the same cell population using the Alexa Fluor 488 annexin V/Dead Cell Apoptosis Kit (Molecular Probes #V13241, Grand Island, NY) as per the manufacturer's instructions [21]. Briefly, freshly isolated cells were incubated with an Alexa Fluor 488-conjugated annexin V antibody as well as propidium iodide (PI). Cells were analyzed on a FACSCalibur flow cytometer (Becton Dickinson, Franklin Lakes, NJ) at 488 nm excitation and 535 and 610 nm emission for annexin V and PI, respectively. The apoptotic fraction was defined as cells that were annexin V positive while remaining PI negative.
Hydrogen Peroxide (H2O2) Measurement. Replication-deficient recombinant adenoviruses (Ad5-CMV) encoding either HyPer-Cyto (cytoplasm-targeted HyPer construct; Evrogen #FP941, Moscow, Russia) or HyPer-Mito (mitochondria-targeted HyPer construct, Evrogen #FP942, Moscow, Russia) were purchased from the University of Iowa Viral Vector Core Facility (Iowa City, IA). After plating, cells were transduced with 100 multiplicity of infection (MOI; transduction efficiency measured at 95.4% ± 3.2% by flow cytometry with negligible toxicity) of the respective virus for 24 hours in serum-free media prior to treatment with AB or PQ. Following treatment, cells were analyzed immediately on an LSRII Green Laser flow cytometer at 488 nm excitation and 509 nm emission and quantified using FlowJo cytometric analysis software [22].
2.6. Antioxidant Activity Gels. Activity gels were run utilizing whole cell lysates. Samples were separated on 12% nondenaturing gels with ammonium persulfate used as the polymerization catalyst in the running gel and riboflavin-light in the stacking gel. Gels were prerun for one hour at 4 °C prior to sample loading. For superoxide dismutase activity, the gel was stained in a solution containing 2.43 mM nitroblue tetrazolium, 28 mM tetramethylethylenediamine, and 25 µM riboflavin-5′-phosphate for 20 minutes at room temperature protected from light. Following this incubation, the gel was rinsed thrice with double-distilled water and allowed to expose under fluorescent light. For catalase activity, the gel was first allowed to incubate in a 0.003% H2O2 solution for 10 minutes prior to staining with 2% ferric chloride and 2% potassium ferricyanide. Gel images were obtained by scanning using a Brother MFC-8870DW scanner [24].
Statistics. Data are presented as mean ± standard error of the mean (SEM). For two-group comparisons, Student's t-test was used. For multiple group comparisons, one-way ANOVA followed by the Newman-Keuls posttest was used. GraphPad Prism 5.0 statistical and graphing software was used for all analyses. Differences were considered significant at p < 0.05.
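For readers reproducing this kind of analysis outside GraphPad, a minimal Python sketch of the multiple-group comparison is given below. Note that the Newman-Keuls posttest used in the paper is not available in statsmodels, so Tukey's HSD is shown as a stand-in; the group values are hypothetical.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical cell counts (x1e5) for three illustrative groups.
control = np.array([9.8, 10.2, 9.5, 10.4])
pq      = np.array([5.1, 4.7, 5.5, 4.9])
pq_ab   = np.array([7.9, 8.3, 7.6, 8.1])

# One-way ANOVA across the three groups
f_stat, p_val = f_oneway(control, pq, pq_ab)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Pairwise posttest (Tukey's HSD as a substitute for Newman-Keuls)
values = np.concatenate([control, pq, pq_ab])
groups = ["control"] * 4 + ["PQ"] * 4 + ["PQ+AB"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```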
AB Protects Neurons from PQ-Induced Cell Death.
PQ is a well-established neurotoxin known to induce neuron cell death by ROS-mediated apoptosis [26]. To identify an appropriate dose of PQ required to induce neurotoxicity in our neuronal cell culture model, we performed growth curves in the presence of increasing amounts of PQ and identified the IC50 of PQ to be approximately 50 µM (Figure 1(a)). Additionally, to understand if AB alone had any effects on cellular viability, we exposed cells to increasing concentrations of AB in 10-fold serial dilutions (Figure 1(b)). Only the highest dose tested (i.e., 1:10 AB) demonstrated significant toxicity to the cells and thus was not used in further studies. Lastly, to identify if AB had any effect on attenuating PQ-induced neurotoxicity, we treated cells with various dilutions of AB together with 50 µM PQ (Figure 1(c), left panel). Interestingly, only the lowest concentrations of AB (i.e., 1:1000 and 1:10000) demonstrated significant rescuing effects on the PQ-treated cells. In contrast, the highest concentration of AB (i.e., 1:100) potentiated the PQ-induced cell death at 72 hours. Furthermore, low doses of AB decreased, while high doses of AB exacerbated, the apoptotic fraction of PQ-treated NG108-15 cells (Figure 1(c), right panel). Taken together, these data suggest that lower doses of AB have protective effects against PQ-induced neurotoxicity.
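An IC50 of this kind is typically obtained by fitting a sigmoidal dose-response curve to the viability data. The sketch below illustrates one common way to do this, a four-parameter logistic fit; the dose-response values are hypothetical and the fitting choice is ours, not necessarily the procedure used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data: viable cell counts (% of control) at
# increasing PQ concentrations (µM); values are illustrative only.
dose     = np.array([1, 5, 10, 25, 50, 100, 200], dtype=float)
response = np.array([98, 95, 85, 65, 52, 28, 12], dtype=float)

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (decreasing)."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

p0 = [0.0, 100.0, 50.0, 1.0]                   # rough initial guesses
params, _ = curve_fit(four_pl, dose, response, p0=p0, maxfev=10_000)
print(f"Estimated IC50 of roughly {params[2]:.1f} µM")
```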
PQ-Induced Increase in O2•− Levels Is Attenuated by Low-Dose AB. The primary and direct ROS generated by PQ is O2•−. We first measured total cellular O2•− utilizing the O2•−-sensitive probe DHE (Figure 2(a)). As expected, PQ alone increased DHE oxidation roughly 2-fold. Interestingly, low-dose AB significantly attenuated the PQ-induced increase in O2•− levels, while high-dose AB exacerbated this response. In addition, high-dose AB alone significantly increased DHE oxidation in the absence of PQ. Next, because PQ is known to play a role in the direct generation of mitochondrially localized O2•−, we measured mitochondrial-specific O2•− levels using MitoSOX Red (Figure 2(b)). Similar to what we observed with total cellular O2•− levels, PQ alone also significantly increased mitochondrial O2•− levels. Low-dose AB moderately decreased these levels, but these differences were not statistically significant. Additionally, high-dose AB alone increased mitochondrial O2•− levels and once again intensified PQ-induced mitochondrial O2•−. In summary, these data suggest that low, but not high, doses of AB may have antioxidant effects that reduce the PQ-induced increase in neuronal O2•− levels.
AB Alters Steady-State Cellular H2O2 Levels. O2•− is a short-lived species that is spontaneously and enzymatically (by superoxide dismutases) converted to H2O2 [27]. To assess intracellular H2O2 levels, we utilized fluorescent proteins that increase in fluorescence when oxidized specifically by H2O2 (i.e., HyPer) [22]. First, using a cytoplasm-targeted HyPer (HyPer Cyto) we observed a dose-dependent decrease in cytoplasmic H2O2 levels with increasing concentration of AB alone (Figure 3(a)). PQ treatment led to a small but significant increase in cytoplasmic H2O2 levels, and this response was attenuated with increasing doses of AB. Neither PQ nor AB had any effect on mitochondrial-localized H2O2 levels as measured by the mitochondrial-targeted HyPer construct (HyPer Mito; Figure 3(b)). These data suggest that AB has potent H2O2 scavenging effects under both normal, nonoxidative stress and PQ-induced oxidative stress conditions.
AB Has a Minimal Effect on Prooxidant and Antioxidant Enzyme and Activity Levels. The decrease in ROS observed upon the addition of AB may be due to direct scavenging of ROS or to the alteration of endogenous antioxidant or prooxidant enzyme systems. First, we performed western blot analyses on whole cell lysates and observed no significant changes in the protein levels of cytoplasmic CuZnSOD, mitochondrial MnSOD, or the peroxisomal H2O2-removing enzyme catalase (Figure 4(a)). Because polyphenolic compounds like those found in AB have been shown to activate the sirtuin class of enzymes [28], which may alter the activity of endogenous antioxidant enzymes [29], we further examined antioxidant enzyme activities for both SOD and catalase and observed no significant differences in any treatment group (Figure 4(b)). In addition to exploring endogenous antioxidant systems, we also investigated the prooxidant NADPH oxidase (Nox) family of enzymes, which contribute to the production and steady-state levels of cellular O2•− and H2O2. Examining the catalytic subunits of the two major Nox enzymes found in neurons (i.e., Nox2 and Nox4), we observed a substantial reduction in the amount of immunoreactivity for Nox2 with high-dose AB independent of PQ treatment (Figure 4(a)), but no changes were observed with lower doses. Taken together, while high-dose AB appears to have an effect on Nox2 levels, overall, AB does not appear to have a significant impact on the endogenous antioxidant or prooxidant enzyme systems in our neuronal cell culture model.
PQ-Induced Oxidized Glutathione Is Significantly Reduced with Low-Dose AB. In addition to antioxidant enzyme systems, the cell is home to numerous small molecule antioxidant systems. The most abundant small molecule antioxidant system in the cell is glutathione, which may be cycled between a reduced and oxidized state depending on the redox environment of the cell and has proven critically important in attenuating ROS-induced neurotoxicity [8,30]. When examining GSH in our neuronal cell culture model, we observed no significant changes in any treatment group (Figure 5(a)). In contrast, when measuring GSSG we observed that PQ alone increased GSSG roughly 4-fold compared to control neurons. Moreover, low-dose AB attenuated the PQ-elevated GSSG levels back to control levels, while high-dose AB had no significant effect on GSSG levels in PQ-treated cells (Figure 5(b)). Overall, these findings support our O2•− and H2O2 data (Figures 2 and 3) and together strongly suggest that low-dose AB decreases levels of ROS, attenuates oxidative stress, and inhibits neurotoxicity following PQ exposure.
Discussion
Of the neurodegenerative diseases, Parkinson's disease is highly associated with oxidative stress induced by environmental factors such as herbicide (i.e., PQ) exposure [31]. While the exact cause of Parkinson's disease remains elusive, numerous studies have elucidated excess ROS production to be a potential mechanism in the loss of critical dopaminergic neurons in the substantia nigra of the brain [32]. A primary source of intraneuronal ROS, more specifically O2•−, implicated in the disease is complex I of the mitochondria [33]. Complex I inhibitors (which are also found in pesticides and herbicides) such as rotenone and 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) create a backup of electrons in the mitochondrial respiratory chain, which then leak onto molecular oxygen, generating O2•− and inducing oxidative stress [34]. Interestingly, PQ possesses a similar structure to MPTP and has also been demonstrated to interact with complex I to generate reactive radical species [35]. Herein, we confirm these findings by demonstrating that mitochondrial O2•− is indeed increased in NG108-15 cells treated with PQ. Intriguingly, we observed no change in mitochondrial H2O2 levels, which suggests a predominantly one-electron transfer generating primarily O2•−. Moreover, low doses of AB were able to significantly attenuate this increase in mitochondrial oxidative stress, which translated to a more reducing cellular environment as evidenced by lower DHE oxidation as well as decreased levels of oxidized glutathione. In contrast, high doses of AB could not rescue the PQ-induced oxidative stress and exacerbated some of the effects. These findings warrant examination of the specific components of the AB concentrate to elucidate potential molecules that could exacerbate redox cycling reactions in a dose-dependent manner.
There are currently limited medical therapies for the treatment of neurodegenerative diseases. In contrast, a breadth of evidence exists suggesting that dietary intake of polyphenols may have beneficial effects in counteracting neurological disorders. For example, consumption of red wine, which is known to possess high levels of polyphenols, may reduce the incidence of neurological disorders [36,37]. Other studies have demonstrated that intake of polyphenol-rich foods may preserve cognitive function, delay the onset, or even reduce the risk of neurodegenerative diseases like age-related dementia or Alzheimer's disease [38][39][40]. However, it remains controversial whether the beneficial effects of polyphenol-rich diets are actually acting in the brain, as it is not clear whether polyphenols cross the blood-brain barrier [41]. Polyphenols have been reported to be poorly absorbed by the intestines, rapidly excreted, and present in low concentrations in the systemic circulation [42,43], which further argues for a potentially limited role in the brain. In contrast, several investigations have concluded that low concentrations of polyphenols do in fact cross the blood-brain barrier under both experimental in situ conditions and after in vivo dietary consumption of polyphenol-rich foods [44][45][46][47]. In the present study, we identified that only low concentrations of AB provided a protective role against ROS-induced neuron cell death caused by PQ. With the understanding that only small amounts of polyphenols may reach the brain after dietary consumption of polyphenol-rich foods, our data support a beneficial and antioxidant effect of these molecules in low concentrations and the possible protection against neuron cell death.
The use of antioxidants as therapeutics is controversial due to an extensive list of failed clinical trials in an array of diseases. Based on this, it is easy to conclude that antioxidants are not sufficient in ameliorating disease, but numerous variables must be taken into account when assessing the efficacy of these trials. The first variable to consider is dosage. It is commonly presumed in medicine that if a positive dose response to a drug is achieved at low concentrations, then high concentrations will produce an even more favorable outcome, but this is not always found to be true. For example, in 2002 a phase II, double-blind, randomized, and placebo-controlled clinical trial was performed on the potential effectiveness of coenzyme Q10 in slowing the progression of Parkinson's disease [48]. A negative correlation was observed between increasing doses of coenzyme Q10 (ranging from 300 to 1200 mg/day) and progression of the disease, which thus prompted researchers to investigate even higher doses of coenzyme Q10 in Parkinson's disease. In 2007, another phase II, double-blind, randomized, and placebo-controlled study was performed utilizing doses of coenzyme Q10 ranging from 2400 to 4000 mg/day and found no significant improvement with any dose in slowing the progression of Parkinson's disease [49]. The conclusion drawn from this study was that coenzyme Q10 provided no benefit over placebo in Parkinson's disease, because the high doses could not replicate what was seen in the lower-dose clinical trial. Another example of dosage discrepancies involves the use of vitamin E for therapy in Alzheimer's or Parkinson's disease patients. Three separate clinical trials utilizing vitamin E supplements (ranging from 800 to 2000 IU/day) found no significant impact or even a worsening of Alzheimer's or Parkinson's disease progression [50][51][52]. However, three separate studies utilizing vitamin E administration by means of dietary intake (ranging from 5 to 15 mg/day in foods naturally containing higher levels of vitamin E) showed positive benefits in slowing the progression of both diseases [53][54][55]. Similar to what was observed with coenzyme Q10, it appears that lower doses (and possibly the vehicle of administration) are more efficacious than higher doses when examining the effects of antioxidants. In the study presented here, we observe a similar phenomenon where only low-dose AB ameliorated PQ-induced neurotoxicity, while higher doses exacerbated the phenotype. This nonlinear relationship between antioxidant dosage and disease outcome may explain the apparent failure of antioxidant clinical trials and warrants further investigation into the potential mechanisms leading to the nonmonotonic response.
Another significant variable in the outcome of antioxidant therapy is the timing of administration. The majority of clinical trials focus on the treatment of patients who have already been diagnosed with a major disease, and as such, the window for assessing the preventative capabilities of antioxidants has already passed. Conversely, numerous retrospective analyses have examined the potential for dietary intake of antioxidants in altering the risk of developing neurodegenerative disorders like Alzheimer's disease. For example, it has been shown that diets rich in fruits and vegetables reduce cognitive decline and the risk for Alzheimer's disease later in life [56,57]. Additionally, in the aforementioned Rotterdam study it was observed that intake of vitamin E in the form of food (not supplements) also reduced the incidence of dementia [54]. These studies suggest that antioxidants serve as preventative measures as opposed to reactive measures against neurological disorders. Herein, we present evidence that supports this hypothesis, as we show that pretreatment of neurons with AB for 24 hours prior to PQ administration protects neurons from ROS-induced cell death. Performing the converse experiment, in which AB was administered at the same time or 24 hours after PQ treatment, did not produce any observable beneficial response (data not shown). Taken together, antioxidant supplementation through dietary intake appears to play a greater role in the prevention of neurological diseases as opposed to their treatment.
The last major variable to consider when assessing the efficacy of antioxidants in the treatment of diseases is the specific ROS that is being targeted. ROS are often considered a homogeneous group of substances that are harmful to the cell, but this view overlooks the vast complexity of the redox environment. ROS are diverse, with some being free radicals, possessing charges, or participating in one- or two-electron oxidation/reduction reactions depending on the structure of the specific species [58]. Additionally, not all ROS cause "oxidative stress," which is defined as irreversible damage to cellular components, but many ROS participate in controlled, regulated, and reversible modifications to cellular constituents that lead to redox-mediated signaling pathways [59]. For example, H2O2 oxidizes reduced cysteines in proteins, creating reversible adducts that may alter the shape and function of a protein, thus making the protein redox responsive [60]. In contrast, O2•− is a poor oxidant but reacts readily with iron-sulfur cluster-containing enzymes, reversibly affecting their activity and contributing to redox-mediated cellular signaling [61]. With the understanding that ROS-mediated reactions are unique and diverse, it becomes clear that the use of a generalized antioxidant that may scavenge several ROS at once (or potentially a ROS that is not highly relevant in the disease state) may not prove to be efficacious or may even be deleterious. In our data set, we demonstrate that the primary ROS produced by PQ is O2•−, and this has been shown by others as well [26]. Low doses of AB demonstrated the ability to significantly attenuate PQ-induced O2•− in neurons, yet high doses potentiated the production. Moreover, high-dose AB appeared to significantly reduce the amount of steady-state H2O2 in neurons even in the absence of PQ, suggesting that high doses of antioxidants alter normal redox signaling within the cells or even create a reductive stress upon the cells [62]. In summary, it appears that low, but not high, doses of AB restore the homeostatic redox environment and decrease cellular death caused by the PQ-induced O2•−-mediated oxidative stress. Next, we observed an interesting phenomenon in that Nox2 protein was virtually absent in neurons treated with high doses of AB (independent of PQ treatment). Polyphenols have been demonstrated to attenuate Nox activity in various models, but their role in regulating actual protein levels is unclear [63][64][65]. Our data suggest that AB may be interfering with the normal expression of Nox2, but it is unclear at this time if this occurs at the transcriptional, posttranscriptional, translational, or posttranslational level. Furthermore, the Nox2 catalytic subunit of the Nox complex is also known as gp91phox due to the fact that the 55 kDa protein becomes heavily glycosylated, causing it to run on a western blot at approximately 91 kDa [66]. Polyphenols have been shown to interfere with and reduce the amount of advanced glycation end products observed in several disease states [67][68][69], which raises the question of whether these small molecules also play a role in modifying normal cellular glycosylation of proteins. Our data suggest that AB plays a significant role in the downregulation of Nox2, and further investigation is warranted into the mechanism of this process.
Finally, our study does possess some potential limitations. First, due to proprietary reasons we are limited in our understanding of the exact constituents and concentrations of the commercially available AB concentrate. Additionally, while the dilutions we utilized did produce favorable outcomes, further biodistribution studies are needed to understand if the optimal concentrations we observed translate in vivo. Next, our use of a neuronal cell line may not perfectly mimic the effects on primary neurons. However, NG108-15 cells divide and grow in a highly differentiated manner, which increases their likelihood to react like primary neurons in an in vitro setting. Lastly, treatment of neuronal cells in vitro with AB does not take into account in vivo variables such as absorption and biotransformation that may alter the AB components and exposure to neurons in a living system. Upon consumption, polyphenols may be oxidized by liver enzymes and the digestive microbiota, which could ultimately change the structure and function of these molecules once they have reached a target organ. While our current studies do not address the potential alterations digestion may have on the AB, we believe the data presented herein show significant preliminary promise for AB in the amelioration of ROS-induced neurotoxicity. With these promising results, we are currently investigating the ability of AB to attenuate neurological dysfunction in vivo utilizing various animal models of neurodegeneration. These models will allow for a deeper understanding regarding AB bioavailability to neurons of the central nervous system, and whether concentrations are able to reach levels necessary for the attenuation of oxidative stress-mediated neurological disease.
Endodontic-Like Oral Biofilms as Models for Multispecies Interactions in Endodontic Diseases
Oral bacteria possess the ability to form biofilms on solid surfaces. After the penetration of oral bacteria into the pulp, the contact between biofilms and pulp tissue may result in pulpitis, pulp necrosis and/or a periapical lesion. Depending on the environmental conditions and the availability of nutrients in the pulp chamber and root canals, mainly Gram-negative anaerobic microorganisms predominate and form the intracanal endodontic biofilm. The objective of the present study was to investigate the role of different substrates on biofilm formation as well as the separate and collective incorporation of six endodontic pathogens, namely Enterococcus faecalis, Staphylococcus aureus, Prevotella nigrescens, Selenomonas sputigena, Parvimonas micra and Treponema denticola, into a nine-species “basic biofilm”. This biofilm was formed in vitro as a standard subgingival biofilm, comprising Actinomyces oris, Veillonella dispar, Fusobacterium nucleatum, Streptococcus anginosus, Streptococcus oralis, Prevotella intermedia, Campylobacter rectus, Porphyromonas gingivalis, and Tannerella forsythia. The resulting endodontic-like biofilms were grown for 64 h under the same conditions on hydroxyapatite and dentin discs. After harvesting, bacterial growth in the endodontic-like biofilms was determined using quantitative real-time PCR, and the biofilms were labeled using fluorescence in situ hybridization (FISH) and analyzed by confocal laser scanning microscopy (CLSM). The addition of the six endodontic pathogens to the “basic biofilm” induced a decrease in the cell numbers of the “basic” species. Interestingly, C. rectus counts increased in biofilms containing E. faecalis, S. aureus, P. nigrescens and S. sputigena, respectively, both on hydroxyapatite and on dentin discs, whereas P. intermedia counts increased only on dentin discs upon addition of E. faecalis. The growth of E. faecalis on hydroxyapatite discs and of E. faecalis and S. aureus on dentin discs was significantly higher in the biofilm containing all species than in the “basic biofilm”. In contrast, the counts of P. nigrescens, S. sputigena and P. micra on hydroxyapatite discs, as well as the counts of P. micra and T. denticola on dentin discs, decreased in the all-species biofilm. Overall, all bacterial species associated with endodontic infections were successfully incorporated into the standard multispecies biofilm model both on hydroxyapatite and dentin discs. Thus, future investigations on endodontic infections can rely on this newly established endodontic-like multispecies biofilm model.
Introduction
Most oral bacteria are commensal [1], but depending on host immune response and dysbiotic microbial interactions rather than on specific pathogens [2], they can contribute to oral diseases [3]. Like bacterial species in general, oral bacteria possess the ability to form biofilms on solid surfaces in the presence of nutrient-containing fluids [4]. Biofilms were described decades ago as communities of bacterial cells embedded in a polymeric matrix that contains polysaccharides, DNA and RNA, among other components. To approximate the microbial community of endodontic infection (primary, secondary), we applied the batch culture approach, which was first described in 2001 [25] and is based on the biofilm model of supragingival plaque [24,26,27]. As about 50% of bacteria in the oral cavity are uncultivable and culture methods only provide information about living cells [28][29][30], in the current study a PCR-based 16S rRNA gene assay [31] was used for the detection and quantification of bacterial species within the endodontic-like biofilms. Additionally, the endodontic-like biofilms were visualized using fluorescence in situ hybridization (FISH) and confocal laser scanning microscopy (CLSM). To the best of our knowledge, endodontic-like multispecies biofilms using hydroxyapatite as well as dentin as substrata were formed in vitro for the first time in this study. Based on this study, future investigations on endodontic infections can rely on this newly established endodontic-like multispecies biofilm model.
All strains, except for T. forsythia and T. denticola, were maintained on Columbia blood agar. Tannerella forsythia and T. denticola were maintained in T. forsythia medium, containing per liter of solution: 37 g brain-heart infusion, 10 g yeast extract, 1 g cysteine, 5 µL/mL hemin, 20 µL/mL N-acetylmuramic acid, 2 µL/mL menadione and 5% horse serum. Prior to the onset of the biofilm experiments, all strains were transferred into adequate liquid media (mFUM [33], BHI and T. forsythia medium) and incubated anaerobically at 37 °C for two cycles of precultures (16 h and 8 h, respectively). Prior to biofilm inoculation, all strains were adjusted to a defined optical density (OD550 = 1.0) and mixed in equal volumes. Biofilms were cultivated in 24-well polystyrene cell culture plates on sintered hydroxyapatite (HA; Ø 9 mm, Clarkson Chromatography Products, Inc., South Williamsport, PA 17702, USA) and dentin discs (Ø 7 mm, bovine teeth) that had been preconditioned (pellicle-coated) for 4 h with shaking (95 rpm) in 0.8 mL saliva (whole unstimulated saliva, pooled from individual donors [32], diluted 1:2 with sterile 0.25% NaCl solution; for the preparation of batches of pooled, processed, and pasteurized saliva, see Guggenheim et al. [25]). The pellicle-coated discs were equilibrated for 45 min at 37 °C in the anaerobic chamber in 1.6 mL growth medium (containing 960 µL undiluted saliva, 160 µL fetal bovine serum (FBS), and 480 µL mFUM + 0.3% glucose). Finally, 200 µL of bacterial suspension, consisting of equal volumes of each strain at the adjusted density (OD550 = 1.0), was added to each well, and the biofilm was incubated for 64 h under anaerobic conditions. At 16 and 40 h, the discs were washed 3 times with 2 mL of 0.9% NaCl (two dippings each) and transferred to fresh media in the 24-well plate. After 64 h the discs were washed again as before and either processed for staining and confocal laser scanning microscopy (CLSM) or transferred into 50 mL Falcon tubes with 1 mL of physiological NaCl and vortexed for 3 min in order to remove the biofilm from the discs, prior to transfer into 5 mL Falcon tubes and sonication at 30 W for 5 s (Sonifier B-12, Branson Ultrasonic, Urdorf, Switzerland). The harvested biofilm suspension was then prepared for quantification by real-time quantitative PCR (qPCR). For fluorescence in situ hybridization (FISH) and CLSM analyses, biofilms were fixed in 1 mL of 4% paraformaldehyde + RNase inhibitor (RNAi) for two hours at 4-8 °C (Figure 1).
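As a quick sanity check on the inoculation step, the per-well volumes described above can be tallied as in the Python sketch below; the interpretation that the 200 µL inoculum is split equally among the pooled strains is ours, and the numbers are simply those stated in the protocol.

```python
# Rough per-well bookkeeping for the inoculation step described above.
medium_ul   = 960 + 160 + 480        # saliva + FBS + mFUM/glucose = 1600 µL
inoculum_ul = 200                    # pooled, OD-adjusted bacterial suspension

for n_strains in (9, 10, 15):        # basic, 10-species, and 15-species biofilms
    per_strain_ul = inoculum_ul / n_strains
    total_ul = medium_ul + inoculum_ul
    dilution = total_ul / per_strain_ul
    print(f"{n_strains} species: {per_strain_ul:.1f} µL per strain, "
          f"~1:{dilution:.0f} dilution of each OD-adjusted culture per well")
```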
Biofilm Quantification Using Quantitative Real-Time PCR (qPCR)
The DNA was isolated from harvested biofilm samples and from individual strains for standard curves using the GenElute bacterial genomic DNA kit (Sigma-Aldrich, Saint Louis, Missouri, USA) according to the manufacturer's recommendations, including the pretreatment steps for Gram-positive bacteria with slight modifications. The pretreatment lysis step was extended from 30 min to 1 h (lysozyme, mutanolysin, and lysostaphin) and the lysis step with proteinase K from 10 min to 20 min. The extracted DNA was eluted twice in 60 µL preheated nuclease-free water. The amount of isolated DNA was determined using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, Massachusetts, USA). The individual bacterial templates in the biofilm samples were quantified using external standard curves. The standard curves were created from defined concentrations ranging from 10 ng to 0.00001 ng in 10-fold serial dilutions. A linear regression was obtained by plotting the logarithms of the template amounts against the corresponding quantification cycle (Cq) values. The theoretical cell numbers of each organism in the samples were then calculated from the obtained Cq values using the theoretical genome weight.
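The conversion from Cq to theoretical cell numbers can be sketched as below: a linear regression of Cq against the log of the template amount yields a slope and intercept, and the back-calculated DNA mass is divided by the theoretical genome weight. The standard-curve values, the example genome size and the 978 Mbp per pg conversion are illustrative assumptions, not data from this study.

```python
import numpy as np

# Hypothetical standard curve: Cq values for 10-fold serial dilutions
# of genomic DNA from a single strain (10 ng down to 1e-5 ng).
std_ng = np.array([10, 1, 0.1, 0.01, 0.001, 1e-4, 1e-5])
std_cq = np.array([12.1, 15.5, 18.9, 22.3, 25.8, 29.2, 32.6])

# Linear regression of Cq against log10(input DNA)
slope, intercept = np.polyfit(np.log10(std_ng), std_cq, 1)
efficiency = 10 ** (-1 / slope) - 1
print(f"Slope {slope:.2f}, amplification efficiency roughly {efficiency:.0%}")

# Convert a sample Cq back to genome equivalents; the genome size is a
# placeholder (e.g. ~3.2 Mbp for E. faecalis) and ~978 Mbp corresponds to 1 pg.
sample_cq = 20.4
ng_dna = 10 ** ((sample_cq - intercept) / slope)
genome_mbp = 3.2
genome_weight_ng = genome_mbp / 978 * 1e-3     # pg converted to ng
cells = ng_dna / genome_weight_ng
print(f"{ng_dna:.3g} ng DNA corresponds to about {cells:.2e} genome equivalents")
```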
SYBR Green-based detection was conducted to quantify bacteria in the biofilm samples with the primers listed in Table 1. The quantitative PCR was carried out using 2x SYBR® Green PCR Master Mix (Thermo Fisher Scientific, Waltham, Massachusetts, USA) with a final reaction volume of 15 µL, containing 7.5 µL of SYBR® Green PCR Master Mix, 6 µL sample DNA (undiluted, 1:10 and 1:100 diluted, respectively) and 1.5 µL of primer mix (final concentration 0.5 µM each). The qPCR assays were performed on a StepOnePlus Real-Time PCR System (Applied Biosystems, Foster City, California, USA); samples were incubated initially for 10 min at 95 °C, followed by 40 cycles of 15 s at 95 °C and 1 min at 60 °C.
For the quantification of S. sputigena, P. micra, S. aureus and P. nigrescens the microbial DNA qPCR assays were used and conducted according to the manufacturer's protocol (Qiagen Instruments, Hombrechtikon, Switzerland; Cat. no. BPID00305AR, BPID00260AR, BPID00314A, and BPID00280AR, respectively).
Fluorescence in Situ Hybridization (FISH)
After fixation, discs were washed in 500 µL 0.9% NaCl + RNase inhibitor and dabbed off on a paper towel. Pre-treatment of Gram-positive bacteria was performed as described before [23] in a 1 mg/mL lysozyme solution in 0.1 M Tris-HCl, pH 7.5, 5 mM EDTA for 8 min at room temperature (RT). To permeabilize their cell walls accordingly, Staphylococcus aureus cells required a longer and stronger pre-treatment with 10 mg/mL lysozyme for 50 min at 37 °C and additionally with 20 µg/mL lysostaphin for 5 min at RT, both in the same buffer as described previously. Pre-hybridization was carried out in 500 µL of the appropriate hybridization buffer (Table 2) for 15 min at 46 °C. Immediately thereafter, the discs were transferred into separate wells with 370 µL of the preheated probes in the corresponding hybridization buffer (Table 2). The discs were hybridized for 4 h at 46 °C, then immersed in 2 mL preheated washing buffer and incubated for 45 min at 48 °C. Total DNA was stained with 15 µM Syto 59 (Thermo Fisher Scientific, Waltham, Massachusetts, USA) in nanopure water for 30 min or with 0.5 µg/mL DAPI (SERVA Electrophoresis GmbH, Heidelberg, Germany) in nanopure water for 5 min at room temperature. All incubations with fluorescent dyes were performed in the dark. Discs were embedded upside down on chamber slides in a matching drop of Mowiol and stored for at least 24 h before microscopic examination.
Confocal Laser Scanning Microscopy (CLSM)
CLSM was conducted using a Leica TCS SP5 microscope (Leica Microsystems, Wetzlar, Germany) provided by the Centre for Microscopy and Image Analysis of the University of Zurich. For the imaging of the biofilms on hydroxyapatite and dentin discs, the slightly modified procedure, as described before [38], was performed. Briefly, the used lasers were a UV laser at 405 nm excitation, an Argon laser at 488 nm excitation, a DPSS diode laser at 561 nm, and a Helium-Neon laser at 594 nm and 633 nm excitation. Furthermore, filters were adjusted at 430-470 nm to detect DAPI, at 500-540 nm for FITC, at 570-600 nm for Cy3, at 610-640 nm for ROX, and at 660-710 nm for Cy5 and Syto 59. Biofilms were scanned sequentially in steps of 1 µm thickness. Finally, the images were processed using Imaris 8.3 (Bitplane, Zurich, Switzerland).
Statistical Analysis
Within the three independent experiments with the basic biofilm and additions of endodontic species, every group was represented in triplicate biofilm cultures. As a result, statistical analysis was performed on nine individual data points, coming from the nine individual biofilm cultures per experimental group. Two-way analysis of variance (ANOVA) was used to analyze the difference in bacterial cells per biofilm between the control group (standard nine-species biofilm) and the six additions of endodontic strains. Tukey's multiple comparisons test was used for correction. Furthermore, a statistical comparison was performed between the number of cells per biofilm on hydroxyapatite discs and dentin discs, respectively. Missing values were ascribed the lowest detection limit value of the assay to allow for logarithmic transformation. Statistical analyses were implemented using GraphPad Prism (version 7) to compare the species' total cell counts within the different biofilm formations (significance level p < 0.05).
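For readers who prefer a scriptable alternative to GraphPad, the following Python sketch outlines a two-way design (biofilm composition x substrate) on log-transformed counts with a Tukey correction; all values are fabricated for illustration and the exact model specification is our assumption, not a reproduction of the authors' Prism setup.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical log10 cell counts per biofilm for two factors:
# biofilm composition (basic vs. +E. faecalis) and substrate (HA vs. dentin).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "log_count": np.concatenate([
        rng.normal(8.5, 0.15, 9),   # basic / HA
        rng.normal(8.2, 0.15, 9),   # +E. faecalis / HA
        rng.normal(8.3, 0.15, 9),   # basic / dentin
        rng.normal(8.2, 0.15, 9),   # +E. faecalis / dentin
    ]),
    "biofilm":   ["basic"] * 9 + ["Ef"] * 9 + ["basic"] * 9 + ["Ef"] * 9,
    "substrate": ["HA"] * 18 + ["dentin"] * 18,
})

# Values below the assay's detection limit would be set to that limit
# before the log transformation, as described in the text.

model = ols("log_count ~ C(biofilm) * C(substrate)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey-corrected pairwise comparison for the biofilm factor
print(pairwise_tukeyhsd(df["log_count"], df["biofilm"], alpha=0.05))
```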
The Addition of Endodontic Pathogens Induced Significant Changes in Cell Counts within Endodontic-Like Biofilms on HA
For this study, a slightly modified version of the in vitro subgingival biofilm described by Guggenheim et al. [24] was used; in the following it is referred to as the “basic” nine-species subgingival biofilm. This “basic” subgingival biofilm consisted of Actinomyces oris, Veillonella dispar, Fusobacterium nucleatum, Streptococcus anginosus, Streptococcus oralis, Prevotella intermedia, Campylobacter rectus, Porphyromonas gingivalis, and Tannerella forsythia. In order to guarantee the reproducibility of the newly established biofilms, all assays were conducted three times in triplicate.
Box plots in Figure 2 demonstrate cell counts per endodontic-like biofilm on pellicle-coated HA discs after analysis by qPCR. To form endodontic-like multispecies biofilms, a total of six endodontic pathogens were separately added to a "basic" nine-species subgingival biofilm (see Methods, Figure 1).
Figure 2. Boxplots demonstrating cell counts per endodontic-like biofilm on pellicle-coated hydroxyapatite discs after analysis by qPCR. To form endodontic-like multispecies biofilms, a total of six bacterial species were added separately to a “basic” nine-species subgingival biofilm. The x-axis of panel (A) shows the strains of the “basic biofilm” (the first column shows total counts (beige) as a control group), while the x-axis of panel (B) again shows total counts (beige) in the first column, as well as the strains of the endodontic species (E. faecalis (blue), S. aureus (dark green), P. nigrescens (red), S. sputigena (orange), P. micra (light green) and T. denticola (pink)). Statistically significant differences between the biofilms with additional strains and the control group (“basic biofilm” or endodontic-like biofilm) are marked with asterisks (* p < 0.05; ** p < 0.01; *** p < 0.001). The internal line represents the median; the whiskers indicate minimum and maximum. The p values (p ≤ 0.05) of the significantly different data are provided. Data derive from three independent experiments, each represented in triplicate biofilm cultures (n = 9).
The Bacterial Composition of Endodontic-Like Biofilms on Dentin Was Also Substantially Affected by the Presence of Endodontic Pathogens
Box plots in Figure 3 demonstrate cell counts per endodontic-like biofilm on dentin discs after analysis by qPCR. To form endodontic-like multispecies biofilms, a total of six endodontic pathogens were separately added to a "basic" nine-species subgingival biofilm.
The total cell counts within the endodontic-like biofilm on dentin discs containing S. sputigena were significantly lower (biofilm 4, p = 0.011) compared to the number of cells per biofilm in the "basic biofilm" (Figure 3, 1st column). The total cell counts of A. oris and V. dispar in the "basic biofilm" did not differ from the total cell counts in the endodontic-like biofilms 1-6 ( Figure 3A). However, F. nucleatum counts decreased for all strains (biofilms 1-3 p < 0.05; biofilm 4 p < 0.0001) except for P. micra and T. denticola (p = 0.950 and p = 0.746, respectively). Similar findings were obtained for cell counts of S. anginosus in biofilms 1-4 (p < 0.0001). Interestingly, the total cell counts of F. nucleatum and S. anginosus were precisely the same as on HA discs.
While the addition of E. faecalis on dentin discs (biofilm 1) did not negatively affect the “basic biofilm”, a positive impact on the growth of P. intermedia on dentin discs could be observed (p < 0.05). As on HA discs, the addition of E. faecalis (biofilm 1), S. aureus (biofilm 2), P. nigrescens (biofilm 3), and S. sputigena (biofilm 4) positively affected the growth of C. rectus (p < 0.0001). The total cell counts of P. gingivalis were significantly lower (p < 0.0001) in biofilm 4 containing S. sputigena than in the “basic biofilm”. The addition of S. aureus (biofilm 2) and S. sputigena (biofilm 4) to the “basic biofilm” induced a substantial decrease (p < 0.0001) in T. forsythia counts. That is reflected by a significant decrease (p = 0.022) of T. forsythia counts in the endodontic-like 15-species biofilm (biofilm 7) in comparison with the T. forsythia counts in the “basic biofilm”.
Regarding the additional species, there was a decrease in E. faecalis and S. aureus counts (p < 0.0001) and an increase in P. micra (p < 0.0001) and T. denticola (p < 0.0001) counts in biofilms 1, 2, 5, and 6, respectively, compared to the counts of these species in biofilm 7 on dentin discs.
Figure 3. Boxplots demonstrating cell counts per endodontic-like biofilm on dentin discs after analysis by qPCR. To form endodontic-like multispecies biofilms, a total of six bacterial species were added separately to a “basic” nine-species subgingival biofilm. The x-axis of panel (A) shows the strains of the “basic biofilm” (the first column shows total counts (beige) as a control group), while the x-axis of panel (B) again shows total counts (beige) in the first column, as well as the strains of the endodontic species (E. faecalis (blue), S. aureus (dark green), P. nigrescens (red), S. sputigena (orange), P. micra (light green) and T. denticola (pink)). Statistically significant differences between the biofilms with additional strains and the control group (basic biofilm or all-species biofilm) are marked with asterisks (* p < 0.05; ** p < 0.01; *** p < 0.001). The internal line represents the median; whiskers indicate minimum and maximum. The p values (p ≤ 0.05) of the significantly different data are provided. The data were derived from three independent experiments, each represented in triplicate biofilm cultures (n = 9).
Different Substrates Did Not Affect the Composition of the Endodontic-Like Multispecies Biofilms
In the box plots in Figure 4, cell counts per endodontic-like biofilm on HA and dentin discs after analysis by qPCR are shown. Regarding total counts, only the “basic” subgingival biofilm showed a significant reduction of cell counts (p = 0.019) when grown on dentin, whereas total counts of the endodontic-like multispecies biofilms were not affected by the different substrates.
Figure 4. Boxplots demonstrating total cell counts per endodontic-like biofilm on HA and dentin discs after analysis by qPCR. To form endodontic-like multispecies biofilms, a total of six bacterial species were added separately to a “basic” nine-species subgingival biofilm. The x-axis shows endodontic-like biofilms on HA and dentin discs. Statistically significant differences between the total counts on the two substrates are marked with asterisks (* p < 0.05). The internal line represents the median; whiskers indicate minimum and maximum. The p values (p ≤ 0.05) of the significantly different data are provided. The data were derived from three independent experiments, each represented in triplicate biofilm cultures (n = 9).
Figure 5 shows CLSM images of endodontic-like ten-species biofilms 1-6 grown on HA discs following FISH using FITC- and Cy3-labelled probes (see Table 2). Enterococcus faecalis (Figure 5A) cells seem to build aggregates in the ten-species endodontic-like biofilm (biofilm 1). Figure 5B shows S. aureus situated on the bottom of the biofilm and forming microcolonies. Prevotella nigrescens (Figure 5C) and P. micra (Figure 5D) seem to be scattered throughout the biofilm. The same applies to S. sputigena (Figure 5E), which, however, forms larger aggregates more or less scattered throughout the biofilm. Treponema denticola (Figure 5F) seems to be spread in low amounts on the bottom of biofilm 6. Figure 6 shows CLSM images of endodontic-like ten-species biofilms 1-6 grown on dentin discs following FISH using FITC- and Cy3-labelled probes (see Table 2) and highlights the fact that dentin tubules are colonized by bacteria. Figure 6A,C show dentin tubules filled with cells of E. faecalis and S. sputigena, respectively. Prevotella intermedia cells cannot be seen; it seems that they did not invade the dentin tubules. Figure 6B shows cells of S. aureus on the bottom of the biofilm at the interface with the dentin tubules. Figure 6D clearly shows cells of P. nigrescens (arrows) within the dentin tubules. Figure 7 shows CLSM images of endodontic-like 15-species biofilms (biofilm 7) grown on HA discs following FISH. Figure 7A,B show P. intermedia bacteria forming aggregates in the middle of the biofilm surrounded by F. nucleatum. Enterococcus faecalis, P. micra and S. aureus grow homogeneously scattered throughout the biofilm (Figure 7C,D). Figure 7E,F show P. nigrescens, S. sputigena and T. denticola forming larger aggregates. Interestingly, aggregates of P. nigrescens and T. denticola could be observed in the immediate vicinity of each other. Finally, many FISH-labeled bacteria, namely P. gingivalis, T. forsythia, P. intermedia, F. nucleatum, and C. rectus, were visualized in biofilm 7 (Figure 7G,H). It seems that P. gingivalis was located at the top of the biofilm, while T. forsythia was situated at the bottom of the biofilm. Prevotella intermedia could be visualized in the intermediate layer of the biofilm, together with F. nucleatum and C. rectus (Figure 7H).
Figure 5. CLSM images of endodontic-like ten-species biofilms grown on HA discs following FISH (probes, see Table 2). To form endodontic-like multispecies biofilms, a total of six bacterial species were added separately to a “basic” nine-species subgingival biofilm. The resulting biofilms 1-6 contained additionally E. faecalis (A), or S. aureus (B), or P. nigrescens (C), or P. micra (D), or S. sputigena (E), or T. denticola (F). Prevotella intermedia appears green (FITC-labeled) and the newly added bacteria appear red (Cy3-labeled). Non-hybridized bacteria appear blue due to DNA staining (YoPro 59). Scale bar = 10 µm.
Figure 6. CLSM images of endodontic-like ten-species biofilms grown on dentin discs following FISH (probes, see Table 1). Prevotella intermedia appears green (FITC-labeled) and the newly added bacteria appear red (Cy3-labeled). Non-hybridized bacteria appear blue due to DNA staining (YoPro 59). Images were taken at the biofilm base showing dentinal tubules. The arrows indicate bacteria adhered in tubules. Scales = 20 µm (A,B) and 10 µm (C,D).
Discussion
In this study, new endodontic-like multispecies biofilm models (ten-species biofilms 1-6, 15-species biofilm 7) were formed for the first time and the role of different substrates on biofilm formation was investigated. Mixed-species biofilms are the dominant form in nature and are also prominent in the oral cavity, as more than 700 microbial species inhabit this environment [43]. These biofilms resemble multi-cellular organisms and are characterized by their overall metabolic activity arising from multiple cellular interactions. The development of a mixed-species biofilm is influenced by its species and by interactions between these microorganisms. Cell-cell communication or quorum sensing mediated by signal molecules can affect such interactions within mixed-species biofilms, e.g., by altering gene expression, which can result in synergistic or antagonistic interbacterial interactions [44][45][46]. For instance, two bacterial species that are involved in periodontitis and endodontitis, Treponema denticola and Porphyromonas gingivalis, displayed synergistic effects in in vitro biofilm formation [47]. Competition among species in a mixed biofilm can also be influenced by environmental conditions, e.g., through the production of antistreptococcal bacteriocins [48]. The necessity of endodontic multispecies biofilm models to study the complex interspecies interactions in endodontic diseases has already been underlined in the literature [19]. Supragingival, subgingival, and endodontic biofilms constitute very complex, organized entities, and it is difficult, if not impossible, to duplicate their characteristics in in vitro experiments. The complexity is not only related to the nature of the biofilm, but also to the complex anatomy, which houses tissue along with biofilms [19]. Biofilm models developed in Zurich stand out due to their exceptional reproducibility for applications with direct or indirect impact on prophylactic dentistry, such as studies of the spatial arrangement and associative behavior of various species in biofilms [24][25][26]32,[49][50][51][52][53][54]. The overall physiological parameters of multispecies biofilms can be measured quite accurately, but it is still impossible to assess the multitude of interactions taking place in such complex systems [50]. In this study, an endodontic-like multispecies biofilm was used containing representative organisms found in supragingival, subgingival and endodontic-like biofilms to enable comparison of endodontic-like biofilm formation between enamel and dentin surfaces. To form endodontic-like multispecies biofilms, a total of six endodontic pathogens were separately added to a “basic” nine-species subgingival biofilm. The multispecies biofilm formation on pellicle-coated HA and dentin discs was compared, showing strong similarities with regard to the cell counts per biofilm. Regarding total counts, none of the endodontic-like biofilms 1-7 showed a significant difference between the two substrates; only the “basic” nine-species subgingival biofilm showed reduced total counts on dentin. A study by Jung et al. [55] showed that bacterial colonization was higher on dentin than on enamel; however, this was an in situ study and investigated initial colonization.
By adding six different strains of bacteria one by one, we observed different effects of the added strains on the “basic biofilm”. For example, the addition of E. faecalis negatively affected the growth of A. oris, F. nucleatum, S. anginosus and P. gingivalis in biofilm 1 on HA-discs. However, on dentin discs, E. faecalis negatively affected the growth of F. nucleatum and S. anginosus. Previous clinical studies showed a significant association between E. faecalis and asymptomatic primary endodontic infections [56], although E. faecalis is known for persisting in endodontic infections associated with root-filled teeth [56,57]. This microorganism was also found as a monospecies infection even after intracanal medication. The high persistence of E. faecalis can be attributed to its natural adaptation to adverse ecological conditions in the root canals [58,59] or to the formation of biofilms [60,61]. A previous report by Chávez de Paz et al. [62] showed that different E. faecalis strains differ in their capacity to produce different proteases depending on their origin and to suppress the growth of other species in multispecies biofilms [59]. In the present study, we used the vancomycin-sensitive strain E. faecalis ATCC 29212, which serves as a representative control strain in many in vitro trials, because of its availability in our lab. Regarding the E. faecalis-associated suppression of growth of other species within a biofilm, our findings are in line with the results of previous research using the oral strain OG1RF [59]. In our study, E. faecalis ATCC 29212 seems to have suppressed the growth of other oral species within the basic biofilm on HA-discs; the total cell counts within biofilm 1 containing E. faecalis were significantly lower compared to the cell counts in the “basic biofilm” without E. faecalis.
Furthermore, the addition of E. faecalis negatively affected the growth of A. oris, F. nucleatum, S. anginosus and P. gingivalis in biofilm 1 on HA-discs. A similar finding regarding A. oris was highlighted by Thurnheer and Belibasakis [40] after studying the incorporation of E. faecalis into supragingival biofilms on HA-discs. In a previous study by Ran et al. [63], it was observed that E. faecalis cells were able to form biofilms despite the nutrient reduction in the local microenvironment. Moreover, the hydrophobicity of E. faecalis cells increased under starvation conditions, and biofilm-related gene transcription was triggered by oxygen/nutrient deprivation. Our findings seem to be in line with this study concerning nutrient supply. Specifically, E. faecalis cell counts increased in the 15-species biofilm (biofilm 7) compared to the 10-species biofilm (biofilm 1), both on HA discs and dentin discs. This finding is illustrated in Figure 7C,D, which show a high number of E. faecalis cells within biofilm 7.
Likewise, the addition of S. aureus to the “basic biofilm” negatively affected the growth of A. oris, F. nucleatum, S. anginosus, P. gingivalis and T. forsythia on HA-discs. Interestingly, S. aureus yielded similar effects on F. nucleatum, S. anginosus and T. forsythia when grown on dentin discs. In previous studies, S. aureus was identified in samples from infected root canals of teeth associated with endodontic abscesses [64], as well as in samples from healthy periodontal tissues, representing a source for systemic infections [65]. Previous research by Thurnheer and Belibasakis [32] on the growth of S. epidermidis on HA and titanium in a biofilm model for peri-implantitis showed that S. aureus possessed the ability to outcompete other oral bacterial species. In line with this finding, Makovcova et al. [66] also noticed a general competition between S. aureus and Gram-negative bacteria in vitro. The group stated that S. aureus grew in smaller clusters in mixed-species biofilms than in S. aureus monospecies biofilms. However, we showed the opposite effect of a large bacterial consortium (15-species versus 10-species biofilms) on S. aureus on dentin discs (Figure 3). In particular, S. aureus showed higher growth in 15-species biofilms than in 10-species biofilms. This outcome confirms the synergistic interactions between S. aureus and other species in polymicrobial biofilms, as already described by Giaouris et al. [67].
In contrast to S. aureus, Prevotella nigrescens counts decreased in the 15-species biofilm compared to the 10-species biofilm (biofilm 3), both on HA discs and dentin discs. Prevotella nigrescens is a black-pigmented bacterium often detected in endodontic infections. To discriminate it from P. intermedia, SDS-PAGE was used [68,69]. For decades, P. intermedia had been assumed to be the most frequently detected species associated with endodontic infections [68,69]. Prevotella nigrescens is a Gram-negative anaerobic bacterium and, together with P. intermedia and P. gingivalis, is associated with necrotic pulp tissue [70]. Prevotella nigrescens subsists on glucose [71], and its decrease may be related to the relatively higher supply of glucose in biofilm 3 compared to biofilm 7. The glucose catabolism of P. nigrescens may induce a decrease in the pH of the biofilm [71]. In contrast to P. nigrescens, P. gingivalis is an asaccharolytic bacterium whose growth does not depend on fluctuations in glucose but on the supply of amino acids and haemin [72]. In the absence of glucose, both P. intermedia and F. nucleatum produced acid-neutralizing metabolites leading to increased pH, as shown earlier in an in vitro study by Takahashi et al. [73]. It was observed that a basic pH, as can be found in in vivo subgingival biofilms, supported the growth of P. gingivalis [73]. In the current study, however, in a glucose-containing medium, the addition of P. nigrescens to the “basic biofilm” negatively affected the growth of P. gingivalis on HA-discs. Thus, the low growth of P. gingivalis might have been negatively influenced by the pH decrease reinforced by the addition of P. nigrescens. Moreover, there was an overall decrease of the P. gingivalis counts in biofilms 1-6 compared to the P. gingivalis counts in the “basic biofilm” on HA discs. In the same way, there was a decrease of P. gingivalis counts in biofilm 7 (all species) compared to the “basic biofilm”. These findings may be related to the pH decrease caused by the presence of glucose. In contrast to glucose, haemin [72] and amino acids are the nutrients for P. gingivalis, and these are depleted faster in a larger consortium of species. Therefore, these results may rather be based on the depletion of resources in biofilms 1-7 compared to the “basic biofilm”.
In a similar manner to P. nigrescens, Selenomonas sputigena negatively affected the growth of F. nucleatum, S. anginosus and P. gingivalis on HA-discs. On dentin discs, the same effects were observed for these species, as well as for S. oralis and T. forsythia. Furthermore, the addition of S. sputigena (biofilm 4) negatively affected the total bacterial cell counts in the endodontic-like biofilm on dentin discs compared to the counts in the “basic biofilm”. These results might correspond to the findings by Rocas et al. [74], who detected S. sputigena in symptomatic cases of endodontic infections associated with a sinus tract. Indeed, Selenomonas sputigena was identified in a different community than, e.g., Streptococcus spp., F. nucleatum, P. gingivalis, and T. forsythia [74].
Tannerella forsythia had the greatest difficulty establishing itself in the “basic biofilm” as well as in biofilms 1-6, on HA and on dentin discs. These results correlate with the findings made by Guggenheim et al. [24] using a subgingival in vitro biofilm model. Furthermore, Zhu et al. [75] reported similar observations regarding the counts of T. forsythia grown in a flow cell system together with T. denticola and P. gingivalis. Given that T. forsythia grew well as a single species in a planktonic state, our findings might confirm T. forsythia's general difficulty in establishing itself in polymicrobial in vitro biofilm models.
Together with P. gingivalis and T. forsythia, Treponema denticola constitutes the red-complex bacteria, and it has been detected in necrotic pulps associated with swelling caused by primary endodontic infections [76]. Previous research suggested that the strong synergistic association between T. denticola and P. gingivalis is based on the motility of T. denticola [75]. In fact, it was shown that fibrilin binds to the dentilisin of T. denticola, enabling the coaggregation of these two species inside periodontal pockets and resulting in an up-regulation of the fibrilin gene [77]. CLSM images of a 10-species subgingival biofilm model by Ammann et al. [23] showed T. denticola growing loosely in the top layer along with P. gingivalis. However, our images showed T. denticola situated in the intermediate layer and on the bottom of the biofilm (Figure 5F), building star-shaped clusters in proximity to P. nigrescens (Figure 7C) (P. gingivalis cannot be distinguished from the other bacteria in this figure). Surprisingly, the addition of T. denticola to the “basic biofilm” negatively affected only the growth of P. gingivalis on HA-discs (biofilm 6). A previous metatranscriptome study demonstrated that the gene expression of T. denticola differs dramatically between in vitro and in vivo conditions [77]. This finding may explain the absence of in vitro synergistic interactions between these two species. However, as mentioned before, the addition of the endodontic strains to the “basic biofilm” on HA discs (biofilms 1-5) had a similar effect on P. gingivalis. Previous research by Neilands et al. [78] showed that P. micra enhanced the growth of P. gingivalis in 10% serum. This finding, however, could not be observed in the present study.
In addition to P. gingivalis, Streptococcus anginosus showed a similar behavior in biofilms 1-4 upon addition of the endodontic species. Previous studies showed that S. anginosus depends on glucose and amino acids for its homeostasis and has a slow metabolism regarding recovery from nutrient deprivation [79]. Even if this is a survival strategy in the oral cavity, where nutrient intake varies, it might have been a disadvantage for cell growth in this study. The growth of S. anginosus dropped significantly in the 15-species biofilm compared to the “basic biofilm” (both on HA and dentin discs). Munson et al. [28] suggest that species with a fast metabolism inhibit species with a slow metabolism through the release of metabolic products. This may explain the reduced cell counts of S. anginosus in our polymicrobial in vitro biofilm model containing 15 different species.
Only two of the "basic strains" were positively affected by the addition of the new species, namely C. rectus on both HA and dentin discs and P. intermedia on dentin discs only. In previous studies, C. rectus was detected in primary endodontic infections associated with periradicular lesions [80]. Furthermore, C. rectus was positively associated with P. endodontalis, P. micra, S. sputigena, F. nucleatum, and Actinomyces sp., probably due to the production of growth factors such as formate [80]. This may explain why the addition of E. faecalis, S. aureus, P. nigrescens and S. sputigena enhanced the growth of C. rectus both on HA and on dentin discs.
In order to examine endodontic biofilm architecture, Ricucci et al. [18] analysed the biofilm of apical periodontitis in extracted teeth. The authors could not find a morphological pattern in this biofilm regarding the composition of bacteria (cocci, rods, filaments), the amount of extracellular matrix, or the extent of the biofilm in the root canal. Examining subgingival biofilm formation on natural teeth, Zijnge et al. [39] found four layers, beginning with the early colonizers Actinomyces sp. in the basal layer. Periodontal Gram-negative pathogens such as P. gingivalis, P. intermedia, P. endodontalis and P. nigrescens were found in the top layer, and spirochetes could be detected outside of the biofilm. Similarly, our Figures 5D and 7C show P. intermedia, together with the Gram-positive P. micra, established at the top of the biofilm (Figures 7G, 7H). Fusobacterium nucleatum, previously detected in the intermediate layer [39], is a bridge-building microorganism [9] that facilitates the binding of initial colonizers to late colonizers such as P. gingivalis, P. nigrescens and P. intermedia [81,82]. Accordingly, in the present report, CLSM images show aggregations of P. intermedia cells in the center of the biofilm surrounded by cells of F. nucleatum (Figure 7A) and C. rectus (Figure 7H). Another CLSM image (Figure 6D) shows P. nigrescens at the bottom of the biofilm, at the interface with dentin, invading dentin tubules, while P. intermedia is distributed homogeneously throughout the biofilm (Figures 5, 6, 7), especially in the intermediate biofilm layer. Again, the cells of T. forsythia could be detected at the bottom of our multispecies biofilm model (Figures 7G, 7H). This was not in accordance with the spatiotemporal model of oral bacterial colonization and previous findings that pathogens like T. forsythia were mostly present as microcolonies in the top layer of biofilms [39].
In addition to P. nigrescens, other bacterial species, such as E. faecalis, S. aureus and S. sputigena, were detected at the openings of dentin tubules (Figure 6). Jung et al. [55] used FISH/CLSM to visualize the colonization of dentin tubules by bacteria in situ. Previous research by Love [83] demonstrated in vitro the adhesion of E. faecalis to tooth roots in medium containing human serum. Interestingly, E. faecalis invaded dentin tubules by adhering to exposed unmineralized collagen [83]. In another study, Sum et al. [84] showed that the adherence of E. faecalis to collagen varies and can be enhanced by chemical alteration of the dentin surface.
There are several limitations associated with the use of in vitro biofilms, such as the lack of a host defence system, necrotic tissue, innervation and living odontoblasts [85]. Even with modern techniques, it is still not possible to say whether in vitro biofilms consist of the same extracellular matrix as the corresponding in situ biofilms [19]. There has been an attempt to mimic the environment within dentin tubules and root canals using only human serum as a medium, but this condition decelerated the growth of E. faecalis [83]. Thus, further studies on endodontic biofilms using diverse media are needed in order to come even closer to in vivo conditions. Even though CLSM is the method of choice for visualizing the biofilm matrix, it does not provide detailed information about the ultrastructure of the biofilm because of its limited magnification [86]. Thus, combining image data from different methods would provide a more accurate picture of biofilm architecture than FISH/CLSM alone [86,87]. A limitation of qPCR is that it amplifies all target DNA, including that from non-viable cells. Amplification of DNA from dead cells can be inhibited by coupling qPCR with propidium monoazide [88,89]. Real-time qPCR has been described as the gold standard for RNA quantification, although the reproducibility and reliability of this method have been questioned [90,91]. The present study introduces a new endodontic-like multispecies biofilm. The findings can be used in endodontic research for testing new antimicrobial agents and for simulating the endodontic flora in various endodontic applications. Furthermore, they enable researchers to test the effects of different antibiotics on endodontic biofilms under in vitro conditions. Further work is needed to depict the microscopic architecture of our endodontic-like multispecies biofilm model and to explore the complex interspecies interactions in endodontic disease. We also have to keep in mind that not only the species composition but also the gene expression can change in different biofilms. However, the study of changes in the metatranscriptome of the biofilms was beyond the objectives of this work and is the focus of future studies.
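Before turning to the conclusions, the qPCR caveat mentioned above can be made concrete. The minimal sketch below is an illustration rather than the protocol used in this study: it shows how copy numbers are typically derived from Ct values via a log-linear standard curve, and how a viability ratio could be formed by comparing a propidium monoazide-treated aliquot with an untreated one. The slope, intercept and Ct values are hypothetical placeholders.

```python
# Illustrative sketch only: standard-curve qPCR quantification and a PMA-based viability ratio.
# All numbers (slope, intercept, Ct values) are hypothetical, not data from this study.

def copies_from_ct(ct: float, slope: float = -3.32, intercept: float = 38.0) -> float:
    """Convert a Ct value to target copies using a log-linear standard curve:
    Ct = slope * log10(copies) + intercept  ->  copies = 10 ** ((Ct - intercept) / slope).
    A slope of about -3.32 corresponds to roughly 100 % amplification efficiency."""
    return 10 ** ((ct - intercept) / slope)

def viable_fraction(ct_pma: float, ct_total: float) -> float:
    """Fraction of the signal attributable to membrane-intact cells:
    copies measured after PMA treatment divided by copies in the untreated aliquot."""
    return copies_from_ct(ct_pma) / copies_from_ct(ct_total)

if __name__ == "__main__":
    # Hypothetical sample: untreated Ct = 22.0, PMA-treated Ct = 23.5
    total = copies_from_ct(22.0)
    intact = copies_from_ct(23.5)
    print(f"total copies ~ {total:.2e}, PMA-protected copies ~ {intact:.2e}")
    print(f"estimated viable fraction ~ {viable_fraction(23.5, 22.0):.2f}")
```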
In conclusion, the present study shows the successful incorporation of six endodontic bacteria into an existing subgingival nine-species biofilm model. The counts of five out of nine strains of the "basic biofilm" tended to decrease upon the addition of some of the endodontic pathogens on HA discs. Only the counts of C. rectus increased upon addition of E. faecalis, S. aureus, P. nigrescens, and S. sputigena. On dentin discs, C. rectus and P. intermedia counts increased upon addition of the mentioned strains or of E. faecalis alone, respectively. Based on this study, future investigations of endodontic infections can rely on this newly established endodontic-like multispecies biofilm model.

Author Contributions: D.L. conducted the experiments, analyzed the data and wrote this manuscript. L.K. was involved in the data analysis and manuscript drafting. M.F. was involved in the experiments, the data analysis and manuscript drafting. T.A. critically reviewed the manuscript. T.T. conceived the idea for this manuscript and critically reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding:
The study was supported by Institutional funds of the University of Zurich.
|
v3-fos-license
|
2019-05-10T13:08:51.246Z
|
2015-04-30T00:00:00.000
|
151697697
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://journals.uran.ua/index.php/1991-0177/article/download/41679/38558",
"pdf_hash": "c24c897d82dea9c488b53eeabd145d44a9822bf8",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43090",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"sha1": "6168111462b09fd0e38c80ea41c841c5d323056b",
"year": 2015
}
|
pes2o/s2orc
|
Problems of organizing the physical education of university students in accordance with their interests, level of physical fitness and individual physical development
Purpose: to substantiate the construction of a system of physical education for university students that takes into account current norms and standards of physical development. Material and Methods: analysis of literature sources and public documents relating to the organization of physical education in Ukraine. Results: the analysis of the literature on this problem supports the theoretically grounded position that the human body is an outward display of metabolic processes and forms the basis of preclinical diagnosis, reflecting the individual structural features of the somatotype; this allows clinical anthropometry to be used as a method of establishing a person's physical development at any age. Conclusions: the level of physical fitness, with its division into general and special fitness, remains the unsolved problem in individualizing the process of physical education. This is due to the lack of tests that assess the quantitative and qualitative characteristics of physical fitness while taking into account the individual predisposition of the morphofunctional organization of the somatotype to preferred forms of physical activity, which determines the inclination to practice them.
Introduction.
The health of every citizen is an asset of the state and reflects the level of its labor and defence potential. The importance of the physical training of the younger generation, and the need to control the level of physical fitness of various population groups, has been repeatedly emphasized in a number of documents [1]. Students of higher education institutions are one such group. A high level of physical development and an appropriate level of physical fitness ensure effective learning during their studies and effective professional activity afterwards. This is a rather complex problem that has to be solved in each socio-economic period of the state's development, depending on the general level of culture in that period. Physical culture, as an integral component of the overall culture of a society, reflects the level of its social development [2]. Like culture in general, physical culture has its constituent parts, namely material, spiritual and physical components. The interdependence of these components determines how effectively physical culture can be provided at each stage of physical development [3]. Each of these components represents an independent scientific direction of research, whose resolution assumes that the other, interdependent components of physical culture are adequately addressed. The physical component includes physical development, which requires a volume of motor activity appropriate to a given age and a corresponding hygiene of nutrition in every period of age-related development [4]. Of these constituents, the present research is oriented towards the efficient organization of the motor activity necessary for the normal physical development of student youth.
Communication of the research with scientific programs, plans, subjects. It is assumed that the problem of the hygiene of nutrition is solved in accordance with the requirements of the age period of this contingent. In practice, the objective belongs to the problem of providing a healthy lifestyle for student youth and is connected with the implementation of the Consolidated plan of research works of the Ministry of Family, Youth and Sport of Ukraine, No. 0111U001206 (2013-2014) and No. 0111U000192 (2011-2015).
The objective of the research: to substantiate the construction of a system of physical education for students of higher education institutions, taking into account modern norms and standards of physical development.
The tasks of the research: to establish the basic provisions defining the structure of the construction of physical education for student youth.
Material and methods of the research: analysis of literature sources and state documents concerning the organization of physical education in Ukraine.
Results of the research and their discussion. The organization of physical education aimed at full physical development has to take into account the individual features of its course. Motor activity plays the defining role in this process. In phylogeny, the leading driver of the improvement and development of biological organisms is motor activity, which determines not only the formation of complex forms of locomotor activity but also the development and formation of the systems of their control and trophic support. The deepest foundation of this process is presented in the work of academician E. K. Sepp, "History of the development of the nervous system in vertebrates" [5]. Locomotion formed the basis for the development of the sensory systems that assess the external and internal environment and of the neurohumoral systems that coordinate their equilibrium, that is, of the whole adaptive process and the improvement of the organism as a whole. The formation of the organism in ontogenesis, fully reflecting the needs of its phylogenetic development, proceeds under the direct requirement of motor activity adequate to each age period. Both hypodynamia and hyperdynamia act equally adversely in this respect [6]. The main objective is therefore to establish the optimal volume of age-appropriate physical activity. An equally important factor is taking into account the individual predisposition to its content, that is, the arsenal of physical exercises available to each individual and the mode of their performance. The solution of this question is closely connected with the constitutional features of the somatotype. Hippocrates and Aristotle paid much attention to the importance of the structure of the body and the features of its physical development, connecting the structure of the body with the degree of resistance of the organism to various environmental factors. Their humoral theory of the formation of the body formed the basis of the theory of the constitutional predisposition of the somatotype to certain diseases, which in turn served the development of the theory of prenosological diagnosis [7]. The further development of this direction led to the formation of medical constitutional anthropometry, founded on the fact that the human body is an external display of metabolic processes and can serve as the most effective basis for prenosological forecasting. This statement is fundamental to the assessment of the individual features of physical development and a necessary basis for constructing physical preparation that takes into account the individual capabilities and requirements of the organism [8-10]. With respect to the organization of the physical education of student youth and the choice of means of their physical preparation, there is the sensitive issue of assessing their prior physical fitness and current physical state. This in turn requires age norms of physical development, standards for its assessment, control tests, and methods of systematic monitoring of physical development, physical fitness and physical state [11; 12]. The order "On the approval of the concept of the Nation-wide target social program of the development of physical culture and sport for 2012-2016" was adopted by the Cabinet of Ministers of Ukraine on August 31, 2011, No. 828-r.
It states that the way of life of the population of Ukraine and the condition of the sphere of physical culture and sport create a threat and constitute an essential challenge for the Ukrainian state at the present stage of its development. The main reasons are: the demographic crisis, reflected in the reduction of the population of Ukraine; the absence of settled traditions of, and motivation for, physical education and mass sport as the most important factor of physical and social wellbeing, improvement of health, maintenance of a healthy lifestyle and increase in its duration; the deterioration of the health of the population, with sharply progressing chronic heart disease, hypertension, neurosis, arthritis, obesity and other diseases, which reduces the number of persons who can be attracted to elite sport and who are capable of sustaining the considerable physical loads necessary for achieving high sports results; an increase, in comparison with 2007, in the number of persons assigned to the special medical group; and a discrepancy with present-day requirements together with an essential lag behind international standards in resource, personnel, scientific-methodical, medico-biological, financial, material and information support. At present there is no uniform national system for recording the level of physical development, physical fitness and physical state of the population. The only established system for assessing the level of physical development was developed in Japan in 1964 by K. Hirata [13]. Such systems are being developed intensively in the People's Republic of China and in Russia. It should be noted that in the 1930s such a system was developed in Russia, but for a number of objective reasons it did not receive further development [14]. In Ukraine, this problem is being developed under the consolidated plans of scientific work of the Ministry of Education and Science of Ukraine in the field of family, youth and sport within the following subjects: "Theoretical and applied basis of the construction of physical development, physical fitness and physical state of different groups of the population" (No. 0111U01206, 2013-2014) and "Theoretic-methodological basis of the construction of a mass control and an assessment of the level of physical development and physical fitness of different strata of the population" (No. 0111U00192, 2011-2015). However, it should be noted that all existing and developing systems for assessing physical development, physical state and physical fitness are constructed on the basis of averaged data, which is an essential shortcoming, especially in countries with substantial climatic and geographical differences between regions. This fact was convincingly demonstrated by the work of the Novosibirsk academy of sciences comparing the norms and criteria for assessing the physical health of natives of the Asian North and of the European regions of the USSR [15]. If one proceeds from the position that the structure of the constitution is an external display of metabolic processes, this fact was taken into account in anthropometric research even earlier [16; 17]. At the modern level, when raising the question of the need for the organization of physical education to consider the individual features of physical development and the level of physical fitness, it is necessary to proceed from the construction of an individual norm of physical development.
This problem was actively studied in the 1980s. Despite theoretical developments that reasonably revealed the essence of the problem, it found no practical application in the theory of organizing the system of physical education [18]; this calls for the development of methods for using it to control the physical development of the monitored contingent and for introducing it into the organization of physical education of students. One of the reasons for this state of the question is that individual physical development is connected with the process of physiological maturation of the morphofunctional systems of the organism. The timing of this process defines the biological age of an individual, which in a significant number of cases does not coincide with the chronological age [19]. The existing methods of its determination yield rather contradictory results, assigning different biological ages to the same individual. The assessment of biological age rests on the time of maturation of the morphofunctional indicator chosen for control. Height, weight, the ossification process, the eruption of teeth and various nonspecific reactions of the organism belong to such indicators; there are now more than one hundred and fifty of them. The discrepancy between indicators for the same individual when assessing his biological age testifies that the speed of maturation of the various functional systems ensuring the physical development of a specific individual can show a certain mismatch. This effect generates their inconsistency and is manifested in the allometry of the form-building process of the somatotype [20]. Geoffroy Saint-Hilaire, considering this question, drew attention to the need to distinguish growth and shaping within the process of physical development. The growth of form-building body mass is in fact the main indicator of biological development, and it most substantially reflects biological age. If body mass is determined for a population of one chronological age, the established average value will reflect the most characteristic body mass, which is defined by the characteristic biological age. Relative to this body mass, all other individuals of the same chronological age can be divided into those lagging behind and those advancing in the speed of biological maturation. In its most generalized form, this approach defines the minimum sufficient basis for an unambiguous determination of biological age and an assessment of the level of physical development. The qualitative characteristic of allometric deviations connected with a mismatch in the maturation of morphofunctional formations can be reflected with the necessary degree of detail by increasing the set of features and the accuracy of their measurement, as stated in detail in the research conducted at the Kharkov State Academy of Physical Culture. The determination of the level of physical state is the second component of the organization of physical education of students that takes into account their individual features and, on this basis, assesses the measure of their preparedness for performing physical activity of a certain intensity, volume and corresponding qualitative orientation. For this purpose, the most expedient approach is the use of nonspecific reactions of the organism, which act as an integrated indicator of the organism's reaction to various environmental factors.
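The cohort-referenced reasoning described above, namely taking the mean body mass of a single chronological-age cohort and classifying each person as lagging behind or advancing in biological maturation relative to it, can be written as a small procedure. The sketch below is only an illustration of that idea; the 5% tolerance band, the function name and the sample masses are assumptions introduced for demonstration, not values taken from the cited works.

```python
"""Illustrative sketch: classify individuals of one chronological age relative to the
cohort-mean body mass, used here as a crude proxy for the pace of biological maturation.
The tolerance band and the sample data are arbitrary assumptions."""

from statistics import mean

def classify_by_cohort_mass(masses_kg: dict[str, float], tolerance: float = 0.05) -> dict[str, str]:
    # Mean body mass of the cohort serves as the "most characteristic" reference value.
    cohort_mean = mean(masses_kg.values())
    labels = {}
    for person, mass in masses_kg.items():
        if mass < cohort_mean * (1 - tolerance):
            labels[person] = "lagging maturation (below cohort-typical mass)"
        elif mass > cohort_mean * (1 + tolerance):
            labels[person] = "advanced maturation (above cohort-typical mass)"
        else:
            labels[person] = "close to cohort-typical mass"
    return labels

if __name__ == "__main__":
    cohort = {"A": 62.0, "B": 71.5, "C": 68.0, "D": 80.2}  # hypothetical 18-year-olds
    for person, label in classify_by_cohort_mass(cohort).items():
        print(person, "->", label)
```

In practice such a one-indicator split would be refined by the larger feature sets and measurement accuracy discussed above, but it captures the minimum sufficient logic of the cohort-referenced approach.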
With the existing hardware, the most effective method of assessing the physical state is monitoring the characteristics of the cardiovascular system, in particular the frequency and amplitude characteristics of the cardiac signal and the changes in arterial pressure measured simultaneously on the left and right arm, with four of its indicators assessed together in a unified coordinate system of representation [21]. For determining the level of physical fitness, tests assessing general and special physical fitness are required. A unified state system of such tests does not exist. However, this task can be solved using the rather large arsenal of existing tests that assess physical qualities, the level of their manifestation and methods of monitoring the current state. In relation to the student youth of higher educational institutions, it is necessary to consider the specifics of their activity, which are regulated by the features of the organization and course of the educational process. Student activity lasts five years and occupies the most vital period of life. The organization of physical education during these five years plays an especially significant role, since it is necessary not only to maintain a high level of viability of the organism but also to prepare it for the specifics of the forthcoming professional activity, which in most cases differs significantly from the rhythm and specifics of life during the student years. This is the essence of the special physical preparation of the organism of the future specialist, whose activity will proceed in an essentially different, professional and production environment. In practice, this task is not only unsolved in any of the existing higher education institutions, it is not even posed, despite its vital importance for preserving the duration of effective productive activity. One of the factors in solving this question is instilling, during the student years, a deep understanding of the importance of physical activity for the preservation of physical health, together with the knowledge necessary to perform this task in the changing conditions of the forthcoming activity.
Conclusions. The analysis of the literature on the problem under consideration allows us to regard as theoretically substantiated the provisions that the human body is an external display of metabolic processes and forms the basis of prenosological diagnostics, reflecting the individual features of the structure of the somatotype; this allows clinical anthropometry to be used as a method of establishing a person's physical development at any age.
The assessment of individual physical development connected with the allometry of the formation of the constitution, and the establishment of the biological age of an individual by the methods stated in [4; 6; 7], allow groups of student youth that are uniform in their level of physical development to be formed with the necessary accuracy of similarity of somatotype structure.
The established consistent patterns of behavior of the cardiovascular system, as a nonspecific integrated reaction to the influence of alternating environmental factors presented in [12; 21], make it possible to determine the current physical state of an individual and thereby to define the optimal conditions of his functional loading.
The level of physical fitness, with its division into general and special fitness, remains the most unresolved task in the organization of the individualization of the process of physical education. This is connected with the lack of a system of tests that would allow the quantitative and qualitative characteristics of physical fitness to be estimated while taking into account the individual predisposition of the morphofunctional organization of the somatotype to preferred forms of physical activity, which determines the inclination to practice them.
The lack of such tests and of the necessary standards and norms defines the further orientation of the research conducted on the declared subjects of the scientific work performed.
|
v3-fos-license
|
2023-10-02T13:42:06.873Z
|
2023-10-02T00:00:00.000
|
263311739
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jenci.springeropen.com/counter/pdf/10.1186/s43046-023-00192-1",
"pdf_hash": "20b4aea08cef4ca6442f28350547f4b11df3bec9",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43091",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "1bf3747fbe95ad3347e74fb784612bdfbf012fa7",
"year": 2023
}
|
pes2o/s2orc
|
Immunotherapeutic strategy in the management of gastric cancer: molecular profiles, current practice, and ongoing trials
Gastric cancer (GC) is one of the most common solid cancers worldwide. Despite aggressive treatment, the poor clinical outcomes of patients with GC have not improved. Current studies emphasize that targeted therapies or immune response-based therapeutic strategies may be a potential approach to improving clinical outcomes. Moreover, accumulating evidence has reported increasing PD-L1 expression in GC cells and highlighted its role in tumor progression. Great progress has been made with immune checkpoint inhibitors (ICIs), which has changed the clinical practice of GC treatment and prognosis. In addition, combination therapies with targeted therapy or traditional therapies are expected to push the development of immunotherapies forward. In the present review, we predominantly focus on the biomarkers and molecular profiles for immunotherapies in GC and highlight the role and administration of ICI-based immunotherapeutic strategies against GC.
Introduction
Gastric cancer (GC) is one of the most common solid cancers worldwide [1]. Because of late diagnosis, patients with GC often have poor clinical outcomes. Accumulating evidence has revealed that the 5-year survival rate of patients with GC is only 20-30% [2,3]. Currently, immunotherapy has been incorporated into clinical management and is widely explored as a potential therapeutic strategy [3]. Existing immunotherapies can be classified as active or passive approaches. Active immunotherapies utilize the patient's own immune responses to destroy cancer cells, while passive immunotherapies depend on an exogenous agent, such as a targeting antibody, to destroy cancer cells. Immunotherapies against tumors have brought hope to a large number of patients, especially in GC [4]. With breakthrough developments in immunotherapies in both experimental studies and clinical trials, a variety of immunotherapies are available for patients with GC, and novel approaches are under clinical investigation. Among the various immunotherapies, administration of immune checkpoint inhibitors (ICIs) has recently become part of standard-of-care management. In the present review, we predominantly focus on the biomarkers and molecular profiles for immunotherapies in GC and highlight the role and administration of ICI-based immunotherapeutic strategies against GC.
Biomarkers and molecular profiles for immunotherapies in GC
Accumulating evidence has revealed that tumor cells can escape immune responses and surveillance through various mechanisms. Among them, the co-inhibitory pathways mediated by the immunoinhibitory checkpoints programmed cell death protein-1 (PD-1)/programmed death ligand-1 (PD-L1) and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) play a critical role in immunosuppression [5]. PD-1/PD-L1 and CTLA-4 have been used as targets, and immune checkpoint inhibitors (ICIs) have been established to reactivate immune responses within the tumor microenvironment in a variety of cancers. In recent years, immunotherapeutic strategies have been widely accepted in clinical practice as part of first-line treatment against advanced gastric cancer (GC) [6]. Previous studies have reported that different immunotherapeutic strategies show diverse efficacy in patients with GC, which makes it a huge challenge to evaluate the clinical benefits of ICIs. Therefore, it is necessary to determine biomarkers and molecular profiles that predict the efficacy of immunotherapies against GC. So far, there are several potential biomarkers for assessing therapeutic responses after ICI therapy, including PD-L1 levels, tumor mutational burden (TMB), microsatellite instability-high (MSI-H), Epstein-Barr virus (EBV) infection, gut microbiota, and HER2 overexpression [7][8][9].
Evaluation of PD-L1 expression levels
There are three criteria for assessing PD-L1 levels in GC. Of note, the PD-L1 combined positive score (CPS) is a better assessment approach in patients with GC; it has been used as a predictive marker of the efficacy of ICIs for advanced gastric cancer and as a stratification factor in clinical practice. According to the CheckMate-032 study, CPS is a better evaluation criterion than the tumor proportion score (TPS) for assessing PD-L1 expression levels [10]. Moreover, the KEYNOTE-061 study revealed that patients with CPS ≥ 10 benefit most from pembrolizumab [11]. According to the results of the CP-MGAH22-05 study, the dual-positive subgroup (HER2 and PD-L1) presented longer PFS and OS, indicating that the efficacy of the combination of margetuximab and pembrolizumab was related to PD-L1 expression [12]. In contrast, according to the ATTRACTION-2 study, the benefits of nivolumab on overall survival (OS) were not associated with PD-L1 expression levels [13]. In addition, some studies on various solid tumors assessed PD-L1 levels by TPS rather than CPS. CPS is generally assessed using biopsy tissue [14]. Therefore, CPS is not a perfect but still a useful biomarker for patients with advanced GC receiving ICI treatment.
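To make the difference between the two scoring conventions concrete, the short sketch below computes TPS and CPS from raw cell counts. The counts are hypothetical, and the formulas follow the commonly used definitions: TPS considers only PD-L1-positive tumor cells, whereas CPS also counts PD-L1-positive lymphocytes and macrophages, divides by the number of viable tumor cells, multiplies by 100 and is capped at 100.

```python
# Hypothetical example of the PD-L1 scoring conventions (all counts are illustrative only).

def tps(pos_tumor_cells: int, viable_tumor_cells: int) -> float:
    """Tumor proportion score: % of viable tumor cells with PD-L1 membrane staining."""
    return 100.0 * pos_tumor_cells / viable_tumor_cells

def cps(pos_tumor_cells: int, pos_lymphocytes: int, pos_macrophages: int,
        viable_tumor_cells: int) -> float:
    """Combined positive score: PD-L1-positive tumor cells, lymphocytes and macrophages
    relative to viable tumor cells, multiplied by 100 and capped at 100."""
    score = 100.0 * (pos_tumor_cells + pos_lymphocytes + pos_macrophages) / viable_tumor_cells
    return min(score, 100.0)

if __name__ == "__main__":
    # Hypothetical biopsy: 1,000 viable tumor cells, 40 PD-L1+ tumor cells,
    # 80 PD-L1+ lymphocytes, 20 PD-L1+ macrophages.
    print(f"TPS = {tps(40, 1000):.0f}%")        # only 4% of tumor cells stain positive
    print(f"CPS = {cps(40, 80, 20, 1000):.0f}")  # 14, i.e. above a CPS >= 10 cut-off
```

In this hypothetical specimen the TPS is low yet the CPS reaches the ≥ 10 range, illustrating how CPS captures immune-cell PD-L1 expression that TPS ignores.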
Tumor mutational burden (TMB)
TMB is a useful biomarker of ICI response in a wide range of solid tumors, including bladder cancer, melanoma, and glioma. TMB refers to the quantification of the number of somatic mutations in the coding region of the genome. In both the KEYNOTE-061 and KEYNOTE-158 trials, TMB-H patients obtained a higher ORR, longer OS, and significant clinical benefit compared with patients with non-TMB-H tumors [11,15,16]. Because of the positive correlation between TMB and clinical ICI responses, TMB can potentially serve as an immunotherapy-related biomarker [17]. A previous study reported that patients with advanced TMB-H GC treated with systemic therapy in clinical practice had better outcomes than TMB-L patients [18]. Therefore, pembrolizumab was granted approval for TMB-H patients. However, further investigation in large cohorts is required to determine the role of TMB-H in patients with GC.
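Because TMB is simply the number of somatic coding mutations per megabase of sequenced territory, it can be expressed in a single line. The sketch below is illustrative only, with hypothetical input values; the ≥ 10 mutations/Mb threshold shown is the cut-off that has commonly been used to define TMB-H in pembrolizumab studies, and the exact value depends on the assay.

```python
# Hypothetical TMB calculation; the 10 mut/Mb threshold mirrors a commonly used TMB-H cut-off.

def tumor_mutational_burden(somatic_coding_mutations: int, megabases_covered: float) -> float:
    """Mutations per megabase of sequenced coding territory."""
    return somatic_coding_mutations / megabases_covered

if __name__ == "__main__":
    tmb = tumor_mutational_burden(somatic_coding_mutations=14, megabases_covered=1.1)
    print(f"TMB = {tmb:.1f} mut/Mb ->", "TMB-H" if tmb >= 10 else "non-TMB-H")
```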
Microsatellite instability-high (MSI-H)
MSI-H is caused by MMR gene defects and is also a predictor of ICI response. For gastric cancer, MSI-H status is a favorable prognostic factor, a positive predictor of ICI response, and a negative predictor of response to cytotoxic chemotherapy. Accumulating evidence has revealed the relationship between ICI efficacy and MSI-H tumors. In the phase III KEYNOTE-062 trial, with patients classified by PD-L1 CPS, the combination of pembrolizumab and chemotherapy presented no significantly greater benefit in OS over chemotherapy alone in patients with CPS ≥ 10 or CPS ≥ 1. However, patients with MSI-H tumors obtained more benefit from the combination of pembrolizumab and chemotherapy than from chemotherapy alone [19]. Notably, MSI-H GCs also present TMB-H status, indicating potentially enhanced immune cell and ICI responses.
Epstein-Barr virus (EBV) infection status
EBV is the predominant virulence factor for nasopharyngeal carcinoma. Current evidence also reveals that EBV infection can drive the progression of EBV-associated GC (EBVaGC) [20]. Latent EBV proteins can downregulate the expression of E-cadherin, an important step in the loss of cell-to-cell adhesion and the carcinogenesis of EBVaGC [21]. Moreover, it has been demonstrated that the EBV miRNA BART11 can reduce the expression of forkhead box protein P1 (FOXP1), which activates the epithelial-mesenchymal transition (EMT) of GC and further accelerates cancer invasion and metastasis [22]. According to previous studies, there is a positive association between EBV status and CD8-positive T-cell infiltration and PD-L1 expression in EBVaGC, suggesting great sensitivity to ICIs. Accumulating evidence has revealed that EBV status is positively associated with the expression of PD-L1 [23,24]. Accordingly, ICIs have been reported to be successful against EBVaGC and MSI GC [25]. A phase II study of pembrolizumab demonstrated that EBVaGC was particularly susceptible to ICI administration [26].
Overexpression of human epidermal growth factor receptor 2 (HER2)
HER2 is a receptor tyrosine kinase proto-oncogene that has attracted great attention in GC. Overexpression of HER2 can be observed in approximately 17.9% of GCs [27]. Moreover, overexpression of HER2 is associated with poor clinical prognosis and increased recurrence in GC [28]. A previous study revealed that targeting HER2 with the monoclonal antibody trastuzumab combined with chemotherapy can prolong the survival of patients with HER2-positive GC [1]. However, only limited benefits in overall survival were obtained [29].

Overexpression of HER2 is also associated with increased expression of PD-L1. According to a previous study, 85% of HER2-positive GCs were characterized by overexpression of PD-L1 [30]. In an experimental study, downregulation of HER2 in PD-L1/HER2-positive GC organoids led to a decrease in PD-L1 expression [31]. These results indicate that the combination of PD-L1-targeting therapy and anti-HER2 therapy may have a potential positive effect in patients with HER2-positive GC.
Immune microenvironment
The tumor microenvironment (TME) plays a critical role in immune escape and resistance to cancer therapies, leading to the progression of malignancy. During the therapeutic process, the TME contributes to oncogenesis and influences therapeutic efficacy. Tumor-infiltrating immune cells are the predominant components within the TME and have a wide range of functions. Within the TME of GC, the most predominant immune cells are tumor-associated macrophages (TAMs) and tumor-infiltrating lymphocytes (TILs). In addition, it has been reported that HER2 also mediates alterations of the TME, which further affect tumor progression and clinical prognosis.
TAMs derive from peripheral blood, infiltrate tumor tissue and release a variety of chemokines that affect tumor growth, invasion, and metastasis. Several properties of TAMs, including TAM density and TAM polarization, have been used as predictors in GC. In addition, a variety of TAM-derived factors, such as Tim-3 and CCL5, are also considered biomarkers for predicting clinical outcomes. Of note, TAMs can induce immune tolerance by blocking the anti-tumor function of cytotoxic T cells. TAMs have also been reported to play a critical role in angiogenesis within the TME. TILs are another key immune component in GC and comprise B cells, T cells, and natural killer (NK) cells. TILs have been reported to be predictors of poor clinical outcome or tumor recurrence in GC. Of note, increasing infiltration of CD8-positive lymphocytes is associated with prolonged OS in GC, while a high density of Th22 and Th17 cells is related to decreased OS. Immune evasion is a critical step during the progression of GC and is mediated by PD-L1. Tumor-localized PD-L1 can bind to PD-1 on lymphocytes, which inhibits anti-tumor immune responses. Therefore, PD-L1 may be a useful target in the management of GC.
Dendritic cells (DCs) are another critical component of the immune microenvironment in GC; they are responsible for presenting antigens to immune cells and mediating further immune responses. It has been reported that increased DC infiltration is associated with increased 5-year survival in GC [32]. According to a previous clinical trial, tumor-associated antigens, including a HER2 peptide, can be used to activate DCs, which can then be autologously transplanted into patients to induce T-cell responses against the antigen [33]. GCs have been classified into four subtypes by the Cancer Genome Atlas (TCGA): EBV, MSI, genomically stable (GS), and CIN. Intense infiltration of lymphocytes can be observed in the EBV and MSI-H subtypes. The GS subtype presents more CD4-positive T cells, macrophages, and B cells, and is therefore more suitable for immunotherapies. Furthermore, the CIN subtype presents T-cell depletion and more infiltration of TAMs, and is considered a "cold tumor."
ICIs: PD-1/PD-L1 inhibitors and anti-CTLA4 antibodies
PD-1 is a negative costimulatory immune molecule localized on the surface of various immune cells. Its ligand PD-L1 is localized on antigen-presenting cells (APCs) and tumor cells. The binding of PD-1 to PD-L1 activates immunosuppressive signaling pathways and mediates immune escape; inhibiting this interaction can therefore promote immunotherapeutic responses. The phase I KEYNOTE-012 study reported that pembrolizumab had potential anti-tumor activity in advanced GC [34]. In the following study, KEYNOTE-059 cohort 1 revealed that pembrolizumab monotherapy had significant efficacy. Therefore, the FDA approved pembrolizumab as a third-line treatment for patients with advanced or metastatic GC (PD-L1 CPS ≥ 1). However, the KEYNOTE-061 study reported that pembrolizumab failed to show an advantage over chemotherapy in patients with PD-L1-positive GC [11]. Moreover, nivolumab has also been evaluated as a PD-1 inhibitor in the ATTRACTION-2 study, which indicated a clear OS benefit with nivolumab [13]. An exploratory analysis of avelumab treatment revealed prolonged OS in patients with PD-L1 CPS ≥ 1. However, in the JAVELIN Gastric 100 study, avelumab administration after first-line chemotherapy in advanced GC failed to improve OS in patients with PD-L1 TPS ≥ 1% [35]. Taken together, although PD-1/PD-L1 inhibitors show potential clinical efficacy against GC, the benefits of monotherapy are still limited. Therefore, the combination of ICIs and chemotherapy may have more clinical significance. So far, four pivotal phase III trials assessing the efficacy of ICIs for advanced GC have been published.
CTLA-4, an immune checkpoint receptor, binds to B7 on the surface of APCs and thereby prevents the activation of CD4 T cells by depriving them of the costimulatory signal from CD28. Therefore, blockade of CTLA-4 can release T cells from suppression. So far, the predominant anti-CTLA-4 antibodies are ipilimumab and tremelimumab. According to a phase I/II clinical study (CheckMate-032), ipilimumab monotherapy presented significant efficacy in chemotherapy-treated patients with advanced GC [36]. However, in another phase II clinical trial, maintenance therapy with ipilimumab presented no significant benefits for advanced GC [37]. Tremelimumab is another potential CTLA-4 inhibitor. In a phase Ib/II trial, tremelimumab promoted T-cell activity and showed a median PFS of 1.7 months and a median OS of 7.7 months [38]. Although tremelimumab did not show marked efficacy in GC patients overall, durable anti-tumor activity was observed in several GC patients, which emphasizes that GC patients with specific biomarkers can obtain more benefit (Fig. 1).
Dual ICI strategy
It has been shown that the combination of anti-PD-1/PD-L1 and anti-CTLA-4 antibodies can promote an effective and durable response in a variety of cancers, including GC. In a previous study, the median duration of response with nivolumab plus ipilimumab exceeded that of the chemotherapy group in patients with CPS ≥ 5. In the CheckMate-032 study, nivolumab alone was compared with its combination with ipilimumab in patients with advanced or metastatic tumors. Although the combination group had a higher ORR than nivolumab alone, the OS of the two groups was similar [10]. Notably, the benefit of combination therapy was more obvious in patients with PD-L1-positive and microsatellite instability-high (MSI-H) tumors. Therefore, nivolumab combined with ipilimumab may be a potential treatment against GC. However, there are also controversial results from the CheckMate-649 trial, in which the combination of ipilimumab and nivolumab did not achieve prolonged OS compared with the chemotherapy group [39,40]. Moreover, according to the KEYNOTE-062 and KEYNOTE-061 trials, the combination of nivolumab and ipilimumab even increased the early mortality rate compared with the chemotherapy group [19,41]. Based on the above results, the combination of PD-1/PD-L1 and CTLA-4 monoclonal antibodies is not suitable for all cases of GC. Therefore, different immunotherapeutic strategies may need to be adapted to specific populations (Table 1).
Chemotherapy combination therapy
Currently, ICIs are used as a neoadjuvant strategy before surgical resection or as maintenance therapy after chemotherapy. In the KEYNOTE-059 trial, the combination strategy presented a much higher ORR than pembrolizumab monotherapy. However, there was still a contradiction in this trial: pembrolizumab monotherapy appeared to give a longer OS than the combination group [45]. In the phase III KEYNOTE-062 trial, patients with untreated advanced or metastatic GC were administered pembrolizumab and chemotherapy. However, the combination of pembrolizumab and chemotherapy failed to show superiority over chemotherapy alone in median OS in patients with PD-L1 CPS ≥ 1 or ≥ 10 [19].
The CheckMate-649 trial assessed the superiority of nivolumab plus ipilimumab or nivolumab plus chemotherapy over chemotherapy alone in patients with HER2-negative cancers [39,40]. In CheckMate-649, nivolumab plus chemotherapy showed superior OS compared with chemotherapy alone and reduced mortality by 20%. Moreover, in patients with higher CPS, this combination strategy showed better efficacy, with increased ORR and prolonged OS. Therefore, nivolumab combination therapy has been approved by the FDA for patients with advanced or metastatic GC [46]. In addition, other anti-PD-1 antibodies have also shown promising efficacy as first-line treatment in patients with GC. In a phase 2 clinical trial, camrelizumab with CapeOx (capecitabine plus oxaliplatin), followed by camrelizumab plus apatinib, achieved an ORR of 65% in patients with advanced or metastatic GC [47]. Another multicenter phase III clinical study (NCT02942329) of camrelizumab plus apatinib as second-line treatment against GC is ongoing [48]. Moreover, in a phase Ib study, sintilimab with CapeOx served as first-line treatment and showed a good response against advanced or metastatic GC [49]. Another clinical trial revealed that the combination of sintilimab with CapeOx as neoadjuvant treatment achieved a 23.1% pathological complete response (pCR) rate and a 53.8% major pathologic response (MPR) rate in GC patients [50]. Further study is required to determine whether pCR can be a predictive factor for long-term survival benefit. An ongoing study (ORIENT-106) is under way to determine the efficacy of the combination of sintilimab and ramucirumab for progressive or metastatic GC (CPS ≥ 10). According to the NCT03469557 study, tislelizumab plus chemotherapy as first-line treatment presented an ORR of 46.7% in GC patients [51]. The CS1001-101 study revealed a potential correlation between PD-L1 expression and the efficacy of immunotherapy; moreover, CS1001 (a PD-L1 antibody) plus XELOX presented an ORR of 62% in patients with advanced GC [52]. Another ongoing study (NCT03852251) also reported that ICIs combined with mXELOX presented encouraging anti-tumor efficacy in advanced GC patients [53]. In summary, there is great clinical value in the combination of immunotherapy with chemotherapy, and further clinical trials with large cohorts are urgently required (Table 2).
Targeted antibody combination therapy
Immunotherapy combined with targeted therapy is a hot topic in a variety of cancers. In GC, HER2 and VEGF/VEGFR are the predominant targets in clinical practice. Accumulating evidence has illustrated the synergistic effect of ICIs and anti-HER2 therapy in various cancers, including breast cancer and GC. The anti-HER2 antibody trastuzumab combined with chemotherapy has previously been considered the first-line option for advanced HER2-positive GC. In the phase III clinical study KEYNOTE-811 (NCT03615326), pembrolizumab combined with trastuzumab, fluoropyrimidine, and platinum-containing chemotherapy presented a better ORR than trastuzumab combined with chemotherapy [56]. The efficacy of pembrolizumab combined with trastuzumab was evaluated in patients with HER2-positive advanced GC: compared with trastuzumab and chemotherapy, the addition of pembrolizumab achieved significant tumor reduction. Based on the potential clinical value of immunotherapy combined with targeted therapy, the FDA approved pembrolizumab plus trastuzumab and chemotherapy as first-line treatment against HER2-positive GC. In 2020, the phase II PANTHERA study evaluated a first-line triple regimen (pembrolizumab, trastuzumab, chemotherapy) in patients with HER2-positive advanced GC. According to the results, approximately 56.6% of patients presented more than a 50% reduction in tumor burden [57]. Moreover, margetuximab is another optimized anti-HER2 antibody that mediates the activation of immune responses through anti-HER2-targeted T-cell responses.
According to the phase II/III MAHOGANY trial, the combination of margetuximab with retifanlimab showed great anti-tumor effects [58]. VEGF/VEGFR inhibitors are another targeting option against GC. So far, there are various VEGFR2-targeting drugs, including ramucirumab, apatinib, lenvatinib, and regorafenib [59,60]. A multicenter phase I/II study of nivolumab combined with paclitaxel plus ramucirumab demonstrated promising clinical activity, and patients with higher PD-L1 expression (CPS ≥ 1) presented longer OS [55]. According to the REGONIVO study, the combination of regorafenib and nivolumab also achieved a good response rate and OS in advanced GC patients [61]. Moreover, the combination of ramucirumab and durvalumab has also been evaluated in GC patients; the safety of the combination strategy was consistent with that of the single treatments, emphasizing the clinical value of combining VEGF/VEGFR inhibitors with ICIs against GC [62]. Beyond anti-HER2 antibodies and VEGF/VEGFR inhibitors, ICIs can also be combined with other target-based therapies. Moreover, when nivolumab combined with ramucirumab served as second-line treatment for advanced GC, the ORR was 26.7% and the OS was 9.0 months (Table 3).
Radiotherapy (RT) combination therapy
RT is a common therapeutic strategy that damages cancer cells directly and activates immune responses. However, previous studies have also revealed that RT can upregulate the expression of PD-L1 and induce immunosuppression, which consequently counteracts the benefits of RT. Therefore, the addition of ICIs to RT may have a synergistic effect against cancer cells. In the CheckMate-577 study, adjuvant nivolumab after a trimodality regimen (neoadjuvant chemoradiotherapy followed by surgery) showed great benefit in DFS [43,63]. The Neo-PLANET phase II clinical study applied the combination of SHR-1210 and chemoradiotherapy as neoadjuvant treatment for locally advanced proximal GC, and the pCR rate was 26.7%. Moreover, a series of clinical studies are under way to explore the efficacy of the combination of RT and immunotherapy.
Toxicity profile and safety of ICIs in GC treatment
Accumulating evidence has revealed that ICIs are generally well tolerated. Immunotherapy can provide lasting remission for patients with GC; however, it sometimes brings life-threatening adverse events, namely immunotherapy-related adverse events (IRAEs). IRAEs are caused by excessive inflammatory responses and nonspecific reactions due to ICIs. In GC, the IRAEs are consistent with those seen in other cancers treated with immunotherapy. ICIs appear to cause more IRAEs when combined with chemotherapy. In the KEYNOTE-062 study, the combination group presented IRAEs in 24% of cases, compared with 21% of cases in the pembrolizumab group. According to clinical studies, anti-PD-1/PD-L1 antibodies against advanced GC induced IRAEs in approximately 5-10% of cases.
Table 2 ICIs combined with traditional chemotherapies involving clinical trials against GCs
Based on a phase III study, grade 3 or 4 IRAEs were increased by up to about 10% in the immunochemotherapy group compared with chemotherapy alone [64]. So far, interstitial pneumonia and myocardial damage have attracted much attention after the combination of anti-PD-1 antibodies and anti-HER2 therapy, but no other IRAEs were observed in the KEYNOTE-811 trial [54,65]. Of note, more IRAEs (about 35% of cases) are observed after the combination of anti-PD-1/PD-L1 antibodies with anti-CTLA-4 antibodies than with single-agent ICIs [66].
In addition, GM-CSF and IL-6 have been proposed as potential targets to attenuate the toxicity and IRAEs of immunotherapy. Currently, it has been reported that the majority of IRAEs can be attenuated by systemic corticosteroids and other ancillary strategies without impairing the clinical benefits of immunotherapy in GC treatment. Notably, there is a certain relationship between IRAEs and the efficacy of ICIs; therefore, further studies are required to clarify this association [67].
Adoptive cell therapy (ACT)
Cancer cells can express several antigens with high immunogenicity, which leads to the activation of various immune responses; cancer cells can therefore be recognized and killed by immune cells. However, current studies have revealed that cancer cells can also release immunosuppressive factors, including lymphocyte-activation gene 3 (LAG-3), TGF-β, and IL-10, leading to immune escape. Thus, for patients with a low immune response towards cancer cells, adoptive cell therapy (ACT) may provide potential clinical value in treating GC. ACT utilizes a variety of immune cells, including cytokine-induced killer (CIK) cells and tumor-infiltrating lymphocytes (TILs), to destroy cancer cells effectively. CIK cells have great anti-tumor activity and are responsible for the release of cytokines that regulate the immune response. A previous clinical trial demonstrated the strong anti-tumor activity of CIK cells, and combination with targeted therapy may enhance their efficacy against GC. In addition, TILs have been widely applied in advanced gastric cancer. According to a previous study, combined therapies with tumor-associated lymphocytes can increase the survival rate to 50% compared with traditional therapy [68,69]. Moreover, allogeneic NK cells have also been developed for the treatment of GC [70]. However, this strategy is severely limited by the lack of methods for obtaining large amounts of functional natural killer cells; further studies are therefore urgently required to establish novel approaches to obtain sufficient NK cells for cancer immunotherapy. Additionally, immune cells, including expanded T cells and NK cells, or engineered chimeric antigen receptor T cells (CAR-T) and T-cell receptor T cells (TCR-T), can be infused directly into patients with cancer. CAR-T cells have been widely applied in clinical trials and experimental studies because of their high specificity and strong anti-tumor function. In this process, T cells are engineered to target cancer cells. Notably, the NK group 2 member D (NKG2D) ligand is widely expressed on GC cells, which makes it a specific target against GC; NKG2D-CAR-T cells therefore show strong tumor-killing activity and can potentially be a novel therapeutic strategy against GC [71]. HER2 is another critical target in patients with GC. It has been reported that HER2-CAR-T cells also present great efficacy against GC and can be an effective strategy for HER2-positive GC [72]. So far, most CAR-T therapies in solid tumors remain at an early stage; however, Claudin18.2 CAR-T therapy has been a breakthrough. Claudin18.2 is a membrane protein specific to GC cells and can serve as a therapeutic target. In a preclinical study, Claudin18.2 CAR-T therapy eliminated tumors in rodent models without toxicity [73]. Although the great efficacy of CAR-T cell therapy in hematological malignancies has been demonstrated, its clinical value in GC and other solid tumors still needs further investigation [74,75]. Moreover, other proteins expressed specifically in GC, including folate receptor 1 (FOLR1) and mesothelin (MSLN), can also be potential targets, and corresponding CAR-T therapies can be established and investigated [76,77]. TCR-T immunotherapy is a modified T-cell-based ACT in which TCR genes that recognize antigens of cancer cells are transduced into T cells. However, owing to tumor heterogeneity, cancer cells present different antigens, which limits the wide application of TCR-T therapy. Although TCR-T therapy may have great advantages over CAR-T against solid tumors, the majority of TCR-T clinical trials are still in phase I/II.
Cancer vaccines
Cancer vaccines are another novel active immunotherapy against advanced GC that works through the activation of immune responses. Therapeutic cancer vaccines include autologous tumor cell vaccines, dendritic cell (DC) vaccines, peptide vaccines, and genetically engineered vaccines. So far, the best-established cancer vaccines are mRNA vaccines, which can promote the expression of antigens and further induce immune responses. According to previous studies, mRNA cancer vaccines are accompanied by moderate adverse effects and great efficacy compared with chemotherapy or targeted therapy [78]. A previous study reported the role of autologous tumor-derived Gp96 vaccination in patients with GC; the vaccine group showed improved DFS compared with the chemotherapy group [79]. Moreover, the combination of cancer vaccines with chemotherapy also presents significantly enhanced cytotoxicity against cancer cells [80].
Challenges and potential strategies in immunotherapy against GCs
Accumulating clinical trials and experimental studies have revealed the advantages of immunotherapies over traditional therapeutic strategies against GCs. However, a variety of challenges still limit the clinical application of immunotherapy, especially in GCs. The autoimmune toxicity and adverse effects of ICIs and CAR-T therapies require further attention, despite the synergistic effects of combining ICIs with targeted therapy in advanced GCs. VEGF, for example, is widely expressed, and its specific inhibition commonly leads to adverse effects, including hypothyroidism, coagulation disorders, hypertension, and neurotoxicity [81]. Moreover, although cancer vaccines have shown favorable benefits in phase I/II trials against advanced GCs, the host immune response limits their clinical efficacy, and novel strategies are required to overcome this limitation. Combining a cancer vaccine with other immunomodulators may prevent immune suppression, and combining it with chemotherapy may enhance anti-tumor effects while reducing cytotoxicity [82]. Of note, CAR-T therapy shows remarkable efficacy against GCs but is also accompanied by strong toxicity [83]. To facilitate the clinical application of this effective immunotherapy, the toxicity of CAR-T therapy must be reduced, for example through a shorter cell lifespan or an "on-switch" design [84].
Conclusions
With the development of ICIs and other immunotherapeutic strategies, there has been an obvious change in the therapeutic options and efficacy against GCs. Although ICIs, such as anti-PD-1/PD-L1 therapy, are not likely to become first-line treatment, they show great potential and clinical value in combination with other therapies for patients with advanced GCs. The other immunotherapeutic strategies are not yet as mature as ICIs, but with further development and investigation, better combinations with ICIs may yield better clinical outcomes against solid tumors. In conclusion, based on the positive results of various clinical trials, immunotherapy has been incorporated into the clinical management of advanced GC. Further studies are urgently required to optimize immunotherapy efficacy, overcome emerging PD-1/PD-L1 resistance, and further improve the outcomes of GC patients.
Fig. 1
Fig. 1 Different immunotherapeutic strategies against gastric cancer. Immune checkpoint inhibitors (ICIs), adoptive cell therapy (ACT), targeted therapy, cancer vaccines, and combination therapy are the predominant types of immunotherapies against GCs
Table 1
ICI monotherapy and dual strategies in clinical trials against GCs
Table 3
ICIs combined with targeted therapies in clinical trials against GCs
Defensive Gin-Trap Closure Response of Tenebrionid Beetle, Zophobas atratus, Pupae
Pupae of the beetle Zophobas atratus Fab. (Coleoptera: Tenebrionidae) have jaws called gin traps on the lateral margin of their jointed abdominal segments. When a weak tactile stimulation was applied to the intersegmental region between the two jaws of a gin trap in a resting pupa, the pupa rapidly closed and reopened single or multiple gin traps adjacent to the stimulated trap for 100-200 ms. In response to a strong stimulation, a small or large rotation of the abdominal segments occurred after the rapid closure of the traps. Analyses of trajectory patterns of the last abdominal segment during the rotations revealed that the rotational responses were graded and highly variable with respect to the amplitudes of their horizontal and vertical components. The high variability of these rotational responses is in contrast with the low variability (or constancy) of abdominal rotations induced by the tactile stimulation of cephalic and thoracic appendages. Since the closed state of the gin traps lasts only for a fraction of a second, the response may mainly function to deliver a "painful" stimulus to an attacker rather than to cause serious damage.
Introduction
Holometabolous insect larvae metamorphose into adults through the pupal stage. Although pupae do not move at all or their locomotive capacity is greatly restricted, they are usually protected by physical, chemical, or biological (behavioral) mechanisms (Hinton 1955). The pupae of many Coleoptera and some Lepidoptera species are armed with heavily sclerotized projections or jaws near the intersegmental regions of adjacent abdominal segments (Hinton 1946, 1952). The pupae often swing or rotate their abdomen in response to tactile stimulation of their appendages (Hollis 1963; Askew and Kurtz 1974), while they rapidly close the jaws in response to the stimulation of the intersegmental regions of the abdominal segments (Hinton 1946; Wilson 1971; Eisner and Eisner 1992). Hinton (1946) coined the term "gin trap" to describe the pinching device.
Sensory and neuronal mechanisms of the defensive response have been examined in the pupae of hawkmoths (Bate 1973a, b, c; Levine et al. 1985; Levine 1989, 1992; Lemon and Levine 1997). Tactile stimulation of the mechanosensory hairs located within small pits of the gin traps on the abdomen induces rapid bending of the abdomen toward the side of stimulation and the closing of one or more of the gin traps (Bate 1973b). Since the pupae of many Coleopteran insects (i.e., beetles) have highly developed gin trap structures (Hinton 1946; Bouchard and Steiner 2004), physiological and behavioral studies of these insects may provide insight into the functions and evolution of pupal defensive mechanisms.
Robust gin-trap closure responses have been observed in the tenebrionid beetle Tenebrio molitor (Hinton 1946; Wilson 1971), although the functional mechanism is largely unknown.
To clarify these issues, a series of morphological, physiological, and behavioral studies of the pupal defensive responses was performed using pupae of a large tenebrionid beetle, Zophobas atratus Fab. (Coleoptera: Tenebrionidae), from Central America (Tschinkel 1981). Many campaniform sensilla (strain sensors) are scattered over almost all parts of the pupal cuticle, including the appendages and intersegmental membrane. This type of mechanoreceptive sensillum plays a role in triggering the gin-trap closure response as well as the abdominal rotation response (Kurauchi et al. 2011); the latter is induced by stimulating a cephalic or thoracic appendage and is characterized by relatively constant trajectory patterns of abdominal rotation, as described in Ichikawa et al. (2012). In the present study, pupal gin-trap closure responses were often found to be accompanied by abdominal rotations with variable trajectory patterns.
Animals
Giant mealworms, Z. atratus, were purchased as completely grown larvae from a local supplier. The detritivorous or omnivorous larvae were kept under crowded conditions in a mixture of peat moss and sawdust and were fed fresh Japanese radishes. Individual larvae were isolated in a plastic cup for pupation. The pupae were maintained at 26 ± 1° C under a 16: 8 L:D photoperiod.
High-speed photography
One-day-old pupae were usually used for the analysis of the gin-trap closure responses. The dorsal part of the thorax of a pupa was fixed to an edge of a horizontal plane of a rectangular block with melted paraffin, and the block was placed on a platform so that the horizontal plane faced upward or downward. Capture and analysis of high-speed movies (200 frames/s) were performed as described in Ichikawa et al. (2012).
Mechanical stimulation
Gin-trap closure responses were usually induced by manually brushing the sensitive area of the intersegmental membrane near the gin trap with a tip of a writing brush in order to prevent the soft intersegmental membrane from being damaged by repetitive mechanical stimulation. Although the force of manual brushing could not be controlled precisely, the force was estimated using a calibrated strain gauge; weak and strong brushings were approximately 0.3 and 1.5 mN, respectively. A thin nylon filament or nichrome wire with a known bending force (Kurauchi et al. 2011) was sometimes used to determine the timing of stimulation and latency of the response. A tibial segment (0.4 mm in diameter) from an adult Z. atratus was also used to test whether the closed state of a gin trap was prolonged, when the trap could successfully pinch the tubular segment mimicking an appendage of a potential enemy. To induce an abdominal rotational response, a weak brushing was applied to the distal portion of the middle-leg femur.
Results
The pupal abdomen consists of nine segments that are numbered A1-A9. There are three claw-shaped processes or spines on each lateral flange of segments A1-A7. The anterior and posterior processes are associated with a row of sclerotized teeth that form a jaw. The posterior and anterior jaws on subsequent segments make a pinching device known as a gin trap (Figure 1). The third, middle process without teeth is not involved in the pinching mechanism.
Simple gin-trap closure response

A small area in the lateral region of the intersegmental membrane between the posterior margin of an abdominal segment and the spiracle of the next segment was most sensitive to tactile stimulation. A gin-trap closure response could be readily evoked by prodding the area with a thin filament (bending force, 0.6 mN) or weakly brushing the area and its surroundings with a fine brush. Similar tactile stimulation applied to other abdominal regions away from the sensitive area elicited no gin-trap response. A relatively weak stimulation usually elicited the closure of a single gin trap (e.g., Figure 1A-C), while a strong stimulation evoked the closure of multiple gin traps (e.g., Figure 1D). When the intersegmental area between the third and fourth segments was stimulated, the anterior jaw on the fourth segment started to move anteriorly approximately 35 ms after the onset of stimulation (Figure 1A), occluded with the posterior jaw on the third segment at 80 ms (Figure 1B), started to move posteriorly at 105 ms, and finally stopped moving 150 ms after the onset of stimulation (Figure 1C). Thus, the rapid closure of the gin trap was followed by rapid reopening after a brief intermission of approximately 25 ms. The mean latency of the response (the start of the anterior movement) was 33 ± 6 ms (n = 10). When a large gin-trap response to a strong stimulus occurred, the abdomen bent maximally toward the side of stimulation; in addition, two or three traps adjacent to the stimulated trap usually closed completely, while the remaining traps closed partially (Figure 1D). Figure 2 shows the time courses of the gin-trap closing-opening responses, in which the distances between the tips of the two jaws are plotted as a measure of the response. The stimulated traps closed earlier and reopened later during the large response involving multiple gin traps than during a small response involving a single trap. A negative value of the distance indicates a reversal of the position of the tips at the occlusion of the jaws (see Figure 1 inset). If the period of negative values is defined as the duration of a closed state, the duration during a large response was 5 ms longer than that during a small response. Figure 3 shows the mean durations of the closed states of different gin traps during the two grades of responses. The mean durations of closed states appeared to be maximal in the gin traps lying between A4 and A5 or A3 and A4, which are larger than the other segments. The mean durations during large responses were approximately 10 ms longer than those during small responses. When the two jaws between A3 and A4 successfully pinched an object (adult tibial segment), the closed state was significantly prolonged to 65-150 ms (mean ± SEM, 104.3 ± 12.6 ms, n = 10).
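As a minimal illustration of the closed-state measurement defined above, the following sketch computes the duration of the closed state from a jaw-tip distance trace sampled at the 5 ms frame interval of the high-speed recordings; the trace values and function name are hypothetical and simply restate the definition (the closed state is the period during which the distance is negative).

```python
FRAME_MS = 5  # 200 frames/s high-speed recording

def closed_state_duration(distances):
    """Duration (ms) for which the jaw-tip distance is negative,
    i.e., the tips have crossed and the trap is closed."""
    return sum(FRAME_MS for d in distances if d < 0)

# Hypothetical distance trace (mm) around one closing-opening cycle:
trace = [1.2, 0.6, 0.1, -0.1, -0.2, -0.2, -0.2, -0.1, 0.2, 0.8]
print(closed_state_duration(trace))  # -> 25 ms, a brief closed state
```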
To analyze the trajectory patterns of the abdomen during the gin-trap responses, a pupa was usually placed ventral-side up, and the position of the last abdominal segment on a posterior view was plotted every 5 ms. Figure 4 shows typical trajectory patterns of small and large gin-trap closure responses. The abdominal segment moved laterally in an arc during the closing phase and turned back medially and ventrally to reach the upper (ventral) position from the original resting position during the opening phase. Thereafter, the segments slowly returned to the original position in 1 second. The trajectories of the closing and opening phases usually crossed at a midpoint of their length.
Complex gin-trap closure response

A strong stimulation of the sensitive area of the intersegmental membrane often induced a small or large rotation of the abdomen rather than the simple gin-trap closing-opening response. The trajectory patterns of many rotational responses revealed that the rotational responses were graded and had highly variable horizontal and vertical amplitudes of the rotational movements (Figure 5). The variability of the rotational responses contrasts with the relative consistency of rotational responses induced by the stimulation of an appendage. Upon initial observation, a large rotation of the abdomen induced by stimulating the intersegmental region appeared to be similar to the abdominal rotation induced by stimulating an appendage; however, the temporal and spatial patterns of the two rotation types apparently differed (Figure 6). The abdominal rotation induced by stimulating specific abdominal regions had a relatively slow initial phase of rotation followed by a rapid later phase. The shoulder-shaped trajectory course of abdominal movement during the initial phase was very similar to the arc-shaped trajectory course during the closing phase of the gin-trap response. The last abdominal segment reached only a point several millimeters away from the starting point at 60 ms after the onset of rotation (Figure 6b), as it did during the corresponding phase of the gin-trap response (Figure 6a). In contrast, the stimulation of an appendage (a leg) induced a simple rapid rotation that had no slow initial phase and could reach the halfway point of rotation 60 ms after the onset of rotation (Figure 6c). The duration of the closed state of the gin trap was prolonged to 50-60 ms when a large abdominal rotation occurred in response to stimulation of an intersegmental region (data not shown).
The occurrence of abdominal rotations following the gin-trap closure phase varied from pupa to pupa, even though pupae of the same age (one day old) were used (Figure 7). A rotation was classified as small or large according to whether the amplitude of its vertical component was less than or greater than 70% of the maximal vertical amplitude of the largest rotation in that pupa. A few pupae always exhibited a gin-trap closing-opening response even when a stronger brushing was applied to the sensitive area of the intersegmental membrane. Meanwhile, in the most sensitive pupa, half of the responses to the stimulus were classified as large. The remaining pupae were in between these two groups. The probability of a small response was usually < 25%. The occurrence of abdominal rotations in response to stimulation of the gin traps between A3 and A4 or A4 and A5 seemed to be somewhat greater than that in response to stimulation of the gin traps of other segments; however, these differences were not examined systematically.
Discussion
The particular area of the intersegmental membrane near a gin trap has many campaniform sensilla (Kurauchi et al. 2011), and gentle brushing of this area with a fine brush readily induced a response. Potential predators of the pupa in nature include carnivorous insects, centipedes, and spiders (Hinton 1946). Since such predators usually have long appendages (i.e., antennae or legs) that are often covered with many sensory and protective hairs, the hairy parts of their bodies may be most suitable for inducing the gin-trap closing-opening response of the pupa. In turn, the gin trap may be adapted to pinch the appendages of potential enemies. Interestingly, the traps usually snapped shut for only a split second (Figures 1 and 2) and did not remain closed for longer than 150 ms even when the jaws successfully bit an appendage. This suggests that the pupa cannot cause serious damage to an attacker. The gin-trap closure response may mainly function to startle or deter attackers (Hinton 1946; Eisner and Eisner 1992). If a gin trap remained closed for any length of time while it held an attacker, the attempts of the attacker to free itself could result in serious injury to the pupa (Hinton 1946). The abdominal rotations that often followed the closure of gin traps (Figure 5) may turn the pupa's dorsum toward the enemy (Ichikawa et al. 2012); the dorsum, which is fringed with many spines, probably functions as a shield. Since the soft intersegmental region is vulnerable to attack by parasitoids (Gross 1993), closing this vulnerable region may also be effective against parasitoids.
The magnitudes of abdominal rotations that occurred after gin-trap closure varied significantly (Figure 5); this graded response contrasts with the stereotypical response induced by stimulating a cephalic or thoracic appendage (Ichikawa et al. 2012). To account for the stereotypical abdominal rotation patterns observed, we propose that the central nervous system (abdominal ganglion) may possess a neuronal mechanism that generates a motor pattern that rotates the abdomen in one (i.e., clockwise or anticlockwise) direction. However, some modification of the single pattern generator model is needed, because this model cannot explain why some pupae exhibited a small but significant difference in the trajectory patterns of their abdominal rotations when different parts of the body (appendages) were stimulated (Ichikawa et al. 2012). Z. atratus pupae have nine abdominal segments numbered A1-A9; each abdominal segment from A2-A6 has four longitudinal (intersegmental) muscle bundles that move the abdomen. It is reasonable to suppose that the magnitude of an abdominal rotation may depend on the number of abdominal segments involved in the rotation. In a preliminary experiment, the trajectory patterns of abdominal rotations became significantly smaller when some caudal segments of the abdomen were immobilized by surgical transection of the ventral nerve cord between A3 and A4 or A4 and A5. Thus, the graded rotational responses observed in the present study may be due to differences in the number of abdominal segments activated. It seems likely that each abdominal ganglion from A2-A6 has a pattern generator producing a clockwise or anticlockwise rotation and that all or some pattern generators may be activated depending on the origin and strength of mechanosensory signals. For example, a descending sensory signal originating from a cephalic or thoracic segment usually activates all anticlockwise pattern generators to mobilize all abdominal muscles (e.g., Figure 6c), while a weak signal from an abdominal segment may activate a fraction of the pattern generators to recruit muscles in a few segments near the site of stimulation (Figure 5A). This multiple pattern generator model possibly overcomes the weakness of the single pattern generator model, because the small innate variability (fluctuation) of motor patterns produced by individual pattern generators may summate to become significant in a multiple pattern generator system.
The activities of central pattern generators are generally modulated by sensory feedback mechanisms (Delcomyn 1980; Marder and Buchner 2001). Because the closure time of the gin traps was significantly prolonged when the jaws trapped an object, a feedback control mechanism of pattern generation may exist. Several campaniform sensilla found in the jaws (Kurauchi et al. 2011) may be involved in such a feedback control mechanism of the putative pattern generators. Electrophysiological studies may reveal the location and properties of the pattern generators and their sensory control mechanism.
An Emergency-Adaptive Routing Scheme for Wireless Sensor Networks for Building Fire Hazard Monitoring
Fire hazard monitoring and evacuation for building environments is a novel application area for the deployment of wireless sensor networks. In this context, adaptive routing is essential in order to ensure safe and timely data delivery for building evacuation and fire-fighting resource applications. Existing routing mechanisms for wireless sensor networks are not well suited for building fires, especially as they do not consider critical and dynamic network scenarios. In this paper, an emergency-adaptive, real-time and robust routing protocol is presented for emergency situations such as building fire hazard applications. The protocol adapts to handle dynamic emergency scenarios and works well with the routing hole problem. Theoretical analysis and simulation results indicate that our protocol provides a real-time routing mechanism that is well suited to dynamic emergency scenarios in building fires when compared with other related work.
Introduction
In the near future, it can be expected that buildings will be equipped with a range of wireless sensors functioning as part of an overall building management system. Included in this set of sensors will be devices to monitor fire and smoke, allowing detection, localization and tracking of fires. It is expected that such information could be used for a variety of purposes, including guiding building occupants to the nearest safe exit and helping fire-fighting personnel to decide how best to tackle the disaster. Fire/smoke sensors are expected to be programmed to report periodically and also when they detect a sensor input that exceeds a threshold. In the latter case, there is a need for emergency-adaptive, real-time and robust message delivery toward the sink. For example, a fire-fighter relies on timely temperature updates to remain aware of current fire conditions. In addition, as the fire spreads throughout the building, it becomes likely that the sensing devices may become disconnected from the network or indeed be destroyed, so the network routes have to be changed or re-discovered to adapt to these emergency conditions in order for the network to continue operating. Most existing routing protocols consider the energy efficiency and lifetime of the network as the foremost design factors. The routing mechanisms used in general wireless sensor networks, and even routing for forest fire applications, are not well suited to in-building disaster situations, where timeliness and reliability are much more critical. For forest fires the focus is on tracking of fires, rather than evacuation or guidance of fire personnel. This combination of real-time requirements coupled with dynamic network topology in a critical application scenario provides the motivation for our research. In this paper, we propose an emergency-adaptive routing mechanism (EAR) designed especially for building fire emergencies using wireless sensor networks (WSNs), which provides timely and robust data reporting to a sink. We do not need to know the exact location of each sensor, and no time synchronization is needed. To the best of our knowledge, this is the first time a real-time and robust routing mechanism adaptive to building fire emergencies using WSNs has been proposed. This protocol could also easily be used in other similar emergency applications.
Section 2 presents the related work. In Section 3 we outline the routing problem. We present an emergency-adaptive routing mechanism in Section 4. In Section 5, we present a preliminary analysis. In Section 6, we give ns-2 simulation results. Finally, Section 7 concludes this paper.
Background and Related Work
Most routing protocols for WSNs focus on energy efficiency and link node lifetime explicitly to energy resources, i.e., a node is assumed to fail when its battery is depleted. Some WSN applications require real-time communication, typically for timely surveillance or tracking. Real-time routing protocols in WSNs are not new. For example, SPEED [1], MM-SPEED [2], RPAR [3] and RTLD [4] were all designed for real-time applications with explicit delay requirements. He et al. [1] proposed an outstanding real-time communication protocol that bounds the end-to-end communication delay by enforcing a uniform delivery velocity. Felemban et al. [2] proposed a novel packet delivery mechanism called MMSPEED for probabilistic QoS guarantees. Chipara et al. [3] proposed a real-time power-aware routing protocol that dynamically adapts transmission power and routing decisions. However, these routing protocols are not well suited for routing in emergency applications such as building fires, where critical and dynamic network scenarios are key factors. Amed et al. [4] proposed a novel real-time routing protocol with load distribution that provides efficient power consumption and a high packet delivery ratio in WSNs.
There are many robust routing protocols proposed for WSNs. Zhang et al. [5] proposed a framework of constrained flooding protocols. The framework incorporates a reinforcement learning kernel, a differential delay mechanism, and a constrained and probabilistic retransmission policy. The protocol takes advantage of the robustness of flooding. Deng et al. [6] presented a light-weight, dependable routing mechanism for communication between sensor nodes and a base station in a wireless sensor network. The mechanism tolerates failures of random individual nodes in the network or of a small part of the network. Boukerche et al. [7] presented a fault-tolerant and low-latency algorithm, which they refer to as a periodic, event-driven and query-based protocol, that meets sensor network requirements for critical-condition surveillance applications. The algorithm uses a publish/subscribe paradigm to disseminate requests across the network and an ACK-based scheme to provide fault tolerance. In building fires, the network topology changes rapidly because of the hazard and node failures, so general robust protocols are not suitable for such scenarios. Here, we want to design protocols that can adapt to the occurrence of fire and to its expanding, shrinking or diminishing, etc. So, "robustness" in this paper means "adaptive to fire situations".
In this regard, the work by Wenning et al. [8] is interesting, as they propose a proactive routing method that is aware of the node's destruction threat and adapts the routes accordingly, before node failure results in broken routes, delay and power-consuming route re-discovery. They pay attention to the aspect of node failures caused by the sensed phenomena themselves.
However, their work focuses on disasters such as forest fires, which are very different from the design issues in building situations. Fire emergencies using wireless sensor networks within buildings are more challenging because of the complex physical environment and the critical factors of fire hazards. In [9], we proposed a fire emergency detection and response framework for building environments using wireless sensor networks. We presented an overview of recent research activity including fire detection and evacuation, in addition to providing a testbed especially designed for building fire applications. Other researchers have worked on emergency guidance and navigation algorithms with WSNs for buildings. Tseng et al. [10] proposed a distributed 2D navigation algorithm to direct evacuees to an exit while helping them avoid hazardous areas. Their design allows multiple exits and multiple emergency events in the sensing field. Sensors are used to establish escape paths leading to exits that are as safe as possible. When surrounded by hazards, sensors will try to guide people as far away from emergency locations as possible. Based on this, Pan et al. [11] proposed a novel 3D emergency service that aims to guide people to safe places when emergencies happen. In their work, when emergency events are detected, the network can adaptively modify its topology to ensure transportation reliability, quickly identify hazardous regions that should be avoided, and find safe navigation paths that lead people to exits. Barnes et al. [12] presented a novel approach for safely evacuating persons from buildings under hazardous conditions. A distributed algorithm is designed to direct evacuees to an exit through arbitrarily complex building layouts in emergency situations. They find the safest paths for evacuees taking into account predictions of the relative movements of hazards, i.e., fires, and evacuees. Tabirca et al. [13] solve a similar problem, but under conditions where hazards can change dynamically over time.
When fire spreads inside a building, it may cause considerable segmentation of the network. In this case, many routing holes occur and lead to data routing failures. The "routing hole problem" is a very important and well-studied problem, in which messages get trapped in a "local minimum". Some existing "face routing" algorithms have been developed to bypass routing holes using geo-routing. GPSR [14] recovers from holes by using the "right-hand rule" to route data packets along the boundary of the hole, combining greedy forwarding and perimeter routing on a planar graph. The authors of [15] proposed the first practical planarization algorithm with a reasonable message overhead, lazy cross-link removal (LCR). Fang et al. [16] presented an interesting approach, the BOUNDHOLE algorithm, which discovers the local minimum nodes and then "bounds" the contour of the routing holes. In the building fire situation, holes feature prominently and can be expected to grow in size rapidly as a fire spreads, thus demanding solutions that are robust and of low complexity for quick reactions.
Definitions
Given a homogeneous WSN with N sensors and M sinks deployed in a building for fire hazard applications, each sensor can adjust its maximal transmission range to one of k levels, r0, r1, ..., rk-1 = rmax, by using different transmission power levels p0, p1, ..., pk-1 = pmax. Initially, all sensors work at p0. From the application perspective, real-time delivery and robustness are the two main challenges. Tmax is the maximum acceptable delay in reporting a fire event to a sink node. It is required that each sensor i reports data packets to a sink node such that:

(1) A communication path from the sensor to the sink can be found if such a path exists.

(2) The end-to-end delay of the path is no more than Tmax.

(3) The choice of route is adaptively changed in response to failed nodes (assumed to be caused by fire damage).

(4) A suitable minimized power level (min {p0, p1, ..., pk-1}) is selected to ensure transmission satisfying (1), (2) and (3) without unnecessary power dissipation.

Each node in the network exists in one of four states (listed in order of health degree from best to worst): "safe", the initial state while no fire occurs; "lowsafe", one hop away from an "infire" node; "infire", when the node detects fire; and "unsafe", when the node detects that it can no longer work correctly due to the fire.

A STATE message records the current change of a node's state and notifies its neighboring nodes in a fire. STATE (INFIRE) message: if a sensor detects fire, it enters "infire" and broadcasts a message to announce a new local fire source. STATE (LOWSAFE) message: nodes in the "safe" state that receive a STATE (INFIRE) message become "lowsafe" and then broadcast a STATE (LOWSAFE) message to notify their neighbors. Nodes that hear the STATE (LOWSAFE) message learn the new state of their neighbor and do nothing further. STATE (UNSAFE) message: an "infire" node works until it can no longer function correctly; before that point, it enters the "unsafe" state and broadcasts a message. Any node that detects that its residual energy is too low to work also enters "unsafe" and then broadcasts a STATE (UNSAFE) message.
Thus each sensor may change its state autonomously in response to the fire and messages it receives, as shown in Figure 1.
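As a rough illustration of the state transitions just described, the sketch below models a node's state machine in Python; the state names follow the paper, while the class and method names are hypothetical and only meant to make the transition rules concrete.

```python
from enum import Enum

class State(Enum):
    SAFE = 0      # initial state, no fire detected nearby
    LOWSAFE = 1   # one hop away from an "infire" node
    INFIRE = 2    # this node has detected fire
    UNSAFE = 3    # node can no longer work correctly

class NodeStateMachine:
    def __init__(self):
        self.state = State.SAFE

    def on_local_fire_detected(self):
        # A sensor that detects fire enters "infire" and announces it.
        self.state = State.INFIRE
        return "STATE(INFIRE)"          # message to broadcast

    def on_low_energy_or_failing(self):
        # Any node about to stop working enters "unsafe" and announces it.
        self.state = State.UNSAFE
        return "STATE(UNSAFE)"

    def on_neighbor_message(self, msg):
        # A "safe" node hearing STATE(INFIRE) becomes "lowsafe" and notifies
        # its own neighbors; other messages only update neighbor tables.
        if msg == "STATE(INFIRE)" and self.state == State.SAFE:
            self.state = State.LOWSAFE
            return "STATE(LOWSAFE)"
        return None
```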
Initialized Routing Structure

Initialized Sink Beacon:
The purpose of routing initialization is to form an initial neighborhood and routing structure after the sensors are deployed and connected as a WSN in the building. We assume that sinks are deployed in relatively safe places such that they are less likely to be destroyed, for example by walls collapsing. Once the network is deployed, each sink generates a HEIGHT message using power level p0. This message advertises the sink to neighbor nodes and includes a "height" parameter that represents the hop count toward the sink, initialized to 0. The height value is incremented by each forwarding hop. Each node records the height information in its local neighborhood table when it receives the first HEIGHT message. The message contains a sequence number so that a node can determine whether it has already seen the message, in which case it ignores it. If it is the first time the node receives a HEIGHT message, it forwards the HEIGHT message. As explained below, this process ensures that each node knows a minimal-delay route path from itself toward one of the sinks.
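Below is a minimal sketch of how a node might process a HEIGHT beacon, assuming hypothetical field names (sink_id, seq, sender, height, delay) and a simple broadcast callback; it only illustrates the duplicate suppression and hop-count increment described above.

```python
def handle_height_message(node, msg, broadcast):
    """Process a HEIGHT beacon flooded from a sink.

    node.seen      -- set of (sink_id, seq) pairs already processed
    node.neighbors -- neighborhood table keyed by sender id
    msg            -- dict with sink_id, seq, sender, height, delay
    broadcast      -- function used to rebroadcast the beacon
    """
    key = (msg["sink_id"], msg["seq"])
    if key in node.seen:
        return                      # duplicate beacon: ignore it
    node.seen.add(key)

    # Record the advertised route in the local neighborhood table.
    node.neighbors[msg["sender"]] = {
        "height": msg["height"],
        "delay_estimate": msg["delay"],
        "sink_id": msg["sink_id"],
    }

    # Rebroadcast with the hop count incremented and this hop's delay added.
    fwd = dict(msg, sender=node.id,
               height=msg["height"] + 1,
               delay=msg["delay"] + node.local_hop_delay())
    broadcast(fwd)
```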
End-to-End HEIGHT Delay Estimate:
In this HEIGHT message broadcasting process, the end-to-end delay from a node to the sink can be approximated by the cumulative delay over each hop. We use this "delay estimate" in our EAR routing mechanism to make the forwarding choice. We denote by delay(sink, i) the delay experienced from the sink to node i, and we use delay(sink, i) as a bound to guide real-time delivery from the node to the sink. The per-packet transmission delay is estimated by formula (1), in which n is the hop count from the sink to node i, Tc is the time it takes each hop to obtain the wireless channel (carrier-sense delay plus backoff delay), Tt is the time to transmit the packet, determined by the channel bandwidth, packet length and the adopted coding scheme, Tq is the queuing delay, which depends on the traffic load, and R is the retransmission count. We omit the propagation delay, as in a WSN this is negligible due to the use of short-range radios. The delay of the MAC layer under the MAC protocol used is included in the calculation.
The average end-to-end delay from each node to the sink can be computed as the cumulative hop-by-hop delay; the delay experienced in the current hop is calculated and updated locally and then recorded in the HEIGHT message. Then delay(sink, i) is recorded in the neighborhood table of each node. We use periodic HEIGHT message updates to calculate an average end-to-end delay (from multiple end-to-end delay estimates) as a reference. Since packets in WSNs tend to be relatively small, we consider it reasonable to ignore any impact of delay differences related to packet size. Furthermore, the delay estimate uses Jacobson's algorithm [17] to make adjustments by considering both the weighted average and the variation of the estimated variable, and as a result provides a good estimate of the delay even when link quality and network load vary. Calculating the average end-to-end delay and its variation avoids a large number of deadline misses due to high variability in communication delays.
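The paper does not give the exact update equations it uses, but Jacobson's algorithm [17] is the standard exponentially weighted estimate of a mean and its deviation, familiar from TCP RTT estimation; the sketch below shows that general form, with the gain constants chosen here only as an assumption for illustration.

```python
class DelayEstimator:
    """Jacobson-style smoothed delay estimate (mean + deviation)."""

    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha = alpha      # gain for the smoothed mean
        self.beta = beta        # gain for the mean deviation
        self.avg = None         # smoothed end-to-end delay
        self.dev = 0.0          # smoothed deviation

    def update(self, sample):
        if self.avg is None:
            self.avg, self.dev = sample, sample / 2.0
            return
        err = sample - self.avg
        self.avg += self.alpha * err
        self.dev += self.beta * (abs(err) - self.dev)

    def bound(self, k=4.0):
        # Conservative delay bound: mean plus k times the deviation,
        # analogous to TCP's RTO = SRTT + 4 * RTTVAR.
        return self.avg + k * self.dev
```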
Since, for typical sensor applications, the traffic from a node to the sink is usually heavier than the traffic from the sink to the node under the same radio conditions, the queuing delays satisfy Tq(sink, i) ≤ Tq(i, sink), and the latter is bounded by the maximum queuing delay, i.e., Tq(i, sink) ≤ Tqmax. Assuming the same radio and link quality for the downstream and upstream links on a route path, we get delay(i, sink) ≤ delay_qmax(sink, i), where delay_qmax(sink, i) is the delay experienced from the sink to i with the maximal queuing delay. Our delay estimate and the realistic delay T on the route path therefore satisfy delay(sink, i) ≤ T ≤ delay_qmax(sink, i). We can use delay(sink, i) as a bound to guide the real-time forwarding selection. If the slack time (defined as the time left for routing) meets the estimated delay time for data delivery, the packet has a high probability of arriving before its deadline, which supports real-time communication.
Periodic Sink Update:
With the HEIGHT message broadcast process, an initial neighborhood table is formed by each sensor, in which it records the neighbor ID, height, state, estimated delay and residual energy of all neighbors, as well as the transmission power that the node uses to communicate with each neighbor on the path to the sink. Each sensor records its own ID, state, and residual energy. In addition, each node maintains the sink ID of its minimal-delay sink. In a fire scenario, a sink may become disabled and the network topology will be changed by the fire. To ensure robust connectivity, each sink periodically sends out a HEIGHT message to refresh the network. The refresh rate is a protocol design parameter that trades off overhead for increased robustness to lost HEIGHT messages and path changes. In a fire situation one would expect to decrease the period, although the impact on network traffic load must also be examined.
Routing Mechanism Details

Forwarding Choice:
For a given application-specific Tmax, we use slack to record the time left on the path from the current node to the sink. Each node in the neighborhood table is associated with a forward_flag and a timeout. The flag is used to identify the next hop as the best forwarding choice, i.e., when a node is chosen as the best forwarding choice, its forward_flag is set to 1. The timeout value is the valid time of the current forwarding node and is used to prevent stale neighborhood information (introduced in Section 4.3). If the timeout of a forwarding choice expires, its forwarding flag is set to 0 to evict the stale relay node.
To select the best forwarding choice from the local neighborhood table, we use the following criteria. First, we filter the forwarding choices by "height" to choose nodes with a lower height. Second, we choose nodes with enough slack time according to the delay estimate on the path. Third, we filter the remaining forwarding choices by node state, in priority order from "safe" to "infire".

If more than one node satisfies these criteria, we select the forwarding choice with the higher residual energy. If there is still a tie, we choose the lower ID.

If we cannot find a best forwarding choice with the current transmission power, we say that a "hole" has occurred (i.e., the node is stuck in a local minimum).
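To make the selection rules concrete, here is a small sketch in Python; the neighbor-record fields and helper names are assumptions chosen for illustration, not the authors' implementation.

```python
STATE_PRIORITY = {"safe": 0, "lowsafe": 1, "infire": 2}   # lower is better

def select_forwarding_choice(my_height, slack, neighbors):
    """Pick the best next hop from the neighborhood table.

    neighbors -- iterable of dicts with keys:
                 id, height, state, delay_estimate, residual_energy
    Returns the chosen neighbor record, or None (a routing "hole").
    """
    candidates = [
        n for n in neighbors
        if n["height"] < my_height               # closer to a sink
        and n["delay_estimate"] <= slack          # enough slack time
        and n["state"] != "unsafe"                # never use failed nodes
    ]
    if not candidates:
        return None   # hole: caller must adapt power / rediscover neighbors

    # Prefer healthier states, then higher residual energy, then lower ID.
    candidates.sort(key=lambda n: (STATE_PRIORITY[n["state"]],
                                   -n["residual_energy"],
                                   n["id"]))
    return candidates[0]
```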
Hole Problem Handler by Adapting Power level:
If a sensor node cannot find a next hop that satisfies the real-time constraint with its current power level, the node is stuck in a local minimum. The solution is to increase the transmission power gradually, level by level, to find another neighbor or to invoke a new neighbor discovery. If this fails, a notify message is sent back to the upstream node (i.e., the parent) to stop it from sending data packets to the current node, and a routing re-discovery is then invoked by the upstream node.
If another node already exists in the neighborhood table that is reachable by adapting the transmission power, we increase the power level and name this neighbor as a forwarding choice. Otherwise, a new neighbor discovery is invoked by increasing the transmission power gradually, level by level. We increase the power level gradually, rather than jumping directly to the maximal level, to limit the interference incurred by higher power. Since only two or three power levels are provided on existing MICA motes and most motes currently in use, this converges very quickly to a suitable power level. Figure 2 shows an example of new neighbor discovery, where sink1 and sink2 are two sinks and the other nodes are sensors. Node i reports and routes data to the sink. The number on each node represents the "height" of that node toward the sink. Since the route path {i, a, sink1} with p0 is invalid because the slack cannot satisfy the estimated end-to-end delay, node i is in a "hole". If there are no existing eligible neighbors, node i will increase its power to p1 to reach node j and deliver its packets to another sink, sink2, via the route path {i, j, sink2}, when the slack on this route is no less than the delay estimate. Each sensor has k power settings {p0, p1, p2, ..., pk-1} and correspondingly k maximal transmission ranges {r0, r1, ..., rk-1}. We define a function, formula (2), that finds an appropriate transmission power by increasing the power, where cur is the index of the current transmission range level among the k levels and ι is the count of unsuccessful attempts. A sensor increases its transmission power gradually, in levels, if it cannot find an eligible new neighbor. A node increases its power according to formula (2) until one of the following conditions is satisfied:

(1) It finds a node as a forwarding choice in the "safe" state according to the height and delay estimate.
(2) p = pmax is reached; in this case, it selects a new neighbor as a forwarding choice by height and delay estimate, in priority order from "safe" and "lowsafe" to "infire"; otherwise, no eligible new neighbor can be found.

In the new neighbor discovery, sensor i broadcasts a Routing Request (RTR) message, piggybacking its height, slack and the newly adapted power pi. For a node j that hears the message, if the estimated end-to-end delay is no more than the slack, its height is lower than height(i, sink), and its state is "safe", then j is selected as a new neighbor. If sensor j hears the RTR at pmax and its height is lower than height(i, sink), then j is selected as a new neighbor provided j is not in the "unsafe" state. The new neighbor replies to node i with the same power that node i is using, after a random backoff to avoid collisions. The forwarding choices send the reply message with pi only as necessary for reaching node i, otherwise reverting to their previous power level. Upon receiving the replies, node i inserts the new neighbors into its neighbor table. During the RTR and reply message exchange, we can calculate the delay between i and its new neighbor j as in formula (3). To meet real-time requirements, a forwarding choice should satisfy that the slack is no less than the average delay between i and j plus the delay estimate at node j, i.e., slack(i) ≥ Ave_delay(i, j) + delay(sink, j). If more than one new neighbor is found, the best forwarding node is selected by the priority of state from "safe" and "lowsafe" to "infire". If there is still a tie, the best relay is the node with the higher residual energy and lower ID number.
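A compact sketch of the hole-handling loop follows, under the assumption of hypothetical helper names (a forwarding-choice selector such as the earlier sketch, an RTR-based discover_neighbors exchange) and a simple list of power levels; it only mirrors the escalation logic described above.

```python
def handle_hole(node, power_levels, slack, select_fn):
    """Escalate transmission power until a forwarding choice is found.

    power_levels -- ordered list of power settings, e.g. [p0, p1, p2]
    select_fn    -- forwarding-choice selector (e.g. the earlier sketch)
    Returns (power, next_hop) or (None, None) if no eligible neighbor exists.
    """
    level = node.current_power_level
    while True:
        # First look for an already-known neighbor reachable at this power.
        choice = select_fn(node.height, slack, node.neighbors_at(level))
        if choice is not None:
            return power_levels[level], choice

        if level + 1 >= len(power_levels):
            # Even p_max failed: notify the upstream node (parent) so it
            # stops sending to us and triggers its own route re-discovery.
            node.send_to_parent("NOTIFY_NO_ROUTE")
            return None, None

        # Otherwise raise the power one level and run an RTR-based
        # neighbor discovery at the new power level.
        level += 1
        node.neighbors_update(node.discover_neighbors(power_levels[level],
                                                      slack))
```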
A node that is working at a larger transmission range can still be adapted to decrease its transmission power, to improve energy efficiency and network capacity, while the delay deadline is loose. We therefore define that when a node detects good connectivity with its safe neighborhood, i.e., the number of safe neighbors exceeds a predefined threshold, |Neighbor_safe| > N_threshold, the power decrease process is invoked.
We define a function, formula (5), that finds an appropriate transmission range by decreasing the transmission power, where cur is the index of the current transmission power level among the k levels and ι' is the count of decrements.

A node continues to decrease its power until:

(1) The minimum power has been reached.

(2) There are two consecutive power levels such that at the lower level the required delay is not met but at the higher power level it is met.

(3) There are two consecutive power levels such that at the lower level the required safe neighborhood connectivity N_threshold is not met but at the higher power level it is.
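The same idea in code form: a sketch of the power-decrease check under assumed predicate names (delay_met, connectivity_met); the stopping conditions are the three listed above.

```python
def try_decrease_power(node, power_levels, delay_met, connectivity_met):
    """Step the power down one level at a time while it remains safe to do so.

    delay_met(level)        -- True if the delay requirement holds at `level`
    connectivity_met(level) -- True if |Neighbor_safe| > N_threshold at `level`
    """
    level = node.current_power_level
    while level > 0:
        lower = level - 1
        # Stop if dropping one level would violate the delay requirement
        # or the safe-neighborhood connectivity threshold.
        if not delay_met(lower) or not connectivity_met(lower):
            break
        level = lower
    node.current_power_level = level
    return power_levels[level]
```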
Neighborhood Table Management:
The neighborhood table records information including the transmission power needed to reach each neighbor node, and is updated by periodic HEIGHT messages from the sinks. For power adaptation and new neighbor discovery, the neighborhood table is updated with the new neighbors and new transmission power. The node also updates its neighborhood table as neighbor states change. If it receives a STATE (UNSAFE) message, the unsafe neighbor is removed from the table.
Routing Reconfiguration
In building fire emergencies, robust routing is crucial because of the impact of quickly moving fire on node liveness. In this section, we explain how we reconfigure routes to deal with failures. We assume that: (1) the minimal time interval between the "infire" and "unsafe" states of a node is a parameter known beforehand, denoted t_unsafe; and (2) we use the necessary transmission range for connectivity between nodes (according to the selected power level) to approximate the minimal fire spreading time between two nodes. In practice, there are well-known guidelines for estimating the rate of fire spread [18,19], taking into account building materials, building geometry, etc. It is also the case that obstacles, such as walls, which mitigate radio propagation, also have the effect of slowing fire spread.
When a forwarding choice is used for relaying, we add a "timeout" to avoid using stale and unsafe paths, i.e., every node on the path from source s to destination d has a timeout recording the valid time of each link on this route. The timeout is updated when node state changes occur in the neighborhood. A forwarding choice that exceeds its timeout value is considered invalid and is then evicted.
We assign a large initial constant value to represent the estimated valid time of a node in the "safe" state.
When a neighbor node j is caught in fire, a STATE (INFIRE) message is broadcast. If a "safe" node i receives a STATE (INFIRE) message from its neighbor, node i enters the "lowsafe" state. The timeout of node i, i.e., its valid time, is then updated as the minimal time after which this node may be caught in fire and cease functioning: timeout(i) = min(spread_time(i, j)) + t_unsafe (6). The timeout values of the downstream and upstream links adjacent to node i are then also updated accordingly. If node i becomes "infire", the timeouts of the adjacent links are updated as t_unsafe, i.e., timeout(i) = t_unsafe.
Otherwise, if node i becomes "unsafe" based on locally sensed data and a threshold, then timeout(i) is updated to 0 and the timeouts of the adjacent links are also updated to 0.
The link timeout value is updated as the state of a node adjacent to the link changes. When a node's state changes due to fire, the timeouts of the upstream and downstream links adjacent to this node are both updated. For a path link (i, j) on a route path, the timeout value of the link is calculated as: timeout(link(i, j)) = min(timeout(i), timeout(j)) (7). In formula (7), timeout(i) and timeout(j) represent the valid times of nodes i and j of the route in the fire, respectively.
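A small sketch of the timeout bookkeeping implied by formulas (6) and (7); t_unsafe and spread_time are the quantities defined above, while the data structures and function names are assumptions made for illustration.

```python
SAFE_TIMEOUT = 10**9   # large constant valid time for a "safe" node

def update_node_timeout(node, event, t_unsafe, spread_times):
    """Update a node's valid-time estimate when a state change occurs.

    event        -- "neighbor_infire", "self_infire" or "self_unsafe"
    spread_times -- estimated fire spread times from burning neighbors
    """
    if event == "neighbor_infire":
        node.timeout = min(spread_times) + t_unsafe     # formula (6)
    elif event == "self_infire":
        node.timeout = t_unsafe
    elif event == "self_unsafe":
        node.timeout = 0

def link_timeout(node_i, node_j):
    # Formula (7): a link lives only as long as both of its endpoints do.
    return min(node_i.timeout, node_j.timeout)
```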
In a building fire, node failures caused by fire damage trigger routing tree reconfiguration. When a path link's timeout value falls below a threshold (i.e., the route path will become invalid very soon), a route reconfiguration is invoked to find another available route path before the current one becomes invalid. The reconfiguration is invoked only by the upstream node i of the path link (i, j) whose valid time is no less than the timeout of the link, i.e., timeout(i) ≥ timeout(link(i, j)). The routing reconfiguration is invoked as a routing re-discovery by broadcasting an RTR message to set up a new route path search. The node first searches its neighborhood table to see whether one of the existing neighbors is eligible to act as a relay by adapting the power to the setting recorded in the local neighborhood table. Otherwise, it starts a neighbor re-discovery process by increasing its power level gradually.
The re-discovery process stops when it finds another new forwarding choice with a valid route path cached toward one of the sinks (which may be a different sink from the current one). As an example, suppose sensor i catches fire and broadcasts a STATE (INFIRE) message that reaches its neighbors b, d, j, and c. When these nodes receive the message, they enter the "lowsafe" state. For the state change of sensor i, timeout(i) is updated to t_unsafe. Accordingly, sensor i updates the timeouts of its upstream and downstream links, i.e., link(b, i) and link(i, j). By our designed condition for reconfiguration, when timeout(link(b, i)) and timeout(link(i, j)) fall below a predetermined threshold, the routing reconfiguration is invoked by the upstream node whose timeout is no less than the link timeout. Sensor b then broadcasts an RTR message to find a new relay to the sink, i.e., route path {f, b, c, e, sink}. For path link (i, j), sensor i is the upstream node of the link with the lower valid time; it continues to work on this path (forwarding data from sensor i to the sink) until sensor i becomes "unsafe".
It is assumed that data packet acknowledgements are sent at the link layer (not end-to-end). When a node does not receive an acknowledgement after a given time, we say the downstream link has become invalid and then reconfigure the routing.
Analysis
Lemma 1. The EAR routing of the sensor network graph is loop-free. Proof: Suppose that there exists a loop A→B→C→D→E→…→A in the network graph under EAR routing. Each node selects as its next hop a node that has a smaller height toward the sink. When a node is stuck in a local minimum, i.e., in a routing hole, the node can increase its transmission range to find another node that has a smaller height toward the sink, if one exists. It follows that height(A) > height(B) > height(C) > … > height(A), which is a contradiction, so we conclude that the EAR routing of the network graph is loop-free.
Theorem 1. If there exists a route within the delay bound from a node to one of the sinks, EAR can find this route.

Proof: From Lemma 1, we know that there is no loop in the routing graph. Since the number of sensor nodes and their heights are finite, the routes will eventually lead to a sink as long as a real-time route exists.

Theorem 2. For a given delay bound Tmax, the routing path found by EAR is within the delay requirement.
Proof: We denote by delay(sink, i) the delay estimate, which is the minimal delay from the sink to the node, and by delay(i, sink) the delay from the node to the sink on the counterpart route path. We denote by T(i, sink) the realistic delay experienced from a node to the sink. Regarding the queuing delay in wireless sensor networks, data packets are mostly reported from the nodes to the sink, while less traffic (usually control commands) is delivered from the sink to the nodes, so Tq(sink, i) ≤ Tq(i, sink). Assuming the same link quality for the upstream and downstream links, we have delay(sink, i) ≤ delay(i, sink) ≤ T(i, sink). In EAR, we use delay(sink, i) as an estimate of the delay time from the node to the sink in routing discovery, i.e., we use delay(sink, i) to estimate T(i, sink), to find a route that meets the delay threshold; in this way, we improve the real-time delivery ratio from the node to the sink. Since we measure the average delay of the HEIGHT message using power p0, we obtain the maximal delay estimate delay(sink, i) on the minimum-delay route path from the sink to the node over the different power levels. In EAR, we find a relay node i such that the delay T from i to the sink on this route path is no larger than the delay estimate on the route path, i.e., T(i, sink) ≤ delay(sink, i). Otherwise, we increase the power level to find another forwarding choice j, and such a node j (with increased power) exists if it satisfies delay(sink, j) + Ave_delay(i, j) ≤ Tslack, where Tslack = Tmax − T(s, i). The end-to-end delay then satisfies: T(s, sink) = T(s, i) + T(i, sink) ≤ T(s, i) + Ave_delay(i, j) + delay(sink, j) ≤ T(s, i) + Tslack ≤ Tmax. So we find a route from node s to the sink that satisfies T(s, sink) ≤ Tmax.
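For readability, the chain of inequalities at the heart of the proof can be written out as a display; this is only a restatement of the bound derived above, not an additional result.

```latex
\begin{aligned}
T(s,\mathrm{sink}) &= T(s,i) + T(i,\mathrm{sink}) \\
                   &\le T(s,i) + \mathrm{Ave\_delay}(i,j) + \mathrm{delay}(\mathrm{sink},j) \\
                   &\le T(s,i) + T_{\mathrm{slack}} \\
                   &= T(s,i) + \bigl(T_{\max} - T(s,i)\bigr) = T_{\max}.
\end{aligned}
```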
Thus, if a real-time route exists, EAR can find a route path whose end-to-end delay is within the delay requirement Tmax.
Simulations
We verify our routing by simulations using the ns-2 network simulator [20]. To create a realistic simulation environment, we simulated EAR based on the characteristics and parameters of MICAz motes, as shown in Table 1. All nodes can work at three power levels and use the minimal power level as the default transmission power. A many-to-one traffic pattern is used, which is common in WSN applications; this traffic flows between multiple source nodes and one of the sinks. There are 100 nodes distributed in a 100 m × 100 m region, as shown in Figure 4. We randomly select four nodes as source nodes, and place 1-4 sinks in the simulation area as nodes 99, 98, 97 and 96, respectively. Each source generates constant bit rate (CBR) traffic periodically. The real-time packet miss ratio, the packet dismiss ratio by delay estimate, and the energy consumption are the main metrics for evaluating the performance of EAR. The real-time packet miss ratio ("miss ratio" in the following paragraphs) is the ratio of packets that miss the delay bound to the total packets sent. The packet dismiss ratio by delay estimate ("dismiss ratio" in the following paragraphs) is the ratio of packets discarded by the delay estimate to the total packets sent. The energy consumption is the average energy consumed by each sensor during the simulation. The nodes in the network are static. A fire breaks out at a random location 30 seconds after the simulation starts and then spreads to neighboring nodes every 10 seconds. When the fire reaches a sensor node, the node fails permanently 10 seconds later.
We compare our protocol with minimal hop-count routing and the RPAR protocol for performance evaluation. The two comparison routing mechanisms operate with the initial power used as the default transmission power in EAR. RPAR is a real-time power-aware routing mechanism that dynamically adapts transmission power and routing decisions based on packet velocity, calculated from geographical distance and the time left.
EAR Performance When Sink Number Increases
We simulate EAR performance as the sink number increases from 1 to 4 and the delay bound is varied from 10 to 100 ms. Figure 5 shows the end-to-end delay as the delay bound increases. The end-to-end delay decreases as the sink number increases, because more sinks allow more packets to be delivered within the bound. For a given number of sinks, the end-to-end delay increases slowly as we relax the bound. For one sink, the end-to-end delay is very small when the bound is 10 ms, because very few packets can be delivered within that bound. Figure 6 shows the miss ratio as we decrease the delay bound. The packet miss ratio under the delay bound decreases as the sink number increases from 1 to 4, because more sinks increase the probability of real-time packet delivery. Figure 7 illustrates the packet dismiss ratio according to the delay estimate. The dismiss ratio decreases as the sink number varies from 1 to 4, and compared with the miss ratio results in Figure 6, EAR provides a good delay estimate that guides packet delivery toward meeting the real-time requirement. Figure 8 shows the average residual energy per node over the simulation time from 0 to 300 s when the delay bound is 70 ms. The average node energy does not vary greatly as we increase the number of sinks: with more sinks, more packets are delivered, consuming more energy, but fewer routing attempts with increased power are needed. The node energy decreases as we relax the bound because more packets are delivered within the given delay bound.
EAR Performance Compared with No Power Adaptation and No Fire Situations
We evaluate EAR with three-level power adaptation against EAR without power adaptation. Figure 9 illustrates the end-to-end delay with and without power adaptation, with results for 1 sink and 3 sinks respectively. The end-to-end delay clearly decreases when power adaptation is used: adapting the power level increases network connectivity during the fire and helps to find lower-delay route paths that satisfy the given delay bound. Figure 10 shows the miss ratio with and without power adaptation. The miss ratio decreases greatly when we adapt the power level, because doing so increases the probability of real-time packet delivery. We then evaluate the energy efficiency of EAR in fire and no-fire situations. Figure 11 illustrates the average node energy over the simulation time when the delay bound is set to 50 ms. As expected, the average node energy decreases faster in the fire situation, but it remains above zero until about 250 s of simulation. With a delay bound of 50 ms, the network therefore remains effective until close to the end of the simulation in building fire situations.
Performance Compared with Other Protocols in Fire Hazard
We then compare EAR with two related routing mechanisms: RPAR and minimal hop count routing. Figure 12 shows the end-to-end delay as the delay bound increases from 10 to 100 ms with one sink (node 99). EAR has the lowest end-to-end delay as we relax the bound, followed by RPAR, while minimal hop count routing performs worst.
This is because EAR adapts its power level to increase the probability of real-time delivery and responds to the spreading fire by choosing real-time route paths that avoid the dangerous area. RPAR also uses power adaptation to improve real-time delivery, but it is not designed for fire scenarios and tends to choose a minimal-delay path that passes through the fire area. Minimal hop count routing has no real-time guarantee mechanism and is likewise unsuitable for fire situations. Figure 13 shows the miss ratio of real-time packet delivery with one sink. EAR achieves the best real-time data delivery. RPAR performs poorly in the fire hazard even though it adapts its power level in search of a real-time delivery path. Figure 14 shows the average node energy over the simulation time when the delay bound is 50 ms. The three routing mechanisms have similar energy efficiency; EAR is not noticeably more energy efficient because it raises its power level to improve real-time packet delivery, which incurs additional energy consumption.
Conclusions and Future Work
We present a novel real-time and robust routing mechanism designed to be adaptive to emergency applications such as building fire hazards. A high probability of end-to-end real-time communication is achieved by maintaining the desired delay through a message propagation estimate and power level adaptation. The design adapts to realistic hazard characteristics, including fires that expand, shrink and diminish. Our routing mechanism is a localized protocol that makes decisions based solely on one-hop neighborhood information. Our ns-2 simulation results show that EAR achieves good real-time packet delivery under fire emergencies when compared with related work. We have implemented our protocol on a 4-node TinyOS testbed. Future work includes implementation on a 100-node testbed we have deployed at our university to monitor and help handle building fires.
Figure 1. State transition diagram for each node.
Figure 3. Timeout update in fire and route reconfiguration.
Figure 3 shows an example of a timeout update during a fire. Sensor f reports to the sink along the route path {f, b, i, j, sink}. After working for a while, sensor i (colored red) senses the fire. Sensor i then broadcasts a STATE (IN-FIRE) message to notify its communication neighbors (colored yellow): a, b, d, j, and c. When these nodes receive the message, they enter the "lowsafe" state. Because of sensor i's state change, timeout(i) is updated to t_unsafe, and sensor i accordingly updates the timeouts of its upstream and downstream links, i.e., link (b, i) and link (i, j). Following our reconfiguration condition, when timeout(link (b, i)) and timeout(link (i, j)) fall below a predetermined threshold, routing reconfiguration is invoked by the upstream node whose own timeout is no less than the link timeout. Sensor b therefore broadcasts an RTR message to find a new relay to the sink, yielding the route path {f, b, c, e, sink}. For link (i, j), sensor i is the upstream node of the link with the lower valid time; it continues to work on this path (forwarding data from sensor i to the sink) until sensor i becomes "unsafe". We assume that data packet acknowledgements are sent at the link layer (not end-to-end). When a node does not receive an acknowledgement within a given time, the downstream link is considered invalid and routing is reconfigured.
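The sketch below illustrates this timeout-driven reconfiguration in simplified form. State names, the timeout constants and the trigger condition are assumptions chosen for readability; the real protocol additionally handles RTR replies and link-layer acknowledgements.

```python
# Illustrative sketch of the timeout update and route reconfiguration trigger
# described above. Constants are placeholders, not values from the paper.

T_UNSAFE = 10.0           # assumed remaining valid time once a node senses fire (s)
RECONFIG_THRESHOLD = 15.0 # assumed link lifetime below which a new relay is sought (s)

class Node:
    def __init__(self, nid):
        self.nid = nid
        self.state = 'safe'          # 'safe' | 'lowsafe' | 'infire' | 'unsafe'
        self.timeout = float('inf')  # estimated time the node remains usable
        self.neighbors = []

    def on_fire_detected(self):
        """Called when the node's own sensor reports fire."""
        self.state = 'infire'
        self.timeout = T_UNSAFE
        for nb in self.neighbors:    # STATE(IN-FIRE) broadcast to neighbors
            nb.on_state_message(self)

    def on_state_message(self, burning_node):
        if self.state == 'safe':
            self.state = 'lowsafe'

def link_timeout(upstream, downstream):
    # A link is only valid as long as both endpoints are expected to survive.
    return min(upstream.timeout, downstream.timeout)

def maybe_reconfigure(upstream, downstream):
    """The upstream node searches for a new relay (an RTR broadcast in the real
    protocol) when the link's remaining lifetime is short and the upstream node
    itself is expected to outlive the link, i.e. the problem lies downstream."""
    lt = link_timeout(upstream, downstream)
    return lt < RECONFIG_THRESHOLD and upstream.timeout > lt
```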
Figure 5. End-to-end delay as delay bound increases.
Figure 6. Miss ratio percentage as delay bound increases.
Figure 7. Dismiss ratio percentage as delay bound increases.
Figure 9. End-to-end delay with/without power adaptation.
Figure 12. End-to-end delay as delay bound increases.
Figure 13. Miss ratio as delay bound increases.
|
v3-fos-license
|
2018-05-08T18:31:34.854Z
|
0001-01-01T00:00:00.000
|
10465400
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-11-389",
"pdf_hash": "591c06843b2792b159a196e33a62ee12715dbb71",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43102",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "591c06843b2792b159a196e33a62ee12715dbb71",
"year": 2010
}
|
pes2o/s2orc
|
Array Comparative Hybridisation Reveals a High Degree of Similarity between UK and European Clinical Isolates of Hypervirulent Clostridium difficile
Background: Clostridium difficile is a Gram-positive, anaerobic, spore-forming bacterium that is responsible for C. difficile associated disease in humans and is currently the most common cause of nosocomial diarrhoea in the western world. This current status has been linked to the emergence of a highly virulent PCR-ribotype 027 strain. The aim of this work was to identify regions of sequence divergence that may be used as genetic markers of hypervirulent PCR-ribotype 027 strains and markers of the sequenced strain, CD630, by array comparative hybridisation. Results: In this study, we examined 94 clinical strains of the most common PCR-ribotypes isolated in mainland Europe and the UK by array comparative genomic hybridisation. Our array was comprehensive, with 40,097 oligonucleotides covering the C. difficile 630 genome, and revealed a core genome for all the strains of 32%. The array also covered genes unique to two PCR-ribotype 027 strains, relative to C. difficile 630, which were represented by 681 probes. All of these genes were also found in the commonly occurring PCR-ribotypes 001 and 106, and the emerging hypervirulent PCR-ribotype 078 strains, indicating that these are markers for all highly virulent strains. Conclusions: We have fulfilled the aims of this study by identifying markers for CD630 and markers associated with hypervirulence, albeit genes that are not just indicative of PCR-ribotype 027 strains. We have also extended this study and have defined a more stringent core gene set compared to those previously published, due to the comprehensive array coverage. Further to this, we have defined a list of genes absent from non-toxigenic strains and defined the nature of the specific toxin deletion in the strain CD37.
Background
Clostridium difficile (C. difficile) is a Gram-positive, spore-forming, anaerobic bacterium currently responsible for virtually all cases of pseudomembranous colitis (PMC) and for 10-25% of cases of antibiotic-associated diarrhoea [1]. The organism is resistant to various antibiotics and capitalizes on the ensuing disruption of the normal intestinal flora to colonize and cause disease. The spectrum of disease ranges from asymptomatic carriage to a fulminant, relapsing, and increasingly fatal colitis [2]. The effects of C. difficile infection (CDI) are devastating, both in terms of morbidity and mortality and the high costs of disease management [3,4]. Once regarded as relatively uncommon, there has been an upward trend in the incidence of CDI in both North America [1,5,6] and Europe [7,8], culminating in 2007 in over 5 times as many deaths (8,324) as MRSA (1,593) in England and Wales [9].
Various reasons have been suggested for this extraordinary rise in incidence and mortality, including the emergence of so-called 'hypervirulent' strains. The most prominent such strains belong to PCR-ribotype 027, responsible in North America for a 5-fold increase in the historical average of CDI, more severe disease, higher relapse rates, increased mortality, and greater resistance to fluoroquinolone antibiotics [10]. Although restriction endonuclease analysis (REA) and multilocus variable number tandem repeat analysis (MLVA) have greater powers of discrimination [11], PCR-ribotyping [12,13] represents the most widely used method of distinguishing strains, and relies on the use of specific primers complementary to the 3' end of the 16S rRNA gene and to the 5' end of the 23S rRNA gene to amplify the variable-length intergenic spacer region. The fragments generated are analysed electrophoretically, and the size distribution of fragments obtained compared to reference patterns. Presently upwards of 150 PCR-ribotypes are recognised [14].
Typically, PCR-ribotype 027 strains (also characterised as toxinotype III, North American pulsed field gel electrophoresis type 1, NAP1, and restriction endonuclease analysis group BI) possess a binary toxin gene and encode a variant TcdC repressor protein suggested to account for increased toxin production [15,16]. Current PCR-ribotype 027 strains have, since the first documented isolate [17], acquired resistance to fluoroquinolone and erythromycin antibiotics [18][19][20], and their occurrence is often associated with an excessive use of quinolone antibiotics. The speed with which PCR-ribotype 027 can become predominant is exemplified by events in the UK, where its incidence increased from virtually zero over the period 1990 to 2005 [21], to 25 [23]. However, whilst PCR-ribotype 027 strains have received much attention, other strains may also present an equivalent threat in terms of disease severity. In many countries, different PCR-ribotypes can predominate, but be extremely rare elsewhere. For instance, the PCR-ribotype 106, although common in the UK [22], was entirely absent from the European study of Barbut et al. [24]. In the Netherlands, PCR-ribotype 078 increased from 3% to 13% over the period February 2005 to February 2008, infected younger individuals than PCR-ribotype 027 and was more frequently involved in community-associated disease [25]. Human PCR-ribotype 078 isolates possess a number of features in common with PCR-ribotype 027 and have recently been shown to be genetically related to isolates from pigs [26].
Currently, the overall reason why particular strains achieve epidemic status is unclear. Although some suggestions have been made [27], in terms of altered toxin production, presence of binary toxin, changes in antibiotic susceptibility and sporogenesis, the situation is likely to be more complex, involving a number of different phenotypic traits. A previous comparative phylogenomic study using microarrays based only on those genes present in the annotated genome sequence of a PCR-ribotype 012 strain, CD630 [28,29] [GenBank: AM180355.1], showed that the PCR-ribotype 027 strains tested formed a tight clade, which was distinct from the other 56 strains analyzed and confirmed the clonal nature of PCR-ribotype 027 strains, but indicated extensive variation in the genetic content [29]. A further microarray study included extra genes from the Canadian PCR-ribotype 027 strain, QCD-32g58 [30] [GenBank: AAML00000000], in which the conserved genetic core was defined and divergent regions were conserved amongst strains of the same host origin.
The aim of this current study was to identify unique strain differences using a genome-wide approach, with a view both to gaining greater insight into enhanced virulence and to identifying regions of sequence divergence suitable for use as diagnostic indicators of hypervirulence. To accomplish this, a DNA microarray comprising over 41,000 oligonucleotides was designed and constructed using in situ inkjet oligonucleotide synthesis. The strains represented included CD630, R20291 and QCD-32g58. The strains subjected to comparative genomic hybridisation were chosen as they represented the most prevalent PCR-ribotypes from the UK and EU [2,22]. The work presented in this study represents the application of a novel microarray format to comparative genomic hybridisation and is the only study that employs the widely used molecular typing technique of ribotyping both to choose the strains for hybridisation and for the subsequent clustering analysis.
Array verification and coverage
Forty thousand and ninety-seven 60-mer probes were designed to cover the sequenced and annotated genome of C. difficile CD630. This corresponds to approximately one probe every 200 bp. A further 687 probes were designed against extra genes in the preliminary 454 sequence produced for R20291 by the Sanger Institute in 2007 and the available unannotated QCD-32g58 sequence. Additionally, 17 extra genes, including the toxin genes, cwp66 and slpA, were represented at high density by 346 oligonucleotides. Initial experiments were performed with a set of control strains that included CD630, R20291, R23052 and CD196 (R12087). A CD630 self-self hybridisation was also performed. Analysis of the data obtained showed that the genome of strain CD630 hybridised to 57 of the '027-specific' probes, and a BLASTN in silico analysis of the array oligonucleotides against the CD630, R20291 and QCD-32g58 sequences used to design the array showed that these oligonucleotides had highly significant matches in these strains. Accordingly, these oligonucleotides were excluded from further analysis.
Analysis of the remaining PCR-ribotype 027 oligonucleotides with genomic DNA of the control strains showed that these probes produced a positive signal with the DNA of PCR-ribotype 027 strains. Figure 1 shows a condition tree clustering for all the strains against all of the probes, with those representing CD0001 listed at the top, followed by pCD630, the extra genes and finally the extra PCR-ribotype 027 genes at the bottom. Strains are grouped by PCR-ribotype and, on initial inspection, demonstrate that each PCR-ribotype exhibits a visually similar pattern of hybridisation. Additional files 1, 2, 3, 4, 5, 6, 7 and 8 present a full list of the probes that are present or absent in each strain.
Core
The core gene list was established by examining CD630 probes present at a 1:1 ratio in each strain. Analysis of the core genes for all the strains tested showed that 32% of CD630 probes were conserved (12788/40097). This percentage is higher than those previously published in other array studies, of 19.7% [29] and 16% [30]. This is perhaps surprising given the wide variety of PCR-ribotypes analysed, but as this array is denser, containing more than one reporter element per gene, greater sequence conservation will be evident than for arrays with one reporter per gene. Therefore, genes such as slpA, which may not be included in the previously reported core percentages, would be represented in this current figure. Our array also covers intergenic regions not covered by lower density microarrays. Conservation of genes was seen amongst all functional categories (see Additional file 9). Even greater conservation was seen when comparing strains of the same PCR-ribotype, and Table 1 indicates the percentage conservation amongst the studied PCR-ribotypes, with a conservation of 85% or more for PCR-ribotypes 003, 012, 014 and 020.
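A minimal sketch of how such a core set could be tabulated is given below, assuming the normalised test:reference ratios are already available per strain and per probe, and applying the same twofold cut-offs described later in the data analysis section. The in-memory data structure is hypothetical, not the GeneSpring export format actually used.

```python
# Hypothetical sketch: a CD630 probe is counted as "core" when it is present at
# roughly a 1:1 test:reference ratio in every strain hybridised.
LOWER, UPPER = 0.5, 2.0   # twofold cut-offs from the data analysis section

def core_fraction(ratios):
    """ratios: dict mapping strain -> {probe_id: normalised ratio}."""
    strains = list(ratios)
    probes = set(ratios[strains[0]])
    core = {p for p in probes
            if all(LOWER <= ratios[s].get(p, 0.0) <= UPPER for s in strains)}
    # With the study's data, len(core) / len(probes) would correspond to the
    # 32% (12788/40097) figure reported above.
    return core, len(core) / len(probes)
```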
Mobile elements
C. difficile is known to have a highly mosaic genome with many mobile genetic elements such as conjugative transposons and prophages [28]. Of the 1392 probes representing mobile or extrachromosomal elements in strain CD630, only 92 were present in the core of all the strains hybridised. Additional file 10 summarises the presence of the known CD630 mobile elements in each PCR-ribotype. In the majority of PCR-ribotype 027 and 001 strains, CTn1 is absent or highly divergent, and it is absent or highly divergent in all PCR-ribotype 078 and 015 strains. CTn2 is absent from all of the PCR-ribotypes except the PCR-ribotype 012 strains, CD630 and ECDC 012. CTn3 is absent or highly divergent in all PCR-ribotypes except 078 and 012. CTn3 (Tn5397) is the only known mobile C. difficile element carrying erythromycin and tetracycline resistance [31]. Therefore, resistance to these antibiotic classes in any of the strains tested, including R20291 and the PCR-ribotype 078 and 106 strains (which are resistant to erythromycin), must be provided by an as yet undefined genetic element or mutation [32,33].
Figure 1. The probes were arranged by their corresponding C. difficile 630 gene, with CD0001 at the top and CD3680 at the bottom, followed by CDS from the plasmid pCD630 (CDP01 to CDP11) and finally probes representing the genes unique to ribotype 027. Each column represents an isolate, and each row corresponds to a probe. The status of each probe is indicated by colour as follows: red, present/conserved in the test strain; blue, absent in the test strain; yellow, present in both the test and control strains. The strains are grouped by PCR-ribotype, as indicated below the figure. The annotations on the left indicate regions of divergence from CD630 in all of the strains tested.
CTn4 is detectable in the PCR-ribotype 027 Quebec strain 23M63 but is absent or highly divergent in 27/28 of the PCR-ribotype 027 strains tested on the array utilised in this study. It is also partially present in one of the PCR-ribotype 001 strains tested but absent from all the other PCR-ribotypes. CTn5 is absent or highly divergent in 6 PCR-ribotypes: 001, 002, 003, 014, 015 and 020. In all PCR-ribotype 017 strains, only genes CD1864-9 are absent or highly divergent. These genes are also absent or highly divergent in 1-2 strains of the remaining 3 PCR-ribotypes: 027, 078 and 106. One PCR-ribotype 014 strain exhibits hybridisation between CD3330-44, but CTn6 is absent or highly divergent in all the other strains tested. Conversely, CTn7 is present in some form in all PCR-ribotypes except PCR-ribotypes 002 and 015. Prophage 1 is absent from all the strains tested except the PCR-ribotype 012 strains. Prophage 2 hybridises between CD2927-59 in all but PCR-ribotype 001, 002, 014, 015 and 078 strains.
Virulence genes
Various genetic loci have been implicated in the virulence and pathogenesis of C. difficile, including those encoding toxins and putative adhesins, as well as factors responsible for the spread of C. difficile, such as flagella and motility genes, antibiotic resistance genes and regulatory genes.
Toxins
The C. difficile genome contains the PaLoc (pathogenicity locus), which harbours five genes (tcdABCDE) responsible for the synthesis and regulation of the two major virulence factors, the toxins TcdA and TcdB. Variation in this region is extensive and, as a consequence, toxinotyping is a frequently used molecular method to discriminate between strains [34,35]. Variable sequences include both the structural genes encoding the toxins and the associated regulatory genes. Thus, the ability of some PCR-ribotype 027 strains to produce more of both toxins is attributed to a deletion at position 117 in the negative regulator of toxin production, tcdC [15,16], leading to a truncated TcdC protein. The occurrence of similar deletions in other strains not generally associated with epidemics suggests, however, that such changes are not indicative of hypervirulence [20]. PCR-ribotype 027 strains are usually toxinotype III strains, whereas CD630 is toxinotype 0.
The array results confirm that tcdB is conserved among all PCR-ribotype 027 isolates examined and that these isolates diverge in the 3' region of tcdC (the negative regulator of toxin production), as indicated by a lack of hybridisation to EXP_CD630_800001_805000_s_PSO-60-77, the last tcdC probe on the array. Naturally occurring toxin A-B+ strains cause diarrhoea and colitis in humans [36] and are generally PCR-ribotype 017 (toxinotype VIII). From the hybridisation observed with our array, all of the PCR-ribotype 017 strains examined here lacked tcdA and exhibited divergence in tcdB when compared to the corresponding CD630 and SM probes (data not shown).
Some C. difficile strains also produce a third toxin in addition to TcdA and TcdB, a binary ADP-ribosyltransferase toxin encoded by cdtA and cdtB. The role of binary toxin in pathogenesis is unclear, although it has been linked to increased disease severity [2]. The genes cdtA and cdtB are conserved in PCR-ribotypes 027 and 078. Our hybridisation results agree with those previously reported for CD630, showing divergence in both of these genes, which renders them inactive in this strain [37]. PCR-ribotype 017 also displays results similar to previous publications and shows limited hybridisation to some CD630 cdtA and cdtB reporters, as concluded by Rupnik [35]. The results from this study for the other PCR-ribotypes examined show that this region is divergent.
Flagella and motility genes
Flagella are important in pathogenesis for many enteric pathogens, including Campylobacter jejuni and Salmonella enterica serovar Enteritidis [38,39]. Chemotaxis and motility are inextricably linked and both are important for bacterial survival, allowing bacteria to move towards nutrients and away from substances that may prove detrimental. Genes that allow for flagella modification by glycosylation have recently been described in C. difficile QCD-32g58 and R20291 upstream of the flagellar biosynthesis locus [32,40]. Reporters representing 2 of the 4 loci (CDR0223 and CDR0225) are present on the array and are conserved in all strains but two PCR-ribotype 017 strains (L22 and 23). Stabler et al [32] described the flagella-related genes in 2 loci of the CD630 genome, F1 (CD0226-40) and F3 (CD0245-71) [29]. Loss of, or significant divergence in, the F1 and inter-flagella region (F2; CD0241-4) was observed in PCR-ribotype 027 strains; this was shown to be due to 84-90% sequence identity in this region [32].
Our data show that only 7/93 strains are divergent in these genes, and this includes the two PCR-ribotype 017 strains discussed above, two non-toxigenic strains and two PCR-ribotype 078 strains. PCR-ribotype 078 strains have previously been reported to be non-motile [32] and, although the CD630 flagella loci appear to be highly divergent or absent in these strains, the corresponding R20291 flagella and flagella glycosylation genes are present, indicating that another mechanism of variation is responsible for their non-motility.
Antibiotic resistance
Another contributing factor to the spread of C. difficile infection is the acquisition of antibiotic resistance. The genome sequence of CD630 allowed the identification of many genes associated with antibiotic resistance, including those already known, such as ermB and tetM, and those with no prior experimental data, such as the putative lantibiotic resistance genes (CD0478-CD0482, CD0820-CD0824 and CD1349-CD1352). In contrast to strain CD630, the epidemic 027 strains have been shown to be highly resistant to fluoroquinolones due to point mutations in the DNA gyrase genes, which cannot be detected by this microarray [4,29].
In agreement with previous array data, the lantibiotic resistance loci CD0643-6 and CD1349-52 are absent or highly divergent in all the PCR-ribotype 078 strains tested and appear to be divergent in some of the tested PCR-ribotype 027 strains. The putative ABC transporter that confers daunorubicin resistance (CD0456) was absent from PCR-ribotype 078, 106 and 020 strains, but present in all others. The R20291 sequence showed that chloramphenicol resistance is conferred by CDR3461, part of CTn027. The array shows that this gene or its homologue is present in all of our PCR-ribotype 027 and 001 strains, present in the majority of PCR-ribotype 078 strains and divergent in the remaining PCR-ribotypes.
Regulatory systems
Regulatory genes form a large part of the C. difficile genome, comprising 11% of the CD630 genome [28]. In Staphylococcus aureus, the agr quorum sensing locus (agrCABD) has been implicated as a key regulator of many virulence factors [41,42]. In strain CD630, only homologues of agrD and agrB are present, respectively encoding a prepeptide of a secreted small autoinducer peptide and a transmembrane protein involved in AgrD processing. The homologous system in S. aureus also contains two further genes, agrC and agrA, encoding a two-component system. Preliminary 454 sequencing had shown that the PCR-ribotype 027 strain R20291 contained a second complete copy of an agr locus (agrCABD) in addition to the agrBD genes of strain CD630. Accordingly, oligonucleotides corresponding to this extra agrCABD locus were incorporated on our array at high density, with an additional 25 probes.
Hybridisation against our array demonstrated that the extra agrCABD locus found in R20291 is entirely present in the genomes of 82 of the 94 (86%) strains tested, including two of four non-toxigenic strains (Figure 2). Additional file 11 details the presence, absence and divergence (signal around 1) of each probe in the remaining 12 strains. The hybridisation to a few probes by DNA isolated from each of these 12 strains implies that this region is divergent rather than absent. PCR primer walking was performed on the strains detailed in Additional file 12, and primers were designed to the region CDR3184-3190. These primers generated amplicons of the expected size when DNA was derived from the positive control, R20291. No such amplicons were generated when DNA was derived from the 12 test strains. The positive control primers designed to amplify CDR3190 produced an amplicon with DNA isolated from all strains (data not shown). Overall, these results indicate that the absence of this additional agrCABD locus is the exception, rather than the rule.
Other virulence factors
The ability to sporulate is an important mechanism for the dissemination of all clostridia. A recent study has suggested that epidemic PCR-ribotype 027 isolates are more prolific in terms of spore formation than non-epidemic strains [43]. The sporulation related genes represented on the array are conserved throughout all the strains tested.
Another set of genes that have been implicated in virulence are those encoding cell surface proteins, including Cwp84 [44]. The majority of the genes coding for cell surface proteins are conserved in all of the strains tested. The genes which appear to show divergence are cwp66, CD2791 and CD3392.
Non toxigenic strains
In order to provide further validation of the array, the DNA of a total of four non-toxigenic strains (CD37, ATCC 43593 (1351), ATCC BAA-1801 (3232) and ATCC 43501 (7322)) was hybridised to the array. Braun et al [45] defined the integration site for the pathogenicity locus (PaLoc) by sequence-based comparison of toxigenic and non-toxigenic strains. Included in this analysis were the three ATCC non-toxigenic strains 43593, 43501 and BAA-1801. The C. difficile strain CD37 has previously been described as non-toxigenic, but the nature of the deletion was never fully characterised [46]. As shown in Figure 3, the PaLoc is absent from all four non-toxigenic strains at the site determined by Braun et al [45]. In these strains, the cdu1 gene is adjacent to the cdd1 gene, and this was confirmed using the multiplex PCR and primers described by Braun et al. [45] (data not shown).
Further analysis of the non-toxigenic strains was performed and showed that 71 genes were absent or highly divergent from all of these strains compared to CD630 and 15 of the R20291 extra genes were also absent or highly divergent (detailed in Additional file 13). These genes include coding sequences (CDS) in the conjugative transposons CTn2 and CTn6. Two of the strains have additionally lost, or are highly divergent in, the flagella genes CD0226-40 and the R20291 flagella F2 region CDR0242-7.
Discussion
The microarray used in the current study was designed to cover one sequenced strain of C. difficile (CD630) and the preliminary unannotated sequences from two different PCR-ribotype 027 strains, R20291 (based on a 454 sequence run available at ftp://ftp.sanger.ac.uk/pub/pathogens/cd/C_difficile_Bi_454.dbs) and QCD-32g58. Since the microarray was designed in 2007, the fully annotated sequence of R20291 [EMBL: FN545816], together with that of the historical PCR-ribotype 027 strain CD196 [EMBL: FN538970], has been published [32]. Comparison of the 027-specific probes on the microarray to the published sequence of R20291 has revealed some differences. In particular, a total of 234 additional R20291 genes were described in comparison to the sequenced strain CD630, and 505 genes were found to be unique to CD630. The array used in our study covers 169 of the 234 additional genes (72.2%). The missing CDS are detailed in Table 2. The majority of genes not represented on the array are transposon or phage related (40 genes), and the remaining 25 genes have oligonucleotide reporters representing neighbouring genes on the array.
During the gap closure sequencing and subsequent analysis of the R20291 and CD196 genomes, 47 extra genes were found in strain R20291 compared to the historical strain CD196. This included a unique 20 kb phage island, termed SMPI1, which was found to be inserted into a unique PCR-ribotype 027 conjugative transposon, named CTn027. Our array was designed prior to gap closure of these 2 genomes and as a consequence represents only 29.8% of the 47 additional genes found in the R20291 genome. The majority of the genes not represented by the array form part of the conjugative transposon CTn027, which is unique to R20291. However, the 14 CTn027 genes that are represented by our array were found to be present in the genomes of only 5 of the 28 PCR-ribotype 027 strains tested, thereby indicating that this transposon is not common amongst PCR-ribotype 027 strains.
The tiling nature of our array has established a more stringent and definitive core gene or sequence list than those previously published. Analysis of the core genes for all the strains tested showed that 32% of CD630 probes were conserved (12788/40097). This percentage is higher than those previously published in other array studies, of 19.7% [32] and 16% [30]. The high density of our array, the fact that there is more than one reporter per gene and the coverage of intergenic regions mean that our array provides a greater ability to define the core genes or sequences in each strain than PCR-spotted or single-reporter-per-gene arrays. Conservation of genes was seen amongst all functional categories (Additional file 9).
As expected, even greater conservation was seen when comparing strains of the same PCR-ribotype. Table 1 indicates the percentage conservation amongst the studied PCR-ribotypes, with conservation of 85% or more for PCR-ribotypes 003, 012, 014 and 020 in comparison to strain CD630. However, three of these ribotypes were represented by only one isolate, and the study would have to be extended to include more isolates to provide a real indication of conservation among ribotypes 003, 012 and 020. Our array confirmed divergence between PCR-ribotypes within the toxin-encoding regions, particularly in the case of tcdB and cdtAB, while at the same time demonstrating that the particular tcdB variant present in R20291 was conserved amongst all PCR-ribotype 027 isolates tested.
Examination of the conjugative transposons in different PCR-ribotypes of C. difficile shows that the pattern of hybridisation to the probes representing the mobile elements provides only a limited indication of PCR-ribotype. Thus, while the majority of PCR-ribotype 106 strains lack any sequences homologous to CTn5, one strain (L25) does carry CTn5-derived sequences. Many strains showed homology to the genes at the terminal ends of the transposons. Whilst this could be because these genes are common to many transposons, genes such as CD3325 and CD3349 of CTn6 are present in all the strains tested even though the occurrence of the whole transposon is limited to CD630 and one PCR-ribotype 014 strain. The single PCR-ribotype 003 and 015 strains and the eight PCR-ribotype 002 strains appear particularly devoid of homology to the specific transposon and prophage probes present on the array. The elements tested appear completely absent from six of the eight PCR-ribotype 002 strains, as well as the single PCR-ribotype 003 strain (aside from partial hybridisation to some CTn1 probes) and the PCR-ribotype 015 strain (aside from partial hybridisation to some prophage 2 probes). Two of the PCR-ribotype 002 strains carry some regions with limited homology to parts of prophage 2.
A major aim of the study was to determine whether it was possible to identify divergent sequences that may be characteristic of either PCR-ribotype 027, or indeed hypervirulence. Seventeen of the 537 PCR-ribotype 027 probes represented on the microarray were present in all of the strains (Table 3). In silico analysis showed that these matches were not expected against the available non-027 nucleotide sequences. Determination of hypervirulent sequence markers to separate PCR-ribotype 027 strains from the rest of the strains was not possible. All of the 027 genes represented by the 027 probes were present in at least one strain of PCR-ribotypes 001, 020, 078 and 106. Table 4 details the percentage of 027 probes present in each PCR-ribotype. Additional file 14 details the 027 genes discovered by Stabler et al [32] absent from the array design. Additional file 15 examines the probes absent in each PCR-ribotype. Filtering was performed to see if any elements on the array could be used to identify individual PCR-ribotypes. No single probe was representative of just one PCR-ribotype.
It was noteworthy that the PCR-ribotype 020 reference strain also shares the extra PCR-ribotype 027 genes. PCR-ribotypes 020 and 014 are very difficult to differentiate by PCR-ribotyping and are therefore frequently combined as the "014/020 type". This 014/020 PCR-ribotype is currently the most frequently found type in Europe. It is remarkable, however, that type 014 differed considerably from 020 with respect to the extra 027 genes, indicating that the reference PCR-ribotype strains of 020 and 014 are clearly different. As only one reference strain of PCR-ribotype 020 was examined on the array, the possibility that these 2 PCR-ribotypes may be distinguishable by the presence or absence of the extra 027 genes needs to be further examined. Our study further emphasised that the extra copy of the Agr system (agrCABD) present in R20291 [32], and absent in CD630, is present in the majority of strains examined. It is, therefore, most likely not associated with hypervirulence.
Another aim of this study was to determine sequences that could be used to identify the strain CD630. The pCD630 plasmid is only present in one other strain (EK29). As detailed in Table 5, 81 CD630 genes are absent, or highly divergent, from all other PCR-ribotypes (except the PCR-ribotype 012 reference strain). Only the mobile elements CTn5 and CTn7 do not have any CDS on this list. The only genes which are not derived from mobile elements on this list are CD0211-2, which encode a CTP:phosphocholine cytidylyltransferase and a putative choline sulfatase, and CD2001, CD2003-5, encoding 2 conserved hypothetical proteins, an efflux pump and a MarR transcriptional regulator. CD3136-8 and 3147-53 are included in this list as they are only present in 9 of the 94 strains tested.
Conclusions
C. difficile has become the most common cause of nosocomial diarrhoea in recent years, partly due to the emergence and spread of the hypervirulent PCR-ribotype 027. The increasing rates of CDI are not, however, caused solely by the spread of this PCR-ribotype, which remains the second most commonly isolated PCR-ribotype in the UK and the fourth most commonly isolated PCR-ribotype in Europe [22,24].
This array comparative genomic study presents a snapshot of current EU clinical strains and the current molecular epidemiology of C. difficile [47]. Our study has shown that the PCR-ribotype 027 markers absent in the CD630 genome are not solely confined to PCR-ribotype 027 strains, but appear distributed amongst other PCR-ribotypes to varying degrees. Indeed, in some cases (PCR-ribotypes 001, 020 and 106) there is greater overall carriage of these markers (100%) than amongst the PCR-ribotype 027 strains examined (98.8%). The apparent lower carriage rate in the latter may in part be a reflection of the larger sample size analysed (29 × 027) compared to the other PCR-ribotypes (10 × 001, 17 × 106, 9 × 078 and 1 × 020). This does not rule out the possibility that some of these markers may be indicative of increased virulence. Thus, PCR-ribotype 001 is one of the commonest types in Europe and frequently associated with outbreaks, PCR-ribotype 106 was until recently the epidemic strain in England and Wales [22], whilst PCR-ribotype 078 strains are increasingly recognised as being as aggressive as PCR-ribotype 027 strains [25]. The presence of markers of enhanced virulence common to 027 is, therefore, not surprising. Although comprehensive and of high density, the microarray employed here is of limited utility as it does not cover all the extra PCR-ribotype 027 genes later revealed by Stabler et al. [32]. The presence of such '027-specific' genes in PCR-ribotype 078, 001, 020 and 106 strains should be confirmed. However, as they largely represent transposon-related genes, their usefulness as markers of hypervirulence for diagnostics may be limited.
We have fulfilled the aims of this study by identifying markers for CD630 and markers for hypervirulence, albeit genes that are not just indicative of PCR-ribotype 027. As a consequence of our comprehensive array coverage, we have also defined a more stringent core gene set compared to those previously published [30,32]. Further to this, we have defined a list of genes absent from non-toxigenic strains and defined the deletion in strain CD37.
Strains and growth conditions
Ninety-four clinical strains were investigated in this study, including 29 PCR-ribotype 027 strains, 17 PCR-ribotype 106 strains, 10 PCR-ribotype 001 strains, 9 PCR-ribotype 078 strains, 8 PCR-ribotype 002 strains, 8 PCR-ribotype 017 strains and 7 PCR-ribotype 014 strains (Additional file 16). Four non-toxigenic strains were also hybridised to the array for further investigation. The majority of the strains examined in this study were isolated in the UK or the Netherlands. A 10 μl loop was used to inoculate pre-reduced BHIS agar from frozen bacterial stock. The plates were then incubated anaerobically at 37°C under an atmosphere of N2:H2:CO2 (80:10:10, vol:vol:vol) in an anaerobic workstation (Don Whitley, Yorkshire, UK). A single colony was then used to inoculate a 10 ml BHIS broth, which was incubated overnight prior to DNA extraction.
DNA Extraction
A traditional DNA extraction method utilising phenol chloroform extraction was used [48]. Briefly, overnight cultures were pelleted and the cells were resuspended in 260 μl buffer EB (Qiagen). After the addition of 20 mg/ml lysozyme (Sigma-Aldrich, Gillingham, Dorset, UK) and 10% SDS (Sigma-Aldrich, U.K.), the solution was incubated at 37°C for 1 hour. The solution was then incubated for a further hour with 100 mg/ml DNase free RNase (Roche, Burgess Hill, U.K.) and Proteinase K (20 mg/ml; Qiagen, Crawley, West Sussex, U.K.). DNA was extracted by phenol:chloroform:IAA (Sigma-Aldrich) washes and phase-lock gel (5 Prime, Gaithersburg, MD, USA). The genomic DNA was then precipitated using ice-cold 100% ethanol and sodium acetate and purified with two washes of 70% ethanol. Purity and quantity were assessed using a NanoDrop1000 spectrophotometer (Thermofisher Scientific, Waltham, MA, USA) and visualisation by gel electrophoresis. Genomic DNA used for hybridisation to the microarray was fragmented by sonication and the fragment size was examined by gel electrophoresis.
Array design
The array was designed to cover the previously sequenced strain C. difficile 630, the preliminary 454 sequence data of the 027 strain R20291 and the unannotated sequence of the Canadian 027 isolate QCD-32g58, in a strategy similar to that used by Witney et al [49]. The R20291 genome sequence was generated by 454/Roche GS20, as discussed in Stabler et al [32]. Genome annotation of strains R20291 and QCD-32g58 was based on previously published annotations of C. difficile strain 630 [17]. The genomic sequences were compared against the database of strain 630 proteins by blastx, and a CDS feature in the query genome was created when a hit of over 90% identity was found. Glimmer3 was used to predict CDSs in genomic regions where no significant hits were found [50]. Any unique genomic regions left were examined and annotated manually in Artemis [51]. The genome comparisons were visualized in Artemis and ACT (Artemis Comparison Tool) [52]. In silico comparison against the Canadian strain QCD-32g58 was also performed. Probes were first designed to the CD630 genome, and additional genes of interest from the other strains (R20291 and QCD-32g58) were then included. The CD630 portion of the array had a tiling design. For this, the genome was divided into 5 kb segments with the aim of producing the best probe for each 100 bp of sequence. All possible 60-mers were considered and ranked on the basis of melting temperature, likelihood of secondary structure and GC content. The highest-ranking probe per 100 bp was then selected. Using this method, 40,097 oligonucleotides were designed to cover the CD630 genome. A further 681 probes covered extra genes found in R20291 or QCD-32g58. Regions such as the PaLoc were also represented by 346 extra probes at higher density.
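The sketch below illustrates the tiling selection step in simplified form: it scores candidate 60-mers in each 100 bp window by GC content only, whereas the actual design also ranked melting temperature and secondary-structure likelihood. All names and the target GC value are assumptions.

```python
# Simplified sketch of the tiling probe selection: slide over the genome in
# 100 bp windows and keep the best-scoring 60-mer from each window. Only GC
# content is scored here, as a stand-in for the full Tm/structure/GC ranking.

TARGET_GC = 0.5  # assumed target GC fraction

def gc_fraction(seq):
    seq = seq.upper()
    return (seq.count('G') + seq.count('C')) / len(seq)

def pick_probes(genome, probe_len=60, window=100):
    probes = []
    for start in range(0, len(genome) - probe_len + 1, window):
        # Candidates are all 60-mers starting within this 100 bp window.
        segment = genome[start:start + window + probe_len - 1]
        candidates = [segment[i:i + probe_len]
                      for i in range(len(segment) - probe_len + 1)]
        best = min(candidates, key=lambda s: abs(gc_fraction(s) - TARGET_GC))
        probes.append((start, best))
    return probes
```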
Array production
Our high-density custom microarrays were printed using an in situ inkjet oligonucleotide synthesizer by Agilent Technologies (Stockport, Cheshire, UK) [53]. The probes were 60 nucleotides in length and printed in single copy per array. Four arrays were printed per slide.
Labelling and hybridisations
The genomic DNA was labelled using the BioPrime DNA labelling system (Invitrogen, UK). Hybridisations were performed using SureHyb technology (Agilent, Stockport, Cheshire, UK), with 2 μg of test genomic DNA labelled with Cy5-dCTP and 2 μg of C. difficile 630 genomic DNA labelled with Cy3-dCTP (GE Healthcare Life Sciences, UK) as a common reference. The labelled DNA was purified using a MinElute kit (Qiagen, Crawley, W. Sussex, UK) and the extent of Cy dye incorporation was measured using a NanoDrop spectrophotometer. The test and control DNA were combined in a final volume of 39 μl at a concentration of 2 μg each. To this mixture, 10× Oligo aCGH/ChIP-on-Chip Blocking Agent and 2× Hi-RPM hybridisation buffer (Agilent Technologies, UK) were added. The solutions were then denatured at 95°C and incubated at 37°C for 30 min. The microarray was hybridised using a SureHyb chamber at 65°C for 24 h. Slides were washed once in pre-heated Oligo aCGH/ChIP-on-Chip Wash Buffer 1 for 5 min and briefly in Oligo aCGH/ChIP-on-Chip Wash Buffer 2. Microarrays were scanned using an Axon 4000B array scanner (Molecular Devices, Sunnyvale, CA, USA) and intensity fluorescence data acquired using GenePix Pro (Molecular Devices).
Technical replicates were performed with our control strains CD630 (self-self hybridisation), R20291 and CD196, and this included dye-swap experiments. No replicates were performed for the clinical strains tested.
Microarray data analysis
The data was normalized and analysed using GeneSpring GX version 7.3 (Agilent Technologies, UK). Initially for each spot, the median pixel intensity for the local background was subtracted from the median pixel intensity of the spot, and any values less than 0.01 were adjusted to 0.01. Background-subtracted pixel intensities for the test strain channel were divided by those for the reference strain channel. The resulting log ratios were normalised by applying Per Spot Per Chip normalization, using 50% of data from that chip as the median.
An arbitrary cut-off of twofold was used to identify those genes that are specific to one of the strains. Therefore, for all strains, the upper cut-off was set at a ratio of 2 and the lower cut-off at a ratio of 0.5. Genes with a ratio greater than the upper cut-off were deemed to be specific to the test strain, genes with a ratio less than the lower cut-off were deemed to be specific to the reference strain, and genes with ratios between 0.5 and 2 were deemed to be present in both strains. Previous studies have shown that using arbitrary twofold cut-offs to determine the presence or absence of genes is more conservative than other methods such as GACK or standard deviation from the median [48]. The presence or absence of a sequence was based on the presence or absence of one probe. The presence or absence of a gene was based on the presence or absence of more than one probe.
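As a minimal illustration of the classification rule just described, the sketch below applies the twofold cut-offs to a single normalised ratio. Only the 0.5 and 2 thresholds come from the text; the function name and labels are ours.

```python
# Per-probe call based on the arbitrary twofold cut-off described above.
# `ratio` is the background-subtracted, normalised test/reference ratio.

def call_probe(ratio, lower=0.5, upper=2.0):
    if ratio > upper:
        return 'test-specific'        # present only in the test strain
    if ratio < lower:
        return 'reference-specific'   # absent or divergent in the test strain
    return 'present in both'
```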
PCR amplification
PCR amplifications were performed using the primers described in Supplementary Table 7 and KOD Hot Start DNA polymerase (Novagen, Merck Chemicals, UK). Reactions were performed using a denaturation step at 95°C followed by 30 cycles of 95°C for 30 seconds, 52°C for 1 minute and 72°C for 2-7 minutes, followed by a final extension at 72°C for 5-7 minutes. PCRs used to define the PaLoc used the primers and reaction conditions described by Braun et al [45]. PCR primer walking used to confirm the results for the second agr locus was performed using the same polymerase as above, with an annealing temperature of 55°C. PCR products were analysed on 1% or 3% agarose gels run at 100-150 V for 1 hour and stained with ethidium bromide.
Microarray data accession number
Fully annotated microarray data has been deposited in ArrayExpress (E-MTAB-162).
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2010-11-30T00:00:00.000
|
14944532
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1371/journal.pone.0014166",
"pdf_hash": "cdab9acf3cf22bb9fa5a4e03e04ae46028160270",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43103",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "cdab9acf3cf22bb9fa5a4e03e04ae46028160270",
"year": 2010
}
|
pes2o/s2orc
|
Broad-Scale Latitudinal Variation in Female Reproductive Success Contributes to the Maintenance of a Geographic Range Boundary in Bagworms (Lepidoptera: Psychidae)
Background Geographic range limits and the factors structuring them are of great interest to biologists, in part because of concerns about how global change may shift range boundaries. However, scientists lack strong mechanistic understanding of the factors that set geographic range limits in empirical systems, especially in animals. Methodology/Principal Findings Across dozens of populations spread over six degrees of latitude in the American Midwest, female mating success of the evergreen bagworm Thyridopteryx ephemeraeformis (Lepidoptera: Psychidae) declines from ∼100% to ∼0% near the edge of the species range. When coupled with additional latitudinal declines in fecundity and in egg and pupal survivorship, a spatial gradient of bagworm reproductive success emerges. This gradient is associated with a progressive decline in local abundance and an increased risk of local population extinction, up to a latitudinal threshold where extremely low female fitness meshes spatially with the species' geographic range boundary. Conclusions/Significance The reduction in fitness of female bagworms near the geographic range limit, which concords with the abundant centre hypothesis from biogeography, provides a concrete, empirical example of how an Allee effect (increased pre-reproductive mortality of females in sparsely populated areas) may interact with other demographic factors to induce a geographic range limit.
Introduction
Understanding how species' geographic distributions arise and are maintained constitutes one of the central goals of ecology. The 'abundant center' hypothesis from biogeography [1] predicts that local population density should decline as one moves from the core of a species' distribution toward the outer fringes, but many ecological mechanisms could give rise to such a pattern. Indeed, across species, a broad array of causal factors is known to influence the positions and characteristics of geographic boundaries, but most studies, and especially those dealing with terrestrial animals, have evaluated only one factor at a time, in isolation from other determinants [2]. Even among insects, where spatially replicated populations are often more tractable than in other animals, few studies have documented broad-scale variation in reproductive success, or how such variation may limit a distributional range [3,4]. This is unfortunate because spatial variation in birth rate (reproduction) is probably the most critical determinant of geographic range boundaries [2].
Premature mortality of adult females (when females die before they lay their full complement of eggs) has long been known as a determinant of insect population dynamics [5,6,7], but the demographic consequences of reproductive failure at low population density (a "demographic" Allee effect) [8] have been documented for few insect species [9,10]. Theoretical models predict that a demographic Allee effect can contribute to the maintenance of a distributional range both with [11] and without [12] a strong environmental gradient. However, neither scenario has received strong empirical support.
Efforts to understand the structure, maintenance, and dynamics of animal range boundaries in a synthetic way are currently hamstrung by the lack of a model system for which multiple demographic parameters can be concurrently estimated in natural populations, starting in the interior of a species range and moving out toward the edge of distribution [2]. Here, we answer the call for a model system for the study of geographic range limits. The bag-centred lifestyle of bagworms (Lepidoptera: Psychidae) makes them ideal animals for investigating geographic variation in demography. Multiple components of female fitness can be assessed using pre- and postmortem dissection of bags, including mortality during the pupal stage, timing of the adult stage, mating success of adults, fecundity, overwintering survival of eggs, and reproductive output (Table 1). We provide herein a clear demonstration of how the interplay among a variety of demographic factors, including a striking spatial gradient in reproductive success, contributes to the bagworm's geographic range boundary.
Study system
The bagworm Thyridopteryx ephemeraeformis (Haworth) is a univoltine, polyphagous moth widespread in the United States. Throughout its range, T. ephemeraeformis is broadly distributed as a pest in urban and agricultural landscapes on ornamental trees, predominantly juniper (Juniperus sp) and arborvitae (Thuja occidentalis).
Thyridopteryx ephemeraeformis possesses a suite of life history traits that make it an ideal candidate for a holistic approach to understanding what factors set and maintain range limits. Females are flightless as adults and reproduce within their bags, two traits that greatly facilitate studies of lifetime reproductive success and spatial population dynamics [13,14]. The bags are conspicuous on their host plant and infestations tend to occur in discrete patches on isolated plants or clusters of plants [14], thus facilitating sampling of local populations even at low population density. Populations can be sampled along a broad latitudinal range (32-42°N) in the Midwest, but the species features a distinct geographic limit corresponding to northern Indiana that apparently has been stable for decades [15].
First instars construct a self-enclosing bag from host-plant material and enlarge this bag throughout their development. Upon completion of feeding, larvae tightly attach their bag to the host plant to pupate. Adults emerge in the fall. Males, which are typical winged moths, actively forage for sexually receptive females. The females are paedomorphic (neotenous), flightless, and do not leave their bag before the end of their life. Females attract mates during a 'calling stage' in which they disseminate setae impregnated with pheromone. Shortly after mating, the female oviposits a single clutch of eggs inside her pupal case and bag [16,17]; upon oviposition, the females drop to the ground and die. Females that fail to mate do not oviposit and eventually die within their bag, usually outside of the pupal case. The eggs laid by mated females overwinter inside the maternal bag, and neonates emerge in the spring.
Reproductive success of females for the 2008 reproductive season
Bagworm bags were sampled in March and April 2009, before the hatching of neonates. Study sites were located by driving through Indiana, Kentucky, and Tennessee along a 155 km wide north-south corridor (Fig. 1) and inspecting junipers for the presence of bagworms. Approximately 50 bags were collected on infested junipers at each of 110 sites. The bags were dissected to determine the mating status of females and the weight of egg masses. Postmortem assessments of the bags were conducted to evaluate the mating status of females based on the presence or absence of eggs (mated and unmated females, respectively) inside the pupal cases of females that had emerged as functional adults [18]. Adult emergence was diagnosed using as criteria the anterior split of the female pupal case and the presence of pheromone-impregnated setae in the lower portion of the bag.
At the time of sampling in early spring, all egg masses laid by mated females appeared healthy (whitish colour, smooth shape).
To determine whether the eggs had survived the winter, pupal cases with egg masses were individually marked by location, kept in Solo cups on a laboratory bench, and monitored daily to determine hatching of early instars. Early instars hatched over a 12-week period from the time of collection to 1 June. After a period of 7 days without emergence, the remaining egg masses that did not yield live larvae were visually inspected. All unhatched eggs had shrunk and turned black, indicative of overwintering mortality [19]. For each site, the mating success of females and the proportion of females that produced live progeny (i.e., those females whose eggs overwintered successfully to hatch) were estimated as described in Table 1.

Reproductive success for the 2009 reproductive season

Sampling was conducted in Indiana (Fig. 2) throughout the emergence period of adults in late summer and early fall (26 sites for arborvitae and 24 sites for juniper). Sampling was initiated at the onset of pupation and terminated when all females had emerged (10 August to 18 November). Because the sites were distributed across a broad latitudinal range, they could not all be sampled on the same day. Each sampling interval lasted 3–4 days and gaps between sampling intervals lasted 5–10 days. Sampling at a given site ceased when all females had emerged. For each site and sampling interval, between 5 and 34 females (usually >10) were collected during the emergence period, depending on the relative availability of bagworms. Females were removed from their bag and classified as either in the pupal stage (subclassified as live or dead pupae) or emerged adults (subclassified as mated, unmated or calling females). The weight of egg masses laid by mated females was determined for different sites and sampling intervals. Female survival during the pupal stage, mating success, and reproductive success were evaluated at different sites using the equations listed in Table 1.
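Table 1 (not reproduced here) defines the site-level estimators used above. As a minimal illustration of how such proportions could be tabulated from bag-dissection records, the sketch below assumes a hypothetical one-row-per-bag layout; the column names and values are invented for demonstration.

```python
# Minimal sketch (assumed data layout): one row per dissected bag, with hypothetical
# columns 'site', 'emerged' (female emerged as a functional adult), 'mated' (eggs
# present in the pupal case) and 'hatched' (egg mass yielded live neonates).
import pandas as pd

bags = pd.DataFrame({
    "site":    ["A", "A", "A", "B", "B", "B"],
    "emerged": [True, True, False, True, True, True],
    "mated":   [True, False, False, True, True, False],
    "hatched": [True, False, False, False, True, False],
})

adults = bags[bags["emerged"]]
site_stats = adults.groupby("site").agg(
    n_adult_females=("mated", "size"),
    mating_success=("mated", "mean"),        # proportion of adult females that mated
    prop_live_progeny=("hatched", "mean"),   # proportion whose eggs overwintered and hatched
)
print(site_stats)
```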
Rate of extinction of local populations
Because a substantial number of bagworm larvae remain and develop on their natal host [14], it was assumed that host plants that harbored bagworms during the 2008 generation but had no live larvae in 2009 represent local extinction events. The study was conducted at 56 sites previously sampled in Indiana in March 2008 to determine the reproductive success of females. Each site consisted of a group of trees infested with bagworms that was at least 10 m away from other infested trees, with a distance between sites >2 km. Each site was sampled a second time in June 2009 to determine the presence (sustained infestation) or absence (local extinction) of live larvae on the juniper plants. For each site, an index of between-year reproductive output (RS) was tabulated taking into account the probability that females mated and the probability of overwintering survival of eggs. The probability of local extinction across sites was estimated for two classes of female reproductive output [RS = 0 (complete reproductive failure); RS > 0 (some females successfully reproduced)] and four latitudinal classes [<39°N; 39–40°N; 40–41°N; >41°N].
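The exact definition of RS is given in Table 1; the sketch below assumes, for illustration only, that RS is the product of the site-level probability of mating and the probability of egg overwintering survival, and shows how extinction frequency could be tabulated by RS class and latitude band.

```python
# Sketch under the assumption RS = P(mated) * P(eggs overwinter successfully);
# site values below are invented for demonstration.
import pandas as pd

sites = pd.DataFrame({
    "latitude":     [37.8, 39.5, 40.6, 41.3, 41.7],
    "p_mated":      [0.95, 0.80, 0.55, 0.20, 0.00],
    "p_overwinter": [0.98, 0.90, 0.85, 0.40, 0.00],
    "extinct_2009": [False, False, True, True, True],
})

sites["RS"] = sites["p_mated"] * sites["p_overwinter"]
sites["RS_class"] = (sites["RS"] > 0).map({True: "RS > 0", False: "RS = 0"})
sites["lat_class"] = pd.cut(sites["latitude"], bins=[0, 39, 40, 41, 90],
                            labels=["<39N", "39-40N", "40-41N", ">41N"])

print(sites.groupby(["RS_class", "lat_class"], observed=True)["extinct_2009"].mean())
```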
Data analysis
Statistical analysis was conducted with the SAS statistical package (version 9.1, SAS Institute, Cary, NC). Partition of variance analysis was used to evaluate the variance associated with latitude and longitude (one-degree bands) and host plant (juniper or arborvitae). Linear and logistic regressions were used to evaluate the effect of latitude on different parameters of fitness. Unless otherwise stated, all the relationships reported are highly significant (P < 0.0001).
Results
Partition of variance analysis indicated that the variance component associated with latitude was consistently larger than that associated with longitude or host plant for all parameters, usually by a factor >5 (Table 2). This variance structure justified our use of latitude as a key variable across which we quantified the fitness parameters of female bagworms.
For the 2008 generation of bagworms, logistic regression revealed a significant latitudinal decline in female mating success and egg overwintering survival, with steep declines at latitudes corresponding to central-northern Indiana (above 39°N for mating success and above 41.5°N for overwintering survival). Female fecundity declined linearly with latitude (Fig. 1).
For the 2009 generation, survival during the pupal stage, female mating success and egg biomass all declined linearly with latitude (Fig. 2). Pupal mortality was primarily associated with Hymenoptera and Diptera generalist parasitoids.
The variation in female reproductive output (mating success × egg biomass; Table 1) in 2008 and 2009 was evaluated using only latitudes above 38.4°N so that the data could be compared for different years. Analysis of covariance revealed a highly significant effect of latitude on reproductive success (F = 204.98, df = 1,107, P < 0.0001), but no significant effect of year, either alone (F = 0.58, …) or in interaction with latitude. Female fitness declined linearly with latitude in both years and was particularly low above 41°N (Fig. 3).
The proportion of infested trees declined non-linearly with latitude, exhibiting a steep decline above 41°N; no infested trees (out of 109 sampled) were observed above 42°N (Fig. 3; Table 3). The abundance of potential host plants was relatively constant between 39 and 41°N and increased above 41°N; thus the northern range limit of bagworms cannot be attributed to a lack of potential host plants. Furthermore, the latitudinal decline in abundance is not due to interspecific competition because very few defoliators other than bagworms were observed on arborvitae or juniper in the study area.
Local extinction of populations between the 2008 and 2009 bagworm generations was observed at 9 of 56 sites (16.1%). The probability of extinction was 100% (N = 4) at sites where females experienced complete reproductive failure (RS = 0); all these sites occurred above 41°N (Table 4). At sites where some females reproduced (RS > 0), no extinction event was observed below 38°N, and the probability of local extinction was roughly constant further north (13.6–15.4%) (Table 4). The high rate of extinction events above 41°N (6 of 17 sites, or 35.3%) was due to the high proportion of sites where females experienced complete reproductive failure (4 of 17 sites, or 23.5%) (Table 4). Logistic regression revealed a significant increase in the rate of extinction (y) as a function of latitude (x): y = e^(0.996x − 41.87) / (1 + e^(0.996x − 41.87)) (χ² = 3.98, P = 0.046).
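For illustration, the fitted logistic model can be evaluated at a few latitudes. The sketch below assumes the exponent has the form 0.996x − 41.87 (sign reconstructed to match the reported increase in extinction rate with latitude).

```python
# Predicted probability of local extinction from the fitted logistic regression.
# The exponent's sign convention is an assumption consistent with the stated
# northward increase in extinction rate.
import math

def p_extinction(lat_deg_n: float) -> float:
    z = 0.996 * lat_deg_n - 41.87
    return math.exp(z) / (1.0 + math.exp(z))

for lat in (38.0, 40.0, 41.5):
    print(f"latitude {lat:.1f} N: predicted extinction probability = {p_extinction(lat):.2f}")
```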
Discussion
Demographic Allee effects, defined as positive impacts of density on the total fitness of individuals (e.g., high rates of mortality and low mating success in sparse populations), have been hypothesized to strongly influence population dynamics and to help constrain geographic range boundaries [8,11,12]. Unfortunately, empirical data that could be used to understand how the interplay between spatial gradients and population demography influences the establishment and maintenance of geographic range limits are rare in terrestrial animals [8,9]. This lack of data stems, in part, from the difficulty of adequately sampling low population density toward the edge of a species' distributional range and also from the lack of a model animal system for which multiple demographic parameters can be concurrently estimated [2]. We report here extremely low mating success of female bagworms in undisturbed, natural populations toward the edge of the distribution range, including the occurrence of total mating failure (0% mated females) at some sites. Of particular interest are the coincident latitudinal declines in bagworm abundance and female mating success above 41°N, and the apparent robustness of the latitudinal trends in 2008 and 2009. Because restricted mobility of females constrains their mating ability [10], species with flightless females may be particularly susceptible to low female mating success at low population density (mate-encounter Allee effect), which may in turn influence the species' distributional range, as reported in the gypsy moth, Lymantria dispar [20,21,22]. Low vagility may further influence the interface between climate change and geographic range limits [3,23], particularly when species are unable to keep pace with changing landscapes through dispersal [24]. The abundance of potential host plants per se does not set the bagworm's range limit, as indicated by the increasing abundance of junipers and arborvitae with latitude (Table 3) and the absence of latitudinal variation in foliar nutrient content of the two main host plants of bagworms, junipers and arborvitae [25]. Because of the limited dispersal ability of bagworms and the naturally fragmented distribution of their host plants in urban and rural landscapes, we suggest that local reproductive success of females helps drive regional persistence for the species in concert with other demographic factors. Indeed, several components of female fitness declined toward the edge of the bagworms' distributional range, including survival during the pupal stage, mating success, fecundity, and overwintering survival of progeny, resulting in an overall reduction in reproductive success of females at northern locations and, in extreme cases, total reproductive failure. The increasing probability of extinction of local populations with declining female reproductive output (Table 4) indicates that patchy bagworm populations toward the range limit are temporally unstable. Such a demographic structure would be consistent with an "invasion pinning" scenario in which an Allee effect limits spatial spread [12].
The 'abundant-centre' hypothesis proposes that, within a species' range, a larger percentage of individuals will be present in the centre of the range, where conditions are more favorable. Likewise, the hypothesis proposes reduced density near the edge of a species' range due to the interplay between numerous biotic and abiotic aspects of the habitat that worsen or become more intense as the range boundary is approached [26,27,28]. In some cases, researchers question the validity of the abundant-centre hypothesis on the grounds that apparent empirical support stems from reduced sampling near the edge of a species' range [27,28]. However, in other cases, such as the present study, the sampling designs for assessing population density across space are robust, and evidence for an abundant centre (and low-density edges) is strong [29,30,31]. Predictive models suggest that the interplay between dispersal and demography can result in species that are 2 to 30 times denser in the centre of the range than at the edges [32]. If a species is going to be subject to Allee-type dynamics, the likelihood or intensity of such effects would be much greater in the vicinity of the range boundaries where densities are lower.
Several mechanisms may be simultaneously at work to restrict the reproductive output of female bagworms toward the edge of the distribution range, including an increased abundance of generalist pupal parasitoids (the most common natural enemies of bagworm pupae), low summer temperatures and short growing seasons (which together restrict the body size and fecundity of females [33]), low population density (which constrains female mating success), and low winter temperatures (which elevate egg mortality). Teasing apart the relative importance of these factors for the maintenance of the bagworms' geographic range limit will require a detailed population model that is parameterized from field data.
|
v3-fos-license
|
2018-04-03T05:01:56.948Z
|
2015-07-10T00:00:00.000
|
5245601
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/ncomms8748.pdf",
"pdf_hash": "6901ac4b477f9ba5e7e2434ebda322fac4471ecd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43105",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "21aad7a1e668341a3be1a803e30889b69e4d6562",
"year": 2015
}
|
pes2o/s2orc
|
Ultrastrong ductile and stable high-entropy alloys at small scales
Refractory high-entropy alloys (HEAs) are a class of emerging multi-component alloys, showing superior mechanical properties at elevated temperatures and being technologically interesting. However, they are generally brittle at room temperature, fail by cracking at low compressive strains and suffer from limited formability. Here we report a strategy for the fabrication of refractory HEA thin films and small-sized pillars that consist of strongly textured, columnar and nanometre-sized grains. Such HEA pillars exhibit extraordinarily high yield strengths of ∼10 GPa—among the highest reported strengths in micro-/nano-pillar compression and one order of magnitude higher than that of its bulk form—and their ductility is considerably improved (compressive plastic strains over 30%). Additionally, we demonstrate that such HEA films show substantially enhanced stability for high-temperature, long-duration conditions (at 1,100 °C for 3 days). Small-scale HEAs combining these properties represent a new class of materials in small-dimension devices potentially for high-stress and high-temperature applications.
Developing high-strength, ductile and thermally stable materials is highly desirable for both scientific interest and critical applications [1][2][3]. Alloying has been explored as a means to strengthen metals since the Bronze Age. Conventionally, one principal element serves as the matrix material and solute atoms change local stress fields to impede dislocation motion and strengthen the material, although this usually compromises ductility. Over the past few years, a new concept of alloys, HEAs or equiatomic multi-component alloys, has attracted great attention 4,5. Such alloys usually consist of four or more elements with equimolar or near-equimolar ratios, form a simple single solid-solution-like phase and show a variety of interesting and unusual properties 6,7. Among them, refractory HEAs are made of refractory elements and implemented for high-temperature applications. For example, a body-centred cubic (bcc)-structured NbMoTaW HEA subjected to uniaxial compression at 1,600 °C attains a yield strength of 400 MPa and high heat-softening resistance 8,9. However, all the refractory HEAs reported to date have been prepared using the vacuum arc-melting technique and a vast majority of them suffer from low ductility at room temperature [9][10][11], rendering them very difficult to process and unsuitable for use.
The ductility and strength of a material can also be controlled by scaling, that is, by sample and microstructural sizes 12,13. On the one hand, benefiting from higher surface-to-volume ratios and easier stress relaxation, cracking becomes more difficult in small-sized materials, so good deformability can be attained even in classes of conspicuously brittle materials [14][15][16]. On the other hand, materials may attain significantly increased strengths by reducing their dimensions due to a limited scale of dislocation sources [17][18][19][20][21]. To achieve even higher strengths, a popular methodology is to include grain or interphase boundaries in micro- or nano-pillars, namely nanocrystalline or nanolaminate pillars, respectively. These nanostructured pillars can reach yield strengths of a few gigapascals [22][23][24][25], but their main drawback is that their microstructures are generally unstable: grains grow rapidly even at low temperatures and, consequently, their strengths decrease considerably. To stabilize nanocrystalline structures, a few effective means have been introduced to suppress grain growth, such as alloying 26,27 and introducing texture 28.
So far promising HEAs have been mostly studied in their bulk forms, but small-dimension HEAs have received much less attention. As demands for micro- and nano-scale devices for high-temperature and harsh-environment applications increase, the fabrication and investigation of currently popular HEAs at small sizes become more and more interesting. Now the following question arises: what alloying and scaling conditions lead to the strongest materials that are both ductile and stable? Our strategy is to use the sample size and grain size as design parameters in a prototype refractory HEA, the NbMoTaW alloy, to combine the benefits of alloying and scaling. Here, we show that fine-scale HEA films and pillars consisting of strongly textured, nanometre-sized and columnar grains exhibit ultrahigh strength, improved ductility and excellent thermal stability.

[Displaced Figure 1 caption: … d.c. magnetron co-sputtering system used to synthesize HEA thin films, in the conditions with and without Ar+ ion beam-assisted deposition (IBAD and Normal, respectively). (c) Powder X-ray diffraction patterns (Cu Kα1) of the NbMoTaW HEA films, compared with that of its bulk powder 11, indicating a single bcc phase. (d) A SEM image of the typical cross-section of as-deposited IBAD HEA films. The inserted EBSD maps show columnar grains through the whole thickness of the films with a (110) out-of-plane texture and an average grain size of ~70 and ~150 nm for the IBAD and Normal conditions, respectively. The EDX analysis indicates that the four elements are homogeneously distributed over a large length scale. The roughness of the top surface measured by AFM is about 5 nm. Two representative FIB-milled pillars (diameters of ~500 and ~100 nm) are shown in the insert at the bottom. Scale bars, 500 nm, except 100 nm for the ~100 nm pillar.]
Results
Nanostructured HEA films and pillars. We used the d.c. magnetron co-sputtering technique to deposit HEA films, as schematically illustrated in Fig. 1a,b (also see the experimental setup in Supplementary Fig. 1). The ion beam-assisted deposition (IBAD) method 29 was also applied to reduce the deposition rate and decrease grain size. For simplicity, the method without using the ion gun is named 'Normal' as opposed to 'IBAD'. Using the co-sputtering method, we produced 3-μm-thick films that show good bonding with substrates and smooth surfaces (Fig. 1d). Electron backscatter diffraction (EBSD) orientation maps (insets in Fig. 1d) show that the films consist of strongly (110)-textured columnar grains through the whole thickness of both IBAD and Normal-deposited films. The films deposited under the IBAD condition exhibit smaller grain sizes than those produced under the Normal condition, with average grain sizes of ~70 and ~150 nm, respectively. The energy-dispersive X-ray spectroscopy (EDX) analysis reveals that the atomic composition varies by ~5% and the overall value varies within 10%, which is comparable to the previously reported bulk NbMoTaW HEAs 8,11. The X-ray diffraction patterns indicate a single-phase bcc structure in the as-deposited films, which also matches the bulk HEA in the literature 8,11. The results in Fig. 1 confirm that the co-sputtered films are made of the same alloy as the bulk forms produced by arc melting.
Micro-mechanical testing of small-scale HEA pillars. Focused Ga ion beams (FIB) were used to mill fine-scale pillars out of the obtained HEA films, and microcompression tests were carried out using a nanoindenter. After compression, a fraction of the large pillars, above 1 μm in diameter, experience cracking at the top parts and cracks propagate along grain boundaries, showing intergranular fracture behaviour, but this only occurs at strains larger than ~20% (Fig. 2a). The smaller pillars (Fig. 2b-d) exhibit more uniform deformation without any cracking, even above 30% compressive strain, suggesting that the compressive ductility is significantly improved. Furthermore, the columnar-structured HEA pillars exhibit very high yield and flow strengths. A 580-nm Normal HEA pillar shows a yield strength of ~5 GPa and a 580-nm IBAD HEA pillar exhibits a yield strength of ~6.5 GPa (Fig. 2e), which is almost twice that of the single-crystal HEA pillar with the same diameter and orientation (Supplementary Fig. 4) and six times that of the bulk HEA. Astonishingly, we find that the smallest IBAD HEA pillars (~70-100 nm in diameter) exhibit remarkably high yield strengths of ~8-10 GPa. To the best of our knowledge, such HEA pillars exhibit a strengthening figure of merit that is among the highest reported for pillars so far (for example, higher than that of nanocrystalline pillars 33 and about half of that of pure W whiskers 34); still, our HEA pillars exhibit much better ductility. Such HEA pillars also show a size-dependent strength, as presented by the relationship between the flow stress at 5% strain, σ0.05, and the pillar diameter, D (Fig. 2f). Our IBAD HEA pillars exhibit the highest strength levels, ~5-7 times higher than that of single-crystal W pillars, and the lowest size dependence, with a log-log slope of −0.2.
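The size dependence quoted above corresponds to a power-law relation between flow stress and pillar diameter on log-log axes; a minimal sketch of extracting such a slope is shown below, with made-up (D, σ) pairs rather than the measured data.

```python
# Illustrative power-law size-effect fit, sigma_0.05 ~ A * D^n, on log-log axes.
# The diameter/stress pairs below are invented for demonstration only.
import numpy as np

D = np.array([100e-9, 200e-9, 500e-9, 1000e-9])   # pillar diameters (m)
sigma = np.array([9.0e9, 8.0e9, 6.5e9, 5.5e9])    # flow stress at 5% strain (Pa)

slope, intercept = np.polyfit(np.log10(D), np.log10(sigma), 1)
print(f"log-log slope (size-effect exponent): {slope:.2f}")   # weak size dependence, ~ -0.2
```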
Thermal stability of the HEA thin films. In addition to ultrahigh strength and improved ductility, we also demonstrate that such HEA films are substantially more stable after high-temperature, long-duration annealing compared with the pure W films that were prepared using the same experimental conditions. After 3 days' annealing at 1,100 °C in an argon atmosphere the pure W film shows obvious structural instability: the morphology of the top surface changes from needle-like shapes to equiaxed-crystal structures; a large quantity of micrometre-sized pores are formed through the whole thickness; and the grain size is significantly increased from ~100-300 nm to a few micrometres, as shown in Fig. 3. In contrast to the W films, the post-annealed HEA film retains a uniform needle-like morphology on the top surface without obvious grain growth, and few pores have been found through the entire cross-section of the films. With regard to mechanical properties, the HEA pillars exhibit much higher strength and better ductility than the W pillars before and after annealing (see a deformed W pillar before annealing in Supplementary Fig. 5). The formation of micropores and the growth of grains may dramatically reduce the mechanical performance of the W films and pillars, while the post-annealed HEA pillar (diameter of ~1 μm) can still sustain a high yield strength of ~5 GPa, which is nearly the same as that of the pre-annealed HEA pillar.
Discussion
In analogy to bundled bamboos, our column-structured HEA pillars actually consist of a set of strongly fibre-textured nanometre-sized grains, schematically illustrated in Fig. 4a. As a comparison of the normalized strengths (resolved shear strength (τ) over corresponding shear modulus (G)), the IBAD HEA pillars exhibit the highest values (~0.02-0.05) among the typical single-crystalline pillars and nanocrystalline pillars (Fig. 4b).
To understand the ultrahigh strength of the HEA pillar, we propose a simple classical analysis of the resolved flow strength of the pillar (τ_sum), which is contributed by lattice friction (τ*), Taylor hardening (τ_G), source-controlled strength (τ_S) and grain-boundary strengthening (τ_h-p), simply expressed as (adapted from refs 11,35,36):

τ_sum = τ* + τ_G + τ_S + τ_h-p,

where, in the expanded forms of these terms, σ is the flow stress, m the Schmid factor, T_t the test temperature, T_c the critical temperature (above T_c the flow stress becomes insensitive to test temperature), τ*_0 the Peierls stress, α a constant falling in the range 0.1-1.0, b the Burgers vector, G the shear modulus, … (ref. 37), which can be simply represented by D, and K_h-p ≈ 1.7 GPa μm^1/2 (for bulk Mo) 38. The calculated values are in good agreement with the experimental data (Fig. 4d), implying that the four possible strengthening mechanisms could work simultaneously in the nanostructured HEA pillars. It should also be mentioned that the smallest pillars (~70-100 nm in diameter) show obviously higher scattering levels in strength compared with the larger pillars. This large scattering could be attributed to the inhomogeneous distribution of grain boundaries in these small pillars. In addition, the higher strengths of the IBAD pillars than those of the Normal pillars could be mainly attributed to their finer grain sizes. The higher point defect density in the IBAD pillars (as measured by electrical resistivity shown in the inset of Fig. 4d) could influence the strength as well, but its contribution is deemed to be small.
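A rough numerical sketch of the strengthening-contribution sum is given below. The individual terms use textbook forms (Taylor hardening τ_G = αGb√ρ, a single-arm dislocation-source term τ_S ≈ Gb·ln(D/b)/D, and a Hall-Petch term τ_h-p = K/√d); these specific sub-expressions and all input values are illustrative assumptions, not the exact model or parameters of the original analysis.

```python
# Hedged, order-of-magnitude sketch: flow strength as a sum of lattice friction,
# Taylor hardening, source strength and Hall-Petch contributions.
# All sub-expressions and numerical inputs below are assumptions for illustration.
import math

G   = 80e9       # shear modulus (Pa), assumed representative value
b   = 2.6e-10    # Burgers vector (m), assumed
rho = 1e15       # dislocation density (1/m^2), assumed
D   = 500e-9     # pillar diameter (m)
d   = 70e-9      # grain size (m)

tau_star = 1.0e9                             # lattice friction / Peierls term (Pa), assumed
tau_G    = 0.3 * G * b * math.sqrt(rho)      # Taylor hardening with alpha = 0.3
tau_S    = G * b * math.log(D / b) / D       # single-arm source term, source length taken as D
tau_hp   = 1.7e9 * math.sqrt(1e-6 / d)       # Hall-Petch with K ~ 1.7 GPa um^0.5

tau_sum = tau_star + tau_G + tau_S + tau_hp
print(f"estimated flow-strength sum: {tau_sum / 1e9:.1f} GPa")
```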
It is also instructive to consider a thought experiment regarding strength and fracture. One could compare this bundled-bamboo structure to a discrete array of single-crystalline pillars of dimension identical to the grain size. With regard to strength, these single-crystalline pillars would be close to theoretical strength, provided they are defect-free. If some of them are not, the overall strength of the array would be slightly reduced, and it would only decrease significantly if the overall number of defects were increased by increasing the number of pillars, that is, by increasing the diameter of the whole pillar. In the bundled-bamboo structure itself, yielding of a single grain will result in stress concentrations at the boundaries, activating dislocation sources in the adjacent grains 39 and, therefore, yielding in those as well, leading to a reduced overall yield strength compared with the theoretical one. This is an alternative explanation of the size-dependent strength in the HEA pillars. With regard to fracture, the single-crystalline pillars, i.e. each columnar grain, would exhibit higher and higher aspect ratios with increasing diameter of the bundled-bamboo structure, assuming a constant aspect ratio of the bamboo-like structure. Then the single-crystalline pillars would fail more and more in a buckling mode. In this case of the bundled-bamboo structure, a larger cohesive strength of the grain boundaries is required for high-aspect-ratio grains to prevent buckling, and this cohesive strength is intrinsically poor in HEAs 11. If the deformation in each grain cannot be accommodated by its neighbours, it may lead to the opening up of voids and crack initiation along the boundaries 40. This could explain why the large pillars eventually fail by intergranular fracture, in contrast to the smaller ones where no fracture is observed.
The excellent thermal stability of the nanocrystalline HEA film could be attributed to its relatively low grain-boundary energy. Because the grain interiors in the HEAs are highly disordered and far from a perfect crystal structure 11,41, the relative grain-boundary energy would be lower than that of pure metals, such as pure W. Consequently, the driving force for grain-boundary migration in the HEA would be lower compared with pure W, leading to reduced structural coarsening. Other mechanisms that could contribute to the high stability of the HEA films are: at elevated temperatures the elements with higher diffusion rates may segregate to grain boundaries, decrease the grain-boundary-specific energy and stabilize nanostructures against grain growth 26; similar to the recently reported nanolaminated nickel 28, the low-angle boundaries and high aspect ratios of grains in the columnar structure may reduce the mobility of grain boundaries as well as suppress recrystallization; and the residual stresses in the HEA and W films under the annealing condition could also affect microstructural stability. Nonetheless, the refractory metals have very similar thermal expansion coefficients to the sapphire substrate at both room and high temperatures, so the residual stresses of both the HEA and W would be significantly smaller than their yield strengths. Therefore, dislocation motion due to residual stress would not play a substantial role in grain growth compared with the other mechanisms. Figure 5 schematically illustrates how a strong, ductile and stable material is created by combining the alloying effect and scaling laws. In contrast to the strength-ductility trade-off for bulk coarse-grained W and HEA, both strength and ductility are significantly improved in nanocrystalline HEA micropillars compared with a bulk HEA, benefiting from reduced sample size and grain refinement (Fig. 5a). With regard to the strength-stability synergy (Fig. 5b), the drawback of thermal instability in nanocrystalline W can be overcome by alloying in nanocrystalline HEAs, which also results in a higher strength level.
Technologically, the fabrication and properties of this new class of small-dimension refractory HEAs are interesting and attractive. Although the co-sputtering technique has been suggested for producing HEA films in some earlier reports 42,43, to our knowledge this work constitutes the first report of the formation of single-phase nanostructured refractory HEAs. Furthermore, the fabrication process for these thin films is fast and controllable: the alloy composition, film thickness and grain size can be tuned.
Toward application, although the HEA films and pillars contain heavy elements, they still offer the highest specific-yield-strength values (strength-to-weight ratios), approaching 1 MJ kg^−1, and a high Young's modulus (Supplementary Fig. 6), and on this basis they surpass not only bulk metals and alloys but also other metallic pillars (Supplementary Fig. 7). The high specific strength of the small-scale HEAs combined with good ductility and high Young's modulus may permit access to high toughness, stiffness, hardness and wear resistance in very high-stress environments, relative to other engineering materials. Last but not least, because the nanostructured HEAs are thermally stable at elevated temperatures and their bulk forms can even sustain large stresses above 1,600 °C, they may have a great opportunity to serve as high-temperature materials. Although mechanical tests of small-scale HEAs at high temperatures are still needed to prove this, our initial results for the HEA films under high-temperature, long-duration conditions suggest that they are capable of heat resistance and may serve as diffusion barriers and electrical resistors.
Although much work remains to optimize small-scale HEAs for applications, for example identifying the best alloying elements and the optimal combination of grain and specimen size, the extraordinary properties of small-scale HEAs reported here offer a strong motivation to pursue their development.

[Displaced Figure 5 caption: (a) The strength of a pure bulk W with coarse grains is increased either by alloying to a HEA at an expense of ductility (alloying effect) or by sample size reduction to a micrometre-sized single crystal, with a benefit of being more ductile as well (sample size effect). The optimized strength-ductility combination can be achieved in a nanocrystalline HEA micro-pillar with the benefits from sample size reduction, grain-boundary strengthening and solid-solution hardening. (b) The strength of a pure bulk W can be either significantly increased by grain refinement to a nanocrystalline W, but at a dramatic expense of thermal stability, or increased by alloying to a bulk HEA with an improvement of stability. In a nanostructured HEA both extraordinary strength and excellent thermal stability can be achieved.]
Methods
Sample preparation and characterization. Our NbMoTaW HEA films were deposited using the d.c. magnetron co-sputtering technique on (100) silicon substrates (coated with 50-nm SiO2 and 50-nm Si3N4 as diffusion barriers) or sapphire substrates (for annealing at 1,100 °C) at room temperature (Fig. 1b and Supplementary Fig. 1). The chamber base pressure was kept below 10^−6 mbar. During co-sputtering, the powers of the magnetrons were adjusted to obtain equal arrival ratios of Nb, Mo, Ta and W, and the substrate was rotated at 30 rotations per minute to homogenize the alloy composition and film thickness. The IBAD method was also applied using a broad ion beam source (KRI KDC 40, beam energy of 1.2 keV, current of 5 mA and incidence angle of 35°) to decrease grain sizes, as compared with the Normal sputtering condition without the ion gun. The film thickness is 3 μm and no difference was observed between the films deposited on silicon and sapphire substrates in terms of microstructure and mechanical properties. As a control, we also produced pure W films using the same conditions and parameters. The crystal orientations and elemental compositions of the films were characterized by EBSD and EDX, respectively, in a FEI Quanta 200 FEG SEM. The grain size and phase were determined by X-ray diffraction (Cu-Kα1 monochromatic radiation in a 2θ range from 10° to 100°).
From the obtained films, the pillar specimens were fabricated using a FIB system (Helios Nanolab 600i, FEI) with a coarse milling condition of 30 kV and 80 pA, and a final polishing condition of 5 kV and 24 pA. The FIB-milled pillars have diameters of ~1 μm, 500 nm, 200 nm and 100 nm, and aspect ratios of 2.5-5. The tapering angle is ~2-4° and the top diameters were chosen to calculate engineering stresses.
Mechanical testing. The microcompression tests were carried out in a nanoindenter using a diamond flat-punch tip. To eliminate strain-rate effects, we compressed all the pillars at a strain rate of 2 × 10^−3 s^−1 in the displacement-control mode, which was controlled by a feedback algorithm. It should be noted that a bigger tapering angle (>5°), a higher aspect ratio (>5) and misalignment between the pillar top and the flat punch could lead to very localized plastic deformation, buckling and bending, respectively. All the pillars were examined using a scanning electron microscope (SEM) before and after the compression tests, and those showing the above phenomena were eliminated to minimize these influences. The yield stresses of the pillars were measured as the offset flow stress at 0.2% strain. However, a large stress-strain scatter was usually observed in the initial stage of plastic flow during compression, so the flow stresses at 5% strain were used to compare the size effects.
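As an aside, the 0.2% offset criterion mentioned above can be applied numerically to a stress-strain record; the sketch below uses a synthetic curve and an assumed loading stiffness purely for illustration.

```python
# Minimal 0.2%-offset yield determination from a stress-strain record.
# Both the stress-strain curve and the loading stiffness are synthetic.
import numpy as np

strain = np.linspace(0.0, 0.05, 501)
E = 220e9                                              # assumed loading stiffness (Pa)
stress = np.minimum(E * strain, 5e9 + 2e10 * strain)   # toy elastic-then-hardening curve

offset_line = E * (strain - 0.002)                     # elastic line shifted by 0.2% strain
idx = int(np.argmax(stress - offset_line <= 0))        # first crossing of the offset line
print(f"0.2% offset yield strength ~ {stress[idx] / 1e9:.2f} GPa")
```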
Heat treatment. To evaluate the thermal stability of the HEA and W films, we equilibrated the films with sapphire substrates at 1,100 °C in an argon atmosphere (purity ≥99.999%, PanGas AG, Switzerland) for 3 days (heating and cooling rates of 100 °C h^−1). Pre- and post-annealing films and pillar strengths were characterized, measured and compared.
|
v3-fos-license
|
2020-12-10T09:08:02.894Z
|
2020-11-01T00:00:00.000
|
229674248
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://iiste.org/Journals/index.php/JEES/article/download/54735/56551",
"pdf_hash": "db9457b4578fb79b2dab2c481637af9d7c1453d1",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43106",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "8188068afe2bf93164fbf1153c7a2d5f49f6e299",
"year": 2020
}
|
pes2o/s2orc
|
Optimum Municipal Solid Waste Disposal Site Selection Using Gis Based Multi-Criteria Decision Analysis: A Case of Nekemte Town, Oromia Regional State, Ethiopia
Solid waste is a major global concern, particularly in developing countries. Municipal landfill site selection is becoming the main challenge as a result of various factors. To make the selected site environmentally sound, socially acceptable and economically feasible, a GIS-based multi-criteria decision analysis (MCDA) method, which has the capability to combine spatially referenced data with experts' value judgments, was used in this study. The analytic hierarchy process (AHP) was the major MCDA technique used to derive the weights of the nine criteria considered – distance from road network, geology, distance from fault lines, soil permeability, slope, distance from rivers/streams, distance from lakes, distance from the built-up areas and land use/land cover types. After creating multiple-ring buffers for road network, fault lines, rivers/streams, lakes and built-up areas by reviewing various literature, all the criteria were standardized by reclassifying them into suitability classes. The weights of the reclassified criteria were derived using an AHP pair-wise comparison matrix in Microsoft Excel and then combined using the Weighted Overlay tool in ArcGIS to produce the composite suitability map of the study area. Accordingly, 0.43% and 0.02% of the study area are unsuitable and highly suitable, respectively. The remaining 41.64%, 51.12%, and 6.8% of the study area are poorly suitable, moderately suitable and suitable, respectively. The raster composite suitability map of the study area was then converted into a vector map to select candidate disposal sites. Accordingly, six candidate municipal solid waste disposal sites were selected and evaluated with respect to their area (size), distance from the town center and distance from the nearby built-up area. They were weighted with respect to these three evaluating criteria using an AHP pair-wise comparison matrix and finally mapped and ranked. The first, second and third ranked candidate disposal sites have areas of 46 ha, 29 ha and 35 ha, respectively. The first and second candidate sites are located in Burka Jato sub-town, while the third one is located in Sorga sub-town. In order to reduce the adverse impact of surface water pollution downstream, runoff should not flow into or out of the MSW disposal sites. To minimize groundwater pollution, detailed investigation of the sub-surface condition of the site should be made during design. Greenhouse gas collection should also be designed to reduce air pollution.
Introduction
Solid waste is a major global concern, particularly in developing countries [1,2]. Solid waste management techniques such as source reduction, reuse, recycling and resource recovery are the foremost ways to manage solid waste; nevertheless, there is always residual solid waste left for disposal after resource recovery and recycling. Disposing of this residual solid waste in an environmentally and economically sound way is referred to as landfilling [3]. Municipal landfill site selection is becoming the main challenge as a result of refusal of funding by government and non-government organizations, booming populations in urban areas, health concerns, shortage of accessible land and increasing environmental awareness among communities [4].
Selecting a landfill site is one of the most difficult tasks to accomplish, since the site selection process must consider various rules and procedures. Moreover, taking environmental factors into account is another issue, as the landfill might have a negative impact on the bio-physical environment [5]. Various methods can be used for solid waste landfill site selection [6][7][8].
The output of this method is crucial for identifying suitable sites within the total study area using a suitability index, which is essential for ranking the most suitable areas.
Various issues should be integrated into the landfill site selection decision, and GIS is the dominant tool because of its capability to manipulate a considerable amount of spatial data from different sources. It effectively stores and analyzes data in accordance with the defined requirements of the user [5]. A combination of GIS and Multi-Criteria Decision Analysis (MCDA) is a powerful tool to resolve the landfill site selection problem, since GIS provides effective handling and display of the data and MCDA delivers a reliable ranking of the possible landfill sites on the basis of different criteria.
According to the Nekemte town municipality waste management department report (2017), the volume of municipal solid waste disposed at the final disposal site has been increasing from year to year. For example, it increased from 9,516 m³ in 2008 to 13,330 m³ in 2016. In addition, the existing disposal site was selected only on the basis of distance from the main road; other important environmental and social criteria were not considered. Hence, an appropriate waste disposal site which is environmentally sound, socially acceptable and economically affordable should be selected. The objective of this study is to select sites for an appropriate landfill area of Nekemte Town using the integration of Geographic Information Systems (GIS) and Multi-criteria Decision Analysis (MCDA).
Materials and Methods
The socio-economic and environmental criteria identified, and for which data were collected, were land use/land cover types, distance from the built-up areas, distance from rivers/streams, distance from lakes, soil permeability, slope, distance from road network, distance from fault lines, and geology.
The land use/land cover data of the study area were prepared by merging the land use/land cover shape-file prepared by the municipality with the land use/land cover shape-file of the study area prepared from a Landsat 8 image. The road network and built-up shape-files of the study area were obtained from the LU/LC shape-file of the municipality. The shape-file of rivers/streams was derived from the DEM of the study area. The soil shape-file was acquired by clipping from the Didesa Basin soil shape-file prepared by [9]. Finally, the geologic and fault shape-files of the study area were obtained by digitizing the geologic map of the Nekemte area prepared by the Geological Survey of Ethiopia [10]. A handheld eTrex 10 GPS was used to collect ground control points of the existing waste disposal site, residences and a school near the existing waste disposal site, rivers/streams near the existing waste disposal site, and the center of the study area (location of the municipality).
Methodology
After geo-referencing to the UTM_Zone_37N coordinate system and Adindan datum, all the datasets were reclassified by giving new values to generate standardized input thematic maps. GIS-based multi-criteria decision analysis for municipal solid waste disposal site selection was employed in two steps. In the first step, GIS was used to identify unsuitable sites based on the established criteria mentioned before. Each criterion was categorized into five suitability classes: highly suitable, suitable, moderately suitable, poorly suitable and unsuitable (restricted), with ranks from 5 to 1, respectively. After reclassifying all the thematic maps, the weight of each criterion was derived using the Analytic Hierarchy Process (AHP), which is based on experts' value judgments in comparing the classes and preparing the numerical matrices in Microsoft Excel.
That is, in the first step, each criterion was weighted based on the minimum and maximum buffer distances and/or suitability requirements. As a result, the criteria were standardized through reclassification and their thematic maps were generated. In the second step, the significance of each criterion relative to the remaining criteria for municipal solid waste disposal site selection was expressed by giving weights. The AHP weight-derivation method, implemented in Microsoft Excel, was used to compare two criteria at a time based on expert judgment in a pair-wise comparison matrix, from which a set of weights (eigenvectors) along with consistency ratios was produced for each of the criteria being considered. After giving external weights to each thematic layer, the Weighted Overlay technique was used to generate the overall suitability map that combined all the weighted layers.
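The weight derivation and consistency check described above can also be reproduced outside a spreadsheet. The sketch below uses a made-up 3×3 pairwise comparison matrix (not the study's actual judgments) to derive eigenvector weights, compute the consistency ratio, and combine three toy reclassified rasters in a weighted overlay.

```python
# Minimal AHP sketch: eigenvector weights and consistency ratio from a pairwise
# comparison matrix, followed by a toy weighted overlay. Matrix and rasters are
# illustrative only.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)                 # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]  # Saaty's random index
CR = CI / RI                                         # consistency ratio; acceptable if < 0.1
print("criterion weights:", np.round(weights, 3), " CR:", round(CR, 3))

# Weighted overlay of reclassified criterion rasters (toy 2x2 grids, classes 1-5)
layers = np.stack([
    np.array([[5, 4], [2, 1]]),
    np.array([[3, 3], [4, 2]]),
    np.array([[1, 5], [5, 3]]),
])
suitability = np.tensordot(weights, layers, axes=1)  # weighted sum per cell
print("composite suitability:\n", np.round(suitability, 2))
```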
After creating the final suitability map through the Weighted Overlay tool in ArcGIS 10.2, the AHP process was again employed in order to compare the alternative potential disposal sites with one another with respect to their size, their distance from the center of the town, and their distance from the nearby built-up areas, so as to choose the most suitable among the alternative potential disposal sites. Finally, a field check was undertaken to verify the suitability of those potential disposal sites according to the evaluating criteria.

Land use/land cover types

The two artificial lakes were merged into the other land use/land cover shape-file which was prepared by the municipality. Consequently, fifteen land use/land cover types were identified. After reviewing various literature on the suitability of those LU/LC types for municipal solid waste disposal site selection, the study area was reclassified into five suitability classes with respect to the value of the land and its social effects.
Proximity to built-up areas
In municipal solid waste disposal site selection, sites farther from built-up areas, especially from residential areas, are preferable to sites closer to the built-up areas in order to reduce public nuisance and opposition. In addition, the farthest sites were excluded from the selection process to reduce transportation cost. Accordingly, the study area was classified into five suitability classes: 0–500 m & >2,500 m; 500–1,000 m; 1,000–1,500 m; 1,500–2,000 m; and 2,000–2,500 m.
Proximity to Lakes and Rivers/Streams
As contaminated runoff generated from the municipal solid waste disposal site could pollute surface water bodies including lakes, rivers and streams, minimum buffer zones of 150 m for rivers/streams and 250 m for lakes were created for municipal solid waste site selection, and accordingly the study area was classified into five suitability classes: 0–150 m, 150–350 m, 350–600 m, 600–850 m, and >850 m for rivers/streams; and 0–250 m, 250–1,250 m, 1,250–2,250 m, 2,250–3,250 m, and >3,250 m for lakes.
Soil permeability characteristics
In order to prevent groundwater pollution, municipal solid waste disposal sites should be located on soils with low permeability and high natural attenuation. In this regard, the study area was categorized into suitability classes.
Proximity to roads
By considering transportation cost to the disposal site, traffic congestion and the effect of waste transport on public health, the study area was classified into five suitability classes: 0–100 m & >1,000 m; 100–300 m; 300–500 m; 500–700 m; and 700–1,000 m.
Proximity to faults
The faults data were digitized from the geological map of the Nekemte area prepared by GSE (2000). The existence of faults adversely affects the integrity of the waste disposal site and could cause groundwater pollution. In this regard, a minimum 100 m buffer zone around the faults was created, and accordingly the study area was classified into five suitability classes: 0–100 m, 100–1,500 m, 1,500–3,000 m, 3,000–4,500 m, and >4,500 m.
Geology
By considering the existence of fractures and the type and permeability characteristics of the rocks, the study area was classified into two suitability classes.
Municipal Solid Waste Disposal Site Suitability
The overall suitability analysis revealed five disposal site suitability classes: unsuitable, poorly suitable, moderately suitable, suitable and highly suitable. However, the area of the highly suitable class was very small; as a result, this suitability class was excluded from the selection process. During the process of municipal solid waste disposal site selection, built-up areas, surface water bodies, roads, sport fields, recreational areas and riverside green vegetation were excluded due to their social effect and value, while open spaces/lands in the study area were considered highly suitable for municipal solid waste disposal site selection. Sites within 150 m of rivers/streams, 250 m of lakes, 100 m of faults, within 100 m and beyond 1,000 m of roads, and with slopes of 0–4% and >20% were excluded from the selection process.
With respect to area, the suitability analysis showed that 48.29% of the study area was unsuitable and only 4.46% of it was suitable for municipal solid waste disposal site selection. The remaining 46.34% of the study area was moderately suitable, which again was not considered in selecting candidate disposal sites.
Candidate Landfill Sites
With respect to economic advantage, potential disposal sites with areas of less than 11 ha were excluded from the selection process. Accordingly, six potential disposal sites were selected, evaluated with respect to their size, their distance from the nearby built-up area and their distance from the center of the town, and finally ranked and mapped.
With respect to size, there was one candidate disposal site with an area of 43 ha, which could be considered highly suitable as it can serve for a long period of time due to its larger capacity, in contrast to the smallest site of 11 ha, which was considered poorly suitable. However, with respect to distance from the center of the town, candidate disposal site six (CDS-6) was the most suitable.
In order to resolve such conflicting interests, all three evaluating criteria were considered simultaneously through the Analytic Hierarchy Process (AHP). The weights given to the criteria showed that the size of the candidate disposal sites was more important than the remaining two criteria, distance from the center and distance from the nearby built-up area. As can be observed from Table 3, CDS-1, CDS-2 and CDS-3 are the first three highly suitable sites with respect to area (size), while the remaining CDS-4, CDS-5 and CDS-6 are the least suitable sites. However, with respect to distance from the center, CDS-6, CDS-2 and CDS-4 & CDS-5 are the 1st, 2nd and 3rd most suitable sites, whereas CDS-1 and CDS-3 are the least suitable. In terms of distance from the nearby built-up area, CDS-1, CDS-3 and CDS-4 take the 1st, 2nd and 3rd rank, respectively, and CDS-2, CDS-5 and CDS-6 each take the 4th rank.
The aggregate weight of the six candidate disposal sites with respect to the evaluating criteria was computed and they were ranked as shown in Table 4. Table 4 reveals that CDS-1, CDS-3 and CDS-2 are the 1st, 2nd and 3rd most suitable sites with respect to area (size), distance from the center and distance from the nearby built-up area. CDS-5 is the least suitable candidate disposal site. CDS-1, CDS-2, and CDS-5 are found in Burka Jato Sub-town; CDS-3 is located in Sorga Sub-town; CDS-4 is found in Kaso Sub-town and CDS-6 is found in Darge Sub-town.
CONCLUSION
This study considered nine criteria – land use/land cover types, distance from the built-up areas, distance from rivers/streams, distance from lakes, soil permeability, slope, distance from road network, distance from fault lines, and geology – for suitable municipal solid waste disposal site selection for Nekemte town. About 4.46% of the study area satisfied the socio-economic and environmental criteria established for the site selection and was hence designated as suitable. Of the suitable sites, six candidate municipal disposal sites with areas of 11 ha and above were evaluated in terms of their size, distance from the center and distance from the nearby built-up area. The result of the evaluation showed that candidate disposal site 1, which is found in the Burka Jato sub-town, is the most suitable site. Candidate disposal sites 3 and 2, which are found in the Sorga and Burka Jato sub-towns, respectively, are the 2nd and 3rd most suitable sites.
|
v3-fos-license
|
2023-01-22T14:45:32.528Z
|
2011-06-17T00:00:00.000
|
256074288
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13209-011-0070-7.pdf",
"pdf_hash": "02495c635a9b5c27215ef26a1f1c00c98d27729f",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43111",
"s2fieldsofstudy": [
"Business"
],
"sha1": "02495c635a9b5c27215ef26a1f1c00c98d27729f",
"year": 2011
}
|
pes2o/s2orc
|
The role of accounting accruals for the prediction of future cash flows: evidence from Spain
The aim of this study is to determine whether accruals have information value beyond that provided by isolated current cash flows for the prediction of future cash flows. Using a sample of 4,397 Spanish companies (mostly privately held), we estimate in-sample regressions of future cash flows on isolated current cash flows and on accrual-based earnings. We then find that the out-of-sample prediction errors provided by the accrual-based earnings model are significantly lower than those obtained with the cash flows model. We also regress the decrease in prediction errors brought about by the addition of accruals on a set of firm-specific circumstances where accounting manipulation is expected. In all cases the decrease in prediction errors is significantly affected in the hypothesized direction.
Accrual-based earnings generally provide a better indication of the timing and uncertainty of prospective cash flows than information limited to cash receipts and payments. Accruals, therefore, are essential to accomplish the primary objective of financial reporting. They enhance relevance for the prediction of future cash flows, reduce costs introduced by information asymmetries and contribute to a better allocation of economic resources (see SFAC No. 1, FASB 1978).
The aim of this study is to determine whether, for Spanish companies, accrual-based earnings have information value beyond that provided by isolated current cash flows for the prediction of future cash flows. Given that the accrual process is the result of a trade-off between relevance and reliability (Dechow 1994), one of the most important questions in accounting research is whether accruals really provide a better summary measure of firm performance. Although US researchers have already provided empirical support for this role, it is yet to be confirmed whether this evidence holds for a code-law country like Spain. Spanish stock markets are far less developed than those of the US, ownership is highly concentrated and external finance is obtained mostly from banks. This leads to different agency problems and calls into question the interest of accounting information for external users. Furthermore, the role of accounting accruals for the prediction of future cash flows is bound to differ across public and private firms and, although the former are likely to be more similar to Anglo-Saxon firms, most Spanish companies are privately held.
The main motivation for our study comes from the need to provide Spanish investors and creditors with empirical evidence on the beneficial role of accruals for future cash flows prediction. In Spain, most accounting research usually takes for granted that accruals provide relevant information to better assess a firm's future cash flows generation. However, there is little empirical evidence that supports this, especially for private firms. Accounting regulation and enforcement make the Spanish setting an interesting case study. The Spanish legal model of accounting took advantage of the reform that resulted from the implementation of EU Directives to approach the Anglo-Saxon model. Since 1990, Spain has had accounting standards of sufficiently high quality to expect accruals to provide value-relevant information. Enforcement mechanisms, nevertheless, are still ineffective in Spain. The literature highlights that the quality of accruals is not only a matter of the quality of accounting standards but also of how these standards are enforced. Empirical evidence that earnings quality is significantly lower in countries with poor enforcement mechanisms (Leuz et al. 2003) and especially in private firms (Burgstahler et al. 2006) reduces confidence in the Spanish reporting model and calls for an empirical evaluation of the information content of accruals. This is, therefore, the primary purpose of our study.
Using a sample of 4,397 companies (mostly privately held), we test the role of accruals for the prediction of future cash flows (Hypothesis 1) by estimating in-sample (1997–2001) regressions of future cash flows (up to 4 years ahead) on isolated current cash flows (cash flows model, 1) and on accrual-based earnings (accrual-based model, 2) and, subsequently, looking for differences in the out-of-sample prediction errors between the two models. Overall, our results are consistent with the argument that accruals add relevant information for the prediction of future cash flows, that is, that the prediction errors provided by an accrual-based earnings model are significantly lower than those obtained with isolated current cash flows. This predictive ability increases if we include accruals in the disaggregated fashion (five components) suggested by Barth et al. (2001). On the other hand, we also find that the role of accruals for the prediction of future cash flows is significantly moderated in firm-specific situations where managers are expected to make a more opportunistic use of accounting discretion. For each of the variables used to proxy for these situations (small size, privately held, need for new finance and high level of subjectivity of the firms' accruals), prediction errors decrease significantly in the hypothesized direction.
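A hedged sketch of the model comparison described above is given below: it fits the two competing one-regressor models on a training split and compares out-of-sample absolute prediction errors. The simulated data and variable names are illustrative stand-ins for the firm-level panel used in the study.

```python
# Sketch: out-of-sample comparison of (1) a cash-flows-only model and
# (2) an accrual-based earnings model for predicting next-period cash flows.
# The simulated data merely stand in for the actual firm-level panel.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
cfo = rng.normal(0.10, 0.05, n)        # current operating cash flow (scaled by assets)
accruals = rng.normal(0.00, 0.03, n)   # current accruals
earnings = cfo + accruals              # accrual-based earnings
cfo_next = 0.6 * cfo + 0.5 * accruals + rng.normal(0.0, 0.02, n)  # future cash flow

train, test = slice(0, 700), slice(700, None)

def ols_predict(x_train, y_train, x_test):
    """Fit y = a + b*x by OLS on the training split and predict on the test split."""
    X = np.column_stack([np.ones_like(x_train), x_train])
    beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)
    return np.column_stack([np.ones_like(x_test), x_test]) @ beta

pred_cf = ols_predict(cfo[train], cfo_next[train], cfo[test])            # cash flows model (1)
pred_ea = ols_predict(earnings[train], cfo_next[train], earnings[test])  # accrual-based model (2)

mae_cf = np.mean(np.abs(cfo_next[test] - pred_cf))
mae_ea = np.mean(np.abs(cfo_next[test] - pred_ea))
print(f"out-of-sample MAE: cash flows only = {mae_cf:.4f}, accrual-based earnings = {mae_ea:.4f}")
```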
Our paper adds to the accounting literature by providing evidence that accruals permit a better prediction of future cash flows, thus contributing to a better allocation of resources. We are not aware of any study that has, so far, proved the out-of-sample ability of accruals to predict future cash flows in a representative sample of Spanish firms, that is, using a corresponding proportion of both public and private firms. We also find evidence of a significant moderating effect of accounting manipulation that lowers the informational value of accruals below their otherwise attainable levels. This suggests that, if Spanish regulators do not improve the enforcement of accounting standards, the accrual-based information will not be of high quality. Finally, as in Barth et al. (2001), we find that disaggregating accruals into their major components significantly increases their predictive ability. The rest of the paper is organized as follows. Section 2 reviews prior research on the role of accruals for the prediction of future cash flows. Section 3 focuses on the Spanish case and includes the two hypotheses of our study. Section 4 describes the research methodology and the sample and we present our empirical results in Sect. 5. Finally, the summary and conclusions appear in Sect. 6.
Prior research on the role of accruals: literature review
The benefits of accruals are a direct consequence of increasingly sophisticated accounting regulations. Accruals, nevertheless, are also fraught with measurement error due to the assumptions underlying their determination and the discretion allowed under GAAP. Accrual-based accounting standards involve judgment as they require estimations about future events that are not considered in current cash flows. Although subject to unintentional error, these estimations signal private information and are, thus, expected to be value relevant, that is, to increase users' accuracy in assessing the present value of future cash flows. Managers, however, can use accounting discretion opportunistically to serve other and less benign interests by introducing estimation noise that makes reported earnings misleading about the firm's economic performance. 1

Given that the accrual process is the result of a trade-off between relevance and reliability (Dechow 1994), one of the most important questions in accounting research is whether accruals really provide a better summary measure of firm performance. It is essential to provide users with empirical evidence that supports the assertions of the FASB/IASB about this role. However, although one might expect investors and creditors to demand this empirical evidence, they often accept or reject the role of accruals without having received it. This is the case in Spain. 2

Prior Anglo-Saxon literature has generated a vast number of studies addressing the incremental information content of accruals. Much of this evidence is price-based, that is, it relies on studies that substitute stock prices for real future cash flows (more difficult to obtain). These studies are widely consistent with the hypothesis that accruals are value relevant in the marketplace. Strictly speaking, market prices are the outcome of how investors, as external users, perceive accruals. These prices, however, will not map into future cash flows if the behaviour of stock markets is not efficient. Accordingly, stock prices might not be the most suitable reference for testing the benefits of accruals. Evidence of these benefits for real future cash flows prediction would serve as a guide to investors in the subsequent pricing of accruals. 3

Alternative, non-price, studies provide evidence of the ability of accruals to predict future cash flows. Several studies document that, for a given year, accruals show a strong positive association with years-ahead cash flows after regressing cash flows in period t + i on cash flows and accruals in period t (see Kim and Kross 2005). Barth et al. (2001) and Al-Attar and Hussain (2004) further demonstrate that decomposing accruals into five individual reported items (and giving them separate coefficients in the estimations) provides significant additional explanatory power over and above current cash flows and aggregated accruals. In-sample regressions, however, are not prediction tests and may even provide misleading inferences concerning prediction. 4

A parallel line of studies uses out-of-sample testing as a way of solving the important problems inherent in association tests. Although the earliest out-of-sample studies do not support the role of accruals for the prediction of future cash flows, more recent papers do (Brochet et al. 2008).
Yoder (2007) extends Barth et al.'s (2001) evidence by finding that prediction errors using a disaggregated accrual-based model are significantly lower than those using either an isolated current cash flows model or an aggregated accruals model. Overall, and with the exception of Lev et al. (2009), recent US evidence provides support for the value of accruals in directly predicting future cash flows.
The role of accruals in the Spanish reporting framework: hypotheses development
Accounting, like other social disciplines and human activities, is largely a product of its environment. The extension of the Anglo-Saxon empirical support for the benefits of accruals to the Spanish case is yet to be confirmed because of the well-known institutional differences between the US and Spain in terms of both corporate governance and accounting regulation and enforcement. Unlike in the US, banks are a major source of finance in Spain (stock markets are far less developed) and there is high ownership concentration. This leads to different agency problems and calls into question the interest of managers (users) in providing (demanding) high quality accounting information. Furthermore, the role of accruals for the prediction of future cash flows is bound to differ across public and private firms and, although the former are likely to be more similar to Anglo-Saxon firms, most Spanish companies are privately held. Agency problems and the mechanisms employed to solve them among private companies are different from those available for public companies. 5 In a bank-based financial system like the Spanish one, close monitoring through private information channels may substitute for monitoring based on accounting information, and financial reports might be formulated with intentions other than predicting future cash flows (e.g. taxation or dividend policy purposes).

Accounting regulation and enforcement is a further important determinant of the quality of financial reporting and, therefore, of the value relevance of accruals. In Spain, a major reform of accounting law took place with the implementation of the Fourth, Seventh and Eighth EU Directives on company law through the Acts of 1988 and 1989 and a revision in 1990 of the General Accounting Plan. Unlike other EU members that tried to preserve their status quo in accounting, Spain used the reform that resulted from this implementation to approach the Anglo-Saxon model in terms of tax alignment and policy regulation. This more equity-oriented view of accounting rules implies that, since 1990, Spanish accounting standards have been of sufficient quality to expect value relevance in the companies' accrual-based outcomes. 6 Adopting high quality standards is, however, a necessary but not sufficient condition for high quality information. The quality of accrual-based earnings is not only a matter of the quality of accounting standards but also of how these standards are enforced and, despite the relative increase in the quality of standards, a parallel increase in the efficiency of enforcing them has not yet been achieved in Spain. Investor protection rights are much weaker than in the Anglo-Saxon countries (La Porta et al. 1998) and litigation risk is almost nonexistent for firm managers and auditors. 7

In Spain, most accounting research usually takes for granted that accruals provide relevant information to better assess a firm's future cash flows generation. However, there is little empirical evidence that supports this, especially among private firms, despite their macroeconomic significance. Both Gabás and Apellániz (1994) and Giner and Sancho (1996) provide some evidence that accruals display a significant association with future cash flows, but neither of them tests out-of-sample prediction or extends the analysis beyond the scope of public firms. Working with stock market prices, Íñiguez and Poveda (2008) find that strategies based on accruals help specific investors to generate positive abnormal returns, which is evidence of their economic value. 8 As for private firms, only two studies provide some indirect evidence of the value relevance of accruals.
Gill de Albornoz and Illueca (2007) show that Spanish bank lenders perceive accruals as valuable information to set the interest rates charged to their clients, although only for large firms. 9 In the failure prediction area, Lizarraga (1997) offers evidence that accrual-based ratios (e.g. profitability ratios) are a much better predictor of bankruptcy than those built upon cash flows.
Given the above-mentioned arguments and the lack of clear evidence, it remains to be seen to what extent preparers' incentives, accounting standards and enforcement mechanisms interact to produce high quality accrual-based information in Spain. We nevertheless expect that, despite these possible countervailing effects, accruals are value relevant for the prediction of future cash flows. Our first hypothesis is, then:
Hypothesis 1
In Spain, a model that includes current cash flows and accruals will better predict future cash flows than a model that includes only current cash flows.
Corporate governance and oversight systems are critical to secure compliance with accounting standards. The lack of competent enforcement of these standards encourages managerial opportunism with a potential mitigating effect on the role of accruals. Gill de Albornoz and Alcarria (2003), Gallén and Giner (2005) and Mora and Sabater (2008) provide empirical support for the opportunistic use of accruals by Spanish public firms. As for private firms, Arnedo et al. (2007) show that they also engage in accruals manipulation and Gill de Albornoz and Illueca (2007) find that Spanish bank lenders penalize this manipulation by demanding a higher cost of debt. International studies provide substantial evidence of the higher degree of earnings management in code-law countries (Hung 2001; Leuz et al. 2003) and especially among private firms (Burgstahler et al. 2006). 10 Providing evidence of a significant mitigating effect of earnings manipulation on the role of accruals for the prediction of future cash flows would draw attention to the need to strengthen the enforcement mechanisms over the financial reporting system. Testing Hypothesis 1, however, is not sufficient for this purpose because its non-rejection does not imply that accruals are free from manipulation or that they are being used optimally. The quality of accruals is inversely proportional to the opportunistic discretion exerted by managers and, though significantly better than the cash basis, the accrual basis could still perform well below its possibilities. An overall analysis (Hypothesis 1) does not make it possible to demonstrate whether, in certain cases, the predictive ability of accruals is lower than expected. For this reason, we test, as our second hypothesis, whether the role of accruals in Spain is substantially constrained in firm-specific situations where a high level of manipulation is expected. Formally, this second hypothesis states:

Hypothesis 2

In Spain, the ability of accruals to predict future cash flows is significantly constrained in situations where a high level of accounting manipulation is expected.
Methodological approach
To analyze the value relevance of accruals for the prediction of future cash flows, we compare two different regression models: the current cash flows model (CFM, model 1) and the accrual-based model (ABM, model 2). Future cash flows are the dependent variable in both models. Cash flows projections extend up to 4 years ahead. 11 We measure current and future cash flows in their operating version. We calculate operating cash flows in year t (OCF t) from the expression:

OCF t = E t − Total Accruals t

If we substitute the five accruals components suggested by Barth et al. (2001) for Total Accruals, we have:

OCF t = E t − (INV t + RE t − AP t − OCL t − DEP t)

where E t = net (after tax) earnings before extraordinary items in year t; INV t = changes in inventory in year t; RE t = changes in accounts receivable in year t; AP t = changes in accounts payable in year t; OCL t = changes in other current liabilities in year t; DEP t = amortization and depreciation expenses in year t.
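The accrual identity above is straightforward to operationalize on a firm-year panel. The sketch below is illustrative only; the column names (E, dINV, dRE, dAP, dOCL, DEP, SALES, firm_id, year) and the scaling helper are assumptions, not part of the paper's data set.

```python
import pandas as pd

def operating_cash_flow(df: pd.DataFrame) -> pd.Series:
    """Back out OCF from earnings and the five accrual components:
    OCF = E - dINV - dRE + dAP + dOCL + DEP (each d* column holds the
    change in the corresponding balance during the year)."""
    return (df["E"] - df["dINV"] - df["dRE"]
            + df["dAP"] + df["dOCL"] + df["DEP"])

def scale_by_lagged_sales(df: pd.DataFrame, cols, firm_col="firm_id",
                          sales_col="SALES") -> pd.DataFrame:
    """Deflate the listed columns by prior-period total sales, per firm."""
    lagged_sales = df.sort_values("year").groupby(firm_col)[sales_col].shift(1)
    return df[cols].div(lagged_sales, axis=0)
```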
As for the independent variables, the first model (CFM, model 1) includes exclusively the current operating cash flows and it is taken as our benchmark for the comparison. The second model (ABM, model 2) adds total accruals to the content of the first. We estimate this second model twice. First we allow only one coefficient for total accruals (model 2a) and second we disaggregate them into the five different components mentioned above (model 2b).
If accruals fulfil their role, both these versions of model 2 should have a better predictive ability than model 1. Obtaining a greater increase in this predictive ability with model 2b than with model 2a will show the benefits of accruals' disaggregation. Formally, models 1, 2a and 2b are:

Model 1 (CFM): FOCF t+i = α0 + α1 OCF t + ε t

Model 2a (ABMa): FOCF t+i = β0 + β1 OCF t + β2 TACC t + ε t

Model 2b (ABMd): FOCF t+i = γ0 + γ1 OCF t + γ2 INV t + γ3 RE t + γ4 AP t + γ5 OCL t + γ6 DEP t + ε t

where CFM = cash flows model; ABMa = accrual-based model, aggregated; ABMd = accrual-based model, disaggregated; TACC t = total accruals in year t; FOCF t+i = future operating cash flows for the (t + 1 to t + i) prediction period.
We compute the operating cash flows for each prediction period as the mean yearly operating cash flows over all years in the period. FOCF t+i is the mean yearly OCF generated during the period starting in t + 1 and ending in t + i, where i ranges from 1 to 4. 12 We scale all variables by the prior period total sales. We use total sales instead of total assets because, as evidenced by Dechow (1994), the intensity of the operating cycle affects both net operating assets and the association between earnings and operating cash flows. Companies with large operating cycles inherently present low operating cash flows and high total assets, so long-cycled companies would tend to have low values of the estimation errors as a consequence of the inherent behaviour of both the numerator (OCF) and the denominator (total assets) of the dependent variable. 13 As we said in Sect. 2, many papers follow an association-based methodology to test the role of accruals for the prediction of future cash flows. Superiority in goodness of fit tests in the estimations (e.g. R2), however, does not necessarily translate into superiority in predictive ability. The functional relationship may change over time and, therefore, the association found in year t may not be useful in making out-of-sample predictions in year t + i. To solve this problem we apply an out-of-sample methodology. Its implementation requires estimating in-sample coefficients in a first (estimation) period and then applying them in a subsequent (holdout) period so as to obtain the predicted values. We carry out our estimations cross-sectionally by year and industry.
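As a sketch of this out-of-sample procedure, the fragment below estimates each model cross-sectionally by year and industry in the estimation sample and then applies the coefficients to the holdout sample. All column names, and the convention of reusing the most recent estimation-year coefficients for each industry, are assumptions made for illustration; the paper does not spell out that mapping.

```python
import pandas as pd
import statsmodels.api as sm

# Regressors for each model; column names are illustrative assumptions.
PREDICTORS = {
    "CFM":  ["OCF"],                                       # model 1
    "ABMa": ["OCF", "TACC"],                               # model 2a
    "ABMd": ["OCF", "dINV", "dRE", "dAP", "dOCL", "DEP"],  # model 2b
}

def fit_by_year_industry(est: pd.DataFrame, model: str, target: str) -> dict:
    """In-sample OLS, estimated cross-sectionally by year and industry."""
    coefs = {}
    for (year, ind), grp in est.groupby(["year", "industry"]):
        X = sm.add_constant(grp[PREDICTORS[model]])
        coefs[(year, ind)] = sm.OLS(grp[target], X).fit().params
    return coefs

def predict_holdout(hold: pd.DataFrame, coefs: dict, model: str) -> pd.Series:
    """Apply estimation-period coefficients to the holdout observations.
    Industries must appear in both samples for this simple lookup to work."""
    preds = pd.Series(index=hold.index, dtype=float)
    for (year, ind), grp in hold.groupby(["year", "industry"]):
        # Assumed convention: most recent estimation year for this industry.
        key = max(k for k in coefs if k[1] == ind)
        X = sm.add_constant(grp[PREDICTORS[model]])
        preds.loc[grp.index] = X @ coefs[key]
    return preds
```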
Absolute prediction errors
To test the increase in predictive ability brought about by the inclusion of accruals (Hypothesis 1), we compare the absolute prediction errors of the two models (CFM and ABM). Following Brochet et al. (2008), we calculate absolute prediction errors as the absolute value of the difference between the actual and predicted values of future operating cash flows for each prediction period (FOCF t+i). Mathematically, the absolute prediction errors can be expressed as follows:

ABSE j,t+i = | FOCF t+i − predicted FOCF j,t+i |

The subscript j indicates the model (j = 1, 2a or 2b) used to compute the predicted value of FOCF. We compare mean and median absolute prediction errors in pairs, using the t test statistic for the means and the Wilcoxon signed test for the medians.
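The paired comparison of errors can be scripted directly with SciPy; the array names below are illustrative assumptions, not part of the study.

```python
import numpy as np
from scipy import stats

def absolute_prediction_errors(actual, predicted):
    """ABSE j,t+i = |actual FOCF - predicted FOCF| for one model j."""
    return np.abs(np.asarray(actual) - np.asarray(predicted))

def compare_models(abse_a, abse_b):
    """Paired comparison of two models' absolute errors: t test on the
    means and Wilcoxon signed-rank test on the medians."""
    abse_a, abse_b = np.asarray(abse_a), np.asarray(abse_b)
    t_stat, t_pvalue = stats.ttest_rel(abse_a, abse_b)
    w_stat, w_pvalue = stats.wilcoxon(abse_a, abse_b)
    return {
        "mean_difference": float(np.mean(abse_a - abse_b)),
        "t_pvalue": float(t_pvalue),
        "median_difference": float(np.median(abse_a - abse_b)),
        "wilcoxon_pvalue": float(w_pvalue),
    }
```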
Multivariate regression analysis
Our second hypothesis tests whether, after controlling for some business determinants, the role of accruals for the prediction of future cash flows is significantly moderated in firm-specific situations with a higher level of expected manipulation. To do so, and continuing with the error prediction methodology, we specify the following multivariate regression (model 3):

ABSE 1,t+i − ABSE 2b,t+i = β0 + βx (expected manipulation proxies) t + βy (control variables) t + ε t

where βx: β1 to β4 = coefficients for the four variables proxying for expected manipulation; βy: β5 to β10 = coefficients for the six control variables.
Dependent variable
ABSE 1 − ABSE 2b is the dependent variable in the model. We obtain it as the difference between the absolute prediction error using the cash flows model (CFM, model 1) and the same error using the disaggregated accrual-based model (ABMd, model 2b). This dependent variable measures the extent to which accruals improve on current cash flows in predicting future cash flows. 14 The greater the value of the difference, the greater the improvement provided by the accruals model (model 2b) compared to the isolated cash flows model (model 1). The multivariate model is estimated for each (t + 1 to t + i) prediction period, where i = 1, ..., 4.
Explanatory variables: expected manipulation
The literature highlights a wide variety of situations where managers are especially prone to engage in accruals manipulation. Among these, we choose four firm-specific circumstances that are particularly suitable for the Spanish environment, namely, small size, privately held, the need for new finance and the level of subjectivity of accruals.
Small firm size (SIZE)
There are several reasons to expect that small and medium-sized firms engage in more accruals manipulation than their larger counterparts. Large firms are exposed to greater political and reputation costs, have stronger internal controls and tend to hire high quality auditors. Their accruals are usually more permanent and they have greater incentives to use them to convey private information (Gill de Albornoz and Illueca 2007). Small client size is also an important attribute that leads banks to disregard the financial reports as a source of information in their credit-granting decisions. These arguments lead us to expect a positive effect of firm size on the ability of accruals to predict future cash flows. We measure the firm size variable (SIZE) as the natural logarithm of total assets.
Privately held (PRIV)
Public firms use a wider range of corporate governance mechanisms than private ones and are also under the supervision of the market authorities. In Spain, except for statutory auditing, which is compulsory for firms above a certain size (Audit Law 1988), specific corporate governance instruments (e.g. audit committees) have been implemented recently and only for public firms (see Finance Law 2002). There is evidence of a constraining effect of these instruments on earnings manipulation in Spain (García Lara et al. 2007). Further, unlike the Anglo-Saxon countries, Spanish public companies are not under so much pressure to live up to the market's expectations and, consequently, to make use of income-increasing manipulation to do so. As we do with small firms, we predict that private firms will engage in accruals manipulation to a greater extent. We analyze public companies and large ones separately because, although firms quoting on the Spanish Stock Exchange are predominantly large, most big companies in Spain are still privately held. We use a PRIV dummy variable that equals 1 if the firm is privately held (does not quote on the Spanish stock market), and 0 otherwise.
Need for New Finance (NNF)
Uncomfortably high debt levels make it difficult for a company to secure additional cash to finance its new commitments. In bank-dependent/weakly-protected countries like Spain, banks often structure debt as short term. This allows them to review their credit terms on a more frequent basis and encourages borrowers to engage in the necessary manipulation to avoid violating the covenants on which these terms are based. Gupta et al. (2008) empirically find that short-term debt induces greater earnings management and that this is a common behaviour in countries with weak legal regimes. The bankruptcy literature also highlights a high leverage (especially in the short term) as a symptom of impending failure and, consequently, as a clear incentive for manipulation. Following these arguments, we expect that the higher the level of short-term debt, the lower the ability of accruals to predict future cash flows. We use the NNF variable to proxy for short-term indebtedness which we calculate as the ratio of short-term to long-term banking debt.
The level of subjectivity of accruals (SUB)
Subjective accruals provide managers with better opportunities to communicate private information but they also leave more room for discretion. Financial statement items such as inventory or receivables are affected by more arbitrary judgement and allocation problems than others (e.g. accounts payable) and, consequently, offer greater opportunities for manipulation. In fact, subjectivity has been labelled "the lifeblood of the creative accountant". Taking this into account, Richardson et al. (2005) find that the predictive ability of accruals is negatively linked to their degree of subjectivity. This leads us to posit a negative relation between the subjectivity of accruals and their ability to predict future cash flows. Following Richardson et al. (2005), we obtain a subjectivity index by dividing the absolute value of the variation in inventories and in receivables by the whole operating accruals' structure of the firm (absolute value of the variation in inventories, receivables and payables).
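For illustration, the subjectivity index can be computed as below. Whether the numerator is the sum of the two absolute changes or the absolute value of their sum is not fully spelled out in the text; the sketch assumes the former, and the argument names are placeholders.

```python
import numpy as np

def subjectivity_index(d_inv, d_rec, d_pay):
    """SUB = (|dINV| + |dREC|) / (|dINV| + |dREC| + |dPAY|)."""
    d_inv, d_rec, d_pay = map(np.asarray, (d_inv, d_rec, d_pay))
    numerator = np.abs(d_inv) + np.abs(d_rec)
    denominator = numerator + np.abs(d_pay)
    # Avoid division by zero for firm-years with no operating accruals.
    return np.divide(numerator, denominator,
                     out=np.full_like(denominator, np.nan, dtype=float),
                     where=denominator > 0)
```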
Control variables
As in Francis et al. (2005) we include certain additional variables to control for the innate behaviour of accruals, that is, the behaviour driven by the firm's business model and operating environment.
Growth (GRW)
In the growth stage, sales increase faster than average, as do the necessary investments in production facilities. Cash flows are more volatile, increasing the need for accruals to correct timing and mismatching problems. Dechow (1994) in the US and Charitou (1997) in the UK empirically find that accruals reflect more value-relevant information in growing firms. We measure our growth variable (GRW) as sales in year t minus sales in year t − 1, divided by sales in year t − 1.
Cash-flows volatility (VOL)
Cash receipts and disbursements generally suffer from a lack of coordination and yield a noisy measure of firm performance with disproportionate variability. As the main purpose of accruals is to solve these mismatching problems (Dechow 1994), the higher the variability of cash flows, the higher will be the improvement in predictive ability after the inclusion of accruals. We measure the cash flows volatility variable (VOL) as the firm-year OCF minus the median OCF for the whole sample period. OCF values are scaled by total sales.
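A sketch of how these two innate-behaviour controls could be constructed follows. The column names, and the reading of "median OCF for the whole sample period" as the median over the full pooled sample rather than a per-firm median, are assumptions.

```python
import pandas as pd

def add_control_proxies(df: pd.DataFrame) -> pd.DataFrame:
    """Add GRW (sales growth) and VOL (deviation of scaled OCF from the
    sample-period median) to a firm-year panel."""
    df = df.sort_values(["firm_id", "year"]).copy()
    lagged_sales = df.groupby("firm_id")["SALES"].shift(1)
    df["GRW"] = (df["SALES"] - lagged_sales) / lagged_sales
    df["VOL"] = df["OCF_scaled"] - df["OCF_scaled"].median()
    return df
```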
Operating cycle and industry (IND)
There is a general consensus in the literature that the increased predictive ability of accruals has a positive relationship with the firms' operating cycle (Dechow 1994; Charitou 1997). The literature also agrees that the operating cycle is inherently linked to the industry sector in which the firm operates. We include four dummy variables to control for five industrial sectors: IND 1 = energy and water; IND 2 = manufacturing; IND 3 = wholesale trade and IND 4 = services. The fifth one, construction, is taken as the industry of reference because of its especially long operating cycle. In its extended version, model 3 is as follows:

ABSE 1,t+i − ABSE 2b,t+i = β0 + β1 SIZE t + β2 PRIV t + β3 NNF t + β4 SUB t + β5 GRW t + β6 VOL t + β7 IND 1 + β8 IND 2 + β9 IND 3 + β10 IND 4 + ε t
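Model 3 maps directly onto a formula-based OLS call; the sketch below is illustrative only (the column names are assumptions, and the industry dummies are generated from an assumed categorical label with construction as the omitted reference).

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_model3(df: pd.DataFrame):
    """Regress the error difference ABSE_1 - ABSE_2b on the manipulation
    proxies, the controls and four industry dummies."""
    df = df.copy()
    df["ERR_DIFF"] = df["ABSE_CFM"] - df["ABSE_ABMd"]
    formula = ("ERR_DIFF ~ SIZE + PRIV + NNF + SUB + GRW + VOL "
               "+ C(IND, Treatment(reference='construction'))")
    return smf.ols(formula, data=df).fit()
```

A cluster-robust variant, along the lines of the robustness check described later, could be obtained by replacing the final call with `.fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})`, assuming a firm identifier column is available.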
Sample
We take the data for the analysis from the Sabi database, the Spanish section of Amadeus from Bureau van Dijk, which provides standardized financial statements for a large set of Spanish private and public companies. Our sample consists of all the available industrial and commercial firms for the period 1997-2003. As we are interested in analyzing the predictive ability of accruals, we separate the total sample into two periods: an estimation period (in-sample estimations) and a holdout period (out-of-sample predictions). We use the 1997-2001 period to estimate in-sample coefficients for the models under comparison (cash flows model and accrual-based model). We use the holdout 2002-2003 period to obtain 1 to 4-years-ahead cash flows predictions (FOCF t+i) and their corresponding absolute prediction errors (ABSE j,t+i) after comparison with actual cash flows values. As private firms must fulfil the same statutory audit requirements as public firms in Spain (external audit is compulsory for all firms above a certain size), we require that firms in our sample include an audit report together with their financial statements. Our sampling criterion provides an estimation sample of 14,002 firm-year observations and a holdout sample of 7,428 firm-year observations. Due to missing data, the number of FOCF observations falls slightly as the length of the prediction period increases, ranging from 7,426 for the period comprising only t + 1 to 6,907 for the 4-year period ending in t + 4. Most of these observations belong to privately-held companies. We trim both samples to exclude the 1% of the observations with the highest and lowest values of some variables, depending on the model estimated.
Results
In Table 1 we present the descriptive statistics for the variables used in the analysis. Average OCF is positive (mean = 0.062) and clearly higher than average earnings (mean = 0.041), which is in line with prior US evidence. The reason for this is the negative effect of amortization and depreciation (mean = 0.037), which is well above the positive mean value of short-term accruals (mean = 0.016). 15 Also as expected, OCF shows a much higher volatility than earnings (std. dev. 0.149 vs. 0.084), which is consistent with Dechow (1994) when she says that OCF suffers from more timing and mismatching problems, making it a less stable indicator. The firms in our sample show great heterogeneity in size (total assets of 6, 10 and 21 million euros for Q25, the median and Q75, respectively) and a substantial use of the short term in their banking debt requests (mean NNF = 1.26). Average growth is almost 9% (mean GRW = 0.083) and subjective accruals account for an average 63% of all operating accruals.

Table 2 shows how models 1, 2a and 2b fit average 1 to 4 years-ahead operating cash flows in the estimation period. All three models are well specified and produce significant F statistics. Confirming previous results, and for all future cash flows projections (columns FOCF t+1 to FOCF t+4), the two accrual-based models (2a and 2b) explain the future cash flows clearly better than the cash flows model (CFM, model 1: Adj. R2 = 0.081-0.197). Moreover, in all cases, the disaggregated accruals specification suggested by Barth et al. 2001 (ABMd, model 2b: Adj. R2 = 0.294-0.468) has substantially more explanatory power than the aggregated one (ABMa, model 2a: Adj. R2 = 0.135-0.225). The likelihood ratio test for nested models (Panel B) also shows that the above differences are statistically significant at the one percent level. These first results are consistent with the US evidence on future cash flows prediction. However, our analysis, so far, is based on associations rather than predictions.

[Table 2 notes: Panel A reports in-sample adjusted R2 for the three models (CFM, ABMa, ABMd), estimated on 4,147 firms over 1997-2001; Panel B reports likelihood ratio tests for nested models, significant at the 0.01 level.]
Association tests are unable to help investors make future cash flows predictions. In fact, the usefulness of the previous estimations lies in the stability of their coefficients in different time periods, and their superiority in goodness of fit does not necessarily translate into superiority in predictive ability. Consequently, we must apply an out-of-sample methodology (holdout period).

Table 3 presents the means and medians of the absolute out-of-sample forecast errors provided by our three models (1, 2a and 2b) for each (t + 1 to t + i) future cash flows prediction period. We apply the t test and Wilcoxon's signed test to analyze differences in the models' mean and median errors, respectively, in pairs. In line with previous results, the disaggregated accrual-based model (ABMd, model 2b) presents significantly lower mean and median forecast errors than the other two models in all four periods (mean = 0.068, 0.051, 0.045 and 0.046 for forecasting periods ending in t + 1 to t + 4, respectively). The aggregated model (ABMa, model 2a) shows a better forecast ability than the current cash flows model (CFM, model 1), which is the worst specified according to its errors (mean = 0.079, 0.062, 0.056 and 0.053 for periods ending in t + 1 to t + 4, respectively). Overall, Table 3 confirms that, in Spain, accrual-based accounting provides more value-relevant indicators than the primitive cash flows basis and, consequently, we cannot reject H1. Our results show that the efforts made by Spanish accounting regulators towards an increasingly sophisticated accrual-based accounting model have been worthwhile. This new evidence sheds some light on why prior price-based findings in certain code-law European countries find such limited support for the informational value of accruals. Stock prices in these markets may not be efficiently capturing the present value of future cash flows.
Although significance of the role of accruals for future cash flows prediction has been found from an overall perspective (Hypothesis 1), there are certain firm-specific situations where opportunistic manipulation could be lowering the quality of accruals well below their desirable levels. Hypothesis 2 states that, in Spain, earnings manipulation exerts a considerable moderating effect on this role. As we apply an errors-based methodology where all determinants are simultaneously tested in the same regression, a correlation matrix is necessary to ensure the absence of multicollinearity. Table 4 shows above/below diagonal Pearson/Spearman correlation coefficients, respectively, for all the firm-specific circumstances under study. All coefficients are well below 0.5, which means there are no serious problems of multicollinearity in our holdout sample. 16

In Table 5 we present the estimation results from model 3, which we use to test Hypothesis 2. The dependent variable is the difference between the absolute forecast errors of models 1 and 2b (ABSE 1 − ABSE 2b). It shows the extent to which accruals improve on current cash flows in predicting future cash flows: the greater its value, the greater the decrease in prediction error and, hence, the better the quality of accruals. The four experimental variables (SIZE, PRIV, NNF and SUB) proxy for firm-specific circumstances identified by the literature as especially prone to manipulation. Three additional variables have been included to control for the intrinsic behaviour of accruals (GRW, VOL and IND). The table includes four columns, one for each of the future cash flows prediction horizons (periods ending in t + 1 to t + 4) considered in the calculation of errors. In general, all the variables show significant coefficients in the expected direction and are robust to most future cash flows projections. The results confirm the positive effect of firm size on the quality of accruals (SIZE, p value < 0.05 in all prediction periods). Monitoring and reputation costs affect large firms in Spain, giving their accruals a significantly greater forecasting ability than those of smaller firms. Also in line with our expectations, the accruals of privately-held firms exhibit a lower ability to predict the companies' future cash flows. As in Burgstahler et al. (2006), the negative sign of the PRIV variable, which is significant in all columns (p value = 0.000), shows that, in code-law countries (e.g. Spain), private firms engage in manipulation practices to a greater extent.
The negative sign of the NNF variable shows that accruals experience a significant decline in predictive value as firms approach their upper level of short-term banking debt. Company directors in need of new finance exhibit a clear tendency to make use of opportunistic accounting choices to achieve their immediate targets. Following Richardson et al. (2005), we have also incorporated a variable that measures the degree of subjectivity inherent in each firm's set of accruals (SUB). Given their more arbitrary allocations, subjective accruals have been hypothesized as negatively related to reliability and, by extension, to their level of quality. Our results confirm these expectations. The higher the SUB index (percentage of the variations of inventory and receivables in the accruals of the firm), the lower the incremental contribution of accruals to the prediction of future cash flows. In sum, the four experimental variables taken to proxy for opportunistic manipulation present significant coefficients in the expected direction. This is evidence that accounting manipulation exerts a strong constraining effect on the information value of accruals by substantially lowering their predictive ability. It is also consistent with Hypothesis 2.
Except for industry, the control variables exhibit the expected signs. For firms in the growth stage, accruals produce a significantly greater decline in the out-of-sample forecast errors, as shown by the positive sign of the GRW variable. Unlike firms in financial distress, growing firms seem to make a less aggressive use of accruals. Their use comes more from the inherent needs of this business stage than from manipulation. Likewise, the coefficient of the cash flows volatility variable (VOL) is also significantly positive, that is, the more volatile cash flows are, the greater the predictive ability of accruals, which is consistent with their smoothing properties. Finally, with respect to industry, all four variables show a positive sign, which is contrary to our expectations. As construction firms, which are our reference industry, have a longer operating cycle than most other companies, accruals should play a more important role in solving the problems suffered by their mismatched cash inflows and outflows. A possible explanation for the unexpected positive sign found in the four IND variables (all industries show a better predictive ability than construction) is that book values in the construction sector failed to correctly anticipate the decline in market value suffered by their assets in the second half of this decade. 17

Robustness checks
Free cash flows
Operating cash flows might not be the most adequate variable to capture firm value because they ignore investment cash outflows made to maintain the economic capacity of the firm (Subramanyam and Venkatachalam 2007). Using operating cash flows, firms with a greater need for long-term investments may be overvalued, unless the risk associated with their cash flows significantly reduces the value of the firm. Free cash flows represent the residual cash after these investments, so they could be a more adequate variable to measure firm value. To analyze whether our results on the forecasting behaviour of accruals and current cash flows are robust to the use of free cash flows, we have re-estimated all our models using free cash flows instead of operating cash flows predictions for each of our t + i periods from i = 1 to 4. 18 Results are robust in the sense that the disaggregated accrual-based model (ABMd, model 2b) still provides significantly lower forecast errors than both the aggregate model (ABMa, model 2a) and the isolated cash flows model (CFM, model 1).
Private-only firms
Given that our main purpose is to test the role of accruals for the prediction of future cash flows in Spain, we tested our first hypothesis using the whole sample with the corresponding proportion of public and private firms. We carry out a sensitivity analysis of Hypothesis 1 after ruling out public firms and we find that accruals provide significant incremental insight into future cash flows for a sample of only private firms. As we also included the PRIV dummy variable in the multivariate model estimated to test Hypothesis 2, we are in fact providing evidence that, although significant in both groups, the quality of accruals is significantly better for public than for private firms.
Robust standard errors
One of the requisites of the ordinary least squares technique used for the multivariate regression is that the residuals are independent and identically distributed. The use of panel data sets (e.g., data sets that contain observations on multiple firms in multiple years) increases the likelihood that this requisite is not fulfilled (standard errors are biased) because the residuals might be correlated across firms (firm effect) and across time (time effect). As we use panel data to estimate the multivariate model that tests Hypothesis 2 (model 3), we consider it convenient, as an additional check, to re-estimate it assuming both (firm and time) effects are present in our sample. Following Petersen (2009), for panel data with a short time series, the best method to obtain unbiased standard errors is to parametrically estimate the time variable and then estimate the coefficients with the standard errors clustered by firm. Our results do not change substantially after this new estimation, with the exception of the private variable (PRIV), which is not significant. A possible explanation for this is that the cluster for public companies is fairly small in our sample.
Summary and conclusions
In this paper we analyze whether, in the Spanish context, accrual-based earnings are better able to predict future cash flows than current cash flows. To date most empirical evidence of an incremental value of accruals comes from Anglo-Saxon countries. However, the extension of the Anglo-Saxon empirical support for the benefits of accruals to the Spanish case is yet to be confirmed. In Spain, institutional differences lead to different agency problems and call into question the interest of accounting information for external users. Furthermore, the role of accruals for the prediction of future cash flows is bound to differ across public and private firms and, although the former are likely to be more similar to Anglo-Saxon firms, most Spanish companies are privately held. The Spanish reporting system is especially interesting for this research purpose. While the accounting model took advantage of its adaptation to the EU Directives to approach the Anglo-Saxon model, enforcement mechanisms are still poor in Spain. Motivated by the lack of empirical evidence, the primary purpose of our study is to provide Spanish investors with empirical evidence on the informational value of accruals.
Using a sample of 4,397 companies (mostly privately held), we estimate in-sample regressions of future cash flows on isolated current cash flows (model 1) and on accruals-based earnings (model 2) and we subsequently test for differences in the outof-sample prediction errors between the two models. Overall, our results are consistent with the argument that accruals add relevant information for the prediction of future cash flows, that is, that errors provided by an accrual-based model are significantly lower than those obtained with isolated cash flows. This predictive ability increases if we include accruals in the disaggregated fashion (five components) suggested by Barth et al. (2001).
As a second hypothesis, we argue that, after controlling for the effect of some business fundamentals, the role of accruals for future cash flows prediction is severely constrained by accounting manipulation. To provide evidence of this, we estimate a multivariate model that regresses the decrease in prediction errors brought about by the addition of accruals on a set of firm-specific circumstances where a high degree of manipulation is expected. For each of the variables used to proxy for these circumstances (small firm size, privately held, the need for new finance and the subjectivity of accruals), prediction errors decrease significantly in the hypothesized direction. The main contribution of the paper is to provide empirical evidence of the incremental ability of accruals to predict future cash flows in Spain. We do this using out-of-sample tests and not mere association tests. Further, we find that the role of accruals for future cash flows prediction is seriously lowered by earnings manipulation. Our results, then, have important implications for Spanish users, auditors and regulators as they highlight the need to strengthen the enforcement of accounting standards.

Footnotes

2 There is evidence that, in Spain, some banks adjust net income to eliminate the effect of the most subjective accruals (Ansón et al. 1997).

3 Kim and Kross (2005) empirically find that while the role of earnings in pricing securities deteriorates over time, there is an increase in the ability of earnings to forecast future cash flows.

4 The results of the so-called association studies do not necessarily suggest an incremental ability of accruals to forecast future cash flows. The superiority in goodness of fit tests (e.g. R2) does not necessarily translate into superiority in predictive ability because the model can 'overfit' the data. The functional relationship may change over time and, therefore, the association found in year t may not be useful in making out-of-sample predictions in year t + i.

11 The FASB/IASB do not specifically address the time period over which future cash flows should be predicted for accounting information to fulfil the objective of financial reporting. Different prediction horizons have been studied in the literature, including just 1 year, 1-2 years and even 1-5 years. Barth et al. (2001) also limit their analysis to a 1-year-ahead cash flows effect, but they complement their paper with sensitivity cash flows prediction tests up to 4 years ahead, obtaining results consistent with those for 1 year.
3-E analyses of a natural gas fired multi-generation plant with back pressure steam turbine
Energetic, exergetic and environmental (3-E) analyses of a natural gas fired multi-generation (NGFMG) plant are carried out in this study. The plant consists of a topping gas turbine (GT) block with a fixed 30 MWe output, a bottoming back pressure steam turbine (ST) block with variable electrical output, and a utility hot water generation block with variable generation capacity. The effects of the topping cycle pressure ratio (rp = 2-18) and the gas turbine inlet temperature (TIT = 750-850 °C), as plant operational parameters, on the 3-E performance of the plant are reported here. The base case performance (at rp = 4 and TIT = 800 °C) shows that the plant is about 33% electrically efficient, with a fuel energy savings ratio (FESR) value of 40%. The electrical efficiency of the plant, along with the FESR, increases with an increase in either rp or TIT. However, the ST output and the hot water production rate decrease with an increase in the value of the plant operational parameters. Exergy analysis of the plant shows that the maximum exergy destruction occurs in the combustion chamber. The exergy analysis also signifies that the isentropic efficiency of the GT can be a further plant-influencing parameter. From an emission point of view, it is observed that the electrical specific CO2 emission is 0.6 kg/kWeh at the base case. The specific CO2 emission rate is lowered with an increase in either rp or TIT.
Introduction
Energy can be utilized through different pathways, viz. generation of power, cooling and heating of systems, as well as refrigeration, depending on the requirement. Researchers are nowadays turning their attention towards clean, efficient and economic generation of energy for sustainability [1][2][3]. The overall efficiency of a system can be improved if it is designed for multi-generation purposes, so as to recover low grade energy [4][5]. Also, the environmental impacts of such systems are almost negligible, along with their minimal cost of generation [6][7].
The worldwide LNG revolution and the United States' shale revolution have reinforced the global natural gas market [8]. Therefore, the usage of natural gas has become reliable and beneficial to meet energy demand in a sustainable way. Significant advantages of the utilization of natural gas in a multi-generation system are the higher overall efficiency and the lower environmental impacts [9].
Utilization of energy is governed by the laws of thermodynamics. Exergy analysis of a system and its components helps to identify possible improvements in terms of overall efficiency. Environmental analysis is carried out to measure the pollutant emissions (especially CO2) from a plant. Together, the energy, exergy and environmental analyses of a system are referred to as 3-E analysis. Recently, the 3-E analysis technique has been gaining popularity, motivated by efficiency improvement and global warming challenges.
A large number of research works based on the thermodynamic analysis of multi-generation and combined cycle systems have been conducted during the last decade. Exergy analysis of a gas turbine based combined cycle plant with post-combustion CO2 capture has been carried out by Eritesvag et al. [10]. Reddy and Mohamed [11] have performed the exergy analysis of a natural gas fired combined power generation unit. Conventional and advanced exergetic analyses of a combined cycle plant have been reported by Petrakopoulon et al. [12]. Reddy et al. [13] have carried out the exergetic analysis of a solar concentrator aided natural gas fired combined cycle plant. Chiesa and Consonni [14] have studied the emissions from a natural gas fired combined cycle plant. Thermodynamic and exergo-environmental analysis along with multi-objective optimization of a gas turbine power plant has been conducted by Ahamadi and Dincer [15]. Ahamadi et al. [16] have carried out the exergy, exergo-economic and environmental analysis and optimization of a combined cycle plant.
The research works carried out by different groups, as discussed above, have advanced the knowledge of natural gas based combined cycle plants in general, and some of the analyses have dealt with exergetic and environmental aspects. 3-E analyses of a novel multi-generation plant consisting of a bottoming back pressure steam turbine and fuelled with natural gas are carried out and reported in this study. The plant is capable of producing 30 MWe of GT output and a variable bottoming ST electrical output, along with variable utility heat generation. The combined heat and work output capability of the plant is evaluated in terms of the fuel energy savings ratio (FESR). Furthermore, exergy analysis of the plant is carried out by computing the exergy destruction and exergy efficiency of the plant components at different thermodynamic conditions. The environmental assessment of the plant is made in terms of specific CO2 emission at different operating conditions. Finally, the optimized thermodynamic operating conditions of the designed plant are also reported in this study.
Plant layout and description
Schematic diagram of the proposed NGFMG plant is shown in Figure 1. Natural gas enters the combustion chamber (stream 7) of the topping Brayton cycle and gets combusted in the presence of hot and compressed air, coming out from the compressor (stream 2). Figure 1. Natural gas enters the combustion chamber (stream 7) of the topping Brayton cycle and gets combusted in the presence of hot and compressed air, coming out from the compressor (stream 2). Flue gas from the combustion chamber enters the GT (stream 3) and gets expanded. GT drives the compressor and rest of the GT shaft work is used to generate electricity. Exhaust from the GT (stream 4) then enters the HRSG to run a bottoming back pressure steam turbine cycle. The HRSG is composed of four components viz., superheater, evaporator, economizer and steam drum. Superheated steam from the HRSG enters the ST (stream 10) and produces electrical power. Significant amount of steam is extracted from the ST (stream 11) and enters the deareator. Exhaust from the ST (stream 13) enters the condenser and after then it enters the feed pump 1. Pumped feed water enters the deareator (stream 19) and mixes with the bleed steam. Hot water coming out from the deareator (stream 14) enters the feed pump 2 and recirculated to the HRSG.
Flue gas exhaust from the HRSG and condensate from the condenser are further used to run a utility water heater (UWH). Condensate coming out from the condenser (stream 20) enters the UWH. Then cold water from the UH (stream 21) enters the feed pump 3 and followed by the UHG (stream 23) where the water gets heated and recirculated back again to the condenser (stream 21) again. Thus the complete system produces combined electrical power as well as hot water from a single energy source. The entire system is referred as multi-generation system.
Model equations
The following assumptions are made during the analyses of the plant:
- Isentropic efficiencies of both the compressor and the GT are 85%. The same value is used for the feed pumps.
- The pressure drop across the combustion chamber is 0.1 bar. The pressure drop for each of the heat exchangers used in the HRSG is 1 bar across the water side and 0.005 bar across the gas side. The value is 2 bar for the bottoming UHG across its water side.
- The ST operates at 20 bar pressure and 450 °C temperature, while the bleed stream pressure is 2 bar and the condenser pressure is 1.6 bar.

Standard thermodynamic relations are used in the detailed thermodynamic analysis of the plant [5][6]. The adiabatic flame temperature is calculated at the outlet of the combustor. Stream-based mass and exhaust mole flow rates have also been calculated. The model equations related to the performance analysis of the plant are listed below.

The net power output from the combined cycle is the sum of the power outputs from the gas turbine and the steam turbine:

W net = W GT + W ST

The fuel saving of the designed plant is expressed with respect to a pair of separate heating and power plants delivering the same electrical and heat outputs [17]:

FESR = (F sep − F plant) / F sep

where F sep is the fuel energy required by the separate heat and power plants and F plant is the fuel energy supplied to the multi-generation plant.

The exergetic efficiency of any component is given by:

η exergetic = ex product / ex fuel (7)

where ex product and ex fuel represent the product and fuel exergy of the individual plant components as well as those of the whole plant.
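The three performance relations translate directly into code; the short sketch below is illustrative only, the function and variable names are assumptions, and the reference efficiencies used for the separate heat and power plants are placeholder values rather than figures from the paper.

```python
def combined_power(w_gt, w_st):
    """Net electrical output of the combined cycle: W_net = W_GT + W_ST."""
    return w_gt + w_st

def fesr(w_net, q_heat, f_plant, eta_e_ref=0.35, eta_th_ref=0.90):
    """Fuel energy savings ratio against separate generation of the same
    electricity and heat. eta_e_ref and eta_th_ref are assumed reference
    efficiencies of a stand-alone power plant and boiler, respectively."""
    f_separate = w_net / eta_e_ref + q_heat / eta_th_ref
    return (f_separate - f_plant) / f_separate

def exergetic_efficiency(ex_product, ex_fuel):
    """Component-level exergetic efficiency, Eq. (7)."""
    return ex_product / ex_fuel
```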
Results and discussions
The 3-E performance of the plant at rp = 4 and TIT = 800 °C is shown in Table 1. These thermodynamic state points are considered because most modern GTs operate under closely similar conditions. It is observed from the table that the GT provides a fixed electrical output of 30 MWe, along with a bottoming back pressure ST output of about 18 MWe as well as a heat output of 87 MWt. This large amount of heat produces utility hot water at about 517 kg/s. The overall electrical efficiency of the plant is about 33% at this point of operation. The fuel energy savings ratio (FESR) value is found to be about 41% and the electrical specific NG consumption (ESNGC) is 281.7 kg/MWeh at the stated thermodynamic conditions. Furthermore, it is observed from the table that the exergy efficiency values are 31.37% and 11.96%, respectively, for power and heat generation. Finally, the specific CO2 emission from the plant is found to be about 600 kg/MWeh at rp = 4 and TIT = 800 °C; no additional CO2 emission is attributed to the hot water generation at this emission rate.
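The per-MWeh indicators quoted in Table 1 follow from the steady-state flow rates in a straightforward way; the sketch below shows the conventional definitions, with argument names that are assumptions since the paper does not spell out the formulas.

```python
def electrical_specific_metrics(m_fuel_kg_s, m_co2_kg_s, w_net_mw):
    """ESNGC and specific CO2 emission per MWeh of electrical output.

    m_fuel_kg_s : natural gas mass flow rate, kg/s
    m_co2_kg_s  : CO2 mass flow rate in the exhaust, kg/s
    w_net_mw    : net electrical output, MWe
    """
    esngc = m_fuel_kg_s * 3600.0 / w_net_mw         # kg of NG per MWeh
    specific_co2 = m_co2_kg_s * 3600.0 / w_net_mw   # kg of CO2 per MWeh
    return esngc, specific_co2
```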
The effect of variation in rp (2-16) and TIT (750-850 °C) on the plant performance is graphically shown in the subsequent figures. Figure 2 represents the variation in overall electrical efficiency with the variation in these design parameters. The graph shows that the electrical efficiency of the plant continuously increases with increase in the compressor pressure ratio. This is because, for a fixed GT TIT, the required heat input to the topping GT cycle decreases with increase in rp, which ultimately causes the overall efficiency to increase. Although the ST output from the plant decreases with increase in rp (shown in Figure 3), this effect is outweighed by the reduction in required heat input.

The variations in both the ST output and the utility water generation from the plant with rp at different TITs are shown in Figure 3. As the required heat input to the plant decreases with increase in rp, the total flue gas generation from the combustor decreases. The decreased rate of flue gas generation at elevated rp values reduces the bottoming steam generation (of the ST cycle) and therefore the ST output decreases. However, an increase in the TIT value slightly helps in increasing the ST output. For the same reason, the utility heat generation rate, and therefore the hot water generation, also decreases with increase in rp. However, in contrast to the ST output, the heat output (and therefore the hot water generation) from the bottoming utility heater is higher at lower TITs, although marginally.
The fuel energy savings ratio (FESR) value also increases sharply with increase in rp for the considered range of TIT, as shown in Figure 4. As the GT output is fixed in this study, with increase in rp the ST output and the heat output decrease, but the required heat input also decreases, resulting in the FESR increasing. It is also seen from the graph that a higher GT TIT results in a better performance of the plant in terms of fuel savings for combined power and heat generation.
The variation in electrical specific natural gas consumption (ESNGC) with r_p is shown in Figure 5. As explained earlier, the required heat input decreases with increase in r_p, and correspondingly the required gas flow rate also decreases, as seen in Figure 5. Again, a higher TIT requires less driving fuel, as seen from the figure.
The term 'electrical specific CO2 emission' signifies the hourly CO2 emission from the plant per MW of electrical output; heat-related CO2 emission from the plant is therefore taken as nil. The emission from the plant is lower than that of a conventional coal-based thermal power plant even at lower r_p and TIT values (Figure 7). It is evident from the figures that a higher TIT results in improved exergetic performance of the plant. It is also observed from the figures that the majority of the input exergy is destroyed in the combustion chamber due to the chemical reactions occurring there. Figure 8 shows that the exergy efficiency of the combustion chamber increases (and therefore the destruction decreases) with increase in r_p for each GT TIT. The exergy efficiency and destruction values are also reduced at elevated TITs. This is because higher r_p and TIT values reduce the mass and energy interactions among the components.
It is therefore evident from the above discussion that the compressor pressure ratio r_p and the GT TIT are the two major parameters influencing plant performance. Furthermore, the exergy analysis shows that the isentropic efficiencies of the topping prime movers influence the exergetic performance of the CC, which in turn influences the performance of the whole plant. Changes in the isentropic efficiencies of the topping prime movers lead to changes in outlet temperatures, which ultimately alter the mass and energy transactions of the system.
The isentropic efficiency of the GT affects the exergy efficiency and exergy destruction of the combustion chamber, as indicated by Figure 9. The exergy efficiency of this component increases (and therefore the destruction decreases) with increase in η_isen,GT. This is because a higher η_isen,GT yields a lower GT exhaust temperature and therefore a higher specific GT output. As the work output from the topping GT cycle is fixed, the mass flow rate of driving fuel decreases with increase in η_isen,GT; hence the exergy efficiency increases and, accordingly, the exergy destruction of the CC decreases. Figure 10 shows the changes in overall electrical efficiency and FESR of the plant with η_isen,GT. It is evident from the graph that both values increase with increase in the isentropic efficiency of the GT: as stated earlier, with increase in η_isen,GT the required heat input, and therefore the required fuel input, decreases (as also seen from Figure 12), causing both values to increase.
The variation in ST output as well as in the utility water generation with η_isen,GT is shown in Figure 11. Both parameters decrease with increase in η_isen,GT due to the reduced rate of working fluid flow from the topping cycle. Furthermore, the specific CO2 emission from the plant also decreases with increase in η_isen,GT, as seen from Figure 12. As discussed above, the mass flow rate of driving fluid through the topping cycle decreases, and hence the emission from the plant decreases with increase in η_isen,GT.
Conclusions
The 3-E analysis of the developed natural gas fired multi-generation (NGFMG) plant leads to the following conclusions:
- The plant can deliver a combined power output of about 50 MW along with about 518 kg/s of hot water in the base case. The electrical efficiency is about 33% and the FESR is about 41%, with a specific CO2 emission of about 600 kg/MWeh at these thermodynamic state points.
- The overall electrical efficiency and FESR increase continuously, while the ST output, NG consumption, CO2 emission and steam generation decrease continuously, with increase in r_p. However, the efficiency tends to become linear with further increase in r_p beyond the considered range. A higher TIT also yields better thermodynamic performance.
- The exergy analysis shows that the maximum destruction occurs in the combustion chamber. However, the destruction rate of this component decreases with increase in r_p as well as TIT. Furthermore, the isentropic efficiency of the compressor is also found to be a parameter influencing plant performance.
- An increase in the isentropic efficiency of the GT yields better energetic, exergetic and economic performance from the plant. A further economic analysis of the plant can be carried out to cross-check the economic viability of the modelled plant.
|
v3-fos-license
|
2022-12-01T15:46:46.458Z
|
2018-05-31T00:00:00.000
|
254113026
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-018-5925-7.pdf",
"pdf_hash": "99cd43ce17f955709fd4ba547210abfceb52e6bf",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43114",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "99cd43ce17f955709fd4ba547210abfceb52e6bf",
"year": 2018
}
|
pes2o/s2orc
|
Sensitivity of a low threshold directional detector to CNO-cycle solar neutrinos
A first measurement of neutrinos from the CNO fusion cycle in the Sun would allow a resolution to the current solar metallicity problem. Detection of these low-energy neutrinos requires a low-threshold detector, while discrimination from radioactive backgrounds in the region of interest is significantly enhanced via directional sensitivity. This combination can be achieved in a water-based liquid scintillator target, which offers enhanced energy resolution beyond a standard water Cherenkov detector. We study the sensitivity of such a detector to CNO neutrinos under various detector and background scenarios, and draw conclusions about the requirements for such a detector to successfully measure the CNO neutrino flux. A detector designed to measure CNO neutrinos could also achieve a few-percent measurement of pep neutrinos.
Solar neutrinos
Solar neutrino experiments were pivotal in the groundbreaking discovery of neutrino oscillation and, hence, massive neutrinos, while at the same time confirming our understanding of fusion processes in the Sun. The Sudbury Neutrino Observatory (SNO) experiment resolved the so-called Solar Neutrino Problem by detecting the Sun's "missing" neutrinos, confirming the theory of neutrino flavor change. The combination of a charged-current (CC) measurement from SNO with SuperKamiokande's high-precision elastic scattering (ES) measurement demonstrated that the electron neutrinos produced in the Sun were transitioning to other flavors prior to detection [1], a result later confirmed at 5σ by SNO's measurement of the flavor-independent 8 B flux using the neutral current (NC) interaction [2]. The KamLAND reactor experiment confirmed this flavour change as being due to oscillation [3]. This opened the door to a precision regime, allowing neutrinos to be used to probe the structure of the Sun, as well as the Earth and far-distant stars. Solar neutrinos remain the only sector of neutrinos with a confirmed observation of the effect of matter on neutrino oscillation at high significance, providing a unique opportunity to further probe this interaction to search for non-standard interactions and other effects.
The Borexino experiment made the first direct measurements of the 7 Be, pp and pep fluxes [4][5][6], as a result of which the pp fusion cycle in our Sun has been well studied, with measurements of all neutrino sources bar the high-energy hep neutrinos. The subdominant CNO cycle is less well understood and yet has the potential to shed light on remaining mysteries within the Sun. One of the critical factors that engendered confidence in the Standard Solar Model [7] (SSM) was the excellent agreement (∼ 0.1%) of SSM predictions for the speed of sound with helioseismological measurements. The speed of sound predicted by the SSM is highly dependent on solar dynamics and opacity, which are affected by the Sun's composition [8]. In recent years the theoretical prediction for the abundance of metals (elements heavier than H or He) in the photosphere has fallen due to improvements in the modeling of the solar atmosphere, including replacing previous one-dimensional models with fully three-dimensional modeling, and inclusion of effects such as stratification and inhomogeneities [9]. The new results are more consistent with neighboring stars of similar type, and yield improved agreement with absorption-line shapes [10], but at the same time reduce the prediction for the metal abundance by ∼30%. When these new values are input to the SSM, the result is a discrepancy in the speed of sound with helioseismological observations. This new disagreement has become known as the "Solar Metallicity Problem" [11,12]. A measurement of the CNO neutrino flux may help in resolving this problem [13,14]. The impact of metallicity on pp-chain neutrinos is small relative to theoretical uncertainties, but the neutrino flux from the sub-dominant CNO cycle depends linearly on the metallicity of the solar core, and the predictions for the two models differ by greater than 30% [15]. The theoretical uncertainty on these predictions is roughly 14-18% in the so-called AGS05-Opt model, although greater in other models [15]. However, these uncertainties can be reduced to < 10% using correlations in the theoretical uncertainties between the CNO and 8 B neutrino fluxes: the two have similar dependence on environmental factors, thus a precision measurement of the 8 B neutrino flux can be used to "calibrate" the core temperature of the Sun and, thus, constrain the CNO neutrino flux prediction [13]. In [13], the final uncertainty is dominated by the nuclear physics. A precision measurement of the CNO flux then has the potential to resolve the current uncertainty in heavy element abundance.
Borexino has placed the most stringent limits on the CNO neutrino flux to date [6], and continues to pursue a first observation. However, extraction of this flux is extremely challenging due to the similarity of the spectrum of ES recoil electrons with background 210 Bi decays in the target. Borexino propose to use the time evolution of the α decay of the daughter, 210 Po, to constrain the level of 210 Bi. This method requires both a stable α-particle detection efficiency and a lack of external sources of the 210 Po daughter, which can be challenging to achieve [16]. A recent paper discusses the sensitivity of several current and future experiments to the CNO flux [17]. A detector with directional sensitivity could discriminate between the directional solar neutrino signal and the isotropic background, without the need for a time-series analysis.
Low-threshold directional detection
Water Cherenkov detectors (WCD) are limited in energy threshold by the relatively low Cherenkov photon yield. Scintillator-based detectors can achieve the thresholds required to observe CNO neutrinos, but lose the advantage of the directional information provided by Cherenkov light. The novel water-based liquid scintillator (WbLS) target medium [18] offers the potential to benefit from both the abundant scintillation and directional Cherenkov signals, thus achieving a massive, low-threshold directional detector.
The sensitivity of a 50 kT pure LS detector has been studied by the LENA collaboration [19]. While the threshold of a WbLS detector will not be as low as for a pure LS target (LENA studies assumed a 250 keV threshold), the additional information provided by the Cherenkov component provides a strong benefit in signal/background separation. WbLS offers a uniquely broad, multi-parameter phase space that can be optimized to maximize sensitivity to a particular physics goal. The WbLS "cocktail" can range from a high-LS fraction, oil-like mixture, with > 90% LS, to a water-like mixture with anything from 10% to sub-percent levels of LS. The choice of fluor affects the scintillation yield, timing, and emission spectrum, and metallic isotopes can be deployed to provide additional targets for neutrino interaction [20].
In this article we study the sensitivity of a large WbLS detector to CNO neutrinos under a range of detector scenarios, including target size, LS fraction, photo-coverage, angular resolution, and the level of intrinsic radioactive contaminants. These studies can inform the design of a future experiment targeting a CNO flux observation, such as Theia [21,22].
There is much interest in the community in developing low-threshold directional detectors. Monte Carlo studies in [23,24] discuss how the potential for separation of a Cherenkov signal in a scintillating target could be used to extract particle direction. The CHESS experiment has recently demonstrated first detection of a Cherenkov signal in pure LS (both LAB and LAB/PPO) [25,26]. While high-energy muons were used for this demonstration, they were in the MIP regime and thus the energy deposited along the few-cm track in the CHESS target was only a few MeV, within the regime relevant for this work. Studies based on data from the KamLAND detector show the potential for directional reconstruction using time-of-flight of the isotropic scintillation light [27,28].
In Sect. 2 we describe the analysis method for evaluating the uncertainty on the CNO and pep neutrino fluxes, and the simulation of each signal and background source. In Sect. 3 we describe various scenarios for both detector configuration and background assumptions under which the CNO flux is evaluated. Section 4 presents the results, and Sect. 5 describes the conclusions.
Analysis methods
Neutrino flux sensitivities are determined using a binned maximum likelihood fit over two-dimensional PDFs in energy and direction relative to the Sun, cos θ . The energy dimension allows separation of neutrino fluxes from each other, as well as discrimination from certain background events. The direction dimension is critical for a full separation of the CNO flux from radioactive background.
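As a rough illustration of this fit machinery, the sketch below builds toy two-dimensional PDFs in energy and cos θ, draws a fake Poisson dataset, and minimizes a binned Poisson negative log-likelihood over the signal normalizations. The PDF shapes, binning, and event counts are placeholders and are not those of the actual analysis.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of a binned maximum-likelihood fit over 2D (energy, cos theta) PDFs.
# PDF shapes and normalizations are placeholders for illustration only.

rng = np.random.default_rng(0)
n_e, n_cos = 20, 40                               # energy and cos(theta) bins

def make_pdf(shape):
    pdf = np.abs(shape) + 1e-12
    return pdf / pdf.sum()                        # normalize to unit integral

# Toy PDFs: an isotropic "background" and a solar-like "signal" peaked at cos ~ 1
e_axis = np.linspace(0.6, 6.5, n_e)[:, None]
cos_axis = np.linspace(-1.0, 1.0, n_cos)[None, :]
pdf_bkg = make_pdf(np.exp(-e_axis) * np.ones_like(cos_axis))
pdf_sig = make_pdf(np.exp(-e_axis) * np.exp((cos_axis - 1.0) / 0.2))

true_norms = np.array([5.0e4, 2.0e3])             # background, signal events
expected = true_norms[0] * pdf_bkg + true_norms[1] * pdf_sig
data = rng.poisson(expected)                      # one fake dataset

def nll(norms):
    mu = norms[0] * pdf_bkg + norms[1] * pdf_sig
    mu = np.clip(mu, 1e-12, None)                 # guard against non-physical values
    return np.sum(mu - data * np.log(mu))         # Poisson NLL up to a constant

fit = minimize(nll, x0=[4.0e4, 1.0e3], method="Nelder-Mead")
print("fitted normalizations:", fit.x)
```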
Simulation of expected signals
Simulations for this paper were produced using the RAT-PAC software (https://github.com/rat-pac/rat-pac), which is based on Geant4. Optical properties for the WbLS were constructed from weighted combinations of water and scintillator optics as determined by the SNO+ collaboration. The optical simulation was tuned to allow multicomponent absorption and reemission with separate absorption lengths and reemission probabilities for each component of the WbLS cocktail. Radioactive decays were simulated using the decaychain generator developed by Joe Formaggio and Jason Detwiler (Private communication), and solar neutrino interactions were simulated using an elastic scattering generator also developed by Joe Formaggio. Solar signals were simulated assuming fluxes from the BS05OP solar model [7], which assumes the higher solar metallicity, and using LMA-MSW survival probabilities from the three flavor best fit oscillation values from [29]. The reconstructed effective electron energy spectrum for each signal was determined semi-analytically. First, the distribution of the number of PMT hits (NHit) per event for each signal was found, handling the scintillation and Cherenkov components separately. Given a WbLS cocktail and detector configuration, the scintillation contribution to the NHit for an event at a specific position scales linearly with the quenched energy deposition. This scaling was determined by simulating electrons at each position with the Cherenkov light production disabled. The Cherenkov contribution to the NHit was determined by simulating each signal with the scintillation light yield set to zero, but with absorption and reemission of the Cherenkov light in the scintillator and wavelength shifter enabled. The expected NHit for each event was taken to be the sum of the Cherenkov NHit plus a number of scintillation hits drawn from a Poisson distribution with a mean given by the scintillator energy deposition in that event times the scaling factor for the relevant event position. This method made efficient use of computing resources to simulate a full set of background event types. The result of this procedure was compared with the NHit distribution from a full simulation for 1 MeV electrons, and both the mean and width were found to agree to within 0.5%. The conversion from NHit to reconstructed effective electron energy was determined using a position- and direction-dependent lookup table. The lookup table was generated by simulating electrons at various positions, directions, and energies using the above procedure. The resultant energy PDFs for 5% scintillator are shown in Fig. 2.
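A minimal sketch of this semi-analytic NHit construction is shown below, assuming a hypothetical position-dependent scintillation scaling and a precomputed Cherenkov hit count; it only illustrates the Poisson treatment of the scintillation component described above, with invented input values.

```python
import numpy as np

# Sketch of the semi-analytic NHit construction described above.
# The scaling factor and input values are hypothetical illustrative numbers.

rng = np.random.default_rng(1)

def expected_nhit(cherenkov_nhit, quenched_energy_mev, scint_hits_per_mev):
    """Cherenkov hits plus Poisson-fluctuated scintillation hits.

    cherenkov_nhit      : hits from the Cherenkov-only simulation for this event
    quenched_energy_mev : quenched energy deposit of the event
    scint_hits_per_mev  : position-dependent scaling taken from the
                          scintillation-only electron simulations
    """
    scint_mean = scint_hits_per_mev * quenched_energy_mev
    return cherenkov_nhit + rng.poisson(scint_mean)

# Example: a 1 MeV electron with ~20 Cherenkov hits and 75 scintillation hits/MeV
sample = [expected_nhit(20, 1.0, 75.0) for _ in range(10000)]
print(np.mean(sample), np.std(sample))
```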
For each signal the cos θ distribution was determined fully analytically. All non-neutrino signals were assumed to be flat. For the solar signals the ES generator determined the electron direction relative to the Sun based on the differential cross sections. This was then convolved with a chosen angular resolution (Sect. 3).
Baseline fit configuration
The normalizations for 11 signals are floated in the fit. 238 U and 232 Th chain backgrounds are assumed to be in equilibrium except for 210 Bi, 210 Po, and 210 Pb, and the backgrounds in each chain are floated together as a single parameter. The various flavor components of the 8 B, 7 Be, pep, and CNO signals are combined into one parameter per flux, assuming survival probabilities from [29]. The CNO signal contains the sum of the 17 F, 15 O, and 13 N solar neutrino signals. The pp solar signal is not included as it falls below the energy threshold, and the hep solar signal is fixed in magnitude as it is too small to be reconstructed. 210 Bi, 40 K, 85 Kr, and cosmogenically activated 11 C are each included as a separate parameter. The 39 Ar and 210 Po backgrounds are floated together as their energy spectra above 600 keV are similar.
The baseline fit uses 40 bins in cos θ and 20 keV bins in energy from 600 keV to 6.5 MeV. A 5 year livetime is assumed. (Figure caption: projection of the two-dimensional fit in cos θ for energies between 1 and 1.5 MeV for a randomly generated fake dataset with the baseline detector configuration and background assumptions.) A 50% fiducial volume cut is applied in order to reduce the impact of background contaminants in external regions of the detector, such as γ s from 208 Tl in the PMTs, to negligible.
Full bias and pull studies were performed, and the fit was observed to be unbiased, with the expected pull distribution. Figure 3 shows the pull distribution for fits with the baseline configuration. Figure 4 shows the projection of one two-dimensional fit in the cos θ dimension. This figure illustrates the importance of angular resolution in extracting the solar neutrino signal even with backgrounds many orders of magnitude larger, using the high statistics achievable in a large detector.
Detector configuration and background assumptions
The sensitivity to CNO neutrinos is studied under a range of detector scenarios and background assumptions. For the purposes of comparison we define the baseline detector configuration to be a 50-kT detector with 90% PMT coverage, 5% WbLS, and 25° angular resolution, with baseline background levels as given in Table 2 in Sect. 3. All results assume a five year livetime.
Target volume We consider both a 25 and 50 kT total detector volume, corresponding to a 31.7- or 40-m sized right-cylindrical vessel. The PMTs are positioned at the edge of this volume, and a fiducial volume is selected for analysis (Sect. 2). The 50% fiducial volume corresponds to a 4.15-m buffer between the PMTs and the target volume for the 50 kT detector, and a 3.27-m buffer for the 25 kT detector.
Increasing the target volume scales the exposure accordingly, but at the same time reduces the overall light collection of the detector due to absorption, thus impacting the achievable energy resolution. This additional absorption would also negatively impact the angular resolution. Since this work does not perform a full directional reconstruction, this correlation is not explicitly included. However, a range of possible angular resolutions are considered for each target volume.
Angular resolution A critical factor in this work is the assumed angular resolution. The angle between the incoming particle direction and the direction to the Sun, cos θ , can be used to differentiate signal from background. Due to the kinematics of the ES interaction, solar neutrinos are predominantly directed away from the Sun. This provides a key handle to discriminate solar neutrino events from an isotropic radioactive background.
All radioactive backgrounds are assumed to be isotropically distributed and thus have a flat distribution in cos θ. For solar neutrinos the direction of the electron relative to the Sun was determined as a function of energy via simulations that take as input the full differential cross sections. The resulting electron direction was then convolved with a detector angular resolution function with a width given by σ. The angular resolution was assumed to be constant with energy, with the value defined at threshold. Any improvement in sensitivity as the angular resolution improves at energies significantly above the energy threshold was observed to be a second order effect. This work does not attempt a full reconstruction of event direction. Instead, we consider a range of possible angular resolutions in order to determine the impact on the final neutrino flux sensitivity. In the best case, we consider a resolution of 25°, similar to that achieved by SNO. In SNO, a 1-kT heavy-water detector with 55% PMT coverage, an angular resolution of 26.7° was achieved for 16 N events (approximately 5 MeV), which had an average of 36 PMT hits [30]. In Super Kamiokande, a 50-kT light-water detector with 40% coverage, an angular resolution of approximately 35° was achieved at 6 MeV, with 41 PMT hits [31]. Table 1 shows the expected number of hits from Cherenkov photons in the proposed 0.5% WbLS detector for a range of energy thresholds. As shown, by 1 MeV we expect as many hits from Cherenkov photons as in SNO and Super Kamiokande at 5 or 6 MeV. While many of these photons may be scattered, or absorbed and reemitted by the scintillator, absorption lengths in WbLS are significantly longer than in a pure LS detector. Additionally, some directional information is retained by considering the offset of the reemitted photon from the original production point. The proposed detector has several further advantages that could allow for greatly improved angular resolution at lower energies. Increased coverage and use of high quantum efficiency PMTs allows detection of many more direct Cherenkov photons than in SNO and Super Kamiokande at equivalent energies. The inclusion of wavelength shifter in the WbLS absorbs and reemits Cherenkov photons that would otherwise be at too small a wavelength to be detected by the PMTs and, depending on the absorption length, may also retain some directional information. As demonstrated in [27,28], even a pure LS detector can provide some directional information by considering photon time of flight. With these advantages, a resolution of 25° may be achievable.
As the detector size increases, coverage is reduced, or the LS fraction is increased, the resolution will naturally be degraded by increased scattering and absorption, and reduced light collection. We find that a large fraction of wavelength shifted photons are absorbed within a short distance, and so it is possible a lower concentration of PPO would be desirable to better retain directional information. To estimate the impact of these effects we consider degraded resolutions of 35°, 45°, and 55° for each detector scenario.
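For illustration only, the sketch below smears a toy set of forward-peaked electron directions with a simple Gaussian spread in the scattering angle. Both the Gaussian form and the one-dimensional treatment of the smearing are assumptions made for this example; they are not the resolution function used in the analysis.

```python
import numpy as np

# Illustrative smearing of ES electron directions with a detector angular
# resolution.  A Gaussian spread in the scattering angle, applied in 1-D, is
# assumed here purely for demonstration (the real smearing is over solid angle).

rng = np.random.default_rng(2)

def smear_cos_theta(cos_theta_true, sigma_deg):
    """Perturb each true direction by a Gaussian-distributed angle of width sigma."""
    theta_true = np.arccos(np.clip(cos_theta_true, -1.0, 1.0))
    delta = np.radians(sigma_deg) * rng.standard_normal(len(theta_true))
    return np.cos(theta_true + delta)

# Toy "true" directions strongly peaked away from the Sun (cos theta ~ 1)
cos_true = 1.0 - rng.exponential(0.05, size=100000)
cos_true = cos_true[cos_true > -1.0]

for sigma in (25, 35, 45, 55):
    cos_smeared = smear_cos_theta(cos_true, sigma)
    frac_forward = np.mean(cos_smeared > 0.8)
    print(f"sigma = {sigma} deg: fraction with cos(theta) > 0.8 = {frac_forward:.2f}")
```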
WbLS cocktail
The scintillator component of the WbLS cocktail is taken to be LAB with 2 g/L of PPO as a fluor (hereafter referred to as LAB/PPO). LAB/PPO properties have been determined by the SNO+ collaboration [32]. We consider fractions of LS from 0.5 to 5%, as well as a pure LS target. The Cherenkov light yield is unaffected at first order by this change to the target cocktail, although there is a non-zero impact through absorption and reemission in the LS, which is fully modeled in the simulation. While this change can therefore be expected to affect the angular resolution, this effect is not included in the PDFs since this work does not perform a full directional reconstruction. Instead, we consider the impact of a range of angular resolutions for each target cocktail.
Photocathode coverage We assume instrumentation with Hamamatsu R11780 12 inch high quantum efficiency (HQE) PMTs, which have a peak efficiency of 32% at 390 nm [33]. We study photocathode coverages of up to 90%, which would require approximately 100k PMTs for the 50 kT detector. Changing the photocathode coverage effectively scales the overall light collection. Figure 6 shows the PMT QE, and the emission spectra for both Cherenkov and scintillation light.
Energy scale and resolution
The energy resolution is determined by the overall light yield, which is dependent on the WbLS cocktail and the photocathode coverage, the impact of each of which is studied separately (Sect. 3). Here, we consider the impact of systematic uncertainties in the energy scale and resolution on the analysis. We investigate the effect of uncertainties in energy by modifying the PDFs relative to the spectra from which fake datasets are drawn. For the energy scale, a linear shift in energy is applied to the energy spectra. For the resolution a Gaussian smearing is applied to the energy spectra, with the width of the Gaussian set to a constant value.
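A minimal sketch of these two systematic variations, applied to a toy binned spectrum, is given below; the spectrum, binning, and shift/smearing values are placeholders chosen only to mirror the procedure described above, not the values used in the study.

```python
import numpy as np

# Sketch of the two energy systematics described above, applied to a binned
# spectrum: a linear energy-scale shift and a constant-width Gaussian smearing.
# The spectrum, binning and parameter values are placeholders.

edges = np.arange(0.6, 6.5 + 0.02, 0.02)           # 20 keV bins, 0.6-6.5 MeV
centers = 0.5 * (edges[:-1] + edges[1:])
spectrum = np.exp(-centers)                         # toy spectrum

def shift_scale(spectrum, centers, scale):
    """Linear energy-scale shift: re-evaluate the spectrum at E / scale."""
    return np.interp(centers / scale, centers, spectrum, left=0.0, right=0.0)

def smear(spectrum, centers, sigma_mev):
    """Convolve the binned spectrum with a fixed-width Gaussian kernel."""
    diff = centers[:, None] - centers[None, :]
    kernel = np.exp(-0.5 * (diff / sigma_mev) ** 2)
    kernel /= kernel.sum(axis=0, keepdims=True)     # conserve total counts
    return kernel @ spectrum

shifted = shift_scale(spectrum, centers, scale=1.0001)   # 0.01% scale shift
smeared = smear(spectrum, centers, sigma_mev=0.0055)     # 5.5 keV smearing
print(shifted[:3], smeared[:3])
```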
Energy threshold The ability to reach a lower energy threshold than a water detector is one of the main advantages of a WbLS detector besides increased energy resolution. We consider energy thresholds from 600 keV up to 1 MeV. The possibility of reaching thresholds this low depends on the background rate, the PMT dark noise rate, the trigger setup, and the sustainable data rate. At 0.5% scintillator, we expect a total of 19.3 PMT hits at 0.6 MeV in a 50 kT detector, and by 5% scintillator we expect 93.0, compared to 5.4 PMT hits in a water only detector. Assuming a trigger window of 200 ns as used in Super Kamiokande, with ∼ 94,000 PMTs, we expect 18.8 noise hits per trigger window per kHz PMT dark rate, and so at 0.5% scintillator a dark rate lower than 1 kHz would be required. In a 25 kT detector we expect 21.1 PMT hits at 0.6 MeV, and 11.7 noise hits per trigger window per kHz PMT dark rate.
Background assumptions Radioactive contaminants in the target material are calculated as the sum of contamination from the LS and the water components, weighted by the corresponding mass fractions of the WbLS cocktail. Table 2 details the numbers assumed in the baseline analysis, taken from measurements by SNO and Borexino [34][35][36][37]. (Table 2 notes: the 40 K level in water is taken to be 0.1× the Borexino measurement [34]; the 85 Kr, 39 Ar, and 210 Bi levels in water are taken to be the Borexino measured level in scintillator [36], although levels increased by several orders of magnitude are explored.) In the analysis we consider a range of levels for the intrinsic contamination in the WbLS target, as well as the degree of α-β separation and Bi-Po pile-up rejection achievable.
- U- and Th-chain: The LS components of uranium- and thorium-chain background levels are assumed to be at Borexino levels and the water components are assumed to be at the level of the heavy water in SNO [35,37]. The baseline alpha rejection is assumed to be 95%. For 212 Bi-212 Po and 214 Bi-214 Po events, an event window of 400 ns is assumed, with a 95% rejection for in-window coincident events and 100% rejection for tagging out of window events. While this level of discrimination has not yet been demonstrated in a WbLS target, this is a future goal of the CHESS experiment [25]. The discrimination achieved will depend on both the α quenching and overall light yield of the target, as well as specific timing properties. These microphysical properties must be fully understood in order to quantify the level of α and Bi-Po coincidence rejection that can be achieved. The impact of the efficiency of both α rejection and Bi-Po tagging is studied.
- 40 K: The level of 40 K in LS is a conservative estimate from the upper limit of Borexino's initial measurement. The level in water is taken from an upper limit measured in the Borexino Counting Test Facility (CTF) [34]. SNO measured a level of 2e−9 gK/gH 2 O in the light water, although this background was below threshold in SNO and thus little effort was made to reduce it [38]. These measurements are therefore taken as conservative upper bounds on the level. We use 0.1× the Borexino level in water as the baseline level for this study, and investigate the impact of a contamination an order of magnitude higher.
- 210 Bi: 210 Pb, 210 Bi, and 210 Po cannot be assumed to be in equilibrium with the rest of the uranium chain due to the long half-life of 210 Pb and the possibility of Rn contamination. The baseline level of 210 Bi in LS is taken from Borexino. The level of out-of-equilibrium background achievable in ultra-pure water has not been measured to the precision needed for this kind of experiment. The uranium-chain contribution from the water component is orders of magnitude larger than the 210 Bi level measured in Borexino, thus the contribution from any out-of-equilibrium component in water must be many orders of magnitude larger than in LS in order to impact the sensitivity. Values of 10×, 100× and 1000× the Borexino-measured value in scintillator are explored for the contamination in water.
- 11 C: A sufficient overburden is required to shield from cosmogenic backgrounds. The 11 C level in Borexino, with 3800 m.w.e. overburden, is approximately 1e5 events per kiloton year. This is used as a conservative initial estimate for the rate, adjusted for the carbon content of the different target materials. Rates of an order of magnitude higher and lower are studied, to simulate the effect of different possible detector sites. Production of cosmogenic backgrounds on the water component of the target was considered according to [39], which provides a complete list of potential spallation products. The dominant sources in the energy range considered in this work are 11 C and 15 O. Inclusion of these backgrounds was observed to change the CNO sensitivity by 0.03% for the baseline configuration, and the pep sensitivity by an unobservable amount. Thus, these backgrounds were omitted for the remainder of this work.
- Externals: Background contributions from radioactive contamination external to the target region (for example 208 Tl γ s from the PMTs and any support structures) are assumed to be negligible inside a chosen 50% fiducial volume.
In future studies, vertex reconstruction could be used to constrain such sources and thus potentially expand the fiducial volume.
Results
The fit uncertainty for each signal with the baseline detector configuration and background assumptions is shown in Table 3.
Detector size, target, and angular resolution The CNO solar neutrino sensitivity as a function of detector size, LS fraction, and angular resolution for the baseline background assumptions is shown in Table 4 and Fig. 7. As a comparison we look at simulated spectra from a 50 kT pure LS detector and study the results of a one dimensional fit in energy (under the assumption that there would be no directional resolution in pure LS). Here we find a CNO sensitivity of 3.5%, but pull distributions show that the fit is not able to converge on the full errors in the CNO and 210 Bi signals (Fig. 8). The correlation between the two values in the fit is −0.84. This suggests that energy alone is not sufficient to distinguish these signals even at very high statistics.
PMT coverage The above results show that the impact of the improved energy resolution due to a larger scintillator fraction (0.5-5%) is marginal. However, changes in PMT coverage can have a potentially greater effect. At low energies a significant fraction of the resolution comes from Cherenkov photons, which does not scale with scintillator fraction. Reducing the PMT coverage will reduce the angular resolution as well as the overall light collection. The fit uncertainty for CNO for 5 years of data using a detector with 60% PMT coverage instead of 90% is shown in Table 5.
We can see, as predicted, that at low scintillator fractions the change in PMT coverage has a larger effect than an equivalent fractional change in scintillator fraction. At higher scintillator fractions the effect is smaller, which suggests that the main consideration for the PMT coverage requirement will be the achievable angular resolution.
Energy scale and resolution The impact of systematic uncertainties in the energy scale and resolution is shown in Figs. 9 and 10. For the 50 kT detector at 5% WbLS and 25° angular resolution the energy scale uncertainty can be constrained to less than 0.006%, where the change in the fitted CNO normalization is 1.7% (as shown in Fig. 9).
By smearing the PDFs with a Gaussian of pre-defined width, we investigate the impact of an uncertainty on the energy resolution. A scan of the likelihood space for the 50 kT detector at 5% WbLS and 25° angular resolution demonstrates the capability to constrain this uncertainty with the data itself down to 5.5 keV, at which point the systematic change in the fitted CNO normalization is 3.6%, as shown in Fig. 10.
Energy threshold
The CNO solar neutrino sensitivity as a function of the energy threshold for the likelihood fit is shown in Table 6. These results show that a precise measurement of the CNO flux requires maintaining sensitivity below 1 MeV. It can be seen in Fig. 2 that this energy region is where the CNO and pep solar neutrino signals can be distinguished from each other, and at higher energy thresholds these signals become highly correlated.
Background assumptions As seen in Fig. 2, 40 K is a dominant background at low energies, even assuming an order of magnitude improvement over the level measured in water by Borexino and SNO. This is due to the much higher contamination in the water component of the WbLS compared to the relatively cleaner scintillator. The 40 K background in water was not critical for these previous measurements, and so it may be possible to further reduce the level with additional effort. The SNO water processing plant could be improved by increasing the frequency of replacing ion exchange columns or by distilling the water. Table 7 shows that at the Borexino measured level this background greatly decreases the sensitivity in all configurations compared to the baseline background assumptions, which include the order-of-magnitude reduction. Table 8 shows the sensitivity to each floated signal in the fit for the baseline detector configuration.
Due to the small fraction of scintillator in the WbLS, the relative 11 C background is already reduced by an order of magnitude or more compared to pure scintillator experiments. Combined with directional reconstruction, the solar sensitivity becomes mostly insensitive to the 11 C event rate at around the Borexino level, as shown in Table 9. This suggests that the overburden is not critical for a solar measurement with an experiment such as Theia. (Table 9 caption: fit uncertainty on the CNO neutrino flux for 5 years of data with the baseline detector configuration as a function of isotope contamination level relative to the baseline background level. For 210 Bi this is the level of the out-of-equilibrium component in water relative to the out-of-equilibrium level in scintillator; for 85 Kr and 39 Ar this is the level of contamination in water relative to baseline.) (Table fragment, α/Bi-Po rejection scenarios, fit uncertainty in percent: no BiPo in-window rejection, 5.4; no α, no BiPo in-window rejection, 5.4; no α, no BiPo rejection, 6.4.) 210 Pb, 210 Bi, and 210 Po are not necessarily in equilibrium with the rest of the uranium chain due to the long half-life of 210 Pb and the possibility of Rn contamination. The level of out-of-equilibrium background achievable in ultra-pure water has not been measured to the precision needed for this kind of experiment. The uranium chain contribution from the water component is already orders of magnitude larger than the non-equilibrium level measured in Borexino, so the non-equilibrium component in water must be many orders of magnitude larger than in scintillator as well to have any effect on the sensitivity. Table 9 shows that there begins to be a moderate impact when the out-of-equilibrium level in water is 1000× the out-of-equilibrium level in scintillator. 85 Kr and 39 Ar are also found to have little impact on the sensitivity at up to 10,000× the expected level in scintillator, as shown in Table 9.
We find that alpha and Bi-Po rejection is not critical for this measurement. The largest effect comes from the ability to reject Bi-Po coincidences that occur in separate trigger windows, but there is little impact even assuming zero rejection, as shown in Table 10.
Conclusions
The feasibility of a low energy solar neutrino measurement with a large WbLS detector depends on the backgrounds and angular resolution achievable. At currently measured levels, the limiting background appears to be 40 K in water. As this has not been a critical background in previous measurements, it is possible that lower levels may be achievable with more effort. The remaining unknown backgrounds in water have been shown to have little effect if they are kept to 1000-10,000 times the level achieved in scintillator. The 11 C background is shown to be unimportant, which suggests that the overburden does not matter for this measurement. Finally, changes in scintillator and PMT coverage have been shown to have relatively small effects, which suggests that even at a 0.5% scintillator fraction, energy resolution is no longer the critical parameter to optimize. Instead, the impact of these changes on threshold, angular resolution, and energy systematic uncertainties will be the deciding factor. With a baseline detector of 50-kT total volume (50% fiducial), 90% PMT coverage and a 5% WbLS target, assuming a 25° angular resolution, a precision of several percent is possible for both CNO and pep neutrino fluxes.
Further studies hinge on additional R&D, including the Cherenkov detection efficiency enhancement provided by deployment of fast photon sensors [40][41][42][43], and demonstration of quenching and particle ID capabilities in WbLS. Development of a directional reconstruction algorithm would allow a direct demonstration of the required angular resolution discussed in this article. An exciting avenue for further exploration would be isotope loading of the WbLS target. Loading the WbLS target with an isotope such as 7 Li for CC detection would provide an additional handle for signal/background separation via improved spectral information. Neutrinos interact in a pure (Wb)LS detector via ES. While the ES differential cross section is almost maximally broad, thus providing little handle on the underlying neutrino spectrum, that for the CC interaction on 7 Li is extremely sharply peaked, resulting in the potential for high-precision measurement of the underlying neutrino energy spectrum. 7 Li was proposed as an additive to a WCD in [44] for this reason; addition of this isotope to a WbLS detector would further improve the discrimination power. Initial studies are presented in [22]. A more quantitative study could yield significant improvements to the results presented here.
|
v3-fos-license
|
2018-08-14T12:22:47.716Z
|
2018-07-03T00:00:00.000
|
51879146
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/bca/2018/3463724.pdf",
"pdf_hash": "08259e90915f6ac85a4e443c50a4d85d9dbae568",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43115",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "3937cd879f7784c2d073b682d9aa772baf70e3d0",
"year": 2018
}
|
pes2o/s2orc
|
Study of Isothermal, Kinetic, and Thermodynamic Parameters for Adsorption of Cadmium: An Overview of Linear and Nonlinear Approach and Error Analysis
Reports about presence and toxicity of Cd2+ in different chemical industrial effluents prompted the researchers to explore some economical, rapid, sensitive, and accurate methods for its determination and removal from aqueous systems. In continuation of series of investigations, adsorption of Cd2+ onto the stem of Saccharum arundinaceum is proposed in the present work. Optimization of parameters affecting sorption potential of Cd2+ including pH, contact time, temperature, sorbent dose, and concentration of sorbate was carried out to determine best suited conditions for maximum removal of sorbate. To understand the nature of sorption process, linear and nonlinear forms of five sorption isotherms including Freundlich and Langmuir models were employed. Feasibility and viability of sorption process were evaluated by calculating kinetics and thermodynamics of the process, while error analysis suggested best fitted sorption model on sorption data. Thermodynamic studies demonstrated exothermic nature of reaction, while kinetic studies suggested pseudo-second order of reaction.
Introduction
Environmental pollution should be taken into special consideration because it is a very serious matter affecting every type of organism at every level. The most adversely affected environmental resource is water [1]. As water is an essential element for the survival of living beings, it is very necessary to keep it pure and clean [2]. The quality of drinking water is of prime importance for mankind because waterborne diseases can decimate the population of a whole area. These diseases arise due to the toxic release of chemicals from industrial zones [3]. Particularly in industrial areas, these waterborne diseases are a great threat to the safety of water supplies. Other sources which may pollute water include domestic waste, pesticide runoff from agricultural land, metal plating operations, and so on. Key contaminants present in water include heavy metals, chlorinated hydrocarbons, pathogens, detergents, pesticides, algal nutrients, trace organic compounds, dyes, and so on. These hazardous substances are of concern because of their ultimate effect on the survival of human life [4]. Heavy metals like Cd, Zn, Ni, and Pb are present in relatively large amounts in industrial effluents, enter rivers and oceans, and ultimately pollute groundwater, leading to adverse effects on aquatic life. Metals resist biodegradation and hence remain in the ecosystem, affecting the food chain and human health [5]. In order to provide a clean environment and a healthy lifestyle to our coming generations, it is necessary to remove hazardous pollutants from the environment. In environmental restoration, conventional techniques are practiced to eradicate those pollutants from the environment, which include chemical precipitation, evaporative methods, electrolytic extraction, reverse osmosis, ion exchange, and electrochemical and membrane processes [6]. All these methods are costly and produce a large amount of sludge which is difficult to dispose of. The use of biological materials for the removal of pollutants from aqueous media is considered superior to other methods in terms of cost effectiveness and simple design. It is a surface phenomenon, in which pollutants accumulate on the surface of the adsorbent material. The binding nature depends on the type of sorbent and sorbate, but mostly physisorption or chemisorption takes place [7]. Materials with ready availability and low cost are preferred for the purpose. In this context, agrowastes are considered a significant material for adsorption. The binding capacity of these materials can be intensified by physical and chemical treatments and heat treatment [8].
To explore the appropriate adsorbent, it is necessary to establish an equilibrium correlation for the sorbent to predict its behavior under different experimental conditions. This equilibrium correlation is developed by using equilibrium isotherms.
These isotherms express the way the sorbate interacts with the surface of the adsorbent, that is, whether the sorption is monolayer or multilayer [9]. Similarly, thermodynamic studies are of prime importance to predict whether the adsorption is spontaneous or not. Furthermore, they provide information about the suitable temperature range for sorption and the nature of sorbent and sorbate at equilibrium [10]. The aim of the present research was to explore Saccharum arundinaceum for the adsorption of cadmium under different operating conditions including pH, contact time, initial concentration, and temperature. Linear and nonlinear forms of equilibrium isotherms were applied to determine the appropriate isotherm for the purpose, and thermodynamic and kinetic studies were performed to establish the nature of the adsorption process. Error analysis based on five different error functions was also performed.
Preparation of Adsorbent.
On the basis of a literature survey and the indigenous availability of agrowaste materials, the stem of Saccharum arundinaceum (hardy sugar cane) was collected from different regions of Sargodha District, Pakistan. After collection, the sample was properly washed with deionized water to remove dust and surface impurities. The sorbent was initially dried in an open container at room temperature and later in an electric oven (Model LEB-1-20) at 105°C for 24 h to remove all moisture content. The dried sorbent was ground, the appropriate particle size was separated by sieves, and the material was stored for further analyses.
Chemicals.
All the chemicals, reagents, and solvents used in the present work were of analytical reagent grade and purchased from Merck (Germany) or Sigma-Aldrich (Germany). Standard solutions were prepared, and successive dilutions were made with double-distilled water to make working solutions.
Pretreatment of Sorbents.
Saccharum arundinaceum was pretreated with HCl (0.1 M) and NaOH (0.1 M) to evaluate the effects of acid and base treatments on pore size, that is, pore area, pore volume, and sorption capacity. For chemical treatment, the sorbent (20 g) was stirred for 4 h in 1 L solution of 0.1 M NaOH or HCl followed by filtration and extensive washing with distilled water to remove any traces of acid/base. After that, the treated sorbent material was dried at 110°C and stored in airtight zipper bags at −4°C before further use.
Characterization of Sorbent.
To determine the different physical and chemical parameters affecting adsorption, it is necessary to characterize the sorbent. Therefore, physical and chemical characterization was done by scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FTIR).
Scanning Electron Microscopy. Surface analysis was performed using a scanning electron microscope, JEOL model 2300. SEM provides information about the surface area available for adsorption and the morphology of the sorbent [11]. Analysis of each sorbent was carried out under optimized conditions in an argon atmosphere.
Fourier Transform Infrared Spectroscopy.
Functional groups present in the structure of the sorbent were determined by a Fourier transform infrared spectrophotometer (Model Shimadzu AIM-8800). These functional groups are responsible for adsorption of the sorbate on the surface of the sorbent, and their detection helps in determining the nature of the binding interactions between the sorbate and the sorbent surface [12]. The diffuse reflectance infrared technique (DRIFT) was used for analysis, taking KBr as a background reagent.
Equilibrium Isotherms.
In order to study the adsorption pathway and the equilibrium relationship between sorbent and sorbate, it is necessary to design proper adsorption isotherms. Isotherms predict the appropriate parameters and the behavior of the sorbent towards different sorption systems [13]. In this context, linear and nonlinear models were fitted using Microsoft Excel® 2007 (the equilibrium isotherms applied in the present work are given in Table 1s of the supplementary data).
Error Functions.
In order to determine the best fit of linear or nonlinear models to the adsorption data, it is necessary to calculate error functions [14]. These error functions include the sum of squared errors, hybrid fractional error, average relative error, sum of absolute errors, nonlinear chi-square, and so on (the calculated error functions and their equations are presented in Table 2s of the supplementary data).
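The error functions named here have simple closed forms; the sketch below implements a few of them in their usual textbook definitions, with invented data for illustration. The exact expressions used in this study are those listed in its supplementary data.

```python
import numpy as np

# Common error functions used to compare isotherm fits.  The definitions below
# are the usual textbook forms; the exact expressions used in the study are
# given in its supplementary data.  The data are invented for illustration.

def sse(q_exp, q_calc):
    """Sum of squared errors."""
    return float(np.sum((q_exp - q_calc) ** 2))

def are(q_exp, q_calc):
    """Average relative error (percent)."""
    return float(100.0 / len(q_exp) * np.sum(np.abs((q_exp - q_calc) / q_exp)))

def eabs(q_exp, q_calc):
    """Sum of absolute errors."""
    return float(np.sum(np.abs(q_exp - q_calc)))

def chi2(q_exp, q_calc):
    """Nonlinear chi-square statistic."""
    return float(np.sum((q_exp - q_calc) ** 2 / q_calc))

q_exp = np.array([2.1, 4.0, 5.9, 7.6, 9.1])      # illustrative data (mg/g)
q_calc = np.array([2.0, 4.1, 6.0, 7.4, 9.3])     # illustrative model values
print(sse(q_exp, q_calc), are(q_exp, q_calc), eabs(q_exp, q_calc), chi2(q_exp, q_calc))
```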
Thermodynamic Investigations.
Thermodynamic investigations are another important component of adsorption studies. For the thermodynamic studies, the adsorption experiment was carried out at different temperatures, and the calculated parameters included enthalpy (ΔH), entropy (ΔS), and Gibbs free energy (ΔG).
For this purpose, (1)-(3) were applied:

ΔG = −RT ln K_C   (1)

ln K_C = ΔS/R − ΔH/(RT)   (2)

ΔG = ΔH − TΔS   (3)

where R is the universal gas constant, T is the absolute temperature, and K_C is the equilibrium constant, calculated as

K_C = C_ad / C_e

where C_ad is the amount of cadmium adsorbed on the sorbent at equilibrium and C_e is the equilibrium concentration of sorbate remaining in solution.
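A short sketch of how these relations are typically evaluated is given below: ΔH and ΔS are obtained from a van't Hoff fit of ln K_C against 1/T, and ΔG at each temperature follows from eq. (1). The K_C values in the example are invented for illustration and are not the measured ones.

```python
import numpy as np

# Van't Hoff evaluation of the thermodynamic parameters in eqs. (1)-(3).
# The K_C values below are invented for illustration only.

R = 8.314  # J/(mol K), universal gas constant

T = np.array([293.15, 303.15, 313.15, 323.15])     # K
K_C = np.array([8.5, 6.9, 5.6, 4.7])               # hypothetical equilibrium constants

# ln K_C = dS/R - dH/(R T): straight line in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(K_C), 1)
dH = -slope * R            # J/mol (negative value indicates an exothermic process)
dS = intercept * R         # J/(mol K)
dG = -R * T * np.log(K_C)  # J/mol at each temperature, eq. (1)

print(f"dH = {dH/1000:.2f} kJ/mol, dS = {dS:.2f} J/(mol K)")
print("dG (kJ/mol):", np.round(dG / 1000, 2))
```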
Adsorption Kinetics.
In a batch adsorption process, kinetic studies provide information about the optimum conditions, the mechanism of sorption, and the possible rate-controlling step. For this purpose, linear and nonlinear forms of pseudo-first- and pseudo-second-order kinetics were applied to the adsorption data [15]. In order to check the effect of contact time (10-70 min) on adsorption, an initial cadmium concentration of 100 mg/L was prepared and 100 ml of this sample was used for the study. Sorbent (0.5 g) was added to this cadmium solution and shaken at 150 rpm. After fixed intervals of time, the sample was removed from the flask and analyzed for cadmium concentration by atomic absorption spectrophotometry. The amount of cadmium adsorbed at different time intervals was calculated from the mass balance

Q_t = (C_o − C_t) V / W_sorbent

where Q_t is the amount of cadmium adsorbed at any time t, C_o and C_t are the initial concentration and the concentration remaining in solution at time t (Q_o and Q_e denote the corresponding initial and equilibrium amounts), V (L) is the volume of cadmium solution taken, and W_sorbent is the amount of sorbent in g.
Pseudo-First-Order Kinetics.
In order to calculate pseudo-first-order kinetics for the adsorption system, the following equations were used:

ln(Q_e − Q_t) = ln Q_e − k_1 t   (linear form)

Q_t = Q_e (1 − exp(−k_1 t))   (nonlinear form)

where Q_t is the amount adsorbed at time t, Q_e is the equilibrium amount, t is the time in minutes, and k_1 is the pseudo-first-order rate constant.
Pseudo-Second-Order Kinetics.
For pseudo-second-order kinetics, the linear and nonlinear forms were applied as follows:

t/Q_t = 1/(k_2 Q_e^2) + t/Q_e   (linear form)

Q_t = k_2 Q_e^2 t / (1 + k_2 Q_e t)   (nonlinear form)

where k_2 is the pseudo-second-order rate constant.
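The sketch below fits both kinetic models in their nonlinear forms with a least-squares routine; the contact-time data are invented for illustration and the fitted constants are not those reported in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Nonlinear fits of the pseudo-first- and pseudo-second-order models.
# The contact-time data below are invented for illustration only.

t = np.array([10, 20, 30, 40, 50, 60, 70], dtype=float)      # min
q_t = np.array([6.2, 9.8, 12.1, 13.6, 14.4, 14.9, 15.0])      # mg/g

def pfo(t, q_e, k1):
    """Pseudo-first-order: Q_t = Q_e (1 - exp(-k1 t))."""
    return q_e * (1.0 - np.exp(-k1 * t))

def pso(t, q_e, k2):
    """Pseudo-second-order: Q_t = k2 Q_e^2 t / (1 + k2 Q_e t)."""
    return k2 * q_e ** 2 * t / (1.0 + k2 * q_e * t)

for name, model, p0 in (("PFO", pfo, (15.0, 0.05)), ("PSO", pso, (17.0, 0.005))):
    popt, _ = curve_fit(model, t, q_t, p0=p0)
    resid = q_t - model(t, *popt)
    print(f"{name}: Q_e = {popt[0]:.2f} mg/g, k = {popt[1]:.4f}, SSE = {np.sum(resid**2):.3f}")
```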
Effect of Pretreatment.
Pretreatment has a promising effect on the adsorption potential of Saccharum arundinaceum. The results reveal that the base-treated sorbent (97.5%) shows better efficiency for cadmium sorption compared to the raw (91.15%) and acid-treated (57.6%) sorbent, as shown in Figure 1. Adsorption capacity depends upon the functional groups present on the surface of the sorbent and its microporous structure [16]. The increase in sorption capacity after base treatment can be attributed to hydroxyl groups created on the surface of the adsorbent or to modification of cell wall components by the base [17]. The decrease in adsorption after acid treatment occurs because the binding sites available on the surface of the biosorbent are destroyed by the acid [18]. Therefore, base-treated Saccharum arundinaceum was used for the adsorption analysis.
Characterization of Sorbents. Saccharum arundinaceum was characterized in terms of surface morphology and functional group analysis by scanning electron microscopy and Fourier transform infrared spectroscopy.
Scanning Electron Microscopy.
Three native and two treated sorbents (acid- and base-treated Saccharum arundinaceum) were analyzed through a scanning electron microscope to study surface morphology. The large pore size available on the surfaces of the native and base-treated sorbents was responsible for the enhanced adsorption on these agrowaste materials. Results of the SEM analysis are given in Figure 2. Hollow cavities appear in the structure of the raw adsorbent, which are responsible for binding of the sorbate onto the sorbent surface. Acid treatment reduces these cavities by deforming the surface of the sorbent, so adsorption decreases after acid treatment because the surface becomes smooth and only a thin adsorption layer is formed. The raw and base-treated sorbent surfaces are found to be rough and cylindrical, which makes possible multilayer, thicker adsorption compared to a smooth surface. The results obtained in the SEM micrographs are in good agreement with reported data [19].
Fourier Transform Infrared Spectroscopy.
Fourier transform infrared spectrometry provides information about the functional groups present on the surface of the sorbent that make attachment of the sorbate possible [20]. FTIR spectra of the sorbent were obtained in the 4000-450 cm−1 wavenumber range, and the major functional groups present in the adsorbent are listed in Table 1. A broad band appears in the range of 3000-3700 cm−1 due to the -OH stretching vibration of hydroxyl functional groups involved in hydrogen bonding, since a broad -OH band in this range is an indication of hydrogen bonding in the compound. This peak appears in the raw and base-treated sorbent but disappears in the case of the acid-treated sorbent due to reaction of the -OH group with acid hydrogen. The -CH stretching band appears in the 2900-3000 cm−1 wavenumber range for all sorbents. The peak at 1750 cm−1 appears due to the C=O group and that at 1200 cm−1 due to the C-O functional group. In some cases, a -CN band also appears at 1049 cm−1. A vibration due to secondary amide appears at 1645 cm−1 [21]. For adsorption purposes, a significant role is played by the -OH group and heteroatoms in attaching the sorbate to the surface.
Adsorption Study.
The adsorption study was performed by the batch adsorption method, varying different parameters including pH, contact time, and initial concentration of sorbate to find the best-suited conditions for the removal of cadmium from aqueous media. Maximum adsorption was achieved at a 60-minute time interval, and no significant increase was found with a further increase in time. Initially, an excess of vacant sites is available on the surface of the sorbent and the uptake of metal ions is high, so there was a continuous increase in adsorption capacity as the contact time increased from zero to 60 minutes. Further increases could not cause an appreciable change in the adsorption of metals, as the vacant sites were already filled and equilibrium had been achieved [22].
Effect of pH.
The initial pH of the adsorption system has a significant role in adsorption of the sorbate, as it affects the surface morphology of the sorbent and the binding nature of the sorbate. The pH range selected was 2-10 with 1 g sorbent and a 60 ppm initial concentration of sorbate. The result given in Figure 2s (Supplementary material) reveals that the adsorption capacity is quite low under acidic conditions. When the pH is increased, the adsorbed amount of sorbate on the surface of the sorbent increases. At low pH values, metal ions have to compete with H+ ions for adsorption on the sorbent surface, since H+ ions are present in excess. But when the pH value is raised, adsorption increases significantly due to the attraction developed between the negatively charged sorbent surface (from -OH groups) and the positively charged metal ions [23]. For cadmium, the optimum pH range was found to be 6 to 8; in this range, cadmium shows the best adsorption behavior. When the pH is increased further, there is a decline in adsorption capacity due to the formation of metal hydroxides.
Effect of Initial Concentration.
The initial concentration of sorbate is another important parameter affecting the adsorption phenomenon. To study it, the initial concentration of cadmium was varied in the range of 10-100 ppm while keeping all other parameters constant (Figure 3s, Supplementary material).
A rapid increase in adsorption capacity was observed initially for the adsorption of cadmium on Saccharum arundinaceum, as vacant sites were available on the surface of the sorbent, so a rise in concentration also raised the adsorption of sorbate on the available sites [24]. The sorbate readily occupies these adsorption sites, and the adsorption capacity is positively influenced by concentration in this range. A further increase in concentration from 60 to 100 ppm has no significant effect on the adsorption phenomenon: the surface of the adsorbent becomes saturated with sorbate and, after equilibrium is established, an increase in concentration has no significant influence on adsorption. Previous studies also report that accommodation of sorbate decreases when the concentration is very high, due to the unavailability of resident sites [25].
Effect of Temperature.
The effect of temperature on adsorption was studied at 20, 30, 40, and 50°C at pH 6 (Figure 4s, Supplementary material). Adsorption of cadmium onto Saccharum arundinaceum agrowaste was found to increase with increasing temperature. At higher temperatures, intraparticle diffusion increases and more adsorption sites are created, which enhances adsorption [26].
Equilibrium Isotherms.
An adsorption system can be described by adsorption isotherms, commonly known as equilibrium isotherms, which represent the amount of solute adsorbed per unit weight of sorbent [27]. These isotherms use the equilibrium concentration of the sorbate at constant temperature. To design a system for removing effluents, a particular model is optimized to generate the proper correlation for the experimental data; this correlation is called the adsorption isotherm. Researchers have proposed many isotherms for adsorption systems, including the Langmuir, Freundlich, Redlich-Peterson, Temkin, and Elovich models [28,29]. Sorption was analyzed by employing linear as well as nonlinear adsorption models, varying the initial concentration from 10 to 100 ppm.
Freundlich Isotherm.
The Freundlich adsorption isotherm was developed for heterogeneous systems and describes multilayer adsorption on the sorbent surface (Figure 5s, Supplementary material).
The parameters calculated for the Freundlich isotherm using its linear and nonlinear forms are given in Table 4. The linear Freundlich isotherm was obtained by plotting log Cad versus log Ce; KF and n are constants obtained from the intercept and slope, respectively. The Freundlich adsorption capacity (KF) indicates whether a system is favorable for adsorption. Adsorption is considered promising if the value of KF lies in the range of 1-20; in the present study, KF was 9.5 and 12.2 for the linear and nonlinear approaches of the Freundlich isotherm, respectively. Similarly, the adsorption intensity n indicates that the model fits the adsorption data when its value is above 1. The R2 value obtained from the plot (0.9446) is significant, representing a good fit of this model for the adsorption of cadmium onto Saccharum arundinaceum.
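To illustrate how the linear Freundlich parameters reported above can be extracted from equilibrium data, a minimal Python sketch is given below; the Ce and qe values are hypothetical placeholders and do not reproduce the measurements of this study.

```python
import numpy as np

# Hypothetical equilibrium data (placeholders, not the paper's measurements):
# Ce = equilibrium concentration (ppm), qe = uptake at equilibrium (mg/g)
Ce = np.array([2.1, 5.4, 10.8, 18.5, 27.9, 38.2])
qe = np.array([4.0, 7.1, 10.3, 13.0, 15.1, 16.8])

# Linearized Freundlich isotherm: log(qe) = log(KF) + (1/n) * log(Ce)
slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
KF = 10 ** intercept   # Freundlich adsorption capacity
n = 1.0 / slope        # adsorption intensity

# Coefficient of determination of the linear fit
pred = intercept + slope * np.log10(Ce)
ss_res = np.sum((np.log10(qe) - pred) ** 2)
ss_tot = np.sum((np.log10(qe) - np.log10(qe).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"KF = {KF:.2f}, n = {n:.2f}, R^2 = {r2:.4f}")
```

KF and n are recovered from the intercept and slope of the log-log fit, mirroring the procedure described above.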
Langmuir Isotherm.
The Langmuir adsorption isotherm is based on monolayer adsorption of metal ions on the agrowaste surface, with the energy of adsorption assumed constant. To apply the Langmuir model, initial concentrations were varied from 10 to 100 ppm with 1 g of sorbent and 1 h of shaking time. The distribution of metal ions between the liquid and the solid surface was calculated from the equations given in Table 1s (Supplementary material), employing the linear and nonlinear forms of the model. The linear Langmuir isotherm was obtained by plotting Ce/Cad versus Ce, as shown in Figure 6s (Supplementary material). The R2 value obtained for the plot was satisfactory, showing that the model fits the adsorption experiment. Qo represents the metal ion uptake per unit mass of adsorbent (mg/g) and b is the Langmuir constant [15]. The dimensionless separation factor RL, calculated from the Langmuir constant and the initial concentration, indicates the suitability of the model for a particular system: if RL falls between 0 and 1, the system is considered favorable for adsorption, and Table 2 shows that the results lie in this range. Furthermore, the experimental data and the predicted results for the present work were in close agreement, with a low residual sum of squares (0.006), making this model applicable to the present work.
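A similar sketch for the linearized Langmuir model and the dimensionless separation factor RL is shown below, again with placeholder data; it assumes the common linear form Ce/qe = 1/(Qo·b) + Ce/Qo and RL = 1/(1 + b·C0).

```python
import numpy as np

# Hypothetical data (placeholders, not the paper's measurements)
Ce = np.array([2.1, 5.4, 10.8, 18.5, 27.9, 38.2])      # equilibrium concentration (ppm)
qe = np.array([4.0, 7.1, 10.3, 13.0, 15.1, 16.8])      # uptake at equilibrium (mg/g)
C0 = np.array([10, 20, 40, 60, 80, 100], dtype=float)  # initial concentrations (ppm)

# Linearized Langmuir isotherm: Ce/qe = 1/(Qo*b) + Ce/Qo
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
Qo = 1.0 / slope             # monolayer uptake capacity (mg/g)
b = 1.0 / (intercept * Qo)   # Langmuir constant

# Separation factor: adsorption is considered favorable when 0 < RL < 1
RL = 1.0 / (1.0 + b * C0)
print(f"Qo = {Qo:.2f} mg/g, b = {b:.3f}")
print("RL values:", np.round(RL, 3))
```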
Dubinin-Radushkevich Isotherm.
The Dubinin-Radushkevich isotherm was originally proposed as an empirical model for the adsorption of vapors onto solid surfaces, but it has been applied successfully to heterogeneous solid-liquid adsorption systems. This model is considered more general than the Langmuir model because its derivation does not assume a homogeneous surface or a constant sorption potential [30]. The relationship given in Table 1s (Supplementary data) was employed to relate ln Cad with ε2, where ε is the Polanyi potential, which depends on the temperature, the universal gas constant, and the equilibrium concentration (ε = RT ln(1 + 1/Ce)). The slope of the plot gives the value of kad and the intercept gives qs. The model showed good applicability to the adsorption system in its nonlinear form, with a high R2 value.
The Dubinin-Radushkevich isotherm is also very useful for determining whether the nature of sorption is physical or chemical. For this purpose, kad obtained from the slope of the plot is used to calculate the mean free energy of sorption, E = 1/√(2kad). The value of E calculated for the present work was 0.764 kJ/mol, suggesting that sorption is physical in nature, because a value of E below 8 kJ/mol reflects physical sorption, while values of 8-16 kJ/mol reflect chemical sorption (Figure 7s, Supplementary material).
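The following sketch assembles the Dubinin-Radushkevich calculation with the same placeholder data: the Polanyi potential, the linear fit of ln qe against ε², and the mean sorption energy E; the temperature and data values are assumptions used only for illustration.

```python
import numpy as np

R = 8.314e-3   # universal gas constant, kJ/(mol*K)
T = 303.0      # absolute temperature (K); assumed value

# Hypothetical equilibrium data (placeholders, not the paper's measurements)
Ce = np.array([2.1, 5.4, 10.8, 18.5, 27.9, 38.2])
qe = np.array([4.0, 7.1, 10.3, 13.0, 15.1, 16.8])

# Polanyi potential and linearized D-R isotherm: ln(qe) = ln(qs) - kad * eps^2
eps = R * T * np.log(1.0 + 1.0 / Ce)
slope, intercept = np.polyfit(eps ** 2, np.log(qe), 1)
kad = -slope            # D-R constant (mol^2/kJ^2)
qs = np.exp(intercept)  # theoretical saturation capacity (mg/g)

# Mean free energy of sorption; E < 8 kJ/mol points to physisorption
E = 1.0 / np.sqrt(2.0 * kad)
print(f"kad = {kad:.3f}, qs = {qs:.2f} mg/g, E = {E:.3f} kJ/mol")
```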
Temkin Isotherm.
The Temkin adsorption isotherm describes the interaction of sorbent and sorbate and is based on the assumption that the heat of adsorption does not remain constant but decreases as a result of sorbent-sorbate interactions during adsorption [31]. The linear and nonlinear forms of the Temkin model are given in Table 1s (Supplementary data). The equilibrium binding constant KT provides information about the binding energy, and β expresses the heat of adsorption for a particular experiment (Figure 8s, Supplementary material). The linear form of the Temkin model was found more suitable, with a high value of the binding constant, as given in Table 2. The model indicates an exothermic adsorption step, since B > 0, which is an indicator of heat release during the process [32].
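A corresponding sketch for the linear Temkin form, qe = B ln KT + B ln Ce with B = RT/bT, is given below with the same placeholder data; the temperature is an assumed value.

```python
import numpy as np

R = 8.314   # universal gas constant, J/(mol*K)
T = 303.0   # absolute temperature (K); assumed value

# Hypothetical equilibrium data (placeholders, not the paper's measurements)
Ce = np.array([2.1, 5.4, 10.8, 18.5, 27.9, 38.2])
qe = np.array([4.0, 7.1, 10.3, 13.0, 15.1, 16.8])

# Linearized Temkin isotherm: qe = B*ln(KT) + B*ln(Ce)
B, B_lnKT = np.polyfit(np.log(Ce), qe, 1)
KT = np.exp(B_lnKT / B)   # equilibrium binding constant
bT = R * T / B            # constant related to the heat of adsorption (J/mol)

print(f"B = {B:.2f}, KT = {KT:.3f}, bT = {bT:.1f} J/mol")  # B > 0 is the criterion cited above
```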
Elovich Isotherm.
According to the Elovich model, the mechanism of adsorption is based on chemical reactions that are responsible for adsorption. The plot of ln(Cad/Ce) versus Cad gives an R2 value close to unity. KE and Qm are obtained from the intercept and slope of the plot, respectively; KE gives the initial sorption rate and Qm is the adsorption constant. The initial sorption rate obtained from the linear form of the Elovich model is quite high (35,100.411) compared to the nonlinear form (11.7891), making the linear form adequate to describe the adsorption of cadmium onto Saccharum arundinaceum. Furthermore, the R2 value for the linear form (0.9033) is also higher than that of the nonlinear form (0.835) (Figure 9s, Supplementary material).
Error Analysis for Equilibrium Isotherms.
Error functions are used to check the fit of an adsorption model to the experimental data [33]. In the present work, six error functions were applied to the linear and nonlinear forms of the data by minimizing each error function over the studied concentration range (Table 3). For meaningful results, each error function was compared between the linear and nonlinear forms (Figure 10s, Supplementary material). For the linear forms of the adsorption isotherms, the comparison of error functions shows that the Langmuir, Freundlich, and Elovich isotherms correlate well with the experimental values in the present adsorption study, giving low values for most of the error functions. The applicability of these models for the removal of cadmium ions from aqueous media has also been reported by other researchers [34,35].
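Because the exact error-function equations are given in Table 2s (Supplementary material) and are not reproduced in this excerpt, the sketch below uses commonly adopted definitions of RSS, ARE, EABS, and chi-square as an assumption, to show how such a comparison between experimental and model-predicted uptakes can be computed.

```python
import numpy as np

def error_functions(q_exp, q_calc):
    """Common isotherm error functions (definitions assumed; the paper's
    exact formulas are listed in Table 2s of the Supplementary material)."""
    q_exp = np.asarray(q_exp, dtype=float)
    q_calc = np.asarray(q_calc, dtype=float)
    n = q_exp.size
    rss = np.sum((q_exp - q_calc) ** 2)                         # residual sum of squares
    are = (100.0 / n) * np.sum(np.abs(q_exp - q_calc) / q_exp)  # average relative error (%)
    eabs = np.sum(np.abs(q_exp - q_calc))                       # sum of absolute errors
    chi2 = np.sum((q_exp - q_calc) ** 2 / q_calc)               # nonlinear chi-square
    return {"RSS": rss, "ARE": are, "EABS": eabs, "chi2": chi2}

# Example comparison for one fitted model (hypothetical values)
q_exp = [4.0, 7.1, 10.3, 13.0, 15.1, 16.8]    # measured uptakes (mg/g)
q_calc = [4.3, 6.8, 10.6, 12.7, 15.4, 16.5]   # model predictions (mg/g)
print(error_functions(q_exp, q_calc))
```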
A similar analysis was carried out employing the nonlinear forms of the adsorption models, and the results are summarized below. The nonlinear form of the Temkin isotherm was not found suitable for the adsorption of cadmium onto Saccharum arundinaceum agrowaste because of its high error-function values. The Freundlich and Elovich isotherms proved to be suitable models for this study, with low error-function values. The linear form of the Elovich isotherm, which is also based on multilayer sorption on the sorbent surface, shows small values of ARE and EABS, but the other error functions were lower for the nonlinear form. The linear approach for the Temkin isotherm was found favorable for the adsorption of cadmium ions onto Saccharum arundinaceum, with low error-function values:

R2: Temkin (linear approach) > Temkin (nonlinear approach)
RSS: Temkin (linear approach) < Temkin (nonlinear approach)
ARE: Temkin (linear approach) < Temkin (nonlinear approach)
EABS: Temkin (linear approach) < Temkin (nonlinear approach)
Chi-square (χ2): Temkin (linear approach) < Temkin (nonlinear approach)

Thermodynamic Studies.

The effect of temperature on adsorption was studied over the range 20, 30, 40, and 50°C at pH 6 with variable initial concentration (30-120 ppm). Adsorption of cadmium onto Saccharum arundinaceum agrowaste was found to increase with increasing temperature; at higher temperatures, intraparticle diffusion increases and more adsorption sites are created, which enhances adsorption. The results of the thermodynamic study are given in Table 4. The plot of log(Cad/Ce) versus 1/T was obtained with an R-squared value of 0.927; the slope and intercept provide the values of ∆H° and ∆S°, respectively, as shown in (10).
∆G° was calculated by employing Equation (1) given in Section 2 over the temperature range 292-328 K. The results show negative values of Gibbs free energy at all temperatures studied, and the magnitude of ∆G° increases (i.e., ∆G° becomes more negative) with increasing temperature. These negative values indicate the spontaneous nature and feasibility of the adsorption reaction [36]. The decrease in ∆G° with increasing temperature reflects better sorption at elevated temperature. The positive value of the enthalpy change indicates the endothermic nature of cadmium adsorption. The entropy change was also positive, because randomness in the system increases owing to solid-liquid interaction during adsorption. The sorption energy calculated from the Dubinin-Radushkevich model was below 8 kJ/mol (0.764 kJ/mol), which indicates the physical nature of cadmium sorption on the sorbent surface: E < 8 kJ/mol is representative of physical sorption, whereas E between 8 and 16 kJ/mol indicates chemical sorption [37]. Since the value of E for cadmium adsorption is below 8 kJ/mol, adsorption occurred on the surface and no chemical bonding took place between sorbent and sorbate. Similar results for the adsorption of cadmium onto agrowaste have been reported in the literature [38].
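Since Equation (1) of Section 2 is not reproduced in this excerpt, the sketch below assumes the standard relations ∆G° = −RT ln Kd and the van't Hoff plot of log Kd versus 1/T, with placeholder distribution coefficients, to show how ∆H°, ∆S°, and ∆G° can be obtained.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

# Hypothetical distribution coefficients Kd = Cad/Ce at each temperature
# (placeholders, not the values measured in this study)
T = np.array([293.0, 303.0, 313.0, 323.0])   # K
Kd = np.array([1.18, 1.27, 1.39, 1.52])

# Van't Hoff plot: log10(Kd) = dS/(2.303*R) - dH/(2.303*R*T)
slope, intercept = np.polyfit(1.0 / T, np.log10(Kd), 1)
dH = -2.303 * R * slope      # enthalpy change (J/mol); positive -> endothermic
dS = 2.303 * R * intercept   # entropy change (J/(mol*K))

# Gibbs free energy at each temperature; negative -> spontaneous
dG = -R * T * np.log(Kd)     # J/mol
print(f"dH = {dH:.1f} J/mol, dS = {dS:.2f} J/(mol*K)")
print("dG (J/mol):", np.round(dG, 1))
```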
Adsorption Kinetics.
Adsorption kinetics is of prime importance in describing the solute uptake rate and the time required for the adsorption process. In the present work, a kinetic study was performed at different time intervals for cadmium adsorption by employing the linear and nonlinear forms of pseudo-first- and pseudo-second-order kinetics. The results indicate that the amount of cadmium adsorbed increases with time; however, this increase was sharp at the start of the reaction, and the rate of adsorption then gradually declined. Initially, plenty of active sites were available on the sorbent surface, so adsorption rose sharply, but these sites became occupied with the passage of time and the rate of adsorption gradually decreased [39].
Pseudo-First-Order Kinetics.
For the pseudo-first-order kinetic model, log(Qe − Qt) was plotted against time; the value of k was obtained from the slope of the line and Qe from the intercept, and the initial sorption rate, h, was calculated from these kinetic parameters. Poor correlation was obtained for the linear form of the model, with a low R2 value (0.0918), indicating that the adsorption of cadmium onto Saccharum arundinaceum does not follow pseudo-first-order kinetics. The nonlinear form of pseudo-first-order kinetics was fitted using Microsoft Excel 2010 [40]. The linear forms of the pseudo-second-order kinetic model are given in Table 3s (Supplementary material). These four linear forms of the pseudo-second-order model were applied to the experimental data, and Figures 11s-15s (Supplementary material) show the results. The coefficient of determination (R2) found for type 1 was quite high, indicating the best fit of this form to the cadmium adsorption data. The results obtained for the pseudo-second-order kinetics are given in Table 4s (Supplementary material). The theoretical values obtained for the amount of cadmium adsorbed at equilibrium agree best with the experimental data for pseudo-second-order kinetics. For the nonlinear form of the pseudo-second-order model, a computer-based procedure was used in Microsoft Excel 2010 with the solver add-in, as reported in the literature [41]. Thus, for the sorption of cadmium onto Saccharum arundinaceum, pseudo-second-order kinetics describes the adsorption much more appropriately than the pseudo-first-order approach; furthermore, the nonlinear form of the pseudo-second-order model gives results close to the experimental data.
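The type-1 linear pseudo-second-order form, t/qt = 1/(k2·qe²) + t/qe, can be fitted as sketched below; the kinetic data are hypothetical placeholders, and the initial sorption rate h = k2·qe² is the commonly used definition rather than a value taken from this study.

```python
import numpy as np

# Hypothetical kinetic data (placeholders, not the paper's measurements)
t = np.array([5, 10, 20, 30, 45, 60, 90], dtype=float)   # contact time (min)
qt = np.array([3.1, 5.0, 7.6, 9.0, 10.2, 10.9, 11.1])    # uptake at time t (mg/g)

# Type-1 linearized pseudo-second-order model: t/qt = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope                  # equilibrium uptake (mg/g)
k2 = 1.0 / (intercept * qe ** 2)  # rate constant (g/(mg*min))
h = k2 * qe ** 2                  # initial sorption rate (mg/(g*min))

print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min), h = {h:.2f} mg/(g*min)")
```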
Effect of Interfering Ions.
The adsorption process becomes more complicated in multicomponent systems, since many sorbent-sorbate interactions are involved. The effect of interfering ions was measured by first observing the adsorption of one metal ion and then noting the change in adsorption capacity upon addition of an interfering ion. The interference effect was quantified as the ratio Cmix/C, where Cmix is the % adsorption of the mixture of two metal ions and C is the % adsorption of the pure metal on the selected sorbent. If the value of Cmix/C equals 1, the interfering ion has no effect on adsorption; if it is less than 1, the adsorption capacity is reduced by the addition of the interfering ion.
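The interference ratio can be computed directly, as in the short sketch below; the percent-adsorption values are hypothetical placeholders.

```python
def interference_ratio(pct_ads_mixture, pct_ads_pure):
    """Cmix/C: a value close to 1 means no interference; below 1 means
    the co-ion reduces cadmium uptake."""
    return pct_ads_mixture / pct_ads_pure

pure_cd = 92.0    # % adsorption of Cd alone (hypothetical)
with_na = 90.5    # % adsorption of Cd with a monovalent co-ion present (hypothetical)
with_fe = 78.3    # % adsorption of Cd with a trivalent co-ion present (hypothetical)

print(interference_ratio(with_na, pure_cd))  # near 1 -> little interference
print(interference_ratio(with_fe, pure_cd))  # well below 1 -> strong interference
```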
To study interference, ions were divided into three categories (monovalent, bivalent, and trivalent) based on their valence. One ion was selected from each class to check its interfering effect on the adsorption of cadmium. Metals attach to the sorbent surface through electrostatic forces, and competition among metals for sorbent sites is governed mainly by the metal ion charge and its attraction towards the functional groups present on the adsorbent surface.
The results for the interference of metal ions are summarized in Table 5. Metals with higher charge were found to have the greatest effect on the adsorption of cadmium compared with those of lower charge. Anions were also found to affect the adsorption of metal ions, but their interference is much smaller than that of cations. Adsorption of anions on the sorbent depends on the surface charge of the sorbent; since negatively charged hydroxyl groups are present on the adsorbent surface, adsorption of anions is not as favored as that of cations [42].
Conclusion
Removal of cadmium was performed employing the stem powder of Saccharum arundinaceum. To generate the proper correlation for the removal of cadmium, five adsorption isotherms were applied to the experimental data: Freundlich, Langmuir, Dubinin-Radushkevich, Elovich, and Temkin. Error analysis provides information about the fit of these models to the experimental data, and the model with the minimum error was selected as the best for the adsorption data. The order of the equilibrium isotherms by decreasing RSS value was Temkin > Dubinin-Radushkevich > Elovich > Freundlich > Langmuir. The linear forms of the Freundlich and Langmuir models were the best fitted, with the minimum error values. The effect of temperature on cadmium adsorption was investigated by thermodynamic analysis, and adsorption was found to increase with increasing temperature. The Gibbs free energy (∆G° = −612.34 at 303 K) revealed the spontaneous nature of the sorbent-sorbate binding reaction, and the adsorption followed pseudo-second-order kinetics.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Supplementary Materials
Figures 1s-4s: effects of the different parameters (contact time, pH, initial cadmium concentration, and temperature) that were adjusted during adsorption for maximum removal of cadmium from aqueous media. Figures 5s-9s: graphical results of the isotherm study; data derived from these figures are presented in Table 4 of the main manuscript. Figure 10s: thermodynamic studies. Figures 11s-15s: results of the kinetic studies. Linear and nonlinear forms of the equilibrium isotherms are given in Tables 1s and 2s; Table 2s also provides the equations of the error functions applied to the results. Tables 3s and 4s describe the kinetic studies and the formulas applied for the calculation of pseudo-first- and second-order kinetics. (Supplementary Materials)
|
v3-fos-license
|
2018-09-15T14:06:10.960Z
|
2018-09-14T00:00:00.000
|
52276863
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-018-32276-7.pdf",
"pdf_hash": "d82d2726478e1c1317c6cc88033301bf83809c7c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43116",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"sha1": "d82d2726478e1c1317c6cc88033301bf83809c7c",
"year": 2018
}
|
pes2o/s2orc
|
Nicotine and sleep deprivation: impact on pain sensitivity and immune modulation in rats
Repeated nicotine administration has been associated with increased paradoxical sleep in rats and antinociceptive properties, whereas paradoxical sleep deprivation (PSD) elicits pronociceptive and inflammatory responses. Thus, we aimed to evaluate the effect of repeated nicotine administration and its withdrawal combined with PSD on pain sensitivity and inflammatory markers. Sixty adult male Wistar rats were subjected to repeated injections of saline (SAL) or nicotine (NIC) for 12 days or 7 days of nicotine followed by acute mecamylamine administration on day 8 to precipitate nicotine abstinence (ABST). On day 9, the animals were submitted to PSD for 72 h or remained in control condition (CTRL); on day 12, thermal pain threshold was assessed by the hot plate test. PSD significantly decreased the latency to paw withdrawal in all groups compared to their respective controls. ABST-PSD animals presented higher levels of interleukin (IL)-6 compared to all groups, except ABST-CTRL. After adjustment for weight loss, IL-6, IL-4 and tumor necrosis factor alpha, ABST-PSD was associated with the lowest pain threshold. Nicotine and IL-4 levels were predictors of higher pain threshold. Hyperalgesia induced by PSD prevailed over the antinociceptive action of nicotine, while the association between PSD and ABST synergistically increased IL-6 concentrations and decreased pain threshold.
Figure 2.
Body weight. Difference between final (Day 12) and initial (Day 1) body weight in animals submitted to a 12-day saline (SAL) treatment, a 12-day nicotine (NIC, 3.2 mg/kg/day) treatment or a 7-day NIC treatment followed by mecamylamine-induced withdrawal (1.5 mg/kg) and subsequently exposed to control sleep condition (CTRL) or 72 h paradoxical sleep deprivation (PSD) (n = 10 per group). *p < 0.0001 compared to the respective SAL group (CTRL or PSD); # p < 0.0001 compared to the respective ABST group (CTRL or PSD); **p < 0.0001 significant effect of PSD.
Sleep Deprivation Overcame the Analgesic Effect of Nicotine. Nociceptive sensitivity was assessed by the hot plate test, in which the animal is placed over a heated plate and the latency to paw withdrawal is used as a measure of pain threshold and sensitivity. This result is shown in Fig. 3. When not adjusted for any confounder, the analysis of the latency to paw withdrawal demonstrated treatment (F 2,54 = 9.4, p < 0.0001), PSD (F 1,54 = 93.2, p < 0.0001) and interaction effects (F 2,54 = 5.2, p < 0.01). Overall, NIC animals showed a higher latency to paw withdrawal when compared to both SAL (p < 0.0001) and ABST groups (p < 0.0001). Alternatively, PSD led to a shorter latency in all the three groups (SAL-PSD, NIC-PSD and ABST-PSD) compared to SAL-CTRL (p < 0.0001), NIC-CTRL (p < 0.0001) and ABST-CTRL (p < 0.0001), respectively. However, a post-hoc test of the interaction effect revealed that only the NIC-CTRL group had a significant increase in pain threshold compared to SAL-CTRL (p < 0.0001), SAL-PSD (p < 0.0001), NIC-PSD (p < 0.0001), ABST-CTRL (p = 0.005), and ABST-PSD (p < 0.0001) groups. No significant changes in pain sensitivity were observed among the PSD groups (SAL-PSD, NIC-PSD, and ABST-PSD).
Sleep Deprivation and Nicotine Abstinence Increased IL-6 concentrations. No significant effects
were observed in the plasmatic concentrations of the anti-inflammatory cytokines IL-4 and IL-10 (Fig. 4A,B). However, there was a significant interaction effect on plasmatic TNF-α concentrations (F 2,54 = 5.1, p < 0.01), and treatment (F 2,54 = 6.3, p < 0.01) and interaction (F 2,54 = 6.1, p < 0.05) effects on IL-6 concentrations. Post-hoc tests showed that the ABST-PSD group presented higher concentrations of TNF-α compared to SAL-PSD (p = 0.001), which in turn showed lower concentrations compared to the SAL-CTRL (p = 0.001) group (Fig. 4C). Overall, higher levels of IL-6 were found in the ABST animals compared to both SAL (p = 0.001) and NIC groups (p < 0.0001). Post-hoc analysis of the interaction effect revealed, however, that this increase in IL-6 levels occurred specifically in the ABST-PSD group compared to the SAL-PSD (p = 0.020) and NIC-PSD (p = 0.006) groups, as well as compared with the SAL-CTRL (p = 0.026) and NIC-CTRL (p = 0.006) groups (Fig. 4D). No other differences were observed between groups.
Nicotine Treatment, Sleep Deprivation, and IL-4 as Predictors of Pain Sensitivity. A correlation
matrix between the latency to paw withdrawal, the delta body weight and the immunological parameters was calculated for the whole sample to reveal possible factors associated with pain threshold (Table 1).
Then, a generalized linear model was fitted considering the latency to paw withdrawal as the dependent variable (Table 2). The best model included PSD (Wald: 15.7, df: 1, p < 0.0001), treatment (Wald: 24.5, df: 1, p < 0.0001) and IL-4 (Wald: 5.9, df: 1, p < 0.05) as the independent predictors of the latency to paw withdrawal (Likelihood ratio χ 2 = 96.3, df: 9, p < 0.0001, n = 60). An increase of 10 pg/mL of plasmatic IL-4 corresponded to an average increase of 6% in the latency to paw withdrawal, thus revealing an association between higher plasmatic concentrations of IL-4 and a higher pain threshold. Additionally, PSD was associated with an average decrease of 49.9% in the latency to paw withdrawal compared to CTRL condition, while repeated nicotine administration was associated with an average increase of 63.8% in comparison with saline administration, as shown in Table 2.
Figure 3. Pain sensitivity. Mean ± standard deviation of latency to paw withdrawal (s) in the hot plate test in animals submitted to a 12-day saline (SAL) treatment, a 12-day nicotine (NIC, 3.2 mg/kg/day) treatment or a 7-day NIC treatment followed by mecamylamine-induced withdrawal (1.5 mg/kg) and subsequently exposed to control sleep condition (CTRL) or 72 h paradoxical sleep deprivation (PSD) (n = 10 per group). *p < 0.001 compared to SAL-CTRL; #p < 0.0001 compared to ABST-CTRL; **p < 0.05 significant effect of PSD.

Within the SAL-CTRL and NIC-CTRL groups, a moderate positive correlation was observed between paw withdrawal latency and IL-4 levels (Supplementary Table 1). A trend (p = 0.07) was also found in the ABST-CTRL group, which showed a positive correlation of IL-6 and IL-10 levels with paw withdrawal latency (Supplementary Table 1). Body weight loss was only correlated with paw withdrawal latency in the SAL-PSD group (Supplementary Table 1).
Discussion
Our results have shown that both nicotine and PSD independently affected thermal pain sensitivity, leading either to an increase or to a decrease in paw withdrawal latency, respectively. When associated, the pronociceptive effects of PSD prevailed over the antinociceptive effects of nicotine treatment, with PSD animals displaying hyperalgesia regardless of previous repeated nicotine administration. PSD combined with nicotine abstinence synergistically increased the plasmatic concentrations of IL-6. Additionally, when body weight loss, TNF-α, IL-4 and IL-6 levels were included in the model as confounders, ABST-PSD animals showed the lowest pain threshold compared to all groups, with no significant difference compared to the SAL-PSD group. The regression model demonstrated that repeated nicotine administration and higher concentrations of IL-4 were independent predictors of higher thermal pain threshold, while PSD was associated with a 50% decrease in pain threshold in the hot plate test.
As part of the mesolimbic dopaminergic pathway, the nucleus accumbens seems to mediate the reinforcing and aversive effects of nicotine and its withdrawal 16,17 , as well as to play a role in pain modulation 18,19 . Nicotine has been shown to increase synaptic dopamine and D2 receptor sensitivity in the nucleus accumbens [20][21][22] . Both D1 and D2 dopamine receptors are expressed by neurons in the ventrolateral periaqueductal gray (vlPAG), playing a central role in morphine and dopamine-induced antinociception 23,24 . The vlPAG acts as an output system responsible for integrating afferent inputs from multiple forebrain areas to modulate nociception at the spinal dorsal horn 25 . Recently, Umana et al. have shown that 63% of the projections from vlPAG to the rostral ventromedial medulla express α7 nicotinic acetylcholine receptors (nAChR), suggesting a model of α7 nAChR-mediated analgesia in the vlPAG 26 . In their study, systemic and intra-vlPAG administration of an α7 nAChR-selective agonist showed an antinociceptive effect in the formalin assay, which was blocked by intra-vlPAG α7 antagonist pretreatment 27 . Nicotine administration increased GABAergic transmission via presynaptic nAChRs of both α7-lacking and α7-expressing neurons in the vlPAG of rats 27 . Previous evidence indicates that nicotine might exert its antinociceptive effect mainly through modulation of α4β2 nAChRs 6,28,29 . Activation of α4β2 nAChRs results in stimulation of dorsal raphe, nucleus raphe magnus, and locus coeruleus in a norepinephrine (NE)-dependent fashion [27][28][29][30] . These areas play a significant role in pain modulation through descending inhibitory pathways, underlying in part the nicotine-induced antinociception 31 . In addition, systemic administration of nicotine significantly increases the release of endogenous opioids such as endorphins, enkephalins, and dynorphins in the supraspinal cord via α7 nAChR 30 .
Of note, vlPAG descending pathway seems to be the key site by which PSD increases nociceptive responses in rats. A recent study showed that PSD decreased morphine-induced analgesia by modulation of the vlPAG in rats 31 . Moreover, Sardi and colleagues 27 have shown that the nucleus accumbens also mediates the pronociceptive effect of PSD through activation of A2A adenosine receptors and inhibition of D2 dopamine receptors. In this study, an excitotoxic lesion of the nucleus accumbens prevented the PSD-induced hyperalgesia, which was reverted by an acute blockade of this region through either an A2A adenosine antagonist or a D2 dopamine agonist 27 . Considering that A2A receptors are also largely involved in the homeostatic regulation of the sleep-wake cycle 32 , we might speculate that the role of nucleus accumbens A2A receptors in the pronociceptive effect of PSD would also be linked to sleep pressure. The greater the sleep need, the greater the A2A receptors activity, and, thus, the lower the pain threshold 27 .
Sleep deprivation has been shown to decrease D2 receptor expression in the nucleus accumbens of humans 33 but to increase its sensitivity in rats subjected to 96 h of PSD 34 , though it has been shown that freely moving rats exhibit higher concentrations of dopamine in the nucleus accumbens during REM sleep 35 . Considering that vlPAG receives projections from nucleus accumbens neurons expressing A2A receptors, a possible integrative mechanism for the pronociceptive effects of PSD could rely on the increased nucleus accumbens A2A activity, leading to an activation of the vlPAG descending pathway. Even though there is no study assessing a role of nucleus accumbens adenosine receptors in pain processing, the use of adenosine receptor agonists has shown potent antinociceptive effects in animal models of chronic pain 36 . Spinal cord neurons expressing A2A adenosine receptors seem to mediate antinociception, inhibiting symptoms of neuropathic pain 37 . On the other hand, theophylline (an adenosine receptor antagonist) reduced antinociception induced by nicotine in the formalin test 38 . Taking together the evidence from the literature, we can speculate that in our study PSD overcame the antinociceptive effects of nicotine pretreatment by inhibiting the pain inhibitory descending vlPAG pathway through A2A receptor activation and D2 receptor inhibition in the nucleus accumbens 27 . Possibly, the antinociceptive effects of nicotine involved multiple pathways, including activation of α4β2 nAChRs 29 and of spinal A2A receptors 36 , release of opioids at the spinal cord 30 as well as nucleus accumbens activation with projections to vlPAG through α7 nAChR 26 .
Pro-inflammatory cytokines, such as IL-6 and TNF-α, exert known pro-nociceptive effects through central and peripheral action and are implicated in the pathophysiology of neuropathic pain [39][40][41][42] . We observed an increase in plasmatic concentrations of IL-6 when nicotine abstinence and PSD were associated. Nonetheless, there was no correlation between pain threshold and the cytokines in the ABST-PSD group possibly due to the strong pro-nociceptive effect of PSD itself, leading to a ceiling effect. However, when we adjusted the analysis of pain threshold for delta body weight, IL-6, IL-4 and TNF-α, the ABST-PSD group showed the lowest latency to paw withdrawal, suggesting a synergic effect of nicotine withdrawal and PSD. Our findings point towards an inflammatory component in thermal sensitivity, as IL-4 was a predictor associated with pain threshold, although it did not differ statistically between the groups. With the observed OR of 1.006 and the mean concentration of IL-4 = 32.04 pg/mL, IL-4 may account on average for 21.13% of the thermal pain threshold variability in our sample. Thus, IL-4 may partially explain endogenous spontaneous individual differences in pain threshold, independent of nicotine treatment or of PSD. Animal studies have shown that both IL-4 and IL-10 exert an antinociceptive effect in various models of inflammatory pain; however, both cytokines were unable to increase pain threshold in control animals [43][44][45] . Although IL-4 up-regulates the expression of opioid receptors, the opioid antagonist naloxone did not reverse the antinociceptive effect of IL-4 in a model of inflammatory pain 45 . IL-4 knockout mice did not demonstrate a lower latency in the hot plate test, yet showed a lower pain threshold in the von-Frey test 46 . This discrepancy of results might be explained by different species used and different physiological conditions, as IL-4 seems to exert an antinociceptive effect more prominently in hyperalgesic conditions. From a translational point of view, a clinical study has found lower proteic and mRNA expression of IL-4 and IL-10 in patients with chronic pain 47 .
PSD is known to cause a low-grade inflammation in rats primarily through an elevation of the cytokines IL-1β, IL-6, and TNF-α compared to controls [48][49][50][51] . However, data from the literature is contradictory, possibly due to differences in sleep deprivation models, protocol specificities, and different animal strains used. In our study, 72 h of PSD did not have an independent effect on TNF-α and IL-6 levels. On the contrary, the SAL-PSD group showed a decrease in the TNF-α levels compared to the SAL-CTRL group, contributing to the statistical difference found between ABST-PSD and SAL-PSD. We should consider that our study design involved chronic daily subcutaneous injections (twice a day), which is not the same as having naïve animals. All animals displayed similar high levels of corticosterone at the end of the protocol (data not shown), indicating that the daily injections were possibly stressful to the animals and should be considered in the interpretation of the data. Taking into consideration the possible effect of stress in a context of a pro-inflammatory stimulus, we found evidence from the literature that may help explain the unexpected finding of TNF-α. Rats exposed to heat stress or to sodium arsenite 18 h prior to lipopolysaccharide (LPS) administration, a well-known pro-inflammatory stimulus, had significantly lower levels of plasma TNF-α -instead of higher -leading to a decreased mortality and lung injury 52 . This finding suggests a protective or preconditioning effect of stress response before a pro-inflammatory stimulus. Thus, we could speculate that the stress involved in the subcutaneous daily injections altered the expected pro-inflammatory effect of PSD in saline-treated animals as a preconditioning stimulus.
Nicotine, on the contrary, exerts an anti-inflammatory effect in animal models, decreasing the levels of pro-inflammatory cytokines, namely, IL-1β, IL-6, TNF-α, and IL-17 13,14,53 . In our study, the lack of statistical differences in plasma concentrations of pro-inflammatory and anti-inflammatory cytokines between NIC-CTRL and SAL-CTRL groups may be due to the lack of an injury stimulus, since the previous studies in which nicotine showed an anti-inflammatory action were performed in different contexts, such as LPS administration, virus infection, and lung injury 13,14,53 . The anti-inflammatory effects of nicotine seem to be modulated by the cholinergic anti-inflammatory pathway 15 , mainly through α7 nAChRs in immune cells. Prolonged exposure to nicotine inactivates both α7 and α4β2 nAChRs [54][55][56] . Thus, another possibility could be the development of tolerance to the anti-inflammatory effects of nicotine via α7 desensitization induced by the chronic nicotine treatment, contributing to the effect of nicotine abstinence.
Our study indicates that nicotine abstinence combined with PSD may synergistically increase IL-6 levels. Vagal tone is decreased in sleep-deprived rats 57 , which, in addition to desensitization of α7 receptors in response to repeated nicotine administration, might explain the synergistic interaction effect of nicotine withdrawal and PSD on plasmatic concentrations of IL-6. Our data suggest that, during nicotine abstinence, sleep deprivation may predispose the organism to inflammation.
It is important to consider the limitations of the current study. Although we followed a very consistent and standardized protocol of mecamylamine-induced abstinence, we did not assess the behavioral signs of nicotine withdrawal in the animals. We recognize that the use of additional tests based on mechanical or chemical pain sensitivity could avoid a ceiling effect of PSD in the pain threshold assessment, and possibly allow a further understanding of the interactions between PSD and nicotine administration/withdrawal. Also, the concentrations were determined in animals that had undergone the hot plate test and an effect of test exposure on cytokine concentrations should not be ruled out. Additionally, central nervous system cytokine concentrations could yield additional information about neural pathways underlying the interaction between PSD and nicotine and its effects on nociception. Lastly, we did not include a reinstatement group of nicotine after its withdrawal.
Conclusion
Our study confirmed the previously observed effects of nicotine and PSD on nociception and showed that the PSD-induced pronociceptive effects largely prevailed over nicotine-induced antinociception. When associated with PSD, however, nicotine abstinence synergistically increased IL-6 levels and independently decreased pain threshold. Higher levels of IL-4 were independently associated with higher pain threshold.

Drugs and Treatment. Nicotine-treated animals received 3.2 mg/kg/day of nicotine (nicotine hydrogen tartrate, Sigma, USA) in 2 daily subcutaneous (s.c.) injections for 12 days. This dose has been known to reliably induce nicotine dependence [59][60][61] . In order to establish nicotine abstinence, rats were first sensitized to nicotine during 7 days, a period that has been shown to be sufficient to induce sensitization to nicotine in adult rats 62,63 . Animals in the abstinence group were then acutely treated with mecamylamine hydrochloride twice a day (s.c.). Mecamylamine is a non-competitive nicotinic cholinergic receptor antagonist that precipitates symptoms of nicotine abstinence in sensitized rats 1 day after its administration 64 . A previous study demonstrated significant effects of mecamylamine dose on intracranial self-stimulation thresholds and total somatic signs (withdrawal-like signs), with significant increases in both measures at the 1.5 mg/kg dose 64 . The nicotine dose chosen for inducing abstinence symptoms upon mecamylamine treatment has been previously described elsewhere 64,65 .
Animals.
All drugs were diluted in sterile physiologic saline (0.9%) and had their pH corrected to 7.4. Subcutaneous injections were always administered in a volume of 1.0 mL/kg. Animals from the sham groups were treated with saline similarly to the other groups.
Paradoxical Sleep Deprivation. Animals were submitted to 72 h of PSD using the modified multiple platform method 66 . Groups of 5 animals were housed in a water-filled tank (143 × 41 × 30 cm) containing 12 circular platforms (6.5 cm in diameter), whose surface was 1 cm above the water level. Rats could move jumping from one platform to another. When animals reach paradoxical sleep phase, they experience loss of muscle tone and fall into the water, being awakened. Groups of 5 sleep control animals were housed in home cages out of ventilated racks in the same room as the PSD animals during this protocol. We elected to choose neither a too long (96 h) nor a too short period (24 h) of PSD. However, the literature is not consistent about the effects of 48 h of PSD on thermal pain sensitivity 67 . Asakura and colleagues did not find significant differences in the latency of paw withdrawal in the hot plate test after 48 h of PSD. With regard to 72 h of PSD, however, most of the studies showed a significant hyperalgesic effect in thermal pain sensitivity 9,68 . Thus, we chose 72 h of PSD since it would certainly lead to hyperalgesia in the hot plate test and not be considered ethically aggressive as 96 h of PSD.
Nociceptive Evaluation. Thermal pain sensitivity was evaluated using the hot plate test 69 in a protocol previously used in sleep-deprived animals 9,70 . During the test, each animal was individually placed in a 50 °C-heated hot plate apparatus, and the latency to paw withdrawal was measured as an estimation of pain threshold. At the first sign of paw withdrawal, i.e., a behavior of paw licking or jumping as an attempt to escape, the rat was removed from the hot plate. The test had a maximum duration of 90 s to avoid paw lesions and burns. The hot plate was cleaned with 30% ethanol between each test.

Sample Collection. At the end of the experimental protocol (Day 12), rats were rapidly decapitated with minimum discomfort after the hot plate test. The euthanasia schedule was standardized at 8 h. Blood was collected into sterile tubes with liquid EDTA and centrifuged at 4 °C and 1300 g for 10 min to obtain separated plasma. Plasma samples were frozen at −20 °C for further analyses.
Cytokines Concentrations. For cytokine quantification, the Luminex ® platform (Millipore, USA) was used following manufacturer's instructions. Milliplex ® Map kits (Rat Cytokine/Chemokine Panel) were used to determine the plasma concentration of interleukin (IL)-4, IL-6, IL-8, IL-10, and tumor necrosis factor (TNF)-α. Briefly, each cytokine binds to its specific antibody-coated microsphere, which contains 2 fluorochromes. This combination of fluorochromes allows for the determination of which cytokine is bound to each microsphere. Additionally, another antibody binds to the cytokine-microsphere complex, thus determining the concentration of each cytokine.
Statistical Analysis.
All continuous data were firstly tested for normality and homogeneity. Variables without normal distribution or homogeneity among the groups were standardized by z-score. Between-group comparisons were performed using 2-way analysis of variance (ANOVA) test considering treatment (SAL, NIC or ABST) and sleep condition (CTRL or PSD) as independent variables. For treatment effect or interaction effect, the statistical difference was calculated using Bonferroni post-hoc. Body weight at the beginning and the end of the protocol was compared by repeated measures 2-way ANOVA, followed by Bonferroni's post-hoc if necessary. To further understand the independent effects of treatment, sleep condition and its interactions, multiple analysis of covariance (MANCOVA) was performed for latency to paw withdrawal (dependent variable) with adjustment for the potential covariates (delta body weight, TNF-α, IL-4 and IL-6). Pairwise comparisons were performed by Bonferroni's post-hoc. Correlations between continuous data were calculated through Pearson's correlation test. Finally, a generalized linear model with tweedie distribution and log link function was applied to establish predictor variables for pain sensitivity using latency to paw withdrawal as the dependent variable and body weight variation, treatment, sleep deprivation, and cytokines as the independent variables. For statistical significance, we adopted α = 0.05.
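As a rough illustration of the regression approach described above, the sketch below fits a generalized linear model with a Tweedie distribution and log link using the Python statsmodels package; the data frame, variable names, and variance power are assumptions made for illustration (the text does not state the software or the Tweedie variance power used), and the synthetic values do not reproduce the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic data frame mimicking the study design (values are placeholders)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "latency": rng.gamma(shape=5.0, scale=3.0, size=60),   # paw withdrawal latency (s)
    "psd": np.repeat([0, 1], 30),                           # 0 = CTRL, 1 = PSD
    "treatment": np.tile(["SAL", "NIC", "ABST"], 20),
    "il4": rng.normal(32.0, 8.0, size=60),                  # plasma IL-4 (pg/mL)
    "delta_weight": rng.normal(-10.0, 5.0, size=60),        # body weight variation (g)
})

# GLM with Tweedie family; the default link for the Tweedie family is the log link.
# var_power = 1.5 is an assumed choice for a positive, continuous outcome.
model = smf.glm(
    "latency ~ C(psd) + C(treatment) + il4 + delta_weight",
    data=df,
    family=sm.families.Tweedie(var_power=1.5),
)
result = model.fit()
print(result.summary())
```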
|
v3-fos-license
|
2020-10-21T13:09:49.029Z
|
2020-10-19T00:00:00.000
|
226595438
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2071-1050/12/20/8669/pdf",
"pdf_hash": "c57531c38c526f3f602ddf102208c0b191b7717e",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43119",
"s2fieldsofstudy": [
"Business"
],
"sha1": "081ae00fccbd75739d24bdd55d1e5360bdec7714",
"year": 2020
}
|
pes2o/s2orc
|
A Study on the Transformation of Accounting Based on New Technologies: Evidence from Korea
This study classifies the new accounting technologies into Cloud, Artificial Intelligence, Big Data, and Blockchain, and introduces cases of Korean companies applying these technologies to their accounting processes. The purpose of this study is to help readers understand accounting technologies and to provide examples of their adoption in actual practice. To achieve this aim, a systematic review of major academic publications, professional reports, and websites was used as the research methodology. To select the cases, the study reviewed articles from major Korean business and economic newspapers. This study provides evidence from Korea for companies contemplating the transformation of their accounting process using technology; such companies can consider the cases presented in this study as a benchmark. It also offers guidance on the application of technologies to accounting practice for businesses and related researchers. The technology transformation is expected to accelerate, especially after COVID-19, so it is necessary to understand these technologies and explore ways to apply them effectively. Further, while new technologies offer many opportunities, the associated risks and threats should also be addressed.
Introduction
In recent years, the technological advance of Artificial Intelligence (AI), Big Data, and Cloud has become the core of the era of Industry 4.0 worldwide. This new trend has reduced the need for employment in specific areas, and the possibility of changing industrial structures has been greatly addressed (Frey and Osborne, 2017) [1]. According to PwC's Global Industry 4.0 Survey (2016) [2], the efforts of responding to Industry 4.0 will have significant impacts in all areas of industry, such as cost reduction, efficiency improvement, and profit expansion. For example, AI has functions of Robotic Process Automation (RPA) and Deep Learning (DL), and this will enable computer processing capability to be significantly improved. A process that required a considerable amount of time in the past can be done instantly now. These new technologies have already been introduced in various areas. Piccarozzi et al. (2018) [3] reviewed the topics of Industry 4.0 in management literature and stated that the Fourth Industrial Revolution leads to adopting information technologies in manufacturing and services in a private environment. Milian et al. (2019) [4] address financial technologies such as fintech, and Arundel et al. (2019) [5] discuss technological innovation adopted in the public sector. Rikhardsson and Yigitbasioglu (2018) [6] address business intelligence and big data analytics in management accounting areas as well. The new technologies have been adopted not only in corporations and in private sectors including autonomous driving, business support, and marketing, but also in national institutions and public sectors such as education, finance, fintech, medical care, environment, security, the military, and so on.
In order to achieve the aim of this study, a systematic review of the literature discussing the adoption of new technologies in accounting is used as the methodology. Therefore, the next section briefly discusses the background of accounting technologies and prior studies. Section 3 presents the research methodology of a systematic review of the literature. Section 4 covers Korean cases that have adopted Cloud, AI, Big Data, and Blockchain. Section 5 integrates the findings of this study and provides a framework on the impact of technologies on accounting. The final section includes the conclusion, limitations of this study, and closing remarks.
Backgrounds of Accounting Technologies
Many research papers and reports discussed accounting technologies, defining them in various ways. The Association of Charted Certified Accountants (ACCA) and the Institute of Management Accounts (IMA) reported on the future of accounting titled "Digital Darwinism" [7]. This report discussed 10 technology trends with the potential of significantly reshaping the business and professional environment, namely, Mobile, Big Data, AI and robotics, Cybersecurity, Educational, Cloud, Payment systems, Virtual and augmented reality, Digital service delivery, and Social (ACCA/IMA, 2013; 10) [7]. On the one hand, the Institute of Chartered Accountants in England and Wales (ICAEW) identified AI, Big Data, Blockchain, and Cybersecurity as technologies transforming the accounting industry (IFAC, 2019) [8]. Forbes (2018) [9] reported that harnessing the power of the Cloud, accelerating automation, breakthroughs via Blockchain were the future accounting trends. Accounting technology can be identified in various ways, and these technologies are not limited to accounting. In fact, they are used actively in other areas, such as autonomous vehicles, financial engineering, and so on.
There are many articles presenting the advantages of Cloud technology in accounting. According to Ionescu et al. (2013) [10], simplification of accounting documents and migration of certain accounting operations to cloud-based electronic platforms have significantly changed accounting information system. Ionescu et al. (2013) [10] verified the cost saving generated by the utilization of a cloud computing-based application and stated that this is an important and relevant criterion when selecting the internet-based accounting solution. Christauskas and Miseviciene (2012) [11] believed that digital technologies including Cloud potentially increase the quality of the business-related decision process. In addition, Phillips (2012) [12] mentioned that clients and accountants could always be communicated with through the Cloud. The security of data can be ensured by the cloud provider and the risk of unsynchronized data can be eliminated.
The Financial Stability Board reported that AI technology would enable accountants to focus on more valuable tasks such as decision-making, problem solving, advising, strategy development, and leadership (FSB, 2017) [13]. Deloitte (2017) [14] presented that RPA accelerates greater automation of processes and AI improves productivity in the public sector. Accuracy and efficiency can be increased, and operating costs and time reduced, in performing accounting tasks and processes with AI technology. AI can provide higher quality information through machine or deep learning and contribute to generating more transparent accounting information [15-17]. Big Data also presents many important implications for accounting. Warren et al. (2015) [18] stated that video and image data, audio data, and textual data are different types of Big Data that supplement existing accounting records, and this information made available through Big Data can improve accounting practices. In an increasingly complex and high-volume data environment, the use of technology and Big Data analytics offers greater opportunities in all accounting areas. For example, auditors can obtain a more effective and robust understanding of the entity and its environment, and enhance the quality of the auditor's risk assessment and response (IAASB, 2016: 7) [19].
Prior studies stated that Blockchain is a technology with a direct impact on the accounting profession. Blockchain is a data decentralization-based technique (Raval 2016) [20]. Various data are saved in a list of records called blocks, and these blocks are linked like chains using cryptography. It is an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way (Iansiti and Lakhani 2017) [21]. Once recorded, the data in any given block cannot be altered retroactively without alteration of all subsequent blocks, which requires consensus of the network majority (Raval 2016) [20]. Therefore, modification, alteration, and manipulation of data become improbable (if not impossible). PwC (2016) [22] presented that Blockchain is considered the next generation of business-processing software, where transactions are shared among customers, competitors, and suppliers. Particularly, Blockchain, with functions that enable data integrity, rapid processing and sharing, and programmatic and automatic control processing, will significantly contribute to developing new accounting systems.
Given these studies mentioned above, this study summarizes the new technologies ushering significant changes in accounting into Cloud, AI including RPA and ML, Big Data, and Blockchain. In the following section, I present the research methodology and explain the selection backgrounds and how these technologies and cases are adopted for this study.
Research Methodology
For this study, I followed the guidelines for a systematic literature review described by Schmitz and Leoni (2019) [23]. Using their two-phase approach, academic research, professional reports, and websites were reviewed. Then, a thematic analysis to identify the main themes and topics was performed.
First of all, this study performed a systematic review of academic publications and literature on the new technologies in accounting areas. The review period ranges from 2000 to 2019. I collected the relevant publications by searching on Google Scholar with the keywords of "new technology," "Industry 4.0," and "digital transformation". From this initial process, only peer-reviewed academic journal articles and book chapters written in English and Korean were considered. Following Cockcroft and Russell's (2018) [24] approach, I conducted a comprehensive screen of search results and excluded the publications whose content was not related to technology in the accounting domain. The result can be summarized in 22 academic publications that explicitly address accounting technologies, and Table 2 shows the list of selected academic sources; its entries include, for example, Seo and Kim (2016), Schmitz and Leoni (2019), Shin (2017), Vasarhelyi et al. (2010), Warren et al. (2015), and Yook (2019). In order to provide a more comprehensive perspective of the current development of practical applications of new technologies in the accounting industry, I further searched for professional reports and websites of the major professional accounting firms and associations worldwide. I defined the main professional accounting associations as the American Institute of Certified Public Accountants (AICPA), Association of Chartered Certified Accountants (ACCA), Institute of Chartered Accountants in England and Wales (ICAEW), Financial Stability Board (FSB), International Auditing and Assurance Standards Board (IAASB), International Federation of Accountants (IFAC), and the Big 4 audit firms including PwC, Deloitte, KPMG, and EY. I retrieved all relevant online sources from the websites of AICPA, ACCA, ICAEW, FSB, IAASB, IFAC, PwC, Deloitte, KPMG, and EY. Similar to the academic literature search, online sources that mentioned "new technology", "Industry 4.0", and "digital transformation" in accounting and auditing areas were selected. This web-based analysis resulted in a total of 10 publicly available sources in websites and reports, and Table 3 summarizes the results. This analytical process resulted in a list of four key technologies that can have a significant impact on accounting. The key topics are (1) Cloud, (2) AI, (3) Big Data, and (4) Blockchain. Besides Schmitz and Leoni's (2019) [23] two-phase analysis, this study performed the additional step of analyzing major Korean newspaper articles to select cases adopting new technologies in the Korean market. I first searched for articles reporting new technologies in the financial or accounting sectors in 5 major Korean business and economic newspapers. Table 4 presents the cases reported, related topics, and the number of reports in the newspapers for the period from August 2017 to August 2020. Based on the systematic literature review performed above, in the following Section 4, this study explains each of the four technologies selected, focusing on presenting cases in which these technologies were adopted in the Korean market. Although each technology is described separately for convenience, these technologies are intertwined with, and difficult to clearly separate from, each other. For example, the key driver of the development of AI is Big Data (Cho et al., 2018) [17], and combining Cloud computing with Big Data provides countless opportunities and benefits.
Cloud-Based Accounting
Cloud is an internet-based technology resource, offering software applications, computing power, and data storage provided remotely as a service (ACCA/IMA, 2013: 10) [7]. Cloud accounting is an online accounting information system based on cloud computing, and customers use computers or other devices to achieve accounting and financial analysis functions (Feng, 2015: 207) [25].
The major objective of an accounting information system is collection and booking of data and information related to events with an economic impact on the organization, as well as the management, processing, and disclosure of information to internal and external users (Christauskas and Miseviciene, 2012) [11]. Therefore, the accounting system plays a key role in providing financial information used in the decision-making process (Christauskas and Miseviciene, 2012; Ionescu et al., 2013) [10,11]. Cloud is expected to play a key role in collecting and producing accounting data and information. Figure 1 shows the communication models between accounting firms and clients presented by Phillips (2012) [12]. He stated that in the past, accountants communicated with their clients through FTP (file transfer protocol), RDP (remote desktop protocol), emails, or in-person meetings. The accounting process was inefficient, expensive, time consuming, and highly complex. However, recent accounting systems through Cloud simultaneously enable both clients and accountants to efficiently perform their jobs, ensure data security, improve data synchronization, and reduce the risk of unsynchronized data.
Oracle ERP Cloud is widely used by large companies in the Korean market as it enables companies to handle large volumes of data on the cloud without their own data center. Oracle recently announced new Cloud Applications, including AI, digital assistant, and Analytics designed to experience benefits of these technologies, such as saving costs and improving productivity and management capabilities. Companies that adopt this system can make predictions by identifying and utilizing trends and patterns in financial and operational data, enabling timely decisions. The system automatically recognizes financial documents, such as PDFs, and minimizes manual invoicing operations, maintaining high accuracy even when the business environment changes frequently. This One of the well-known cloud accounting systems is Enterprise Resource Planning (ERP). It is managed collectively, including all information from the company as well as supply chain management and customer order information. Accounting forms the core of the ERP system because accounting data are the key information managing all levels of business in an integrated manner. Therefore, they must be accurately aggregated.
In an interview with the Financial News Korea, Juergen Lindner, senior vice president of Oracle, mentioned the impending drastic changes due to the pandemic, and that Cloud could make prompt response to crises possible, strengthening the resilience of recovery. He also mentioned that the number of cases of adopting Cloud applications increased since COVID-19 (Kim A, 2020) [26].
Oracle ERP Cloud is widely used by large companies in the Korean market as it enables companies to handle large volumes of data on the cloud without their own data center. Oracle recently announced new Cloud Applications, including AI, a digital assistant, and Analytics, designed to deliver the benefits of these technologies, such as cost savings and improved productivity and management capabilities. Companies that adopt this system can make predictions by identifying and utilizing trends and patterns in financial and operational data, enabling timely decisions. The system automatically recognizes financial documents, such as PDFs, and minimizes manual invoicing operations, maintaining high accuracy even when the business environment changes frequently. It is also used in joint ventures where disputes among partners are frequent; such disputes can be resolved with the increased accounting visibility. Additionally, information technology (IT) processing costs have been reduced by more than 60% (Kim A, 2020) [26].
While Oracle provides Cloud services mainly to large-scale companies, WebCash is a Korean fintech start-up that runs a business-to-business (B2B) e-commerce fintech platform. WebCash offers a different platform for each company size: a simpler accounting program called "Gyeongninara" for sole or small business owners and a more integrated accounting system called "Branch" for mid-sized business owners. Gyeongninara in particular is quickly gaining ground in the market since it enables accounting, taxation, receipt management, payment, and remittance all at once. The Cloud server automatically records purchases and sales, and collects detailed transaction data by client so that outstanding or unpaid bills can be managed automatically. Meanwhile, client companies experience a 90% reduction in report-related work because receipts of corporate credit cards are collected automatically, corporate credit cards are managed in an integrated manner, and a real-time trial balance is provided through mobile devices. Consequently, WebCash's operating income since COVID-19 increased by 39% compared to the previous year (Kim G, 2020) [27].
Cloud services fall into three categories: Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). Any user of a free email service or social networking tool has used SaaS and stored some of their data in a "public cloud," where IT resources such as software, computing power, data storage, and related services reside on third-party computers. These resources are managed and maintained so that they are available "on demand" to any individual or organization; some are made available free to the end user, and some must be paid for (so-called "freemium"). IaaS means that users access remote computers and use them for storing data and performing computer-based processes. PaaS implies online access to the software and hardware needed to design, develop, test, and deploy applications, to application hosting, and to various associated services (ACCA/IMA, 2013) [7]. WebCash provides cloud services based on PaaS.

Bizplay is another Korean firm, one that provides cloud-based Software as a Service (SaaS). Bizplay clients download the app, and all members of the organization can access the system. This lets them handle all expenses of registered corporate cards, manage and process receipts, and check the usage history and credit card limit in real time, eliminating tedious and unproductive repetitive tasks for the organization's financial accounting team. The Bizplay app automatically matches the usage records of each employee and generates quality reports with paperless receipts. Thus, the financial accounting staff can monitor the status of company expenses in real time. Additionally, since COVID-19 has rendered working from the office difficult, Bizplay permits employees to handle related expenses without having to go to the office by simply uploading business-related documents using a smartphone camera. Bizplay also offers customized services for customers' convenience, such as the Pro program for companies that want to manage their budgets more thoroughly alongside the Lite program for small businesses (Lee S, 2020) [28].
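To make the receipt-matching step concrete, the minimal sketch below pairs corporate-card transactions with uploaded receipts by employee, amount, and date. It is an illustration only; the field names and the exact matching rule are assumptions and do not describe Bizplay's actual implementation.

```python
from datetime import date

# Hypothetical card transactions pulled from the card company's feed
card_transactions = [
    {"employee": "J. Park", "amount": 45000, "date": date(2020, 9, 1), "merchant": "Taxi Seoul"},
    {"employee": "J. Park", "amount": 120000, "date": date(2020, 9, 2), "merchant": "Hotel Busan"},
]

# Hypothetical receipts uploaded by employees via the mobile app
receipts = [
    {"employee": "J. Park", "amount": 45000, "date": date(2020, 9, 1), "image": "receipt_001.jpg"},
]

def match_receipts(transactions, receipts):
    """Pair each card transaction with an uploaded receipt on employee, amount, and date."""
    matched, unmatched = [], []
    for tx in transactions:
        hit = next((r for r in receipts
                    if r["employee"] == tx["employee"]
                    and r["amount"] == tx["amount"]
                    and r["date"] == tx["date"]), None)
        (matched if hit else unmatched).append({**tx, "receipt": hit})
    return matched, unmatched

matched, unmatched = match_receipts(card_transactions, receipts)
print(f"{len(matched)} matched, {len(unmatched)} awaiting a receipt")
```

A production system would tolerate small date or amount differences and handle split payments, but the principle of replacing manual reconciliation with an automatic join is the same.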
These cloud-based accounting innovations have brought changes not only to financial reporting but also to the management accounting aspects related to inventory management. Orion, one of the leading bakery companies in Korea, reported a product return rate of 0.6% in the first quarter of 2020, whereas the average return rate in the Korean bakery industry is usually 2-3% (Lee H, 2020) [29]. This was nearly 80% lower than its rate 4 years earlier (Lee H, 2020) [29]. Orion's sales and operating income in the first quarter increased by 7% and 29%, respectively, from the same period of the previous year, and this result was derived from the use of cloud-based point-of-sale (POS) data; POS refers to a retail store, a cashier at the store, or the location where the transaction occurred, and more specifically to the hardware and software used for checkouts (Sularto et al., 2015) [30]. POS data are real-time recorded data about goods sold. Orion increased its sales through real-time identification of rapidly changing consumption trends. The return rate declined significantly because inventory was reduced in the production plan by reflecting the consumption data in real time. New products with a poor consumer response were promptly halted to minimize the costs of returns, while production of new products with a good consumer response was increased for more sales opportunities. The use of POS data is not a new concept. However, it is not easy for manufacturers to collect and utilize retailers' point-of-sale information, and many companies rely mainly on information from market research institutes that is already a month old, or utilize POS data from limited distribution channels. Orion was able to utilize meaningful data through continuous improvement of its system since 2015, establishing a web-like collection network for each distribution channel as well as large wholesalers, supermarkets, and convenience stores (Lee H, 2020) [29].
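The inventory logic described here can be illustrated with a small, hypothetical sketch: daily POS sales per product are aggregated into a sell-through rate, and the next production run is scaled up or down accordingly. The products, figures, and thresholds below are invented for illustration and are not Orion's actual rules.

```python
import pandas as pd

# Hypothetical daily POS records aggregated from retail channels
pos = pd.DataFrame({
    "product":       ["ChocoSnack", "ChocoSnack", "NewPie", "NewPie"],
    "units_sold":    [1200, 1350, 80, 60],
    "units_shipped": [1300, 1300, 400, 400],
})

summary = pos.groupby("product")[["units_sold", "units_shipped"]].sum()
summary["sell_through"] = summary["units_sold"] / summary["units_shipped"]

def next_production(shipped: int, sell_through: float) -> int:
    """Illustrative rule: cut the plan for weak sellers, boost it for strong ones."""
    if sell_through < 0.7:     # poor consumer response -> reduce or halt production
        return int(shipped * 0.5)
    if sell_through > 0.95:    # strong response -> produce more to capture sales
        return int(shipped * 1.2)
    return shipped

summary["next_plan"] = [
    next_production(s, t) for s, t in zip(summary["units_shipped"], summary["sell_through"])
]
print(summary)
```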
eBlueChannel is another Korean company that utilizes cloud-based POS data for inventory management. It is a data-based integrated drug management solution that connects manufacturers, pharmacies, and wholesalers. In this medical care solution, pharmacies, hospitals, and pharmaceutical manufacturers are all connected, so they can quickly identify drug and medicine inventory. It is a complete medical platform combining all three categories of cloud services (SaaS, IaaS, and PaaS). eBlueChannel manages the entire process related to a drug, from the manufacturer to the merchant pharmacy, with a QR code. Since every step requires scanning the QR code, the platform can analyze systematic sales data, simplify the payment process, automate the ordering system by predicting consumption, and reduce unused inventory to zero. Traditional medication management programs are limited to the management of ethical, prescription-only drugs (ETC), making it almost impossible to manage general, over-the-counter (OTC) medicines. Because pharmacists carry a heavy workload of preparation, patient response, and payment, inventory management of medicines is difficult for them. There are also social risks, such as illegal drug manufacturing or the distribution of expired medicines through the mass purchase of OTC medicines. eBlueChannel provides inventory management functions for both ETC and OTC medicines, allowing a comprehensive check of inventory status and inventory information as well as management of the exact expiration date of each medication. It also maximizes management efficiency through a data analysis system that supports the entire sales, payment, ordering, and statistical analysis process. eBlueChannel is an example of upgrading the value of the medical business by realizing an automated process using new technology (eBlueChannel, 2020) [31]. eBlueChannel's technology is very similar to the system built during the COVID-19 pandemic to show the available stock of surgical masks in drug stores.
As mentioned above, cloud computing enables companies to handle large volumes of data without their own data center and to manage the key information of the business in a more integrated manner. Recent cloud-based accounting systems simultaneously enable both clients and accountants to efficiently perform their jobs, ensure data security, improve data synchronization, and reduce the risk of unsynchronized data. Cloud is expected to play a key role in collecting and producing accounting data and information. Cloud is not just a technology; it is a trend leading the digital transformation of accounting. Cloud computing combined with Big Data has brought numerous opportunities and benefits, and Big Data is the critical driver of the development of AI. Digital transformation does not merely mean a simple adoption or acceptance of technologies; rather, it refers to a larger, structural shift commingled with the new technologies.
AI-Based Accounting Process
Artificial Intelligence (AI) has functions, such as machine learning and deep learning, that allow users to process a massive amount of work in a very short period, using significantly improved computer processing ability (Yook, 2019) [32]. AI has been introduced in various fields, including autonomous driving, medical care, business support, finance, education, marketing, environment, security, and the military. In accounting in particular, AI promotes the analysis and utilization of accounting data, enabling quicker, higher-level analysis whose results can be linked to management strategies or various other initiatives. It becomes possible to detect fraudulent transactions, to suspect irregularities or errors early on, and to recognize and correct them before problems occur (Yook, 2019) [32].
South Korea is currently finding ways to actively introduce AI to prevent accounting fraud. In the conference of Research on the Future Accounting, Shin (2017) [33] presented several key issues in building fraudulent accounting detection systems. First, it is important to build a database of fraudulent financial statements. Collecting financial fraud-related data is critical because such data can be an important source for AI using machine learning. Standardizing and connecting the data is also crucial; therefore, establishing eXtensible Business Reporting Language (XBRL) is essential. The second issue is the use of unstructured or semi-structured data. Most systems have used structured (numeric) data, such as financial ratios. However, a knowledge acquisition approach using various other sources of data, such as verbal data from news, social network services (SNS), and footnotes, should also be pursued alongside it.
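As a hedged sketch of the kind of detection system described above, the snippet below trains a classifier on structured financial ratios. The features, labels, and synthetic data are invented purely for illustration; a real system would also incorporate the unstructured sources (news, SNS, footnotes) mentioned above and would rely on a curated database of actual fraud cases.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical structured features per firm-year: ratios often used in fraud research
X = rng.normal(size=(500, 4))          # e.g., accruals ratio, leverage, ROA, receivables growth
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 1.5).astype(int)  # 1 = flagged

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```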
The amount and quality of data are very important to establish an automated accounting system applying machine learning and artificial intelligence techniques. Data related to accounting fraud are lacking so far, research results are unreliable, and data quality is not guaranteed. This limits the practical use of these technologies in financial accounting and audit. However, there are examples of the use of AI techniques in taxation and managerial accounting.
In February 2018, AGREEMENT, an accounting software supplier, announced a management accounting enhancement solution called "Attack Board." This is a system in which all employees and managers can access and visualize key performance indicators (KPI; a Korean performance evaluation practice adopted from the concept of the Balanced Scorecard (BSC)), set up to achieve the business plan and goals. To develop this accounting solution, AGREEMENT established a service that performs revenue forecasting using AI, based on a risk-approach framework that aims to identify hidden risks behind the business and to utilize the aggregation and reporting of the collected data. Specifically, the company developed a service that uses AI to predict sales by visualizing, as scores, various qualitative risks that can be related to sales risk, linking the evaluation of intangible assets or positive factors with statistical data. Using the risk approach, the system predicts future figures from historical data considered to impact or correlate with sales. "Attack Board" uses a variety of machine learning algorithms that can be handled in Python to make sales forecasts. Multiple algorithms can be used in combination, while each parameter is adjusted to produce more precise predictions, thereby clarifying the issues that need to be improved and contributing to achieving sales goals (Yook, 2019) [32]. Further, if actual performance differs from the plan, variance analysis supports appropriate resource reorganization or re-examination of the process.
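In the spirit of what is described for "Attack Board", the minimal sketch below combines several Python machine-learning algorithms into one sales forecast. The features, models, data, and weights are assumptions, not the vendor's actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical monthly history: [prior-month sales, qualitative risk score, marketing spend]
X = rng.normal(loc=[100, 0.5, 10], scale=[20, 0.2, 3], size=(60, 3))
y = 0.8 * X[:, 0] - 30 * X[:, 1] + 2 * X[:, 2] + rng.normal(scale=5, size=60)

# Combine multiple algorithms, each with its own parameters, into one forecast
forecast_model = VotingRegressor([
    ("linear", Ridge(alpha=1.0)),
    ("forest", RandomForestRegressor(n_estimators=200, random_state=1)),
]).fit(X, y)

next_month = [[110, 0.4, 12]]
print("Forecast sales:", forecast_model.predict(next_month)[0])
```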
AI is also used to estimate target costs. Prediction or estimation is a process of filling in missing information and generating information that does not currently exist, based on the data currently possessed. Until now, the following three methods have commonly been used to estimate product cost: an analog method that estimates the cost of a new product by comparison with similar products produced or purchased in the past; analytical methods that estimate optimized theoretical costs by modeling manufacturing processes; and parametric methods that estimate product and service costs through statistical modeling according to specific parameters called cost drivers, using a similar product or service history. The last, statistical method, including regression analysis, is particularly useful in the early stages of the life cycle, but it also has limitations because it rarely takes qualitative parameters into account, does not efficiently manage missing data, and requires a complete data set. In comparison, AI provides directions for a new model of cost estimation. Recent advances in algorithms and machine learning have largely resolved the shortcomings of traditional parametric methods and improved performance and applicability. For example, one of the latest statistical methods, the Random Forests algorithm, formally proposed by Breiman (2001) [34] and Cutler et al. (2012) [35], is a nonparametric learning approach. It selects the features used to create each decision tree at random and builds multiple decision trees on the same data, aggregating the results to enhance predictive performance (Yook, 2019; Breiman, 2001; Cutler et al., 2012) [32,34,35]. Representative software that utilizes this AI algorithm is "EasyKost," which helps determine the cost of new products or services in seconds by utilizing massive amounts of data, without requiring specialized knowledge of industrial technologies and processes. Many Korean software companies are trying to establish such cost estimation programs in the Korean market.
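A hedged sketch of the Random Forests approach to cost estimation is given below. The cost drivers (material, weight, volume) and the historical data are hypothetical, and categorical drivers and missing values are handled here with a simple preprocessing pipeline rather than any specific vendor's method.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical history of similar products with their realized costs
history = pd.DataFrame({
    "material":  ["steel", "aluminium", "steel", "plastic", "aluminium"],
    "weight_kg": [2.0, 1.2, np.nan, 0.4, 1.5],
    "volume":    [1000, 500, 800, 2000, 600],
    "cost":      [12.5, 9.8, 11.9, 3.2, 10.4],
})

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["material"]),
    ("num", SimpleImputer(strategy="median"), ["weight_kg", "volume"]),
])

model = Pipeline([
    ("prep", preprocess),
    ("rf", RandomForestRegressor(n_estimators=300, random_state=0)),
]).fit(history.drop(columns="cost"), history["cost"])

new_product = pd.DataFrame({"material": ["steel"], "weight_kg": [1.8], "volume": [900]})
print("Estimated cost:", model.predict(new_product)[0])
```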
Taxation is another area where advantages of AI can be witnessed. Figure 2 shows the differences between manual process, Robotic Process Automation (RPA), and RPA with cognitive technology.
South Korean local governments understand these differences and apply AI to the accounting of collected taxes. For example, the "Standard for Execution of Local Government Expenditure" states that assets with a useful life of less than one year should be accounted for as general administration expense, while other asset purchases are not. It is therefore important to classify each of the numerous items accurately when the expenditure is executed. However, since this is done manually, there is a high likelihood of input error. To obtain high-quality data, there is no other way than to collect information accurately at the beginning of the task (Bauguess, 2017) [16].
If cognitive technology is applied to automatically classify the corresponding content into appropriate accounting items, it may increase the probability of a correct match. The use of cognitive technology can provide the benefits of more accurate prediction, improved resource allocation, anomaly detection, and real-time tracking without manually recognizing patterns or missing key patterns, thereby helping to make better decisions and increase effectiveness (Deloitte, 2017: 10) [36]. Figure 3 is an example of how RPA applies to tax accounting and shows the potential for AI technology to be used in practice. For manual, repetitive, and time-consuming tasks, RPA can be adopted to enable the implementation of tax compliance and reporting-related technology solutions. Currently, there is a problem with local government accounting systems: an abnormal accounting process in which only expenditure occurs without income, because collected taxes are not accounted for when they are collected but in a lump sum when they are expensed. Implementing accounting for tax collection with RPA can lead to a more proper accounting process.
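A small, hypothetical sketch of the classification step discussed above: expenditure descriptions are mapped to accounting items with a text classifier, so that the manual input step (and its error rate) is reduced. The training phrases and account labels are invented for illustration and are not taken from any government system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled descriptions from past expenditure records
descriptions = [
    "printer toner cartridges, consumed within the year",
    "office chairs, useful life five years",
    "copy paper for administration office",
    "laptop computers for staff, three-year useful life",
]
accounts = [
    "general administration expense",
    "asset acquisition",
    "general administration expense",
    "asset acquisition",
]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(descriptions, accounts)
print(classifier.predict(["ballpoint pens and staplers for the finance team"]))
```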
O'Neill (2016) [38] also introduced intelligent technology that analyzes documents and contracts. He noted that KPMG and IBM developed the cognitive computer Watson, and that Watson Analytics reads and summarizes thousands of pages of contract or agreement documents instantly. It shows virtual images, visualizes important parts, and distinguishes between what users should and should not be interested in. Deloitte partnered with Kira Systems to help review contracts and documents and provide supporting evidence. Kira (2018) [39] can visualize the terms of a contract in a picture that can be quickly identified, respond to law revisions, and review anti-bribery and force majeure cases. The system can also scan original documents and easily compare them with summary text. The Kira Review Platform API can understand all files in any format, and the Kira API can be used to recall files and folders on the network or bring in future contracts with the repository. It can search a document with or without a clause and look for specific text within a clause. A built-in model of provisions, such as general terms of contract, compliance, and organization, is implemented, and Kira may also be taught to find legal provisions in a foreign language. Many modifications can be quickly presented in the contract; it is easy to identify revisions and changes across an entire set of contracts and to analyze deviations through data visualization. The Korean government has tried adopting AI technology like Kira for local government contract management. This has the potential to reduce serious disputes in government procurement contracts caused by human error or by the public official in charge failing to check changes. Since several kinds of extensive audits are carried out in the public sector, such as financial statement and performance audits (Cho et al., 2018: 274) [17], Korean local governments intend to actively use data analysis, including AI, RPA, and machine learning.
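A hedged sketch of the clause-detection idea (not Kira's actual method): scan contract text for provision types using simple keyword patterns and report which clauses are present or missing. The provision list and patterns are assumptions; commercial tools learn such provisions from annotated contracts rather than hand-written rules.

```python
import re

# Hypothetical provision patterns; real systems learn these from annotated contracts
PROVISION_PATTERNS = {
    "force majeure": r"force\s+majeure",
    "anti-bribery":  r"anti[-\s]?bribery|corrupt practices",
    "termination":   r"terminat(e|ion)",
    "governing law": r"governing\s+law",
}

def detect_provisions(contract_text: str) -> dict:
    """Return, for each provision type, whether a matching clause was found."""
    text = contract_text.lower()
    return {name: bool(re.search(pattern, text)) for name, pattern in PROVISION_PATTERNS.items()}

sample = "This Agreement shall be subject to the governing law of Korea. Force majeure events excuse delay."
for provision, found in detect_provisions(sample).items():
    print(f"{provision:15s} {'found' if found else 'MISSING'}")
```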
Adoption of Big Data
In addition to the above-mentioned uses of AI technology, the Korean National Tax Service has also actively adopted Big Data technology. It formed a task force to prepare a roadmap for introducing Big Data, established a Big Data center in 2019, and utilized Big Data in all areas of tax administration, including tax payment services and tax investigations (Song, 2017) [41]. It established a new investigation division and enhanced its ability to cope with tax evasion through scientific investigation, such as the development of advanced forensics techniques and Big Data analysis (Korean National Tax Service, 2016) [42]. Examples of the Big Data used by the National Tax Service include regular and non-regular earned income, past tax payments, credit card spending and records, real estate status, car purchase records, overseas fund information shared with more than 60 countries, online market access, online transactions, social media (Twitter, Facebook, Instagram, etc.), web searches, emails, and so on. The service reviews and collects these Big Data, analyzes them, and compares them with previous tax payments.
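A simple, hypothetical illustration of the cross-checking logic: reported income is combined with observed spending data, and taxpayers whose spending far exceeds what their declared income would support are flagged for review. The fields, figures, and the 1.5x threshold are assumptions, not the National Tax Service's actual criteria.

```python
import pandas as pd

# Hypothetical merged records from tax filings and third-party data sources (KRW per year)
taxpayers = pd.DataFrame({
    "taxpayer_id":     ["A01", "A02", "A03"],
    "reported_income": [40_000_000, 120_000_000, 35_000_000],
    "card_spending":   [38_000_000, 90_000_000, 110_000_000],
    "car_purchases":   [0, 30_000_000, 60_000_000],
})

taxpayers["observed_outflow"] = taxpayers["card_spending"] + taxpayers["car_purchases"]
taxpayers["flag_for_review"] = taxpayers["observed_outflow"] > 1.5 * taxpayers["reported_income"]

print(taxpayers[["taxpayer_id", "reported_income", "observed_outflow", "flag_for_review"]])
```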
Regarding the implementation of policies through the analysis of government 3.0 public data, Korean local governments conduct data analysis focusing on the economy, transportation, culture, communication, and safety (Lee et al., 2015: 10) [43]. The sources of Big Data for local administrative information, which are classified and opened by 244 local governments according to the contents of the service, include policy information, local government data (performance evaluation, audit, etc.), and local administrative statistical information (from 7 departments, including welfare and finance, 27 sectors, and 179 indicators) (Lee et al., 2015: 51) [43]. Korean local governments are indeed building a massive amount of Big Data.
Additionally, both public officials and accounting experts consider the national financial support program an area very likely to utilize Big Data in financial accounting (Kwon et al., 2014: 274) [44]. National financial aid granted by the central government is recorded in the e-program, and all funds are managed in the e-program from the budget to the expense stage, enabling public officials to check the status of national funds at any time. Local governments can also automatically send the results to metropolitan and provincial governments through the e-program. Therefore, the government can effectively manage the illegal execution of funds, such as embezzlement, bribery, poor execution, and fraudulent documents (Seo and Kim, 2016) [45].
Warren et al. (2015) [18] stated that video, image, audio, and textual data are all Big Data that supplement existing accounting records. Such data are also used to manage public property in Korea. Suwon City, Gyeonggi Province, introduced a new land investigation program using drones to secure accurate taxation data, shortening the investigation time and supplementing insufficient administrative capacity (Kim A, 2018) [46]. Until now, the land investigation method for imposing taxes relied on public officials visiting the site, comparing the land register with aerial photographs taken a year earlier, or receiving reports. Now, however, they use drones to find land that is being used illegally, differently from the original filing, and collect acquisition taxes on it (Kim I, 2018) [47].
Big Data affects all areas of accounting, including taxation. Conducting audits using Big Data is advantageous to auditors. According to the PwC (2015) [48] report, data analysts are changing audit procedures with the new systems, and auditors can use more information, including financial and non-financial data sets, and visualize the meaningful data. According to Vasarhelyi et al. (2010) [49], Big Data analytics can provide continuous audits and help mitigate audit risks. Hoogduin et al. (2014) [50] also state that involving data scientists makes audits more efficient and offers a way to provide new audit evidence not applied in the past. In financial accounting, Big Data will improve the quality and relevance of accounting information. Various accounting programs that can utilize Big Data are being actively developed in the Korean industry. For example, Smart A, released by Douzone, applies Cloud and Big Data to allow users to proactively explore meaningful corporate information rather than merely exploring financial information after the fact. Big Data is a new technology that can be used to provide relevant and meaningful information for better decision-making.
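As a hedged illustration of the continuous-audit idea, the sketch below screens a journal-entry stream for two classic audit red flags (entries posted outside business hours and suspiciously round amounts). The rules, thresholds, and data are illustrative only and do not represent any firm's audit methodology.

```python
import pandas as pd

# Hypothetical journal-entry feed pulled continuously from the client's system
journal = pd.DataFrame({
    "entry_id":  [1, 2, 3],
    "posted_at": pd.to_datetime(["2020-06-01 14:05", "2020-06-02 02:40", "2020-06-03 09:30"]),
    "amount":    [1_234_567, 50_000_000, 980_000],
})

journal["after_hours"] = ~journal["posted_at"].dt.hour.between(8, 19)   # outside 08:00-19:00
journal["round_amount"] = journal["amount"] % 1_000_000 == 0            # exact multiples of 1M

exceptions = journal[journal["after_hours"] | journal["round_amount"]]
print(exceptions)
```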
Blockchain-Based Accounting
Blockchain technology was first proposed by Satoshi Nakamoto (2008) [51] as a payment system for encrypted digital currencies. It was used as a security technology for transactions of the cryptocurrency Bitcoin developed in 2009. Consequently, there is a tendency to misinterpret Blockchain technology as a cryptocurrency, perceiving it with a negative image because of cryptocurrency's speculative nature.
However, Blockchain is an information recording technology that uses encryption to prevent forgery or manipulation of information.
Schmitz and Leoni (2019) [23] describe Blockchain technology as an internet-based peer-to-peer network that uses cryptography. Peer-to-peer networks use a distributed application that allocates and shares tasks among peers participating in the network. Because of the distributed nature and its consensus mechanism, Blockchain technology provides a solution to control the ledger of recorded transactions. Every new record is added to existing blocks and these blocks are cryptographically linked. Due to this chain-shaped link, Blockchain can overcome the limits of double-entry bookkeeping such as the need for external assurance on companies' financial statements and the potential for fraud.
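The cryptographic chaining described by Schmitz and Leoni (2019) can be illustrated with a minimal toy ledger: each block stores the hash of its predecessor, so altering any recorded transaction breaks the chain. This sketch is an assumption-laden simplification that omits the peer-to-peer network and the consensus mechanism that real blockchains rely on.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transaction: dict) -> None:
    """Append a new block that is cryptographically linked to the previous one."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "timestamp": time.time(),
                  "transaction": transaction, "prev_hash": prev_hash})

def is_valid(chain: list) -> bool:
    """Re-compute each predecessor's hash and compare it with the stored link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger: list = []
add_block(ledger, {"debit": "inventory", "credit": "cash", "amount": 1000})
add_block(ledger, {"debit": "cash", "credit": "sales", "amount": 1500})

print(is_valid(ledger))                     # True
ledger[0]["transaction"]["amount"] = 999    # tampering with a recorded entry...
print(is_valid(ledger))                     # ...is detected: False
```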
Although Blockchain technology offers these benefits, it has a number of limitations that need to be factored into any business case for adoption. Hughes et al. (2019) [52] identify the following limitations with Blockchain technology: lack of privacy, high costs, security model, flexibility limitations, latency, and governance. Besides, they mention that non-technical limitations including lack of acceptance from legal and regulatory authorities, and lack of user acceptance are also present.
However, Blockchain has recently been adopted in various ways, transforming transactions throughout industry. Fanning and Centers (2016) [53] claimed that Blockchain would significantly impact financial services because of its anti-corruption and information-validity characteristics. It is expected to be widely used in the medical and public sectors as well. Figure 4 shows the areas in which Blockchain can be applied. Korea is also searching for ways to introduce Blockchain technology in line with the changing era. The Korean government announced Blockchain as a core technology of the fourth industrial revolution and is making efforts to promote the technology and educate people about it in order to preempt the future technology industry. On 8 November 2018, a forum was held jointly by the Korea FinTech Association and the government. At this forum, the vice prime minister of Science and Information and Communication Technology (ICT) announced that 40 billion Korean won (US$1 is approximately 1,200 Korean won) had been set aside in the coming year's budget for the Blockchain industry, to be used for educating Blockchain specialists and expanding public demonstration projects, such as livestock product management and privately led pilot projects (Kim GY, 2018) [54].
Blockchain refers to a technology that distributes and stores all transactions and various data of all participants in a shared network. It is very secure, and it is difficult to manipulate transaction records because they are shared by all participants in the network. Blockchain verifies the validity of a transaction and secures reliability of the data (Lee et al., 2019) [55]. Due to its security and decentralization, it is applied to various industrial sectors, such as healthcare, finance, and supply chain management. It can be applied to all forms of record management and contracts in which security and auditability of transactions are important. In addition to financial sectors, such as securities trading, liquidation settlement, and remittance, investment, lending, and commodity exchange, Blockchain technology is also utilized in other areas, such as identification, notarial, ownership, electronic voting, transportation, and distribution (Lee et al., 2019) [55].
Blockchain is also used for identification. Because of Blockchain, the 21-year-old public certification system in Korea was abolished. The public certification system was first introduced in 1999, when internet use was still new, for self-authentication when government agencies issued civil documents or when people engaged in online financial transactions. However, issuance, done through financial institutions, is a complicated process; specific web browsers are required, and integration between computers and smartphones is difficult and inconvenient. The system was abolished on 20 May 2020, as Blockchain technology is expected to become the most commonly used technology for proving identity. As a result, the digital identification (private authentication) market that utilizes Blockchain has entered fierce competition in Korea (Hong, 2020) [56]. Industry and academia classify Blockchain into three stages, depending on the scope of introduction and the degree of activation of the technology: first-generation Blockchain technology (hereinafter Blockchain 1.0) is the generation that started with Bitcoin's operating system; Blockchain 2.0 is the second-generation technology that enables smart contracts; and Blockchain 3.0 is the third generation, in which smart contracts are used in various areas of society, such as the public, political, and economic sectors. Korea has just moved beyond Blockchain 1.0 and is making efforts to promote the technology and legislate for Blockchain 3.0 (Lee et al., 2019) [55].
Korea Trade Network (KTNET) recently implemented uTradeHub, an electronic trade infrastructure. It is a single-window system in which exporters and importers can handle all trade affairs without interruption by linking a number of related agencies, such as commercial banks, the customs service, and shipping/air companies, covering everything from marketing and credit rating to payment. It was established to simplify the trading procedures of trading companies and enhance the efficiency of their trade operations (source: https://www.utradehub.or.kr). uTradeHub currently provides a unified workplace for trading companies to conveniently and instantly process all trade business, from marketing, commerce, foreign currency exchange, customs clearance, and logistics to payment, without any interruption. It is still in the early stages of implementation. Once in action, it will contribute to trade automation innovation. It will also bring the benefits of reducing trade-related costs, increasing efficiency, and preventing fraudulent transactions when uTradeHub is used to prepare and exchange B2B information with foreign partners. KTNET is now working to establish a service that automatically submits the necessary electronic documents to the National Tax Service (Korean Customs Service, 2017) [57].
The Korean government has been carrying out a large-scale project since 2018 to build a Blockchain platform that promotes business and technical verification with the participation of 48 companies related to exports and imports, aiming to strengthen the reliability and accuracy of documents and information. This service will provide advantages to all entities, including exporters, importers, shipping companies, insurance firms, logistics warehouses, and freight forwarders. The government is preparing to design and introduce a real-time autonomous accounting system called RAAS DApp, to be applied on a trial basis to consignment processing trade first.
Consignment processing trade refers to the transaction of all or part of raw materials to be processed (including manufacturing, assembling, regenerating or remodeling) in a foreign country on the condition that they are processed, and then import or export processed goods (Article 1-0-2, 6 of the Foreign Trade Management Regulations) (Korean Customs Service, 2017) [57]. Figure 5 summarizes the flow of consignment processing trade. As shown, all transactions are identified and stored by each accounting subject using Blockchain technology. All transactions are recorded automatically with the unique number of the block in the ledger, and the transaction information and proof of accounting is stored together in the block. Therefore, it helps ensure transparency and reliability of accounting information.
Discussion
Through the analytical process of the systematic literature review, accounting technologies could be summarized into four major topics. The main ideas of these technologies, as already studied in the literature, are presented in Table 5.

Table 5. Main ideas mentioned in prior literature.
Topic / Study — Main Ideas

Cloud:
ACCA/IMA (2013) — Overall technology change in accounting
Feng (2015) — Accounting information model
Christauskas and Miseviciene (2012) — Cloud accounting for small and mid-sized business
Ionescu et al. (2013) — Comparison between traditional and cloud accounting
Phillips (2012) — Cloud computing adopted in accounting

Accounting is an integrated information-providing system that collects company data, generates useful information, and makes it possible to manage them. Therefore, accounting information can be said to be a large set of Big Data, and utilizing a firm's Big Data provides numerous advantages. As examined above, the Korean National Tax Service has actively utilized Big Data technology. It established a new investigation division analyzing various forms of data, including past tax payments, credit card records, real estate status, car purchase records, overseas fund information, online market access, online transactions, social media, and so on. It enhanced its ability to detect tax evasion through scientific investigation, such as developing advanced forensics techniques and Big Data analysis (Korean National Tax Service, 2016) [42]. This is evidence that more efficient control systems can be developed using Big Data.
Additionally, this study presented the case of how nationally granted financial aid is managed. Such aid is recorded in the so-called e-program, and all funds are handled in the e-program from the budget to the expense stage, enabling public officials to check the status of national funds at any time. Local governments can also automatically send the results to the provincial government through the e-program. Therefore, the government can effectively manage the illegal execution of funds, such as embezzlement, bribery, poor execution, and fraudulent documents (Seo and Kim, 2016) [45]. Meanwhile, Suwon City, Gyeonggi Province, adopted a new land investigation program using drones to secure accurate taxation data, shorten the investigation time, and manage public property.
Conducting audits using Big Data is also advantageous to auditors. PwC (2015) [48] reports that data analysts are changing audit procedures with the new systems; auditors can use more information, including financial and non-financial data sets, and visualize the meaningful data. This study provided evidence from prior literature that Big Data analytics can provide continuous audits and help mitigate audit risks (Vasarhelyi et al., 2010) [49]. Big Data will improve the quality and relevance of accounting information. This study also presented a case combining Big Data with Cloud computing: Smart A, released by Douzone, applies Cloud and Big Data to allow users to proactively explore meaningful corporate information rather than merely exploring financial information after the fact. In this way, Big Data rests on Cloud computing; we cannot store Big Data and meaningfully utilize them without the Cloud system.
WebCash, a Korean accounting program provider based on Cloud technology, enables its client firms to handle accounting, taxation, receipt management, payment, and remittance all at once. The Cloud server automatically records purchases and sales and collects detailed transaction data by client so that outstanding or unpaid bills can be managed automatically. The customers of WebCash can handle large volumes of data without their own data center. At the same time, WebCash makes prompt responses possible so that its clients can manage the key information of their business in a more integrated manner. Bizplay is another example. It provides an automated expense-processing app with which clients can eliminate tedious and unproductive repetitive tasks in accounting records. The Bizplay app automatically matches each employee's corporate card usage records and generates quality reports with paperless receipts.
These cloud-based accounting innovations have brought changes to the management accounting aspects related to inventory management. Orion, one of the leading bakery companies in Korea, experienced a significant drop in its product return rate, which was derived from the use of cloud-based point-of-sale (POS) data; Orion increased its sales through real-time identification of rapidly changing consumption trends. eBlueChannel is another Korean company that utilizes cloud-based POS data for inventory management. It is a data-based, integrated drug management solution that connects manufacturers, pharmacies, and wholesalers. In this medical care solution, pharmacies, hospitals, and pharmaceutical manufacturers are all connected, so they can quickly identify drug and medicine inventory. It is a complete medical platform combining cloud services; therefore, they can manage their inventory in a more efficient way. While Cloud computing makes it possible to store and utilize Big Data, the cases presented in this study reveal that Big Data can be meaningfully used with AI technology. Big Data is an essential element of AI technology because AI can perform deep learning or machine learning only with Big Data. The Korean cases of building fraudulent accounting detection systems emphasize that collecting financial fraud-related data is critical because such data can be an important source for AI using machine learning. Both the amount and the quality of data are necessary to establish an automated accounting system applying machine learning and artificial intelligence techniques. The case of "Attack Board" was also discussed. It is a system provided by AGREEMENT, a Korean accounting software supplier, in which all employees and managers can access and visualize key performance indicators (KPI) set up to achieve the business plan and goals. AGREEMENT developed a service that uses AI to predict future figures from historical data considered to impact or correlate with sales, and "Attack Board" uses a variety of machine learning algorithms that can be handled in Python to make sales forecasts. These cases show that companies can develop a more accurate and efficient accounting system by combining Big Data and AI.
Through Cloud computing, valuable Big Data can be stored, and AI technology can effectively utilize these Big Data. Therefore, these technologies contribute to more advanced accounting information systems that provide higher-quality information with less time and cost, making more transparent accounting possible. Here, Blockchain technology will play a key role. Up to this point, we have examined how accounting technologies contribute to information-generating or information-managing activities; Blockchain, by contrast, contributes to information-assuring activities. This study discussed the advantages of Blockchain technology: for example, it is difficult to manipulate transaction records because they are shared by all participants in the network, and Blockchain verifies the validity of a transaction and secures the reliability of the data. uTradeHub, the electronic trade infrastructure implemented by Korea Trade Network (KTNET), was provided as evidence of utilizing Blockchain in the accounting area. It is a single-window system in which exporters and importers can handle all trade affairs without interruption by linking a number of related agencies, such as commercial banks, the customs service, and shipping/air companies, covering everything from marketing and credit rating to payment. It was established to simplify companies' trading procedures and enhance their trade operation efficiency. uTradeHub currently provides a unified workplace for trading companies to conveniently and instantly process all trade business, from marketing, commerce, foreign currency exchange, customs clearance, and logistics to payment, without any interruption. It is expected to contribute to trade automation innovation and to bring the benefits of reducing trade-related costs, increasing efficiency, and preventing fraudulent transactions. Besides, KTNET is working to establish a service that automatically submits the necessary electronic documents to the National Tax Service (Korean Customs Service, 2017) [57]. This is a service using Blockchain technology. Once in action, all transactions will be identified and stored by each accounting subject using Blockchain technology. All transactions are recorded automatically with the unique number of the block in the ledger, and the transaction information and proof of accounting are stored together in the block. Therefore, it helps ensure the transparency and reliability of the accounting information.
As mentioned above, these technologies are not separated from but rather intertwined with each other. The accounting process can be significantly improved by combining each element of new technologies (see Table 6).
Despite these advantages, it might be challenging to use these technologies in real practice. Accounting technologies should be customized for each organization, but customization also requires time and effort. For example, for continuous audit, both audited companies and auditors require large-scale technical investment and company-wide training programs in order to build the audit system. Establishing a legal system and regulations that require such large investments for financial reporting is another obstacle. Currently, audit fees are set based on audit time, and they are expected to increase significantly if continuous audit is actually employed: total audit time may grow because preparation, operation, and the auditor's professional judgment take more time, even though the time for the technical audit process would be shortened. There could also be a discrepancy in interests between audited companies and auditors. Therefore, it is important to prepare appropriate guidelines for adopting these technologies, suitable for each organization's own systems, to achieve breakthrough synergies in accounting. Analyzing real-world examples and cases will help to provide ideas for such guidelines.
Conclusions and Closing Remarks
This study discusses cases of Korea introducing new technology into accounting. Some technologies are used to collect and produce accounting information and data, some are used to provide quality information for efficient decision-making in practice, and others are used to improve the transparency and reliability of accounting quality. In some cases, private companies took the initiative in utilizing programs or software, and in other cases, systems utilizing new technologies were used by local governments. It is particularly interesting that in the Korean market, the government initiated new infrastructure first, leading to a subsequent shift from government-led to private-company-oriented adoption.
As shown above, Korean companies have developed programs that process accounting information by combining Big Data technology and accounting AI in practice. For example, WebCash, a fintech start-up, offers a system that can automatically collect transaction data and generate a firm's financial statements. South Korean local governments adopt this kind of automation so that they can extract the various data needed for tax adjustment, automatically fill out key forms, and even automatically verify errors in tax report data. In addition, Douzone, a Korean firm providing accounting software, has been obtaining patents for accounting AI technology based on Big Data. The audit process is becoming more transparent and timely, and audit on demand can be implemented. In the Korean market, accounting technologies are actively used to read and extract information from massive numbers of complex documents related to major tasks in the organization, including mergers and acquisitions, leases, and so on.
In the field of financial accounting and auditing, companies have already been developing and applying programs, such as Smart A, Gyeongninara, AGREEMENT and so on, that can understand corporate data through cognitive AI and Blockchain technology. These technologies help to organize and analyze accounting data to assist accountants, and achieve self-improvement through machine learning and deep learning.
The innovation of AI is changing the working environment. AI can be used for simple and repetitive tasks, and tasks with clear standards and conditions can also be completed using AI, with the Cloud system supporting this. Blockchain offers increased information security and improves transparency in accounting. High-volume and time-consuming tasks, such as mailing, bookkeeping, and data entry, can now be almost entirely automated.
Meanwhile, it has been shown that it is not difficult to utilize AI and the other technologies, already actively adopted in the field of financial accounting, in the managerial accounting area as well. Management accounting is not based on a mandatory accounting system; rather, it should be developed based on the company's own situation. Management accounting aims at the optimal allocation of a company's capital or resources based on its own economic conditions or future forecasts. It is characterized by continuous change in search of the optimal solution; therefore, companies can identify possibilities through deep learning and AI and can produce the best solution without prior knowledge of the rules or of specific accounting standards and systems (Marr, 2017) [58].
While financial accounting and auditing analyze data on events that have already occurred, management accounting is characterized by future orientation. Thus, predictive analysis using AI, Big Data, and Blockchain has become an essential tool for management accounting to cope with rapid changes in the globally competitive environment and with uncertainty. In the future, accountants and accounting experts need to study how the interface with these core technologies can be applied to strategic analysis and decision-making in order to expand the accounting function. To date, the availability of these technologies has been addressed and suggested mainly in the financial accounting and audit areas, while attempts to adopt the technologies in management accounting are very limited, so research gaps exist. Therefore, finding an approach that can be applied to the management accounting area will also be a meaningful topic for future research.
However, irregular real-life activities requiring social intelligence, such as negotiation, persuasion, and care, and creative intelligence, such as producing ideas and understanding and interacting with complex patterns, are difficult to replace with AI. Therefore, we need to understand that these technologies are not tools to replace experts, but tools that improve estimation and prediction. It should also be understood that estimations are just one of several inputs into the decision-making process. Agrawal et al. (2018) [59] argued that forecasts promote judgement by reducing uncertainty and that judgments provide value. Machine learning can accelerate forecasting by sorting data and finding patterns, but it is difficult for machines to evaluate and decide; that is the domain of human expertise. Accounting is an information system that provides useful information for decision-making, and these technologies simply enhance this accounting system. Therefore, we must keep in mind that the task of providing useful information can be taken over by the technologies, but implementing decision-making is only up to human beings; human accounting experts must make the final decisions. Gerald Stokes, a professor at the State University of New York in Korea, said it would be disastrous for machines to take over the responsibilities of humans: even if AI had self-thinking skills, the final judgment is always up to human beings (Stokes, 2020) [60].
The accounting technologies introduced in this study have now become basic trends that cannot be delayed or avoided, and this transformation is expected to accelerate, especially after COVID-19. Without understanding and adopting these technologies, we cannot survive in a fast-changing business environment. Therefore, it is necessary to understand them, think about how to apply them, and develop creative fields to which such technologies can be applied. Further, while new technologies offer many opportunities, there are also several associated risks, such as errors, server downtime, data backup, and so on. This study offers guidance for applying these technologies in actual accounting practice, contributes to the development of areas in which they can be utilized and of the capabilities future human experts will need in order to work with new technologies, and provides an opportunity to contemplate the related risks and threats.
This study is meaningful because it provides examples showing the adoption of these technologies in actual practice, allowing us to explore ways to effectively utilize and manage them. This study also initiates the process of organizations finding ways to properly adopt new technologies. It is also time to think about the capabilities human accountants need.
Despite the contributions, this study has some limitations. There are many other types of accounting technologies. However, this study presents only a few of them. The purpose was to introduce some examples from Korea. Hence, there were limitations in providing more detailed descriptions for each one of them. Future research must address the same.
|
v3-fos-license
|
2017-06-11T20:28:26.135Z
|
2008-12-30T00:00:00.000
|
1289119
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcurol.biomedcentral.com/track/pdf/10.1186/1471-2490-8-21",
"pdf_hash": "f4622e333b53c1a1f02c2cef21b4bac7f8260e35",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43122",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "770d5ab0c3d76f14e2a52b539de198d7b92b3dd6",
"year": 2008
}
|
pes2o/s2orc
|
Correlation of three immunohistochemically detected markers of neuroendocrine differentiation with clinical predictors of disease progression in prostate cancer
Background The importance of immuno-histological detection of neuroendocrine differentiation in prostatic adenocarcinoma with respect to disease at presentation and Gleason grade is gaining acceptance. There is limited literature on the relative significance of three commonly used markers of NE differentiation, i.e. Chromogranin A (CgA), Neuron specific enolase (NSE) and Synaptophysin (Syn). In the current work we have assessed the correlation of immuno-histological detection of neuroendocrine differentiation in prostatic adenocarcinoma with disease at presentation and Gleason grade, and determined the relative value of the various markers. Materials and methods Consecutive samples of malignant prostatic specimens (transurethral resection of prostate or radical retropubic prostatectomy) from 84 patients between January 1991 and December 1998 were evaluated by immunohistochemical staining (PAP technique) using selected neuroendocrine tumor markers, i.e. Chromogranin A (CgA), Neuron specific enolase (NSE), and Synaptophysin (Syn). According to the stage at diagnosis, patients were divided into three groups. Group (i) included patients who had organ confined disease, group (ii) included patients with locally invasive disease, and group (iii) patients with distant metastasis. NE expression was correlated with Gleason sum and clinical stage at presentation and analyzed using the chi-square test and one-way ANOVA. Results The mean age of the patients was 70 ± 9.2 years. Group I had 14 patients, group II had 31 patients and group III had 39 patients. CgA was detected in 33 cases, Syn in 8 cases, and NSE in 44 cases. Expression of CgA was seen in 7% of group I, 37% of group II and 35% of group III patients (p = 0.059). CgA (p = 0.024) and NSE (p = 0.006) had significantly higher expression with worsening Gleason grade. Conclusion CgA correlates better with disease at presentation than the other markers used. Both NSE and CgA showed increasing expression with worsening histological grade; this correlation has potential for use as a prognostic indicator. Limitations of the current work include the small sample size and its retrospective nature. The findings of this work need validation in a larger cohort.
Background
Prostate cancer is the most commonly diagnosed malignancy in men in the United States, with an estimated 218,890 cases diagnosed in 2007 and an estimated 27,050 deaths [1]. Among men, cancers of the prostate, lung and bronchus, and colon and rectum account for about 54% of all newly diagnosed cancers; prostate cancer alone accounts for about 33% of cases in men [1]. Prostate cancer incidence rates have continued to increase, although at a slower rate than those reported for the early 1990s and before. Based on cases diagnosed between 1995 and 2001, an estimated 91% of these new cases of prostate cancer are expected to be diagnosed at local or regional stages, for which 5-year relative survival approaches 100% [1]. However, it is noteworthy that individual cancers show substantial variation in their outcomes. The variable biological potential of these tumors makes it important to stage the disease. The various prognostic indicators include clinical staging, serum PSA, the percentage of biopsy cores involved and histological grade. The histological grade correlates both with local invasiveness and with metastatic potential. In a subset of both localized and locally advanced cancers, however, the existing markers are often unable to differentiate poor-outcome from good-outcome cancers. On these grounds, it is important to establish validated prognostic indicators that could help physicians tailor treatment for individual patients.
Neuroendocrine (NE) differentiation in PC has received increasing attention in the recent years due to prognostic and therapeutic implications. The term NE differentiation in prostatic carcinoma includes tumors composed exclusively of NE cells (the rare and aggressive small cell carcinoma and carcinoid/carcinoid like tumor) or, more commonly, conventional prostatic adenocarcinoma with focal NE differentiation [2]. The prognostic importance of focal neuroendocrine differentiation in PC is controversial, but current evidence suggests that it has an influence on prognosis related to hormone resistant tumours or a role in the conversion to a hormone resistant phenotype [3].
Various neuroendocrine markers like Chromogranin A (CgA), synaptophysin (Syn), neuron specific enolase (NSE), β HCG have been studied. However, CgA appears to be the best overall tissue and serum marker [4]. In the current study we have investigated the importance of immuno-histological detection of neuroendocrine differentiation in prostatic adenocarcinoma with respect to disease at presentation and Gleason grade. In addition the relative significance of three markers of NE differentiation i.e. CgA, NSE and Syn is also correlated with stage and grade of disease.
Methods
This study was conducted following clearance by Aga Khan University's ethical review committee (ERC); in view of the nature of the study, the ERC waived the requirement for informed consent. Consecutive malignant primary prostatic specimens were obtained from 84 patients by either transurethral resection of the prostate (n = 69 patients) for urinary obstruction or radical retropubic prostatectomy (n = 15 patients) between January 1991 and December 1998. These tissue specimens were taken from the archived records of the department of pathology. The age ranged from 52-93 years (mean 70 ± 9.2 years). Sections were stained with H & E as well as for Chromogranin A, Synaptophysin and NSE (DAKO, Glostrup) by immunohistochemistry using the PAP technique. The methods have been described in detail previously [5]. Clinical staging was done using the TNM system. For patients who underwent radical retropubic prostatectomy, the T and N stages were pathological; for patients who only had TURP, staging was radiological.
Briefly, 3 μm thick tissue sections were cut and mounted on poly-L-lysine (Sigma) coated slides. Sections were deparaffinized in xylene and rehydrated through a graded alcohol series followed by water. Sections were washed with water followed by a phosphate-buffered saline (PBS) rinse. Endogenous peroxidase in the sections was blocked for 30 minutes with 0.3% H2O2 in methanol. Sections were washed with PBS. All sections were treated with normal swine serum (NSS) prediluted 1:10 in PBS for 5 minutes.
The sections were then incubated with the primary antibodies, prediluted appropriately in NSS, for 90 minutes at room temperature. Slides were then washed with PBS and incubated with peroxidase-conjugated swine anti-rabbit secondary antibody (DAKO) at a dilution of 1:150 for 45 minutes at room temperature. This was followed by incubation with the PAP complex. 3,3'-diaminobenzidine (DAB) was used as the final chromogen. Harris haematoxylin was used as a nuclear counterstain. Positive controls were used with all batches of IHC staining. The same case, with the primary antibody omitted, was used as a negative control in each staining procedure. The Gleason system was used for histological grading of the cancer specimens; a senior histopathologist (SP), blinded to the previous Gleason grading and clinical course, performed rescoring. A consensus was reached in a departmental consultation conference in case of any discrepancy. Based on the Gleason score, patients were divided into three groups: well differentiated (Gleason sum 2-4), moderately differentiated (Gleason sum 5-7) and poorly differentiated (Gleason sum 8-10).
To study correlations and determine p-values, Student's t-test was applied. Statistical significance was also examined using the Mann-Whitney U-test, the Kruskal-Wallis test, the log-rank test, and simple regression. A p-value below 0.05 was considered significant.
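To make the grouped comparisons concrete, the sketch below shows how marker positivity could be tested across the three Gleason-grade groups with a chi-square test, and a continuous readout compared by one-way ANOVA, in the spirit of the analysis described in the abstract. The counts and scores are hypothetical placeholders, not the study data, and the authors' own analysis was run in a statistics package rather than in code.

```python
# Illustrative sketch only: hypothetical counts, not the published data.
import numpy as np
from scipy import stats

# Rows: marker-positive / marker-negative; columns: Gleason groups
# (well / moderately / poorly differentiated).
contingency = np.array([
    [2, 12, 19],   # hypothetical CgA-positive counts
    [8, 25, 18],   # hypothetical CgA-negative counts
])
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

# One-way ANOVA on a continuous readout (e.g. a staining score) across groups.
well, moderate, poor = [0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]
f_stat, p_anova = stats.f_oneway(well, moderate, poor)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
```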
Results
During the period 1991-1998, 84 patients had histological specimens available from TURP or RRP. The mean age was 70 ± 9.3 years. The majority of patients had either locally invasive (37%) or metastatic (45%) disease, and only 18% had organ-confined disease. At a median follow-up of 8.4 ± 3.5 years, 54% (n = 45) had died; of the surviving 46% (n = 39), 21 patients (25%) had metastatic disease. There was a statistically significant difference in the development of metastases and in overall and cause-specific survival between the groups with and without CgA staining.
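As a hedged illustration of the survival comparison mentioned above, the following sketch runs a log-rank test between CgA-positive and CgA-negative groups using the lifelines package; the follow-up times and event indicators are invented, and the original analysis was not necessarily performed this way.

```python
# Hypothetical data: follow-up (years) and death indicator (1 = died).
from lifelines.statistics import logrank_test

time_cga_pos = [2.1, 3.5, 5.0, 6.2, 8.0, 8.8]
event_cga_pos = [1, 1, 1, 0, 1, 1]
time_cga_neg = [4.0, 7.5, 8.4, 9.1, 10.0, 10.5]
event_cga_neg = [1, 0, 0, 1, 0, 0]

result = logrank_test(time_cga_pos, time_cga_neg,
                      event_observed_A=event_cga_pos,
                      event_observed_B=event_cga_neg)
print(f"log-rank statistic = {result.test_statistic:.2f}, p = {result.p_value:.3f}")
```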
According to the TNM classification, 35% (n = 29) had stage T1, 32% (n = 29) stage T2, 25% (n = 21) stage T3 and 6% (n = 5) stage T4 disease. Based on the stage of the disease, patients were divided into three groups: organ-confined (T1-2), locally invasive (T3-4 and N1) and metastatic (M1) cancer. Staining for the NE marker CgA was seen in 39%, NSE in 52% and Syn in only 10%. The percentage expression of the three markers in organ-confined, locally advanced and metastatic disease is shown in Table 1. It is noteworthy that tumors with negative CgA staining tended to be picked up at an early stage with minimal or organ-confined disease, whereas positive results were obtained in locally advanced and metastatic disease (p = 0.059). This correlation is not statistically significant but shows a trend towards significance. The correlation between Gleason sum and the percentage expression of the three NE markers is also shown in Table 1; it indicates a modest (for CgA) to significant (for NSE) relationship between the extent of NE differentiation and Gleason score. In Table 2, the expression of CgA and NSE is correlated with overall survival, cancer-specific survival and the development of metastatic disease. There was no significant correlation of CgA, NSE or SYN expression with androgen withdrawal status.
Discussion and conclusion
In the present work we have shown NE differentiation in conventional prostate adenocarcinoma and assessed the relationship of the extent of NE status to the commonly recognized prognostic variables. We have also tried to evaluate the relative significance of immunohistochemically detected expression of three markers viz. CgA, SYN, and NSE.
Prostate cancer is a leading cause of morbidity and mortality in men, accounting for 33% of all new cases of cancer and 14% of deaths from cancer [1]. Despite considerable advances in our ability to detect and treat PC, there have been no significant corresponding decreases in morbidity and mortality [6]. The therapeutic aim is to tailor the approach to the clinical, morphological, and molecular features of each patient. Many of the clinically important predictive factors in PC are still derived from a pathologist's examination of tissue specimens using light microscopy, but the challenge of assembling the information is such that the use of artificial neural networks is expected to improve accuracy in diagnosis, staging, and treatment outcomes for PC [3,7]. PC may show divergent differentiation towards a neuroendocrine phenotype in the form of neuroendocrine small cell carcinoma or carcinoid-like tumours [8]. Much more common, however, is focal neuroendocrine differentiation in PC, which may be pronounced in about 10% of carcinomas. The prognostic importance of focal neuroendocrine differentiation in PC is controversial, but current evidence suggests that it has an influence on prognosis related to hormone resistant tumours or a role in the conversion to a hormone resistant phenotype [3]. However, we did not find a significant correlation of CgA, NSE and SYN expression with androgen withdrawal status. CgA appears to be the best overall tissue and serum marker of neuroendocrine differentiation, and thus serum CgA concentrations may be useful in assessing the emergence or progression of hormone resistant cancer [8]. Recently, Kamiya et al. noted that CgA showed a stronger relationship between serum levels and IHC positivity than NSE, suggesting its clinical usefulness as a tumor marker for predicting the extent of neuroendocrine differentiation in prostate cancer [9].
In our series, CgA expression was seen in 31%, Synaptophysin in only 8% and NSE in 45% of cases. Only CgA expression was significantly correlated with the clinical stage of the disease, whereas both CgA and NSE correlated with the grade. A significant relationship between tumor grade and NE differentiation was found in some studies [10][11][12], whereas other investigators failed to confirm it [13,14]. In the present study there is a significant correlation between rising Gleason sum and the expression of both CgA (p = 0.024) and NSE (p = 0.006). The relationship between stage of the disease and NE expression was noted only for CgA, whereas for both the other markers it was not statistically significant.
We also compared CgA expression with established NE markers used widely to identify NE cells and NED in the prostate. CgA, a secreted acidic product of prostate NE cells, is a widely accepted and specific marker of both NE cell populations and NED differentiation [15,16]. NSE is another classical NE marker, but lacks some specificity compared to CgA. Serum concentrations of CgA and NSE can be monitored as a potential prognostic factor in PCa [17][18][19]. In tumor tissue, we found a statistically significant correlation of CgA to NSE expression. Synaptophysin, a presynaptic vesicle glycoprotein, is expressed in virtually all cells of well-differentiated prostate NE tumors (NET, carcinoid), neuroendocrine carcinomas (NEC), and in poorly differentiated carcinomas including small cell carcinoma (SCC), all being very rare clinic entities (0.2-1%) [20]. However, SYN has a lower specificity for NE cells than CgA and may stain positive in non-NE tumors [21]. CgA seems to stain prostate NE cell populations more homogenously than SYN.
A limitation of the current work is the small number of patients, which makes it difficult to draw definite conclusions concerning the predictive value of the various markers of NE differentiation in relation to disease progression and survival. However, the trends are interesting and warrant further work in a larger cohort of patients.
In conclusion this study further supports the theory that focal NE differentiation within classical prostate carcinoma is predictive of poor prognosis, as it correlates with Gleason sum and clinical stage of the disease. In our study, CgA was the best predictor of NE differentiation as it correlated better than the other two markers examined, both with stage and grade of the disease. Given the relatively small sample size of this study, these correlative findings suggest that the prognostic impact of these markers merits further investigation in a larger cohort
Funding
The authors received a seed money grant from the University Research Council of Aga Khan University to conduct the study. The authors received no other funding for the study design; the collection, analysis, and interpretation of data; the writing of the manuscript; or the decision to submit the manuscript for publication.
|
v3-fos-license
|
2020-04-10T14:03:27.704Z
|
2020-04-10T00:00:00.000
|
215559065
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-020-62606-7.pdf",
"pdf_hash": "3ab1d5b08c671d4c95f309de8a4495b51a2ad0a1",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43123",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "3ab1d5b08c671d4c95f309de8a4495b51a2ad0a1",
"year": 2020
}
|
pes2o/s2orc
|
Structure-based engineering of anti-GFP nanobody tandems as ultra-high-affinity reagents for purification
Green fluorescent proteins (GFPs) are widely used in biological research. Although GFP can be visualized easily, its precise manipulation through binding partners is still burdensome because of the limited availability of high-affinity binding partners and related structural information. Here, we report the crystal structure of GFPuv in complex with the anti-GFP nanobody LaG16 at 1.67 Å resolution, revealing the details of the binding between GFPuv and LaG16. The LaG16 binding site was on the opposite side of the GFP β-barrel from the binding site of the GFP-enhancer, another anti-GFP nanobody, indicating that the GFP-enhancer and LaG16 can bind to GFP together. Thus, we further designed 3 linkers of different lengths to fuse LaG16 and GFP-enhancer together, and the GFP binding of the three constructs was further tested by ITC. The construct with the (GGGGS)4 linker had the highest affinity with a KD of 0.5 nM. The GFP-enhancer-(GGGGS)4-LaG16 chimeric nanobody was further covalently linked to NHS-activated agarose and then used in the purification of a GFP-tagged membrane protein, GFP-tagged zebrafish P2X4, resulting in higher yield than purification with the GFP-enhancer nanobody alone. This work provides a proof of concept for the design of ultra-high-affinity binders of target proteins through dimerized nanobody chimaeras, and this strategy may also be applied to link interesting target protein nanobodies without overlapping binding surfaces.
LaG16 and GFP-enhancer can bind to GFPuv at the same time. To confirm that LaG16 and GFP-enhancer can bind to GFPuv noncompetitively in vitro, we used the FSEC method 22 . After the addition of only one kind of extra nanobody (LaG16 or GFP-enhancer) to GFPuv, the peak representing GFPuv emission exhibited an obvious shift compared to the peak of GFPuv alone, proving that either LaG16 or GFP-enhancer can bind to GFPuv (Fig. 2). In the sample with both LaG16 and GFP-enhancer added, all the GFPuv was incorporated into the LaG16-GFPuv-GFP-enhancer triple complex, whose peak shows a larger shift than that of the GFPuv-nanobody dimer. Therefore, the FSEC method confirmed that LaG16 and GFP-enhancer can bind to GFPuv at the same time.
Design of fusion nanobody based on the triple structure model. As nanobodies are powerful tools for the purification of GFP-tagged proteins and there are many commercialized nanobody resin products used for purification, we attempted to produce a fusion nanobody with heightened affinity to GFPuv for use in purifying protein with improved yield. Repeated (GGGGS) amino acid sequences can form a flexible linker between two proteins, and one (GGGGS) unit has been found to be 19 Å long 23 . Based on the modelled structure of the LaG16-GFPuv-GFP-enhancer triple complex (Fig. 3A), we calculated that the distance from the N terminus of LaG16 to the C terminus of the GFP-enhancer (65.5 Å) is shorter than the distance from the N terminus of the GFP-enhancer to the C terminus of LaG16 (78.4 Å). Thus, we decided to add several (GGGGS) repeats between the N terminus of LaG16 and the C terminus of the GFP-enhancer. Too short a linker will cause tension when the fusion nanobody binds to GFPuv, while too long a linker will decrease the stability of the fusion nanobody. We added 4/5/6 (GGGGS) repeats between the two nanobodies (Fig. 3B) and used the ITC method to select the best fusion nanobody with the most suitable linker.
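The arithmetic behind the linker choice can be checked in a few lines of code; this is only a back-of-the-envelope sketch assuming each (GGGGS) repeat spans roughly 19 Å when extended, as cited above.

```python
import math

def min_ggggs_repeats(distance_angstrom: float, repeat_span: float = 19.0) -> int:
    """Smallest number of (GGGGS) repeats whose extended length covers the gap."""
    return math.ceil(distance_angstrom / repeat_span)

# N terminus of LaG16 to C terminus of GFP-enhancer in the modelled complex.
print(min_ggggs_repeats(65.5))   # -> 4, the shortest linker actually tested
# The alternative orientation would need a longer linker.
print(min_ggggs_repeats(78.4))   # -> 5
```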
Determination of the affinity constant between GFPuv and anti-GFP nanobody tandems.
To examine whether the fusion nanobodies had higher affinity for GFP than the individual ones, we measured the binding affinity of LaG16, GFP-enhancer, GGGGS 4 , GGGGS 5 and GGGGS 6 to GFPuv (GGGGS 4 , GGGGS 5 , and GGGGS 6 are the abbreviations of the fusion nanobodies GFP-enhancer-(GGGGS) 4 -LaG16, GFP-enhancer-(GGGGS) 5 -LaG16, and GFP-enhancer-(GGGGS) 6 -LaG16, respectively) ( Fig. 4, Table 1). GFPuv exhibits a Kd of 6.7 nM with LaG16 and a Kd of 24.3 nM with GFP-enhancer. All fusion nanobodies showed a greater affinity to GFPuv than the single GFP-enhancer or LaG16 nanobody. The Kd values of GGGGS 4 , GGGGS 5 and GGGGS 6 to GFPuv were 0.5 nM, 0.6 nM, and 1.2 nM, respectively. When the linker was too long, the LaG16 and GFP-enhancer in the tandem nanobodies could be treated as two separate and unrelated molecules and would not affect each other. When the linker length was properly optimized, as one of the nanobodies bound to GFP antigen, the linker restricted the movement of the tandem-linked nanobody to rotation and twisting in a small range. When the second nanobody's GFP binding site was nearby, there was a greater chance to simultaneously bind two nanobodies to one GFP molecule. As the fusion nanobody with the shortest linker, GGGGS 4 , showed the highest affinity with GFPuv, we chose GGGGS 4 for the nanobody-coupled resin application.
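To give a feel for what the measured Kd values mean in practice, the sketch below evaluates the simple 1:1 binding isotherm f = [B]/(Kd + [B]) at an arbitrary concentration of free binding sites; this is a rough illustration, not a model of the actual resin-capture conditions.

```python
def fraction_bound(kd_nM: float, binder_nM: float) -> float:
    """Fraction of GFP bound at equilibrium, assuming binder is in excess."""
    return binder_nM / (kd_nM + binder_nM)

binder_nM = 10.0  # hypothetical concentration of free binding sites
for name, kd in [("GGGGS4 tandem", 0.5), ("LaG16", 6.7), ("GFP-enhancer", 24.3)]:
    print(f"{name:14s} Kd = {kd:4.1f} nM -> {fraction_bound(kd, binder_nM):.0%} bound")
```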
Application of the GGGGS 4 nanobody for membrane protein purification. We coupled GGGGS 4 or GFP-enhancer to NHS-activated Sepharose4 Fast Flow resin and used the resin to purify GFP-tagged zebrafish P2X4 receptor 24 , a membrane protein, from pelleted SF9 cell membrane. The eluted protein was analysed by SDS-PAGE (Fig. 5). The solubilized cell membrane showed a very weak band of GFP-P2X4 in the gel, while the eluted solution showed a strong band of GFP-zfP2X4, which means that both the GGGGS 4 -coupled resin and the GFP-enhancer-coupled resin can capture the GFP-tagged protein with high specificity. However, the GGGGS 4 -coupled resin had a higher yield, as the intensity of GFP-zfP2X4 analysed with ImageJ software was about 1.5 times that obtained with the GFP-enhancer (Table 2). We also performed and compared purifications with the anti-GFP resins and with the TALON his-tag purification resin, which was previously employed for P2X4 purification 24 . The results showed that the anti-GFP resins yielded a much higher purity than the TALON resin (Fig. 5, Table 2).
Discussion
In this work, we determined the structure of the GFPuv-LaG16 complex and revealed the interaction between the CDR regions of LaG16 and GFPuv. The model of the GFP-enhancer-GFPuv-LaG16 triple complex and FSEC testing confirmed that GFP-enhancer and LaG16 can bind to GFPuv at the same time. More importantly, we designed the fusion nanobody GGGGS 4 (GFP-enhancer-(GGGGS) 4 -LaG16) and tested it for purification of a GFP-tagged protein, obtaining a higher yield than with the original GFP-enhancer. As GFP fusion expression screening techniques such as FSEC 24 have been widely used in membrane protein structural biology, affinity purification using anti-GFP nanobodies has also become increasingly popular. In particular, after the Cryo-EM revolution, FSEC screening of functional membrane proteins suitable for single-particle Cryo-EM by fusion with GFP became the general strategy. However, the amounts of important GFP-tagged membrane protein complexes in cultured mammalian cells are relatively low, and the yield of affinity purification using a resin crosslinked with a single GFP nanobody of nanomolar affinity is not sufficient for Cryo-EM. A tandem nanobody binding to GFP with subnanomolar affinity significantly improved the yield and overcame this problem. The fusion nanobody GGGGS 4 in our study may provide a better choice for the purification of GFP-tagged proteins, particularly those with very low expression.
Additionally, direct manipulation of the in vivo target protein level is gradually becoming popular because DNA-and RNA-level manipulation, including knockout, knockdown and gene editing, is indirect, and unwanted side effects may cause incorrect results. Since GFP has been widely used to generate cell lines and animal models, controlling the expression level of target proteins fused with GFP may also simplify in vivo manipulation. Successful attempts have included directed protein degradation through anti-GFP nanobodies fused to E3 ligase. Several groups 25,26 have proven the usefulness of the nanobody-controlled degradation of specific nuclear proteins in mammalian cells and zebrafish embryos. With ultra-high-affinity nanobody chimaeras, the efficiency of this approach may be further improved.
Methods
Vector construction. The ORFs of LaG16, GFP-enhancer nanobodies and GFPuv were synthesized and inserted into the pET-28b vector between the NdeI and BamHI restriction sites by GENEWIZ, Inc. For the construction of fusion tandem nanobodies, (GGGGS) 4 , (GGGGS) 5 and (GGGGS) 6 were inserted between the C terminus of the GFP-enhancer and the N terminus of LaG16 by GENEWIZ, Inc. (Table 3).
Expression and purification. The plasmid was transformed into E. coli Rosetta (DE3) cells and plated
on Luria Bertani (LB) medium with 1.25% agar, 30 μg/ml kanamycin and 30 μg/ml chloramphenicol. Colonies of transformed Rosetta (DE3) cells were inoculated into LB medium. The next day, 1% of the cells cultured overnight were added to LB medium with 30 μg/ml kanamycin and incubated with shaking at 37 °C until the OD 600 nm reached approximately 0.6. Protein expression was induced by adding 0.5 mM isopropyl-β-D-1-thiogalactopyranoside (IPTG), and the cells were grown at 18 °C with shaking (220 rpm). Cells were harvested after 16 hours by centrifugation at 4000 × g for 10 min. Cell pellets were suspended in TBS (50 mM Tris pH 8.0, 150 mM NaCl) containing 1 mM phenylmethylsulfonyl fluoride (PMSF) and lysed using a High Pressure Homogenizer (JN-3000 PLUS, JNBIO, China) at 1,000 bar 5 times. The cell debris and inclusion bodies were removed by centrifugation at 35000 × g for 30 min. The supernatant was applied to a Ni-NTA (Qiagen) column pre-equilibrated with buffer A (50 mM Tris-HCl pH 8.0, 150 mM NaCl, 30 mM imidazole). The mixture was rotated at 4 °C for 1 hour, the beads were washed with 10 CV of buffer A to remove unbound protein, and the protein was eluted with elution buffer (50 mM Tris-HCl pH 8.0, 150 mM NaCl, 300 mM imidazole). The His8 tag of the eluted protein was removed in a 3.5 kD dialysis membrane (Spectra/Por 7) by HRV3C protease at a mass ratio of target protein:HRV3C = 10:2 overnight at 4 °C. Then, 500 ml of dialysis buffer (50 mM Tris-HCl pH 8.0, 150 mM NaCl, 15 mM imidazole) was added to remove imidazole. The dialysis buffer was exchanged again during dialysis. On the next day, the digested protein was applied to a column equilibrated with dialysis buffer. Then, the column was rotated at 4 °C for 1 hour, and the flow-through fraction was collected and concentrated to 10 mg/ml using an Amicon Ultra 10 K filter (Millipore). Next, the protein was applied to a Superdex 75 Increase size-exclusion column (GE Healthcare) equilibrated with SEC buffer (20 mM HEPES pH 7.0, 150 mM NaCl). The target recombinant proteins with the tag removed were collected and concentrated to 10 mg/ml. Crystallization. LaG16 nanobodies and GFPuv (GFPuv:LaG16 = 1:1.2; GFPuv:LaG16:GFP-enhancer = 1:1.2:1.2) were mixed and rotated at 4 °C for 1 hour. Then, the mixture was centrifuged at 41600 × g for 20 min, and the supernatant was applied to a Superdex 75 Increase size-exclusion column (GE Healthcare) equilibrated with SEC buffer (20 mM HEPES pH 7.0, 150 mM NaCl). The fractions containing the dimer/triple complex were collected and concentrated to 10 mg/ml. The crystals were obtained by vapour diffusion over a solution containing 0.3 M NaCl, 0.01 M Tris-HCl pH 8.0, 27.5% w/v PEG4000 (for the GFPuv-LaG16 complex).
Data collection and structure determination. All data sets were collected at SPring-8 BL32-XU (Hyogo, Japan). The data sets were processed with XDS programs 27 . The structure of the GFPuv-LaG16 complex was determined by molecular replacement using the Phaser program from the CCP4 crystallography package 28,29 with PDB ID code 6IR6 for GFPuv and the LaG16 model built based on chain C of 3K1K as the search models. The refinement was performed by Refmac 30 and Phenix 31 , and the model was further adjusted by COOT 32 . The related figures were drawn using PyMOL 33 . The structure refinement statistics are summarized in Table 4.
Isothermal titration calorimetry. The binding of nanobodies to GFPuv was measured using a Microcal ITC2000 microcalorimeter (GE Healthcare) at 20 °C. GFPuv and the related nanobodies were purified as described above. We injected 280 μl of 5 μM GFPuv into the cell, and the ligand solution was 75 µM nanobody. The ligand was injected 20 times (0.4 μl for injection 1, 2 μl for injections 2-20), with 120 s intervals between injections. The baseline was obtained by adding ligand to SEC buffer. Before analysis, this baseline was subtracted from the GFPuv-nanobody titration data. The data were analysed with the Origin7 software package (MicroCal). Measurements were repeated two times, and similar results were obtained.
Coupling nanobodies to NHS-activated Sepharose4 Fast Flow beads. Since the activated NHS resin will form a covalent bond with Tris buffer, we used HEPES instead of Tris during purification. Nanobodies were expressed as described above. Cells were harvested by centrifugation at 4000 × g for 10 min. Cell pellets were suspended in HBS (20 mM HEPES pH 7.0, 150 mM NaCl) containing 1 mM PMSF and lysed using a High Pressure Homogenizer (JN-3000 PLUS, JNBIO, China) at 1,000 bar 5 times. The cell debris was removed by centrifugation at 35000 × g for 30 min. The supernatant was applied to a Ni-NTA (Qiagen) column pre-equilibrated with buffer A (20 mM HEPES pH 7.0, 150 mM NaCl, 30 mM imidazole), and the mixture was rotated at 4 °C for 1 hour. Then, the beads were washed with 10 CV of buffer A, and the protein was eluted with elution buffer (20 mM HEPES pH 7.0, 150 mM NaCl, 300 mM imidazole). The eluate was placed in a dialysis membrane (Spectra/Por 7) to remove extra imidazole using dialysis buffer (20 mM HEPES pH 7.0, 150 mM NaCl). Then, the digested protein was concentrated to 10 mg/ml. Anti-GFP and TALON resins were used to purify GFP-tagged zfP2X4. The expression and cell disruption of zfP2X4 were performed as described previously 24 . One hundred and eighty microlitres of pelleted membrane (presumably containing approximately 60 μg of GFP-tagged zfP2X4) was solubilized with 180 µl of S buffer (50 mM Tris-HCl pH 8.0, 150 mM NaCl, 30% glycerol, 4% DDM, 1 mM PMSF, 5.2 μg/ml aprotinin, 2 μg/ml pepstatin A, 2 μg/ml leupeptin, and 0.5 U/m apyrase). Then, the unsolubilized membrane was removed by ultracentrifugation at 41600 × g for 20 min at 4 °C. The supernatant was divided evenly into three 1.5 ml EP tubes and incubated with 50 μl of anti-GFP resin (GFP-enhancer- or GGGGS 4 -tagged resin) equilibrated with wash buffer I (50 mM Tris-HCl pH 8.0, 150 mM NaCl, 15% glycerol, 0.05% DDM) or 50 μl of TALON resin (Takara) equilibrated with wash buffer II (50 mM Tris-HCl pH 8.0, 150 mM NaCl, 15% glycerol, 0.05% DDM, 25 mM imidazole). The mixture was rotated at 4 °C for 1 hour, and then the resin was centrifuged at 200 × g for 2 min to remove the unbound protein. Then, 100 μl of wash buffer was added to the resin and centrifuged at 200 × g for 2 min to remove the supernatant. This washing step was repeated 5 times. Finally, the resin was applied to a spin column (Micro Bio-Spin Columns, BIO-RAD), and
Table 4. Data collection, phasing and refinement statistics. *The highest resolution shell is shown in parentheses.
|
v3-fos-license
|
2019-08-23T06:04:01.572Z
|
2019-07-30T00:00:00.000
|
201209527
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://wjarr.com/sites/default/files/WJARR-2019-0038.pdf",
"pdf_hash": "f6e9e6b26a2b6ef669e411cf52c9997402abb5c5",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43124",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"sha1": "f6e9e6b26a2b6ef669e411cf52c9997402abb5c5",
"year": 2019
}
|
pes2o/s2orc
|
Effective microorganisms for the cultivation and qualitative improvement of onion (Allium cepa L.)
The aim of the study was to investigate how effective microorganisms (EM) affect the quality and growth of onions cvs "Dorata di Bologna", "Lunga di Firenze", "Bianca Musona", and "Rossa di Tropea". An experiment was carried out with 2 treatments: 1) soil inoculated with EM microorganisms; 2) soil without EM microorganisms (control). The test showed a significant increase in the agronomic parameters analysed in the plants treated with effective microorganisms. In fact, the onions of all the different varieties treated with EM microorganisms showed a significant increase in bulb weight, bulb diameter, bulb length and root weight. There was also an increase in root growth in the theses with effective microorganisms, an aspect confirmed in other experiments on vegetable and ornamental crops. Increased root growth results in improved resistance to water and transplant stress and a higher supply of nutrients to the plant, which consequently grows better. It is therefore clear from the evidence that the use of this selection of microorganisms, inoculated into the soil, can significantly improve the quality of onion bulbs.
Introduction
Effective microorganisms are a commercial microbial selection containing a mixture of coexisting beneficial microorganisms collected from the natural environment.
This selection was developed at the University of the Ryukyus, Japan, in the early 1980s by Prof. Dr. Teruo Higa. About 80 different microorganisms are able to positively influence the decomposition of organic matter in such a way as to transform it into a "life-promoting" process.
EM is a fermented mixed culture of naturally occurring species of microorganisms coexisting in an acid environment (pH less than 3.5). Microorganisms in EM improve crop health and yield by increasing photosynthesis, producing bioactive substances such as hormones and enzymes, accelerating the decomposition of organic materials and controlling soil diseases. Effective microorganisms can be used as herbal insecticides to control insects and pathogenic microorganisms and can also be used as plant growth inducers. Soil microorganisms have an important influence on soil fertility and plant health. EMs interact with the soil-plant ecosystem by controlling plant pathogens and disease agents, solubilising minerals, increasing the energy available to plants, stimulating the photosynthetic system, maintaining the microbiological balance of the soil, and fixing nitrogen biologically [4]. A characteristic of this mixture is the coexistence of aerobic and anaerobic microorganisms. After Higa's research in Japan [1], the characteristics of EM have been studied in many countries. Studies have shown positive effects of the application of EM to soils and plants on soil quality and nutrient supply [5], plant growth [6], crop yield [7], [5] and crop quality [8][9][10][11]. However, in some studies no positive effects were found [4], [12][13]. This study tested the possible use of EM microorganisms in the cultivation and qualitative improvement of onion (Allium cepa L.), to increase knowledge of and improve the protocols for the use of this microbial selection, which is applied in various fields of agriculture around the world.
Greenhouse experiment and growing conditions
The experiments began in early November 2018 (mean temperature 7.5°C) and were carried out in the experimental greenhouses of the CREA-OF of Pescia (Pt), Tuscany, Italy (43°54′N 10°41′E) on bulbs of onion (cvs "Dorata di Bologna", "Lunga di Firenze", "Bianca Musona", "Rossa di Tropea"). The bulbs were placed in pots ø 14 cm; 40 bulbs per thesis, divided into replicates of 20 bulbs each, for all types of onion.
All bulbs were fed with the same amount of nutrients supplied through controlled release fertilizer (5 kg m−3 of Osmocote Pro® 3 -4 months containing 190 g/kg N, 39 g/kg P, 83 g/kg K) blended with the growing medium before transplant.
The 2 experimental theses in cultivation were: 1) a thesis without EM (CTRL), treated with water only; 2) a thesis with EM, treated with activated EM at a 1:100 dilution (2 L of the 1:100 inoculum was used for every 10 L of peat; the same proportion of plain water was used for the control thesis). The lighting of the greenhouse at plant level was about 12,000 lux, provided by high-pressure sodium lamps. The plants were lit for 16 hours a day. A minimum daytime temperature of 20 °C and a night-time temperature of 18 °C were maintained in the greenhouse. On the 6th of June, bulb weight, diameter (for all cvs) and length (only for cv "Lunga di Firenze") were recorded.
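The inoculation ratio described above translates into a simple volume calculation; the helper below is only an illustrative convenience (treating "1:100" as 1 part concentrate in 100 parts of final dilution), not part of the original protocol.

```python
def em_inoculum(peat_litres: float, dilution: int = 100,
                dose_litres_per_10l_peat: float = 2.0):
    """Volumes of EM concentrate and water needed for a given peat volume."""
    diluted = peat_litres / 10.0 * dose_litres_per_10l_peat  # total diluted EM
    concentrate = diluted / dilution
    water = diluted - concentrate
    return concentrate, water

conc, water = em_inoculum(10)
print(f"10 L peat: {conc * 1000:.0f} mL EM concentrate in {water:.2f} L water")
# -> 20 mL concentrate in 1.98 L water
```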
Statistics
The experiment was carried out in a randomized complete block design. Collected data were analysed by one-way ANOVA, using GLM univariate procedure, to assess significant (P ≤ 0.05, 0.01 and 0.001) differences among treatments. Mean values were then separated by LSD multiple-range test (P = 0.05). Statistics and graphics were supported by the programs Costat (version 6.451) and Excel (Office 2010).
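As a minimal sketch of this workflow (the authors used Costat and Excel, not Python), the example below runs a one-way ANOVA on hypothetical bulb weights and, because there are only two treatments, follows it with an ordinary t-test, which is what Fisher's LSD reduces to in the two-group case.

```python
from scipy import stats

ctrl = [22.1, 23.5, 23.0, 22.6]   # hypothetical control bulb weights (g)
em = [48.7, 49.5, 49.2, 48.9]     # hypothetical EM-treated bulb weights (g)

f_stat, p_value = stats.f_oneway(ctrl, em)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.4f}")

# Pairwise separation of means, performed only if the ANOVA is significant.
if p_value < 0.05:
    t_stat, p_pair = stats.ttest_ind(ctrl, em)
    print(f"pairwise comparison: t = {t_stat:.1f}, p = {p_pair:.4f}")
```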
Plant growth
The test showed a significant increase in the agronomic parameters analysed in the plants treated with Effective Microorganisms. In fact, all the onions of the different varieties, treated with EM microorganisms, showed a significant increase in bulb weight, bulb diameter, bulb length (only for cv "Lunga di Firenze") and root weight.
In "Dorata di Bologna", the diameter of the bulb was 36.36 mm in the EM thesis, against 27.69 mm of the control (Fig.1A), 49.13 g in the EM thesis, against 22.87 g of the control with regard to the weight of the bulb (Fig.1B). There is also an increase in root weight, 17.11 g in EM compared to 12.33 g in control (Fig. 1C, 5A). Different letters for the same parameter indicate significant differences according to LSD test (P = 0.05).
In "Lunga di Firenze", the length of the bulbs increased, 96.60 mm in EM, compared to 80.81 mm for the control ( Fig. 2A). There is an increase in the weight of the treated bulbs, 32.55 g against 16.92 g of the untreated control (Fig. 2B). In addition, there is an increase in radical weight, 58.66 g in the EM-treated thesis, compared to 37.11 g in the untreated control (Fig. 2C, 5B).
In "Bianca Musona", the data show a significant increase in the diameter of the bulb, 36.81 mm in EM, compared to 20.25 mm in the control (Fig.3A). There is also a significant increase in the weight of the bulbs, 42.71 g in EM compared to 17.46 g in the untreated control (Fig.3B). The data also show a significant increase in radical weight, 37.56 g in EM compared to 28.66 g in untreated control (Fig.3C, 5C).
Figure 3
Effect of Effective Microorganisms (EM) on the growth of onion cv "Bianca Musona". Each value reported in the graph is the mean of three replicates ± standard deviation. Statistical analysis performed through one-way ANOVA. Different letters for the same parameter indicate significant differences according to LSD test (P = 0.05).
In "Rossa di Tropea" the data show a significant increase in the diameter of the bulb, 38.55 mm in Em compared to 24.22 mm of the control (Fig .4A). In addition, there was a significant increase in the weight of the bulb, 50.11 g in EM compared with 21.39 g in the control (Fig.4B). The test also showed, as for the other onion varieties, a significant increase in root weight, 28.99 g in EM compared to 23.44 g in the untreated control (Fig. 4C, 5D).
Figure 4
Effect of Effective Microorganisms (EM) on the growth of onion cv "Rossa di Tropea". Each value reported in the graph is the mean of three replicates ± standard deviation. Statistical analysis performed through one-way ANOVA. Different letters for the same parameter indicate significant differences according to LSD test (P = 0.05).
Discussion
The literature does not reveal studies on the effects of Effective Microorganisms on the quality of onion plants, although several works show the effects of this microbial selection on horticultural, ornamental and fruit crops [14] [15].
In this trial all varieties of onions treated with EM microorganisms showed a significant increase in diameter, weight and length of bulbs and root weight.
EM microorganisms stimulate onion bulbs already at the time of transplantation because they guarantee a better water supply and an increase in the solubilization of minerals present in the substrate and in the soil, in particular Ca, P and Mg. Ca influences many beneficial processes for the plant: a high content of Ca leads to fewer diseases, reduction of insect attack, better preservation of the product [4]. The results show a faster growth rate of EM-treated onion bulbs and a reduction in the possible development of diseases.
Scientists have shown that Effective Microorganisms can increase fruit weight, yield and photosynthesis [16]. EM applied with green manure significantly increased tomato yields, which in the third year of cultivation were comparable to those obtained with chemical fertilizers [17].
There is also an increase in root growth in the theses with Effective Microorganisms, an aspect confirmed in other experiments on vegetable and ornamental crops [18] [19]. Increased root development results in increased resistance to water and transplant stress [20] and a higher supply of nutrients to the plant, which consequently grows better.
Conclusion
The test showed how the use of Effective Microorganisms can improve the quality of the onion bulbs cvs "Dorata di Bologna", "Lunga di Firenze", "Bianca Musona", "Rossa di Tropea", in particular by significantly increasing the diameter, weight, length of the bulbs and root weight. EM microorganisms can have a positive effect on the absorption of other minerals, particularly calcium, by promoting plant growth, improving product quality, growth rate and resistance to biotic and abiotic stress.
|
v3-fos-license
|
2018-04-03T00:10:49.651Z
|
2017-12-22T00:00:00.000
|
9939783
|
{
"extfieldsofstudy": [
"Psychology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0190123&type=printable",
"pdf_hash": "23ae0ccc4050477be53e9efa8951f1eba93938b7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43126",
"s2fieldsofstudy": [
"Business",
"Medicine"
],
"sha1": "23ae0ccc4050477be53e9efa8951f1eba93938b7",
"year": 2017
}
|
pes2o/s2orc
|
Patients’ perceptions of service quality in China: An investigation using the SERVQUAL model
Background and aim The doctor–patient relationship has been a major focus of society. Hospitals have made efforts to improve the quality of their medical services in order to reduce the probability of doctor–patient conflicts. In this study, we aimed to determine the gap between patients' expectations and perceptions of service quality to provide reference data for creating strategies to improve health care quality. Methods Twenty-seven hospitals in 15 provinces (municipalities directly beneath the central government) were selected for our survey; we sent out 1,589 questionnaires, of which 1,520 were collected (response rate 95.65%) and 1,303 were valid (85.72% effective recovery rate). Paired t-tests were used to analyze whether there were significant differences between patients' expectations and perceived service quality. A binary logistic regression analysis was used to determine whether there were significant differences in the gap between expectation and perception of service quality according to patients' demographic characteristics. Results There was a significant difference between the expected and perceived service quality (p < 0.05) according to patients both before and after receiving medical services. Furthermore, the service quality gap of each service dimension was negative. Specifically, the gaps in service quality, from largest to smallest, were as follows: economy, responsiveness, empathy, assurance, reliability, and tangibles. Overall, we can conclude that patients' perceptions of service quality are lower than their expectations. Conclusions According to the study results, the quality of health care services as perceived by patients was lower than expected. Hospitals should make adjustments according to the actual situation and should strive to constantly improve the quality of medical services for patients.
Introduction
Recently, with the improvement of people's living standards, customers are becoming increasingly attentive to obtaining the best-quality products. Accordingly, in the medical field, patients are paying increasing attention to the quality of medical services. Understanding the quality of their medical services can help organizations identify their own competitive advantages and disadvantages while, at the same time, preventing waste of resources [1]. Medical service quality has been found to be associated with patient satisfaction [2]. When patients experience satisfactory medical treatment, their trust in the hospital tends to increase, which, in turn, benefits the construction of harmonious doctor-patient relationships [3]. Therefore, accurately understanding the needs and expectations of patients regarding medical services as well as the gap in patients' expectations and perceptions of service quality is exceedingly important for improving the quality of hospital care services.
The concept of customer service quality was initially proposed in the early 1980s. In 1982, Christian Gronroos [4], a professor in Finland, proposed the concept of customers' perceived service quality and created the perceived service quality model. He interpreted service quality as a subjective construct that depends on contrasting customers' expectations of the quality of a service (the expected service quality) with their perceptions of the actual quality of the service (the perceived service quality).
In the mid-1980s, Parasuraman A, Valarie A. Zeithaml, and Leonard L. Berry (PZB) [5] began to study factors related to customers' perceptions and decisions regarding service quality. In 1985, the three of them published an article titled "A conceptual model of service quality and its implication for future research" in the Journal of Marketing, in which they put forward the "service quality gap model." While this model originally had 10 dimensions, they cut it down to five (tangibles, reliability, responsiveness, assurance, and empathy), which are described as follows:
1. Tangibles: physical facilities, equipment, and appearance of personnel
2. Reliability: ability to perform the promised service dependably and accurately
3. Responsiveness: willingness to help customers and provide prompt service
4. Assurance: knowledge and courtesy of employees and their ability to inspire trust and confidence
5. Empathy: caring, individualized attention provided to customers
Indeed, numerous scholars have used SERVQUAL to evaluate medical service quality [9][10][11][12]. For instance, Teng et al. [9] used SERVQUAL to assess patients in surgical departments and confirmed that the instrument was valid and reliable in this population. In China, several scholars have examined patients' perceptions of service quality. In 2004, Niu Hongli introduced the SERVQUAL evaluation system to the medical field in China. Based on SERVQUAL, they created an index system for a medical service quality evaluation scale and studied the optimal method of reading the scale. Many studies describe the application of SERVQUAL in the evaluation of medical service quality [13][14][15][16][17]. Yang Jia et al. [15] surveyed 216 outpatients in a hospital in Beijing and found a large service quality gap overall; by dimension, the tangibility dimension had the smallest gap.
According to the above mentioned studies, Chinese patients appear to be generally dissatisfied with the quality of medical services. However, it is notable that the medical service industry in China is special; thus, it would be necessary to adjust the items and dimensions of SERVQUAL to fit the special characteristics of China's medical industry [18][19]. Past studies have shown that SERVQUAL can feasibly be used to evaluate China's medical services. Previous researchers [6][7][8][9][10] have mainly focused on patients from a single hospital in a limited region; thus, the scope of investigation is limited. In this study, we surveyed 27 hospitals in 15 provinces (i.e., municipalities directly under the central government). This ensures that the sample size was large and the coverage wide. We aimed to compare patients' expectations of service quality and their perceptions of the quality of services actually received and explored the factors underlying the differences in perception. To meet the demands of patients, most hospitals should improve the quality of their services.
Sample design and data collection
Data were collected between January and June 2016. The sample was selected using convenience sampling. In 27 hospitals across 15 provinces in China, we administered questionnaires to 1,589 hospitalized patients (hospitalized for more than three days) or their relatives who were over 18 years of age and had the capacity for independent judgment. The participating hospitals are shown in Table 1. The researchers entered the inpatient wards after obtaining hospital approval and the informed consent of the patients or their families. The researchers distributed the questionnaires on site; they were completed by the patients or their families and collected on the spot. We sent out 1,589 questionnaires, of which 1,520 were collected (response rate 95.65%) and 1,303 were valid (85.72% effective recovery rate).
Design and development of questionnaire
The questionnaire was designed using the following steps. First, we referred to the international standards for SERVQUAL [6] and the actual situation of the medical service sector in China to make appropriate changes and form our questionnaire. Next, we carried out a preliminary investigation in three hospitals in Harbin. In this preliminary survey, we issued 75 questionnaires; these 75 individuals were not included in the formal study. After processing the preliminary data, we further modified the questionnaire. Finally, we consulted health management experts, hospital administrators, clinicians, and other health experts (a total of six persons) for expert opinions on the questionnaire to refine it further.
Through the literature review, pre-investigation, and expert consultation, the final questionnaire was formed. This questionnaire comprised general characteristics (age, sex, education, income, clinic department, medical payments) and a 24-item scale each for expectations and perceptions [20][21][22]. (Patient expectation refers to the health service expected before receiving medical services; it is influenced by past experience, public opinion, the image of medical institutions, and oral communication from relatives and friends. Patient perception refers to the patient's actual feelings about the quality of service provided by the hospital after receiving medical services.) The scale comprised the following dimensions: tangibles (items 1-5), reliability (items 6-9), responsiveness (items 10-13), assurance (items 14-18), empathy (items 19-21), and economy (items 22-24), all rated on a 5-point Likert scale (strongly disagree, disagree, indifferent, agree, and strongly agree). Higher scores on each item indicate that patients' expectations and perceptions regarding the quality of medical services are more positive.
The results of validity testing indicated that all dimensions met the minimum validity requirements. Regarding the reliability, the Cronbach's alpha value for expectations of service quality was 0.967 for the whole scale; those for the six dimensions all exceeded 0.8. For perception of service quality, the Cronbach's alpha of the whole scale was 0.933, and those for the six dimensions were all over 0.7. The details are shown in Table 2.
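For readers who want to reproduce this kind of reliability check, the snippet below computes Cronbach's alpha from an items matrix; the random responses are a stand-in for the real data (and will give a low alpha), so this is a sketch of the calculation, not of the reported results.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_respondents, n_items) with Likert ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
fake_responses = rng.integers(1, 6, size=(100, 24))  # placeholder 5-point answers
print(f"alpha = {cronbach_alpha(fake_responses):.2f}")
```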
Principal component analysis was used to extract factors with an eigenvalue greater than 1 and factor loadings greater than 0.45, according to the Kaiser criterion. The results show that the Kaiser-Meyer-Olkin values for expectations and perceptions were
Data calculation method
The difference between perceptions (P) and expectations (E) (P − E = SQ) represents service quality. When SQ is negative, there is a service quality gap, meaning that patients' expectations are greater than their perceptions. Conversely, when SQ is positive, patients' perceptions exceed their expectations [23]. The specific calculation for each dimension is shown in Table 3.
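The gap calculation itself is straightforward; the sketch below applies SQ = P − E per dimension using the item-to-dimension mapping given in the questionnaire description, with hypothetical mean ratings standing in for the survey data.

```python
# Item ranges per dimension, as stated in the questionnaire description.
DIMENSIONS = {
    "tangibles": range(1, 6), "reliability": range(6, 10),
    "responsiveness": range(10, 14), "assurance": range(14, 19),
    "empathy": range(19, 22), "economy": range(22, 25),
}

def mean_score(ratings: dict, items) -> float:
    return sum(ratings[i] for i in items) / len(items)

def service_quality_gaps(expected: dict, perceived: dict) -> dict:
    """SQ = P - E per dimension; a negative value indicates a quality gap."""
    return {dim: mean_score(perceived, items) - mean_score(expected, items)
            for dim, items in DIMENSIONS.items()}

# Hypothetical example: every item expected at 4.2 but perceived at 3.8.
expected = {i: 4.2 for i in range(1, 25)}
perceived = {i: 3.8 for i in range(1, 25)}
print(service_quality_gaps(expected, perceived))   # all gaps of about -0.4
```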
Data analysis method
Initially, we performed data entry using Epidata and then employed SPSS Statistics 20 for the statistical analysis. We calculated descriptive statistics (means and standard deviations) for patients' expectations and perceptions of service quality. Paired-sample t-tests were used to compare the expectations and perceptions of service quality and to determine which services had the greatest gaps in quality. Results with p < 0.05 were considered statistically significant. A binary logistic regression analysis was used to examine the relationship between the gap in service quality expected and perceived by patients and their families and demographic characteristics.
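A hedged sketch of these two tests on simulated data is shown below (the actual analysis was done in SPSS): the paired t-test compares per-respondent expectation and perception scores, and the logistic regression relates the presence of a gap to a made-up binary covariate.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(7)
expected = rng.normal(4.2, 0.4, size=300)              # simulated expectation scores
perceived = expected - rng.normal(0.3, 0.3, size=300)  # simulated perception scores

t_stat, p_paired = stats.ttest_rel(expected, perceived)
print(f"paired t-test: t = {t_stat:.1f}, p = {p_paired:.3g}")

# Binary outcome: 1 if a gap is present (perceived < expected), else 0.
gap_present = (perceived < expected).astype(int)
sex = rng.integers(0, 2, size=300).astype(float)       # made-up covariate
X = sm.add_constant(sex)
logit_result = sm.Logit(gap_present, X).fit(disp=0)
print(logit_result.params)   # intercept and log-odds for the covariate
```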
Ethical approval
This research project was approved by the Medical Ethics Committee of the School of Public Health, Harbin Medical University. Before the survey, we received approval from the research hospitals; furthermore, all participants participated voluntarily and anonymously after signing informed consent forms. The collected data did not contain personal information such as name, telephone, and so on, so they are completely confidential.
Patient characteristics
According to the survey results, there was a relatively equal proportion of male (47.8%) and female (52.2%) participants. Most participants were treated in the internal medicine (40.1%) and surgery (24.5%) departments, and their main methods of paying for their medical services were basic medical insurance for urban workers (25.6%), basic medical insurance for urban residents (25.6%), and the new rural cooperative medical system (27.7%). Most (96.6%) patients were aware of their illnesses, while 90.9% were aware of the treatment of their diseases; 50.1% of patients felt satisfied with their doctors. The specific characteristics are shown in Table 4.
Gaps between expectation and perception of service quality according to patients' demographic characteristics
Clinic department was an exposure factor for the gap in tangibles service quality. Among the survey respondents, the gap in tangibles service quality for gynecology patients was 2.367 times that of other departments (OR = 2.367, 95% CI 1.243 to 4.505). The gap in responsiveness service quality for male participants was 0.690 times that of female participants (OR = 0.690, 95% CI 0.553 to 0.860). The gap in assurance service quality for male participants was 0.760 times that of female participants (OR = 0.760, 95% CI 0.607 to 0.952). The results are shown in Table 5 and Table 6.
Mean service quality gaps by item
According to the survey, aside from item 3 ("hospital medical staff wear clean and decent uniforms"), the remaining items showed negative service quality gaps; the differences between expectations and perceptions were significant (p < 0.05). This information indicates that patients' expectations were not met. The greatest gap was for item 22 ("the hospital medical expenses are reasonable") followed by item 23 ("the cost of medical services is issued in a timely and convenient manner") and then item 24 ("detailed list of the items in the hospital charges"). As shown in Table 7, a total of 24 items showed a significant gap (p < 0.05); it also shows the difference in expectation and perception before and after receiving medical care.
Patients' expectations and perceptions of the quality of provided services
This study calculated the service quality gap according to dimension of service quality. The results showed that patients had the highest expectations for the assurance dimension (mean 4.224), followed by empathy, responsiveness, reliability, economy, and tangibles. Regarding the perceived quality of the services, assurance was again highest, followed by reliability, empathy, tangibles, responsiveness, and economy. Regarding the service quality gaps, the greatest was for economy, followed by responsiveness, empathy, assurance, reliability, and tangibles. We observed significant differences in patients' expectations and perceptions before and after receiving medical services (p < 0.05). See Table 8 for details.
Discussion
From patients' perspectives, it is of great importance to ensure high-quality hospital care services. We explored patients' expectations and perceptions of hospital service quality to determine the gap in hospital service quality, thereby providing accurate reference data for improving medical services.
Demographic characteristics and the service quality gap
Gynecology mainly concerns female patients, who tend to be more sensitive to the gap in tangibles service quality and to other aspects of perception. It is suggested that hospitals improve infrastructure and provide more convenient service facilities for departments such as gynecology and pediatrics, for example by installing toilet handrails for patients and providing rental seats for accompanying personnel, kettles, and so on. Among the demographic characteristics, gender was significant for the responsiveness and assurance service quality gaps. This suggests that hospitals should provide more attentive services to patients and give patients a sufficient sense of security when providing services. At the same time, because women constitute a vulnerable group in society and are physically and mentally more vulnerable in the face of disease, medical staff should pay more attention to the needs of female patients.
Patients' expected service quality
Patients' expectations of service quality were ranked as follows (high to low): assurance, empathy, responsiveness, reliability, economy, and tangibles. These results differ from PZB's ranking [4], which was reliability, responsiveness, assurance, empathy, and tangibles. This is likely due to
Patients' perceptive service quality
Regarding their perceptions of service quality, patients gave the lowest ratings to the economy dimension. This is in accordance with the results indicating that 38.4% of patients had an income between 1000 and 3000 RMB and that medical services were mainly paid for using the new rural cooperative medical system, suggesting that patients' economic level was rather low. Most rural patients are more sensitive to economic factors and cannot often afford excessive medical costs. Furthermore, even though most people in China have medical insurance, they still must partially cover their own treatment costs. This has likely led to a sizeable economic burden for some patients. Note that all of the surveyed hospitals were tertiary hospitals; compared with other hospitals, tertiary hospitals are costly and have a substantial outpatient clientele. Furthermore, medical staff are often too busy to inform patients of cost details in a timely manner, thus leading to lower levels of economic awareness among patients [24].
Patients' expected and perceived quality of service gap
The results of this study show that, for all six dimensions of service quality, the perceived quality was lower than expected. More specifically, the gaps were as follows, from largest to smallest: economy, responsiveness, empathy, assurance, reliability, and tangibility. In other words, the service quality gap was largest for the economy dimension. The greatest gap in economy was in item 22 ("the hospital medical expenses are reasonable"). This may relate to the widely discussed problem in China's doctor-patient relationship that it is "expensive to see a doctor" [18]. In other words, this gap is perhaps caused by both the high medical costs in the hospitals and the low income of respondents, which suggests that the cost of medical care is, indeed, an economic burden [25]. The gaps for responsiveness and empathy were ranked second and third, respectively. The gaps were largest for items 13 ("medical staff willingness to help patients") and 21 ("the hospital gives priority to your benefits, not the benefits of medical staff"). One reason for this can be explained with Maslow's Hierarchy of Needs [26]. This theory indicates that people's physiological, security, and social needs must be incrementally met. With the development of society and the concomitant improvement in people's living standards, patients are becoming less satisfied with a hospital providing only treatment for disease; in other words, they are seeking higher-level services. Thus, the quality of medical services is no longer limited to the physical care provided to patients but also includes their psychological care. Additionally, patients in China tend to believe that hospitals are "for profit" and, thus, do not prioritize patient interests. Because government investment in medical care is inadequate, hospitals, whether public or private, can only ensure normal operation by remaining profitable [27]. However, ethically, hospitals should prioritize the safety of patients, which suggests that hospital managers must find a balance between these two demands.
The gaps for the assurance and reliability dimensions were ranked fourth and fifth, respectively. According to Table 4, most patients or their families are aware of their own condition and treatment and are satisfied with the doctor's treatment. These results indicate that patients tend to trust medical services and find their experience in the hospital to be relatively satisfactory with regard to their initial expectations.
The lowest gap in service quality was for tangibles. This result is consistent with the results of Yu Yawei and Yu Liling [13]. The reason that this gap was smallest may be that the tangibles of care are not overly important to patients. Alternatively, we selected participants from three large hospitals, which tend to have adequate medical equipment and other resources. Thus, while it did not meet patients' expectations, it was nevertheless the smallest gap of all dimensions.
Limitations
First, this research adopted a method requiring patients to recall their own situation; thus, the results may be subject to recall bias. Second, owing to the use of convenience sampling, the selected sample may be less representative. Third, limited by manpower, material resources, and financial resources, the scope of hospitals and the number of patients included were limited.
Conclusions
According to the results of this study, all six dimensions of service quality showed a negative gap, indicating that patients' expectations were not met. The largest gap was for the economy dimension followed by responsiveness, empathy, assurance, reliability, and tangibility. This reflects the high cost of treating illnesses, which remains a major problem requiring urgent solution in China. Hospitals should make adjustments according to the actual situation of service quality and should constantly strive to improve the quality of medical services.
Supporting information

S1 Dataset. Supporting dataset. The supporting dataset includes the data underlying our findings in this study. (XLS)

S1 Questionnaire. (DOC)
Synthesizing Results from Empirical Research on Engineering Design Process in Science Education: A Systematic Literature Review
We reviewed 48 articles related to the engineering design process in science education published from 2010 to 2020. Several previous literature reviews have analyzed the engineering design process in science education; however, we have not found any that investigates the projects, the discussed topics, and the benefits of the implementation of the engineering design process in science education. The research method used was a systematic literature review. This study analyzed the characteristics of the content based on year of publication, type of publication, countries that implement it, research approach, educational stage, and science content. The findings show that the projects used in the implementation of the engineering design process in science education varied according to the discussed topics. The benefits of the implementation of the engineering design process in science education include cognitive benefits, procedural (skills) benefits, attitudinal benefits, and a combination of the three.
INTRODUCTION
The implementation of Science, Technology, Engineering, and Mathematics (STEM) education through the engineering process has become more acknowledged in the field of education (Fan & Yu, 2015). Banko, Grant, Jabot, McCormack, and O'Brien (2013) assert that the Next Generation Science Standards (NGSS) can currently be used as an alternative in the reform of science education. The implementation of the NGSS emphasizes the integration between engineering and science learning in schools. In addition, engineering design in the implementation of the NGSS can increase student motivation, creative thinking skills, and the ability to connect science with engineering. Moreover, the engineering design process is one of the crucial parts of STEM education (Lin, Hsiao, Chang, Chien & Wu, 2018). Atman et al. (2007) state that engineering design is one of the competencies needed by students in engineering education. According to Nurtanto, Pardjono, Widarto, and Ramdani (2020), the engineering competence of students in vocational high schools can be improved by incorporating the engineering design process in their learning. Based on the results of these studies, the current K-12 reform of education emphasizes science education that is integrated with engineering design (Guzey, Ring-Whalen, Harwell, & Peralta, 2017). Many countries have underlined the role of engineering in science education (Crotty et al., 2017). Furthermore, the engineering design process has also been claimed to be a new vision in science education. In addition, according to Lie, Aranda, Guzey, and Moore (2019), the engineering design process implemented in science learning can improve students' creative thinking and interdisciplinary abilities. Thus, the engineering design process is not only applicable in engineering education, but it can also be implemented in science education.
Studies that integrate the engineering design process in science education have been carried out in several countries. Many of the aforementioned studies use various research approaches, such as mixed methods, quantitative, and qualitative. With the use of mixedmethod research approach, some studies aim to investigate: the influence of the engineering design process on students' situational interest (Dohn, 2013); students' perceptions of engineering and technology (Hammack, Ivey, Utley, & High, 2015); efficacy (Maeng, Whitworth, Gonczi, Navy, & Wheeler, 2017); content knowledge (Marulcu, & Barnett, 2013); students' conceptions (Schnittka & Bell, 2011;Dankenbring & Capobianco, 2016); and students' ability in handling the complexity of a task (English, King, & Smeed, 2016). In addition, some studies related to the engineering design process are also used to investigate some variables in the research. The aforementioned studies aim to investigate: the influence of the engineering design process on students' achievement and interest (Guzey, Ring-Whalen, Harwell, & Peralta, 2017); students' understanding and self-efficacy (Zhou et al., 2017); as well as content knowledge, STEM conceptions, and engineering views (Aydin-Gunbatar, Tarkin-Celikkiran, Kutucu & Ekiz-Kiran, 2018). Furthermore, Berland et al. (2013) attempt to examine the way students implement science and mathematics to their engineering work.
In a quantitative research approach, some studies aim to investigate: the influence of the engineering design process on problem-solving skills (Syukri, Halim, Mohtar & Soewarno, 2018) and teachers' response (Pleasants, Olson, & De La Cruz, 2020); the influence of the engineering design process on engineering content and attitudes towards STEM ; interest towards STEM subjects and career (Shahali, Halim, Rasul, Osman, & Zulkifeli, 2016); curiosity and scientific disciplines (Ward, Lyden, Fitzallen, & León de la Barra, 2016); as well as science attitudes, and science content knowledge (Wendell & Rogers, 2013). In addition, Fan and Yu (2015) examines the influence of the STEM approach within engineering design practices on conceptual knowledge, higher-order thinking skills, and design project activity. Yu, Wu, and Fan (2019) also look into the influence of the engineering design process on science knowledge and critical thinking within the delivered design product.
With the use of the qualitative research approach, some studies intend to analyze: the application of the engineering design process as well as its influence on students' understanding (Park, Park, & Bates, 2016;Schnittka, 2012), on teachers' understandings (Mesutoglu, & Baran, 2020), on reflective decisionmaking (Wendell, Wright, & Paugh, 2017), on students' views of design (Lie, Aranda, Guzey, & Moore, 2019), the classroom discourse (McFadden & Roehrig, 2018), and on problem-solving skills (English, Hudson, & Dawes, 2013). Some researches also examine the influence of the engineering design process on the generation of ideas and design thinking (English, Hudson, & Dawes, 2012), on the subject matter and pedagogical content knowledge (Hynes, 2012), as well as on mindful planning and students' modeling practices (Bamberger & Cahill, 2013). Capobianco, DeLisi, and Radloff (2018) also explain the development of elementary science teachers when implementing the engineering design process. Additionally, Chiu and Linn (2011) delve into how students integrate mathematics and science into their engineering design work.
In addition to studies using mixed-method, quantitative, and qualitative research approaches, we also found some studies related to the engineering design process that use the systematic literature review (meta-analysis) method. One literature review study is intended to summarize information on learning with the engineering process through project-oriented capstone courses (Dutson, Todd, Magleby, & Sorensen, 1997). In addition, Lammi, Denson, and Asunda (2018) review articles related to engineering design challenges in secondary school settings. A review of articles on the engineering design process in science learning is also carried out by Arık & Topçu (2020); their review investigates the steps of design in the engineering design process that are used for learning. Although we have managed to find literature reviews that analyze the implementation of the engineering design process in science education, we have not found any previous literature review studies that aim to investigate which projects and topics are used in the implementation of the engineering design process in science education. Furthermore, studies investigating the benefits of implementing the engineering design process in science education have also not been carried out by previous researchers.

Contribution to the literature

• Previous literature review studies focused on the implementation of the engineering design process in K-12 science classrooms. In contrast, this study aims to investigate the projects, topics, and benefits of the implementation of the engineering design process in science education. This research is not limited to K-12 science classrooms; it investigates various levels of education, covering students, undergraduate or graduate students, and teachers.

• This research can be used as a reference for all stakeholders involved in science education.

• The results of this study can encourage science educators and educators in other fields to implement the engineering design process in their learning.
Based on this explanation, there have been innovations in education that integrate the engineering design process in science education. However, students still experience difficulties in connecting the design projects that they develop with Science (Chao et al., 2017). Berland et al. (2013) also state that although students are able to apply science and mathematics to their engineering projects, the implementation in itself is still rather inconsistent. In addition to students, teachers also claim that teaching science using the engineering design process is challenging and, still, leads to several problems (Capobianco, 2011). These problems are most probably caused by the fact that engineering design is a new, unfamiliar concept to some teachers. Due to the newness of engineering design, science teachers may feel challenged when implementing engineering in science education (Guzey, Harwell, Moreno, Peralta, & Moore, 2016). In addition, science learning, as of now, still encounters several problems in various countries. The problems in learning science are that students regard science as difficult, less interesting, and have too many formulas (Zhang & He, 2012;Winarno, Rusdiana, Riandi, Susilowati, & Afifah, 2020). Ogunkola and Samuel (2011) also explain that students' perceptions of science lessons are abstract; this arises in spite of the fact that science lessons are closely related to everyday life so that students can observe science directly in their environment. Furthermore, according to Sun, Wang, Xie, & Boon (2014), they argue that the implementation of science learning in schools still does not meet the expected standards; that said, there is a need for learning innovations to improve the quality of learning.
From the aforementioned explanation, it can be concluded that the implementation of the engineering design process in science education has not achieved its expected merit. This is mainly due to the implementation process in the field that is still faced with challenges (Berland, Martin, Ko, Peacock, Rudolph, & Gulobski, 2013;Capobianco, 2011;Chao et al., 2017) as the engineering design process is a new, unfamiliar concept to most science teachers (Guzey, Harwell, Moreno, Peralta, & Moore, 2016). According to Dankenbring and Capobianco (2016), the current reform of education is based on the integration of science learnings through engineering practices. Based on these problems, learning innovations that are based on the engineering design process are expected to be an alternative solution to solve various problems in science education.
This research explains the characteristics of its content, such as year of publication, type of publication, countries that implement the engineering design process, research approach, educational stage, and science content. The purpose of analyzing the characteristics of the content is to provide an overview of the articles analyzed in this study. Furthermore, we also investigated which projects and topics are used in previous studies in implementing the engineering design process in science education (Science, Physics, Chemistry, and Biology). In addition, the results of this study can provide a comprehensive explanation for stakeholders in the field of science education who will implement the engineering design process into their learning. For example, to teach the topic of "energy" by using the engineering design process, we will mention some alternative projects that are suitable for use on the topic of energy based on the results of the previous studies. Furthermore, this study also explains the benefits of the engineering design process in science education. The elaboration of the benefits of the engineering design process in science education is based on cognitive, procedural/skills, attitudinal benefits, and a combination of the aforementioned three benefits. Therefore, the results of this study are not only useful for stakeholders in the field of science education who will implement the engineering design process, but also for future researchers. We served the data in the form of tables so that readers will find it easier to comprehend. The results of previous research reveal that learning with the engineering design process had a positive effect on students (Kim, Oliver, & Kim, 2019).
Thus, a literature review study that discusses the engineering design process in science education is essential to be carried out. The results of this study are expected to be beneficial as reference for all stakeholders involved in science education, especially teachers, lecturers, or future researchers. In addition, the engineering design process can be used as an alternative learning approach in science education. The aim of this study was to review 48 articles related to the engineering design in science education that are published from 2010 to 2020. There are three research questions used to guide the process of this study: 1. How is the distribution of research based on the characteristics of the content?
2. What are the projects and discussed topics in the implementation of the engineering design process in science education?
3. What are the benefits of the engineering design process in science education?
Research Design
The research method used in this study was a systematic literature review (Petticrew & Roberts, 2008). We chose 48 articles from highly-regarded journals published from 2010 to 2020. All journals chosen are indexed by Scopus and Web of Science (WoS). Scopus and Web of Science (WoS) were used as the basis for selecting articles because they are both reputable journal indexers. The articles published on Scopus and the Web of Science (WoS) are also of good quality and can be accounted for. This study aims to review 48 articles related to engineering design in science education.
Data Collection
The articles chosen for review were published from January 2010 to April 2020. The highly-regarded publishers that were chosen are Taylor & Francis, Springer, Wiley, Cambridge, Elsevier, Emerald, Oxford, Sage, etc. We also looked for articles directly on the website of international journals. The keywords used were: "STEM approach" "STEM education", "engineering design", "engineering design process", "engineering design in science education" or "STEM through the engineering design process". There were about 393 articles found. However, only 48 articles met our research criteria. The number of articles is symbolized by the letter "f" in the table. The shortlisted journals for review are to be found in Table 2.
From Table 2, it shows that out of 19 international journals, 15 journals are indexed by both Scopus and WoS, and the remaining four are indexed by Scopus only. Out of the 48 chosen articles, 38 articles are indexed by both Scopus and WoS, and the remaining ten are indexed by Scopus only. All chosen journals can be found in Scimago Journal & Country Rank (Scimagojr.com). Scimago Journal & Country Rank states that the journals have high H-index. Also, most of the journals are indexed by Web of Science based on Clarivate Analytics. Therefore, it can be concluded that the articles chosen for this study are of good quality.
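The screening described above amounts to a keyword match followed by inclusion criteria. The sketch below is illustrative only; the records, field names, and keyword list are assumptions rather than the authors' actual pipeline.

```python
# A minimal sketch (not the authors' pipeline) of the screening step:
# keyword search, then inclusion criteria. Records and fields are hypothetical.
KEYWORDS = ["engineering design", "engineering design process", "stem education"]

articles = [
    {"title": "Engineering design process in middle school science",
     "year": 2016, "indexed": {"Scopus", "WoS"}, "peer_reviewed_journal": True},
    {"title": "A STEM education outreach report",
     "year": 2009, "indexed": {"Scopus"}, "peer_reviewed_journal": False},
]

def matches_keywords(article):
    title = article["title"].lower()
    return any(k in title for k in KEYWORDS)

def meets_criteria(article):
    return (2010 <= article["year"] <= 2020
            and "Scopus" in article["indexed"]        # Scopus indexing at minimum
            and article["peer_reviewed_journal"])     # journals only, no proceedings or theses

shortlist = [a for a in articles if matches_keywords(a) and meets_criteria(a)]
print(len(shortlist), "article(s) retained")          # the authors retained 48 of about 393
```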
Data Analysis
The data obtained in this study were analyzed with a descriptive approach. We classified the data in the form of tables and figures based on the predetermined research framework. The data were then discussed comprehensively and synthesized with the previous research. The focus of this study is to investigate the distribution of research based on the characteristics of the content, the projects and discussed topics, and the benefits of the engineering design process in science education. The review followed these steps:

1. Discussing the research questions among the writers based on the research theme, that is, the engineering design process in science education.

2. Determining the criteria of the articles to be shortlisted for review. The articles must be related to the engineering design process in science education and must be indexed by Scopus and Web of Science, or by Scopus only. The articles selected that are indexed by Scopus have at least a quartile 2 (Q2) category so that the quality of the articles is classified as excellent. In addition, we selected articles written in English only.

3. Producing the protocol for the review: generating a research framework for each section, starting from the title, introduction, method, results, discussion, and conclusion.

4. Searching, screening, and selecting: looking for journals from the highly-regarded publishers with the following keywords: engineering design process, engineering design, engineering design in science education, or STEM through the engineering design process; shortlisting articles based on the predetermined criteria. All articles must be published in international highly-regarded journals and be related to the engineering design process in science education. If the articles did not meet these criteria, they were excluded from review.

5. Discussing the validity and the reliability of the articles among the authors and choosing articles that are relevant to the engineering design process in science education.
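The descriptive tallies reported in the distribution tables that follow reduce to counting articles per category and converting counts to percentages. A minimal sketch of that computation, with hypothetical records:

```python
# A minimal sketch of the descriptive tallying behind the distribution tables
# (year, country, approach, stage, content); the records below are hypothetical.
from collections import Counter

reviewed = [
    {"country": "USA", "approach": "qualitative", "stage": "middle school"},
    {"country": "USA", "approach": "mixed methods", "stage": "high school"},
    {"country": "Turkey", "approach": "quantitative", "stage": "teachers"},
]

def distribution(records, key):
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    # return count and percentage per category, as presented in the tables
    return {k: (v, round(100 * v / total, 2)) for k, v in counts.items()}

for key in ("country", "approach", "stage"):
    print(key, distribution(reviewed, key))
```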
Research Question 1: How is the Distribution of Research Based on the Characteristics of the Content?
The distribution of research is divided based on the following characteristics: year of publication, type of publication, countries that implement the engineering design process, research approach, educational stage, and science content.
The distribution of research based on year of publication
The distribution of research chosen for review ranged from 2010 to 2020. The complete data can be seen in Figure 1.
The distribution of research based on the type of publication
The distribution of research based on the type of publication is divided into journal, proceeding, and thesis. The data can be seen in Table 3. From Table 3, it can be seen that all articles for review were chosen from 48 international journals (100%). Although many articles on the engineering design process in science education have been published in proceedings and theses, we did not choose articles from these proceedings and theses. We aimed to ensure that the articles selected for review are of excellent quality and can be accounted for. In addition, the selection of articles from journals indexed by Scopus or Web of Science (WoS) is more stringent and has been through peer review. Based on the given data, it can be concluded that the articles chosen for this study are of good repute and quality.
The distribution of research based on countries and regions that implement the engineering design process in science education
The data of the countries and regions that implement the engineering design process in science education were obtained from the affiliation of the writer of the chosen articles. The complete data can be seen in Figure 2.
Based on Figure 2, the countries that implement the engineering design process in science education are the United States of America (USA), Australia, Taiwan, Turkey, Malaysia, Denmark, and Indonesia. The distribution of research can be seen in Table 4. Based on Table 4, it can be seen that the United States of America had the highest number of articles with 34 articles (70.83%). Denmark and Indonesia were the lowest in the number of articles with 1 article (02.08%), respectively. From the data, it can be concluded that there are very few countries that implement the engineering design process in science education.
The distribution of research based on the research approach
The research approach was determined by the research method used in the articles. The complete data can be seen in Table 5.
Based on Table 5, it can be seen that there were three research approaches: quantitative, qualitative, and mixed methods. The most used research approach was qualitative with 20 articles (41.67%), and the least used approach was mixed methods with 13 articles (27.08%).
The distribution of research based on the educational stage
The sample of participants in the articles was analyzed to determine the distribution of research based on the educational stage. This aims to provide an overview of the distribution of previous studies related to engineering design process in science education based on the level of education. Elementary school level consists of students aged around 6-12 years. Middle school level consists of students who have graduated from elementary school within an age of around 12-15 years. High school level consists of students who have graduated from middle school within an age of 15-18 years. Undergraduate level consists of students who have graduated from high school and continue their studies to university level within the age of around 18-22 years. Meanwhile, graduate students are students who have graduated from university level around the age of 22 years or more. The complete data can be seen in Table 6.
Based on Table 6, it can be seen that there were 37 articles (77.08%) which sample or participant consisted of students; 10 articles (20.83%) with teachers as the sample or participants; and 1 article (02.08%) with undergraduate/graduate students as the sample of participants. The distribution of research based on the educational stage was found the highest in middle school students with 15 articles (31.25%). The lowest came from undergraduate/graduate students with only 1 article (02.08%). From the data, it can be concluded that the engineering design process is implemented in science education of various educational stages (level). However, the implementation in the undergraduate/graduate level is still rather scarce when compared to the elementary school, middle school, and high school levels.
The distribution of research based on science content
This study divides the science content into 5: Science, Physics, Biology, Chemistry, and the integration of science with other subjects. The selection of articles containing the integration of science with other subjects aims to investigate fields other than science that use one of the topics of science in their research. The discussion of the results of this study is broader and more comprehensive because interdisciplinary fields related to science are also described in this study. The science content was divided based on school subjects or research topics. The complete data can be seen in Table 7.
From Table 7, it can be seen that the implementation of the engineering design process was mostly found in the subject of science, with the least found in the integration of science with other subjects. Based on the data, it can be concluded that the engineering design process is implemented in Science, Physics, Biology, Chemistry, and the integration of science with other subjects.
Research Question 2: What are the Projects and Discussed Topics in the Implementation of the Engineering Design Process in Science Education?
The design projects and discussed topics in the implementation of the engineering design process were mostly found in Science, Physics, Biology, Chemistry, and the integration of science with other subjects.
The projects and topics in science
The data of the projects and discussed topics in science can be seen in Table 8. Table 8 shows that there were 18 projects related to the implementation of the engineering design process in science. It can be concluded that the choice of projects varied on the discussed topic.
The projects and topics in Physics
The data of the projects and discussed topics in Physics can be seen in Table 9. Table 9 shows that there were 17 projects related to the implementation of the engineering design process in Physics. It can be concluded that the choice of projects varied depending on the discussed topic.
The projects and topics in Biology
The data of the projects and discussed topics in Biology can be seen in Table 10. Table 10 shows that there were three projects related to the implementation of the engineering design process in Biology. It can be concluded that the choice of projects varied depending on the discussed topic.
The projects and topics in Chemistry
The data of the projects and discussed topics in Chemistry can be seen in Table 11. Table 11 shows that there were three projects related to the implementation of the engineering design process in Chemistry. It can be concluded that the choice of projects varied depending on the discussed topic.
The projects and topics in science integrated with other subjects
The data of the projects and discussed topics in science integrated with other subjects can be seen in Table 12. Table 12 shows that there were 6 projects related to the implementation of the engineering design process in science integrated with other subjects. It can be concluded that the choice of projects varied depending on the discussed topic.
Research Question 3: What are the Benefits of the Engineering Design Process in Science Education?
This study also examined the benefits of the engineering design process in science education. According to Martín-Páez, Aguilera, Perales-Palacios, and Vílchez-González (2019), the benefits of a learning approach can be classified into cognitive benefits, procedural benefits (skills benefits), and attitudinal benefits. In this study, we classified them into four categories of cognitive benefits, procedural benefits (skills benefits), attitudinal benefits, and the combination of the three aforementioned benefits. There is an addition of one category: the combination of the three aforementioned benefits because this study also incorporated interdisciplinary research related to science.
Cognitive benefits
Cognitive benefits are those that are based on empirical factual knowledge. The complete data of the cognitive benefits of the engineering design process can be found in Table 13. Table 13 shows that the cognitive benefits of the engineering design process were found in 9 articles. Most articles claimed that the engineering design process improved students' content knowledge. It can be concluded that the implementation of the engineering design process in science education may improve students' content knowledge, science teachers' understanding, and is effective for conceptual change.
Procedural Benefits (Skills Benefits)
Procedural/skills benefits are proficiency in a specific field. The complete data of the procedural benefits of the engineering design process can be found in Table 14. Table 14 shows that the procedural benefits (skills benefits) of the engineering design process were found in 21 articles. Most articles claimed that the engineering design process could integrate engineering with science learning. It can be concluded that the implementation of the engineering design process in science education may improve students' content knowledge and science teachers' understanding, and is effective for improving various students' skills.
Attitudinal benefits
Attitudinal benefits are benefits related to behavior or actions based on one's stance. The complete data of the attitudinal benefits of the engineering design process can be found in Table 15. Table 15 shows that the attitudinal benefits of the engineering design process were found in 7 articles. It can be concluded that the implementation of the engineering design process in science education may result in various attitudinal benefits.
Combination of cognitive, procedural/skills, and attitudinal benefits
The combination of cognitive, procedural/skills, and attitudinal benefits is the benefit obtained by more than one cognitive, procedural/skills, and/or attitudinal benefits. Thus, this section measures more than one variable from the three categories. The complete data of the combination of cognitive, procedural/skills, and attitudinal benefits of the engineering design process can be found in Table 16. Table 16 shows that the combination of cognitive, procedural/skills, and attitudinal benefits of the engineering design process were found in 11 articles. Most articles claimed that the engineering design process improved scientific knowledge and reasoning. Based on the data, it can be concluded that the implementation of the engineering design process in science education may result in different combinations of cognitive, procedural/skills, and attitudinal benefits.
DISCUSSION
This study aims to review 48 articles from international highly-regarded journals related to the engineering design process in science education. The focus of this study is to investigate the distribution of research based on the characteristics of the content, the projects and discussed topics, and the benefits of the engineering design process in science education. A literature review study that examines the distribution of research based on the characteristics of the content is in line with several previous studies (Deveci & Çepni, 2017; Martín-Páez, Aguilera, Perales-Palacios, & Vílchez-González, 2019), which state that the analysis of the distribution of research based on the general characteristics of the content is important. The articles reviewed in this study were published from 2010 to 2020. The span of 10 years was specifically chosen so that the results of this study are not out of date (still conforming to the current situation) and are suitable for use as a reference by stakeholders in the field of science education. The highest number of reviewed articles were published in 2016, and the lowest number was published in 2020. All chosen articles are of good quality because they are indexed by Scopus and WoS. The countries that implement the engineering design process in science education are still very few in number. The research approaches used are quantitative, qualitative, and mixed methods. The most used research approach was qualitative, and the least used approach was mixed methods. Most studies used a qualitative research approach because data collection in these studies usually employed observations or interviews through video recordings. Based on the method of collecting the data, the researchers chose the research approach that was deemed most suitable, which was qualitative, compared to other research approaches. This statement is in line with research on the engineering design process in science education, which mostly used a qualitative research approach (Johnston, Akarsu, Moore, & Guzey, 2019). Meanwhile, studies that use a mixed-method research approach mostly employed data collection in more than one manner, such as classroom observations, science tests, and surveys (Guzey, Ring-Whalen, Harwell, & Peralta, 2017). In addition, Gunbatar (2018) also used a mixed method in his research; the data collection employed two methods, a Chemistry achievement test and interviews. Furthermore, the engineering design process is implemented in science education at various educational stages (levels). The results of the analysis show that implementation at the undergraduate/graduate level is still rather scarce compared to the elementary, middle school, and high school levels. Therefore, there are still many opportunities to seek research novelty in the implementation of the engineering design process at the university level.
This study analyzed the characteristics of the content based on the year of publication, type of publication, countries that implement the engineering design process, research approach, educational stage, and science content. The choice of content analysis was supported by several previous studies that examine the year of publication (Jayarajah, Saat, Rauf, & Amnah, 2014;Martín-Páez, Aguilera, Perales-Palacios, & Vílchez-González, 2019), the type of publication (Belland, Walker, Kim, & Lefler, 2017;Çetin & Demircan, 2018;Henderson, Beach & Finkelstein, 2011;Jeong, Hmelo-Silver, & Jo, 2019;Martín-Páez, Aguilera, Perales-Palacios, & Vílchez-González, 2019), and the countries that implement it (Martín-Páez, Aguilera, Perales-Palacios, & Vílchez-González, 2019; Reinhold, Holzberger, & Seidel, 2018;Uzunboylu & Özcan, 2019). Jayarajah, Saat, Rauf, and Amnah (2014) also claims that examining the research approach is important in the analysis of the general characteristics of the content. In addition, some previous studies also investigate the content based on the educational stage (Deveci & Çepni, 2017;Martín-Páez, Aguilera, Perales-Palacios, & Vílchez-González, 2019) and the science content (Arık & Topçu, 2020). If a study analyzes content based on the year of publication, type of publication, countries that implement the engineering design process, research approach, educational stage, and science content, the results of the study can provide an overview for readers. Readers can judge whether the journals being analyzed are of high quality, up-to-date, and assess other important aspects.
This study also analyzed the distribution of research based on design projects and discussed topics when implementing the engineering design process in science education. The results show that the design projects varied based on the discussed topics. The distribution of research based on design projects and discussed topics showed that the implementation of the engineering design process was found in Science, Physics, Biology, Chemistry, and the integration of science with other subjects. Some examples of research on the engineering design process in Science used a bridge construction project to teach the topic of measurement to 58 middle school students (English, Hudson, & Dawes, 2012), submarine building to teach the topic of fluids to 89 middle school students (Siew, Goh, & Sulaiman, 2016), and a musical instrument, a door alarm, a compost column, and a solar panel tracker to teach the topic of energy and matter to 32 elementary school teachers (Capobianco, DeLisi, & Radloff, 2018).
Some examples of research on the engineering design process in Physics used a wind turbine design to teach the topic of energy (Bamberger & Cahill, 2013), an optical instrument to teach the topic of mirrors and lenses, and free electrical energy to teach the topic of electricity and magnetism (Syukri, Halim, Mohtar, & Soewarno, 2018). In addition, some studies related to the engineering design process in Biology designed a simple hydroponic system to teach the topic of plants (Crotty et al., 2017); exploring cells and considering the relationship between the structure and function of DNA were also used to teach the topic of genetically modified organisms (GMOs). In Chemistry, the topic of climate change, energy use, and greenhouse gases utilized an airbag design and chemical reactions project (Chiu & Linn, 2011). Furthermore, Hammack, Ivey, Utley, and High (2015) taught the topic of chemicals using an airplane design project, a popcorn challenge, and a rocket body attached to film canisters. In addition to Science, Physics, Biology, and Chemistry, the engineering design process is also implemented in science that is integrated with other subjects; such projects may utilize several subjects at once. The projects used in the integration of science with other subjects included designing a pinhole camera, a system to take aerial images, a wind turbine, robotic vehicles, and construction helmets. The aforementioned projects were used to teach the topics of power equations and energy transformation (Science) and converting between different units of measurement (Mathematics) (Berland & Steingut, 2016). Moreover, Apedoe and Schunn (2013) taught the topics of genetics (Biology), projectile motion (Physics), and chemical energy (Chemistry) through the project of designing the earthquake task.
The result of this study also shows that the implementation of the engineering design process has its benefits in science education. The said benefits were classified into cognitive benefits, procedural benefits (skills benefits), attitudinal benefits, the combination of the three benefits. This result is in line with the previous study that found STEM approach in science education to have resulted in cognitive benefits, procedural benefits (skills benefits), and attitudinal benefits (Martín-Páez, Aguilera, Perales-Palacios, & Vílchez-González, 2019). The difference of this study from the aforementioned previous study lies in the classification of the benefits. The previous study divided the benefits into three, whereas this study divided the benefits into four aspects, adding the combination of cognitive, procedural (skills), and attitudinal benefits.
Several previous studies investigated that the cognitive benefits of the implementation of the engineering design process were that it improved students' content knowledge/students' achievement (Aydin-Gunbatar, Tarkin-Celikkiran, Kutucu, & Ekiz-Kiran, 2018;Chao et al., 2017;Dankenbring & Capobianco, 2016;Guzey, Ring-Whalen, Harwell, & Peralta, 2017;Marulcu & Barnett, 2013;Park, Park, & Bates, 2016). Mesutoglu and Baran (2020) also claimed that the engineering design process improved science teachers' understanding. When it comes to procedural benefits (skills benefits), some researches stated that the engineering design process improved students' problem-solving skills (English, Hudson, & Dawes, 2013;Syukri, Halim, Mohtar, & Soewarno, 2018), students' ability in designing a project (Xie, 2018) as well as sophisticated discourse (McFadden & Roehrig, 2018;Wendell, Wright, & Paugh, 2017). Furthermore, attitudinal benefits were also found in the implementation of the engineering design process. The stated benefits include improving students' interest in STEM subjects and career (Shahali, Halim, Rasul, Osman, & Zulkifeli, 2016), attitudes towards engineering , and it impacted positively on students' perceptions of engineering and technology (Hammack, Ivey, Utley, & High, 2015). To add, the engineering design process in science education may result in the combination of cognitive, procedural/skills, and attitudinal benefits. Fan and Yu (2015) claimed that the engineering design process improved students' achievement (cognitive benefits) and creative attitude (procedural/skills benefits). This finding is supported by another study asserting that the engineering design process improved scientific knowledge (cognitive benefits) and reasoning (procedural/skills benefits) (Yu, Wu, & Fan, 2019;Wendell, Swenson, & Dalvi, 2019). One research even found the combination of all three benefits: improved students' knowledge (cognitive benefits), attitude (attitudinal benefits), and practices (procedural/skills benefits) (Siew, Goh, & Sulaiman, 2016). So, the benefits of the implementation of the engineering design process in science education include cognitive benefits, procedural (skills) benefits, attitudinal benefits, and a combination of the three benefits.
This research also has its strengths and weaknesses. The strength of this research is that the chosen journals are of high quality and up to date. This statement is supported by the fact that the selected articles were chosen from journals indexed by Scopus (Q1 and Q2) and the Web of Science (WoS) in the last ten years. The results of this study are very useful because topics and projects commonly used by previous researchers are elaborated. In addition, the results of this research can be useful for teachers, lecturers, or further researchers in order to implement the engineering design processes in science education or other fields. Meanwhile, the weakness of this research is that aspects related to how the stages of the engineering design process in science education have not been explained. However, research related to stages of the engineering design process in science education can be explained in further research.
CONCLUSIONS
Currently, the reform of science education is to integrate science learning with the engineering design process. However, there are still challenges met in its implementation. The implementation is found to still be rather inconsistent. Most science teachers are still too unfamiliar with the engineering design process to be able to implement it in their science teaching. Therefore, a literature review study is important to be carried out. This study analyzed the characteristics of the content based on the year of publication, type of publication, countries that implement the engineering design process, research approach, educational stage, and science content. The result of the study shows that there were 48 articles chosen for review published from 2010 to 2020. All chosen articles are of good quality because they are indexed by Scopus and WoS. The countries that implement the engineering design process in science education are the United States of America (USA), Australia, Taiwan, Turkey, Malaysia, Denmark, and Indonesia. The most used research approach was qualitative, and the least used approach was mixed methods. Moreover, the engineering design process is implemented in science education of various educational stages (level). However, the implementation in the undergraduate/graduate level is still rather scarce. The engineering design process was found to be implemented in Science, Physics, Biology, Chemistry, and the integration of science with other subjects. This study also analyzed the distribution of research based on design projects and discussed topics when implementing the engineering design process in science education. The results show that the choice of projects used in implementing the engineering design process varied based on the discussed topics. Additionally, the implementation of the engineering design process has its benefits in science education. The benefits include cognitive benefits, procedural benefits (skills benefits), attitudinal benefits, and the combination of cognitive, procedural (skills), and attitudinal. The engineering design process is considered a new trend in the current reform of science education. Thus, the results of this study can be used as a reference for all stakeholders involved in science education, especially teachers, lecturers, or future researchers. In addition, the engineering design process can be used as an alternative learning approach in science education. The gap for future research is that we have not found any study that implements the engineering design process in the subject of integrated science with preservice science teachers as subjects. Therefore, we recommend the engineering design process to be implemented in science education with projects and topics that have yet to be discussed by previous researchers. Also, research related to stages of the engineering design process in science education can be explained in further research.
Weak value as an indicator of back-action
In this study we critically examine some important papers on weak measurement and weak values. We find some insufficiency and mistakes in these papers, and we demonstrate that the real parts of weak values provide the back-action to the post-selection, which is caused by weak measurement. Two examples, a counterfactual statement of Hardy's paradox and experiments that determine the average trajectory of photons passing through double slits, are investigated from our view point.
Introduction
Since Aharonov et al. [1][2] [3] developed the concepts of weak measurement and weak values, these ideas have been studied by many authors. In weak measurement, which differs from conventional von Neumann-type measurement [4] (strong measurement in this paper), the interaction between an observed system and a probe is considered to have no effect on the observed system when its weak coupling limit is taken. Some authors [3][5] [6] have even claimed that noncommuting observables can be measured simultaneously by weak measurement, and relations to Bell's inequality [7] have also been discussed. In addition, it has been claimed that wave functions can be directly determined by weak measurement [8][9] [10]. In particular, Wiseman [11] defined the average velocity of photons operationally with weak measurement and demonstrated identification with Bohm's velocity [12]. Kocsis et al. have developed this study and reported [13] that the average trajectory of photons passing through double slits can be determined operationally by weak measurement maintaining the interference pattern.
Weak values have attracted attention because of both the values obtained by weak measurement and their inherent physical meaning [14]. For example, the counterfactual statements of Hardy's paradox and the three-box paradox have been interpreted with the help of weak values [15] [16][17] [18], which were experimentally verified [19] [20] [21] [22] to agree with the values obtained by the corresponding weak measurements. Moreover, strange weak values have been discussed by many authors [23] [24][14] [25], but the conditions in which they appear have not been clarified.
Despite the strange properties of weak values, they have been interpreted as conditional probabilities or conditional expectation values by many authors. One of their main bases is that ordinary expectation values can be described as the sum of weak values, which was demonstrated in [3]. To corroborate the above statement, other authors [26] [27][25] [28][8] [29] have discussed this problem with positive operator-valued measure (POVM). In the second section, we examine their discussions, and we find some insufficiency and mistakes. Then, we demonstrate that the real parts of weak values should be interpreted as the indicator of the back-action caused by the weak measurement. Reinvestigation of the operational process of the post-selection provides a clearer basis for the above conclusion. In the following two sections, Hardy's paradox and the double-slit experiment are investigated from the viewpoint given in the second section. The last section is our conclusion.
Interpretation of weak values

2.1 Weak measurement and weak values
First, we quickly review the relation between weak values and the values obtained by the corresponding weak measurement [1][30]. The interaction Hamiltonian $\hat H_I$ between an observable $\hat A$ of the quantum system and the momentum $\hat\pi$ of the pointer is

$\hat H_I = g\hat A\hat\pi$,  (1)

where $g$ is the real coupling constant. $\hat H_I$ is assumed to be constant and roughly equivalent to the total Hamiltonian over some interaction time $t$. The wave function $\varphi(x)$ of the pointer is assumed to be a Gaussian,

$\varphi(x) \propto \exp\!\left[-\frac{(x-x_0)^2}{4\sigma^2}\right]$.

Here, we have introduced the centre $x_0 = 0$, which is essential in the discussion of 2.4. The initial system-pointer state $|\Phi(0)\rangle = |I\rangle|\varphi\rangle$ evolves obeying

$|\Phi(t)\rangle = \exp(-i\hat H_I t/\hbar)\,|I\rangle|\varphi\rangle$,

where $|I\rangle$ is the initial state of the observed system. The state of the pointer $|\varphi_{ji}\rangle$ both after the interaction between the observed system and the probe and the post-selection in $|\psi_j\rangle$ is, up to the lowest order in $gt$,

$|\varphi_{ji}\rangle = \langle\psi_j|I\rangle\left(1 - \frac{igt}{\hbar}\,\langle\hat A\rangle_{\psi_j,I}\,\hat\pi\right)|\varphi\rangle$.

Then, the expectation value of the pointer's position $\hat x$ for this state is

$\langle\hat x\rangle = x_0 + gt\,\mathrm{Re}\,\langle\hat A\rangle_{\psi_j,I}$,

where

$\langle\hat A\rangle_{\psi_j,I} = \frac{\langle\psi_j|\hat A|I\rangle}{\langle\psi_j|I\rangle}$  (7)

is the weak value of an operator $\hat A$ for an initial state $|I\rangle$ and a final state $|\psi_j\rangle$.
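As a numerical illustration of (7) (not part of the original paper; the observable and states below are arbitrary choices), the following sketch evaluates the weak value for a qubit and shows that, when the pre- and post-selected states are nearly orthogonal, its real part lies far outside the eigenvalue range of the observable:

```python
# Illustrative example only: weak value <psi_j|A|I> / <psi_j|I> for a qubit.
import numpy as np

A = np.array([[1, 0], [0, -1]], dtype=complex)    # Pauli Z; eigenvalues are +1 and -1

alpha = np.pi / 4 + 0.01                          # angle of the pre-selected state |I>
beta = -np.pi / 4                                 # post-selected state, nearly orthogonal to |I>
I_state = np.array([np.cos(alpha), np.sin(alpha)], dtype=complex)
psi_j = np.array([np.cos(beta), np.sin(beta)], dtype=complex)

weak_value = (psi_j.conj() @ A @ I_state) / (psi_j.conj() @ I_state)
expectation = (I_state.conj() @ A @ I_state).real

print("ordinary expectation value:", expectation)  # stays inside [-1, 1]
print("weak value:", weak_value)                   # real part is approximately -100 here
```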
Ordinary interpretation of weak values
In some papers [3][14], it is considered as a basis of the statistical interpretation of weak values that ordinary expectation values can be described as the sum of the corresponding weak values. We examine the expectation value $\langle I|\hat A|I\rangle$ of an observable $\hat A$ for a state vector $|I\rangle$. Let $|\psi_j\rangle$ be the eigenvectors that correspond to the respective eigenvalues $\psi_j$, $j = 1, 2, \cdots$ of an observable $\hat\Psi$. By assuming that the set of projection operators $\{|\psi_j\rangle\langle\psi_j|\}$ is complete, i.e., $\hat 1 = \sum_j |\psi_j\rangle\langle\psi_j|$, and that $\langle\psi_j|I\rangle \neq 0$,

$\langle I|\hat A|I\rangle = \sum_j \langle I|\psi_j\rangle\langle\psi_j|\hat A|I\rangle = \sum_j \Pr(\psi_j|I)\,\langle\hat A\rangle_{\psi_j,I}$,  (8)

where $\Pr(\psi_j|I) = |\langle I|\psi_j\rangle|^2$ is the probability that the state $|\psi_j\rangle$ is found in the state $|I\rangle$. Thus, we can interpret the expectation value $\langle I|\hat A|I\rangle$ as a statistical average of the weak values $\langle\hat A\rangle_{\psi_j,I}$, and as a result, weak values are treated by many authors as the expectation values of $\hat A$ between the initial state $|I\rangle$ and the final states $|\psi_j\rangle$, $j = 1, 2, \cdots$. However, as shown below, we should not decide based exclusively on (8) whether weak values can be interpreted as probabilities or expectation values. We write the proposition 'an eigenvalue $a_i$ is obtained when an observable $\hat A$ is measured' as $A(a_i)$, and its corresponding projection operator is denoted $\hat A_i = |a_i\rangle\langle a_i|$. Similarly, we define a proposition $\Psi(\psi_j)$ and a projection operator $\hat\Psi_j = |\psi_j\rangle\langle\psi_j|$. A set of such propositions constitutes a σ-complete orthomodular lattice [31][32], as does the corresponding set of such projection operators.
Let $\hat A$ in (8) be the projection operator $\hat A_i = |a_i\rangle\langle a_i|$. Then,

$\langle I|\hat\Psi_j\hat A_i|I\rangle = \Pr(\psi_j|I)\,\langle\hat A_i\rangle_{\psi_j,I}$.  (9)

A necessary and sufficient condition for the operator $\hat\Psi_j\hat A_i$ to be a projection operator is $[\hat\Psi_j, \hat A_i] = 0$. If and only if this condition is satisfied, $\hat\Psi_j\hat A_i$ corresponds to a proposition $\Psi(\psi_j)\wedge A(a_i)$ and the left-hand side of (9) is its probability for $|I\rangle$ [32]. Here, we define the joint probability of $A(a_i)$ and $\Psi(\psi_j)$ for $|I\rangle$ as the probability of the proposition $\Psi(\psi_j)\wedge A(a_i)$, i.e., the probability of finding $|\psi_j\rangle$ and $|a_i\rangle$ in $|I\rangle$ simultaneously. We do not regard the probability of finding $A(a_i)$ and then $\Psi(\psi_j)$ one by one as the joint probability, because the operation of $\hat A_i$ must affect the probability of $\Psi(\psi_j)$ for $|I\rangle$, as shown in subsection 2.3. Thus, the weak value $\langle\hat A_i\rangle_{\psi_j,I}$ is the conditional probability of finding $|a_i\rangle$ in $|I\rangle$ when $|\psi_j\rangle$ is found in $|I\rangle$ if and only if $[\hat\Psi_j, \hat A_i] = 0$. Then,

$\langle\hat A_i\rangle_{\psi_j,I} = \frac{\langle I|\hat\Psi_j\hat A_i|I\rangle}{\Pr(\psi_j|I)}$.

As shown later, the weak values are actually 0 or 1 in such a case. We can interchange $|\psi_j\rangle$ and $|I\rangle$ in the above discussion. If $|I\rangle\langle I|$ and $\hat\Psi_j$ commute, $\langle\hat A_i\rangle_{\psi_j,I}$ is the probability of finding $|a_i\rangle$ in $|I\rangle$ (or in $|\psi_j\rangle$).
If $[\hat\Psi_j, \hat A] \neq 0$, there exists no projection operator that corresponds to a proposition such as $\Psi(\psi_j)\wedge A(a_i)$ [31]. Instead, if we construct (for example) a Hermitian operator $\hat A\hat\Psi\hat A$ and a projection operator $|h_k\rangle\langle h_k|$, where $\hat A\hat\Psi\hat A|h_k\rangle = h_k|h_k\rangle$, then the proposition corresponding to $A\Psi A(h_k)$ exists. Nevertheless, this proposition is not expressed with the help of the $\Psi(\psi_j)$s and/or $A(a_i)$s. In contrast, either $\hat\Psi_j\hat A_i$ is not a projection operator or it does not correspond to any propositions. Thus, if any two of $\hat\Psi_j$, $\hat A_i$ and $|I\rangle\langle I|$ do not commute, we cannot interpret the left-hand side of (9) as a probability or the right-hand side of (8) as a sum of probabilities. Therefore, in such cases, $\langle\hat A_i\rangle_{\psi_j,I}$ is not the conditional probability of finding $|a_i\rangle$ in $|I\rangle$ when $|\psi_j\rangle$ is found in $|I\rangle$. To clarify the meaning of such a strange probability, we divide $\langle I|\hat\Psi_j\hat A_i|I\rangle$ into its real part and imaginary part as follows:

$\langle I|\hat\Psi_j\hat A_i|I\rangle = \frac{1}{2}\langle I|\{\hat\Psi_j, \hat A_i\}|I\rangle + \frac{1}{2}\langle I|[\hat\Psi_j, \hat A_i]|I\rangle$,

where the first term is real and the second term is purely imaginary, so that

$\mathrm{Re}\,\langle\hat A_i\rangle_{\psi_j,I} = \frac{\langle I|\{\hat\Psi_j, \hat A_i\}|I\rangle}{2\Pr(\psi_j|I)}$.  (12)

Thus, the weak value becomes real if $\langle I|[\hat\Psi_j, \hat A_i]|I\rangle = 0$. Here, we should pay attention to the fact that even if $\langle I|[\hat\Psi_j, \hat A_i]|I\rangle = 0$ for some states, this is not a sufficient condition for $[\hat\Psi_j, \hat A_i] = 0$, i.e., this condition does not ensure that $\hat\Psi_j\hat A_i$ is a projection operator and possesses the corresponding proposition. If $\langle I|[\hat\Psi_j, \hat A_i]|I\rangle = 0$ and any pair of $\hat\Psi_j$, $\hat A_i$ and $|I\rangle\langle I|$ do not commute, (12) may be more than 1 or less than 0. This possibility is not strange because (12) is not a (conditional) probability as shown above. Considering Hardy's paradox, we will encounter such a situation.
We corroborate the above conclusion by reexamining $\langle I|\hat{A}|I\rangle$. When $\hat{A} = \hat{A}_i$, inserting the completeness relation $\hat{1} = \sum_j\hat{\Psi}_j$ between the two projectors gives
$$\langle I|\hat{A}_i|I\rangle = \sum_j \langle I|\hat{A}_i\hat{\Psi}_j\hat{A}_i|I\rangle = \sum_j \Pr(\psi_j|I)\,|\langle\hat{A}_i\rangle_{\psi_j,I}|^2, \qquad (13)$$
and $\langle I|\hat{A}_i\hat{\Psi}_j\hat{A}_i|I\rangle = \langle I|\hat{\Psi}_j\hat{A}_i|I\rangle$ if $\hat{A}_i$ and $\hat{\Psi}_j$ commute. Then, by comparing (13) and (8), it is obvious that at least one of the following two statements is false: '$\langle\hat{A}_i\rangle_{\psi_j,I}$ is the expectation value of $\hat{A}_i$ between an initial state $|I\rangle$ and a final state $|\psi_j\rangle$' or '$|\langle\hat{A}_i\rangle_{\psi_j,I}|^2$ is the expectation value of $\hat{A}_i$ between an initial state $|I\rangle$ and a final state $|\psi_j\rangle$'. We have demonstrated above that the former statement is false if the operators do not commute, and we will demonstrate below that the latter statement is also false if they do not commute. As written by Aharonov et al. [33],
$$|\langle\hat{A}_i\rangle_{\psi_j,I}|^2 = \frac{\Pr(a_i|\psi_j)\Pr(a_i|I)}{\Pr(\psi_j|I)}. \qquad (14)$$
Because the denominator of the right-hand side does not depend on $a_i$, $|\langle\hat{A}_i\rangle_{\psi_j,I}|^2$ gives the product of two independent probabilities $\Pr(a_i|\psi_j)$ and $\Pr(a_i|I)$ (divided by $\Pr(\psi_j|I)$). It is worth noting that (14) is not a conditional probability if $[\hat{\Psi}_j,\hat{A}_i] \neq 0$. To verify this fact, we rewrite (14) as
$$|\langle\hat{A}_i\rangle_{\psi_j,I}|^2 = \frac{\langle I|\hat{A}_i\hat{\Psi}_j\hat{A}_i|I\rangle}{\Pr(\psi_j|I)}.$$
The right-hand side of this equation is the expectation value of one observable $\hat{A}_i\hat{\Psi}_j\hat{A}_i$ divided by $\Pr(\psi_j|I)$. If $[\hat{\Psi}_j,\hat{A}_i] \neq 0$, then $\hat{A}_i\hat{\Psi}_j\hat{A}_i$ corresponds to no proposition, and consequently, (14) is not a conditional probability, because $\hat{A}_i\hat{\Psi}_j\hat{A}_i$ is not a projection operator. The above discussion can be straightforwardly applied to other observables, such as $\hat{A} = \sum_i a_i\hat{A}_i$. Thus, it is obvious that if $\hat{\Psi}_j$ and $\hat{A}$ do not commute, then the weak value $\langle\hat{A}\rangle_{\psi_j,I}$ is not the conditional expectation value of $\hat{A}$ for $|I\rangle$ when $|\psi_j\rangle$ is found in $|I\rangle$.
POVM of weak measurement
As shown in the previous subsection, we cannot regard the right-hand side of (8) as a sum of probabilities. The authors of [29] have developed their discussions with the help of positive operator-valued measures (POVMs). Let $\{\hat{M}_m\}$ be a set of operators that act on the Hilbert space of the observed system. The probability of obtaining an outcome $m$ for the quantum state expressed by a density matrix $\hat{\rho}$, $\Pr(m|\hat{\rho})$, is
$$\Pr(m|\hat{\rho}) = \mathrm{Tr}\!\left[\hat{M}_m\hat{\rho}\hat{M}_m^{\dagger}\right],$$
where $\{\hat{M}_m^{\dagger}\hat{M}_m\}$ is a POVM that satisfies
$$\sum_m \hat{M}_m^{\dagger}\hat{M}_m = \hat{1}.$$
Next, let us consider a sequential measurement corresponding to two sets of POVMs, $\{\hat{M}^{(1)}_m\}$ and $\{\hat{M}^{(2)}_n\}$, with joint probability $\Pr(n,m|\hat{\rho}) = \mathrm{Tr}[\hat{M}^{(2)}_n\hat{M}^{(1)}_m\hat{\rho}\hat{M}^{(1)\dagger}_m\hat{M}^{(2)\dagger}_n]$, and the conditional probability
$$\Pr(m|n,\hat{\rho}) = \frac{\Pr(n,m|\hat{\rho})}{\sum_{m'}\Pr(n,m'|\hat{\rho})}, \qquad (19)$$
which Dressel et al. [27] and Wiseman [26] have considered as a conditional probability or a probability between some initial state and final state. Some of these authors have insisted that (19) would become the corresponding weak value with $\hat{\rho} = |I\rangle\langle I|$ and $\hat{M}_n = |\psi_n\rangle\langle\psi_n|$ in its weak-coupling limit, and hence that the weak value could be interpreted as a conditional probability. However, if so, (19) could take negative values despite the fact that it is made up of sums, products and quotients of probabilities. This inconsistency is not a matter of interpretation; it is a matter of calculation.
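As an illustration of these POVM formulas (added here; the specific two-outcome Kraus form is an assumption for the sketch, not the measurement model of [26] or [27]), the following code builds a weak two-outcome measurement of a qubit projector, checks the completeness relation, and evaluates the joint and conditional probabilities of a subsequent projective post-selection.

```python
import numpy as np

g = 0.1                                   # weak-coupling strength (assumed small)
ket0 = np.array([1.0, 0.0])

A = np.outer(ket0, ket0)                  # projector measured weakly
one = np.eye(2)

# Two-outcome weak measurement of A (a standard Kraus/POVM choice, assumed here)
Mp = np.sqrt((1 + g) / 2) * A + np.sqrt((1 - g) / 2) * (one - A)
Mm = np.sqrt((1 - g) / 2) * A + np.sqrt((1 + g) / 2) * (one - A)
print("completeness:", np.allclose(Mp.conj().T @ Mp + Mm.conj().T @ Mm, one))

# Initial state and post-selection basis (illustrative angles)
I = np.array([np.cos(0.4), np.sin(0.4)])
rho = np.outer(I, I)
psi0 = np.array([np.cos(1.2), np.sin(1.2)])
psi1 = np.array([-np.sin(1.2), np.cos(1.2)])

def joint(Psi_n, M_m, rho):
    """Pr(n, m | rho) for the sequential measurement M_m followed by Psi_n."""
    out = Psi_n @ M_m @ rho @ M_m.conj().T @ Psi_n.conj().T
    return np.real(np.trace(out))

for n, psi in enumerate([psi0, psi1]):
    Psi_n = np.outer(psi, psi)
    pj = {m: joint(Psi_n, M, rho) for m, M in (("+", Mp), ("-", Mm))}
    norm = sum(pj.values())
    cond = {m: p / norm for m, p in pj.items()}   # conditional probabilities, cf. (19)
    print(f"n={n}: joint={pj}, conditional={cond}")
```

The conditional probabilities are, of course, non-negative for any finite coupling; the issue discussed in the text arises only when such expressions are equated with weak values in the limiting procedure.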
To make the point, we examine their calculation [27]. Let $\hat{A}$ be expressed in terms of the first POVM with real coefficients $\alpha_m$, so that $\hat{A} = \sum_m \alpha_m \hat{M}^{(1)\dagger}_m\hat{M}^{(1)}_m$. They have defined the conditional expectation value ${}_n\langle\hat{A}\rangle$ as the expectation value obtained by the sequential measurement,
$${}_n\langle\hat{A}\rangle = \frac{\sum_m \alpha_m \Pr(n, m|\hat{\rho})}{\sum_m \Pr(n, m|\hat{\rho})}. \qquad (21)$$
The POVM element $\hat{E}^{(1)}_m = \hat{M}^{(1)\dagger}_m\hat{M}^{(1)}_m$ that corresponds to $\hat{M}^{(1)}_m$ is expanded up to the lowest order of $g$, the constant that gives the strength of the measurement:
$$\hat{E}^{(1)}_m = p_m\hat{1} + O(g), \qquad (22)$$
where $\sum_m p_m = 1$.
Substituting (22) into (21) leads to
In addition, if $\hat{\rho} = |I\rangle\langle I|$ and $\hat{M}^{(2)\dagger}_n\hat{M}^{(2)}_n = |\psi_n\rangle\langle\psi_n|$, then
$${}_n\langle\hat{A}\rangle = \mathrm{Re}\,\langle\hat{A}\rangle_{\psi_n,I} + O(g). \qquad (24)$$
Here, we should pay attention to the fact that the right-hand side of this equation is not the real part of the weak value in the meaning defined in (7). Rather, taking account of the fact that (21) is an expectation value obtained by sequential measurement, the real part of the weak value, $\mathrm{Re}\,\langle\hat{A}\rangle_{\psi_n,I}$, can be regarded as the indicator of the inevitable back-action, caused by the weak measurement, on the post-selection, i.e., on the measurement of $|\psi_n\rangle\langle\psi_n|$. Here, the inevitable back-action is defined as $\langle I|\hat{A}\hat{\Psi}_j\hat{A}|I\rangle - \langle I|\hat{\Psi}_j|I\rangle$ for the strong measurement. Moreover, we note that the term 'weak measurement' refers to the interaction described by the Hamiltonian (1), in other words, to the POVM (22). However, Hofmann [8] has insisted that the back-action of a weak measurement on the post-selection should be of second order in the coupling constant. He has calculated, with the help of the POVM, the expectation values of the operators corresponding to the post-selection for the state after the weak measurement, and he has demonstrated that their sum does not contain the back-action of the weak measurement up to first order in the coupling constant. Although the result of his calculation is correct, what has been demonstrated is only that the first-order back-action vanishes in the sum; the back-action to each individual post-selection has not been calculated in [8].
Weak value as the indicator of back-action
To clarify the above conclusion, let us reconsider the operation of weak measurement and post-selection. In post-selection, a final state $|\psi_j\rangle$ is selected after the weak measurement of an operator $\hat{A}$. In other words, the final state is obtained as the state projected by measuring the expectation value of the operator $|\psi_j\rangle\langle\psi_j|$ after the weak measurement.
From (4), let us define $|\Phi(t)\rangle_{\varphi}$ as the state both after the measurement of the position of the pointer and before the post-selection, as in (26). The expectation value of the operator $|\psi_j\rangle\langle\psi_j|$ for this state is, up to first order in $gt$, given by (27), which leads to (28); with the help of (6), this equation is rewritten as (29). Then, (28) and (29) are the operational expressions of (25). These are our main results. Although it has already been noted in [34][35] [36][25] that the imaginary part of a weak value gives the back-action caused by the weak measurement, its real part is interpreted as a conditional probability or a conditional expectation value there. Nevertheless, the above equations demonstrate that the real part of the weak value and the expectation value of the position of the pointer after the post-selection give only the back-action caused by the weak measurement. It is worth noting that this back-action itself does not depend on the probe system and is inevitable, as shown in the previous subsection, especially (25). Thus, we have no reason to interpret (28) as a conditional probability or a conditional expectation value if no pair of the operators $|I\rangle\langle I|$, $|\psi_j\rangle\langle\psi_j|$ and $\hat{A}$ commutes.
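Since equations (26)-(29) are only referenced above, the following self-contained sketch (an added illustration with an assumed Gaussian pointer, not the paper's own derivation) reproduces the operational statement numerically: after a weak von Neumann coupling and post-selection of $|\psi_j\rangle$, the mean pointer position shifts by approximately $g\,\mathrm{Re}\,\langle\hat{A}\rangle_{\psi_j,I}$.

```python
import numpy as np

g = 0.05           # weak coupling strength (assumed)
sigma = 1.0        # pointer width (assumed Gaussian pointer)
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

# System: a qubit observable A with eigenvalues a = +1, -1
a_vals = np.array([1.0, -1.0])
I_amp   = np.array([np.cos(0.5), np.sin(0.5)])        # <a|I> in the A eigenbasis
psi_amp = np.array([np.cos(1.3), np.sin(1.3)])        # <a|psi_j>

def pointer(x0):
    """Initial Gaussian pointer wave function shifted by x0."""
    return (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * sigma**2))

# After exp(-i g A p), the pointer attached to eigenvalue a is shifted by g*a.
# Post-selecting |psi_j> leaves the pointer amplitude:
Phi = sum(np.conj(psi_amp[k]) * I_amp[k] * pointer(g * a_vals[k]) for k in range(2))

prob = np.abs(Phi) ** 2
mean_x = np.sum(x * prob) / np.sum(prob)

weak_value = (np.conj(psi_amp) @ (a_vals * I_amp)) / (np.conj(psi_amp) @ I_amp)
print("conditional pointer shift :", mean_x)
print("g * Re(weak value)        :", g * np.real(weak_value))
```

The two printed numbers agree to first order in $g$, which is the sense in which the pointer reading records the back-action rather than a conditional expectation value.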
It is easier to convince ourselves of this fact if we consider the following picture. After a weak measurement, only a small part of the entangled state of the observed system and the probe changes, whereas almost the whole state remains the same as the initial state. As noted in [37], a weak measurement of one particle and that of many identical particles should give the same result if the measurement is repeated many times and the average is taken. Thus, we can suppose that the initial state of a weak measurement is formed of many identical particles. When this state is weakly measured, a small number of particles are strongly measured and the rest are not measured at all, though which ones are measured is never determined. The weak value shows how the strong measurement of this minority changes the initial state.
Next, let us consider a complete set of projection operators, $\{\hat{A}_i\}$, which satisfies $\hat{1} = \sum_i \hat{A}_i$.
As easily confirmed from the definition (7),
$$\sum_i \langle\hat{A}_i\rangle_{\psi_j,I} = 1. \qquad (30)$$
Equation (30) is often treated as supporting evidence for the statistical interpretation of weak values. We reconsider (30) based on the above discussion. The interaction Hamiltonian between a projection operator $\hat{A}_i$ and the corresponding probe is $\hat{H}_i = g\hat{A}_i\hat{\pi}_i$, where $\hat{\pi}_i$ is the momentum of the $i$-th pointer. We assume that the wave function $\varphi_i(x_i)$ of each pointer has the same form as (2). $|\Psi(t)\rangle_i$ is defined like (26) as the state both after the measurement of the position of the $i$-th pointer and before the post-selection. Similarly, the state after the measurement of the positions of all the pointers corresponding to their respective projection operators of $\{\hat{A}_i\}$ is defined in the same way. It is in the nature of things that this state is identified with the initial state except for the normalisation factor. Summing the corresponding expressions over $i$ then reproduces (30), which demonstrates that (30) only reflects the fact that the operation of the identity operator does not affect the state.
Hardy's paradox
Recently, the counterfactual statements of Hardy's paradox [15] were interpreted with the help of weak values [16], and it was ascertained that they agreed with the values obtained by the corresponding weak measurements [19][20] [21]. We investigate the weak values in Hardy's paradox based on the discussion in the previous section. As shown in Fig. 1, a device composed of an electron Mach-Zehnder interferometer (MZI$^-$) and a similar interferometer for positrons (MZI$^+$) is examined. OL is the domain where these two MZIs overlap. We assume that pair annihilation must occur if an electron ($e^-$) and a positron ($e^+$) exist simultaneously in OL. The length between BS1$^{-(+)}$ and BS2$^{-(+)}$ is adjusted so that $e^-$ ($e^+$) is detected by the detector C$^{-(+)}$ without exception in a solo MZI$^{-(+)}$ experiment. Conversely, detection by a detector D$^{-(+)}$ implies that an obstacle exists on one of the paths.
We consider the case where pair annihilation does not occur and $e^-$ and $e^+$ are detected by D$^-$ and D$^+$, respectively. The initial state $|\Phi\rangle$ and the final state $|\Psi\rangle$ are defined (up to the phase convention of the beam splitters) as
$$|\Phi\rangle = \frac{1}{\sqrt{3}}\bigl(|O\rangle_+|NO\rangle_- + |NO\rangle_+|O\rangle_- + |NO\rangle_+|NO\rangle_-\bigr),\qquad |\Psi\rangle = \frac{1}{2}\bigl(|O\rangle_+ - |NO\rangle_+\bigr)\bigl(|O\rangle_- - |NO\rangle_-\bigr),$$
where $O$ and $NO$ are abbreviations of 'through OL' and 'not through OL', respectively, and the subscripts $\pm$ refer to the positron and the electron. Then $|\langle\Psi|\Phi\rangle|^2 = \frac{1}{12}$ by an ordinary quantum mechanical calculation.
However, the weak values are
$$\langle\hat{N}_{O^+}\rangle_{\Psi,\Phi} = \langle\hat{N}_{O^-}\rangle_{\Psi,\Phi} = 1,\qquad \langle\hat{N}_{O^+,O^-}\rangle_{\Psi,\Phi} = 0,\qquad \langle\hat{N}_{NO^+,NO^-}\rangle_{\Psi,\Phi} = -1, \qquad (41)$$
where
$$\hat{N}_{O^{\pm}} = |O\rangle_{\pm}\langle O|,\qquad \hat{N}_{NO^{\pm}} = |NO\rangle_{\pm}\langle NO|,\qquad \hat{N}_{s^+,s'^-} = \hat{N}_{s^+}\hat{N}_{s'^-}\ \ (s, s' \in \{O, NO\}). \qquad (42)$$
It is easily verified that no two of $\hat{\Psi} \equiv |\Psi\rangle\langle\Psi|$, $\hat{\Phi} \equiv |\Phi\rangle\langle\Phi|$ and any one of the operators defined in (42) commute. Therefore, despite (41), we have no right to evaluate the validity of the counterfactual statement '$e^-$ must pass through OL to ensure that $e^+$ is detected by D$^+$, and vice versa; nevertheless, $e^-$ and $e^+$ cannot both pass through OL simultaneously, because they must be annihilated together if they encounter each other'. We can discuss the three-box paradox [17][18] [22] similarly.
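For concreteness (a numerical illustration added here, using the standard textbook form of the Hardy pre- and post-selected states rather than any convention specific to this paper), the sketch below computes the overlap and the four occupation weak values directly.

```python
import numpy as np

# Single-particle basis: index 0 = "through OL" (O), index 1 = "not through OL" (NO)
O, NO = np.array([1.0, 0.0]), np.array([0.0, 1.0])
kron = np.kron

# Pre-selected state after annihilation has removed the |O+, O-> component
Phi = (kron(O, NO) + kron(NO, O) + kron(NO, NO)) / np.sqrt(3)
# Post-selected state corresponding to detection at D+ and D-
Psi = kron(O - NO, O - NO) / 2

def weak(op):
    """Weak value of `op` for pre-selection Phi and post-selection Psi."""
    return (Psi.conj() @ op @ Phi) / (Psi.conj() @ Phi)

N_Op = kron(np.outer(O, O), np.eye(2))              # positron through OL
N_Om = kron(np.eye(2), np.outer(O, O))              # electron through OL
N_OO = N_Op @ N_Om                                  # both through OL
N_NN = kron(np.outer(NO, NO), np.outer(NO, NO))     # neither through OL

print("|<Psi|Phi>|^2 =", abs(Psi.conj() @ Phi) ** 2)   # 1/12
for name, op in [("N_O+", N_Op), ("N_O-", N_Om),
                 ("N_O+,O-", N_OO), ("N_NO+,NO-", N_NN)]:
    print(f"weak value of {name:9s} = {weak(op).real:+.3f}")
```

The output reproduces the values +1, +1, 0 and -1 quoted in (41), together with the detection probability 1/12.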
Two-slit experiment
As is well known, the interference pattern is lost if we attempt to determine the photon's trajectory in the double-slit experiment. However, Kocsis et al. [13] have experimentally defined a set of trajectories for an ensemble of photons that pass through a double slit with the help of the weak measurement technique. They have used the polarisation degree of freedom of the photons as the pointer that weakly couples to their momenta. After the weak measurement of the momenta, they have selected the subensemble of photons arriving at a particular position by a strong measurement. Thus, they insisted that the average momentum of the photons reaching any particular position in the image plane could be determined, and that their average trajectories could be reconstructed by repeating this procedure in a series of planes.
In their study, the average momentum of the photon subensemble post-selected at a position $\xi$ should be given in the form
$$\langle\hat{P}\rangle_{\xi,I} = \frac{\langle\xi|\hat{P}|I\rangle}{\langle\xi|I\rangle}, \qquad (46)$$
where $\hat{P}$ is the momentum operator in the $\xi$-direction. However, we cannot interpret the weak value (46) as a conditional expectation value of $\hat{P}$ between the initial state $|I\rangle$ and the final state $|\xi\rangle$, because $|\xi\rangle\langle\xi|$ and $\hat{P}$ do not commute. That is, we cannot interpret (46) as the average momentum of the photons that have reached the position $\xi$. On the other hand, if we replace $\hat{A}$ with $\hat{P}$ and $|\psi_j\rangle$ with $|\xi\rangle$ in (28), we obtain the corresponding expressions (47) and (48). These indicate that $\langle\hat{P}\rangle_{\xi,I}$ gives the back-action caused by the weak measurement of the momentum on the post-selection of the position. In addition, it is obvious from (47) that the real part of (46) is a classically measurable quantity, as noted in [37].
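As a numerical illustration of (46) (added here; the two-Gaussian wave function is an assumed toy model, not the experimental state of [13]), the sketch below evaluates the real part of the weak momentum value on a grid for a superposition of two Gaussian slits and confirms that it coincides with the Bohmian momentum field proportional to $\mathrm{Im}(\partial_{\xi}\psi/\psi)$.

```python
import numpy as np

hbar = 1.0                           # units chosen for the illustration
xi = np.linspace(-15, 15, 3001)
dxi = xi[1] - xi[0]

def slit(x0, k0, width=1.0):
    """Gaussian wave packet emerging from a slit at x0 with transverse momentum k0."""
    return np.exp(-(xi - x0) ** 2 / (4 * width**2) + 1j * k0 * xi)

# Toy 'two-slit' state some distance behind the slits
psi = slit(-3.0, +0.4) + slit(+3.0, -0.4)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dxi)

dpsi = np.gradient(psi, xi)

# Weak value of P post-selected at position xi: <xi|P|psi>/<xi|psi> = -i*hbar*psi'/psi
weak_P = -1j * hbar * dpsi / psi
bohm_momentum = hbar * np.imag(dpsi / psi)       # m * v_Bohm = grad(S)

print("max |Re(weak P) - Bohm momentum| =",
      np.max(np.abs(np.real(weak_P) - bohm_momentum)))
```

The printed difference is zero, which is the proportionality to Bohm's velocity mentioned in the next paragraph; the argument in the text is that this coincidence does not by itself license a trajectory interpretation.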
In [11], which is one of the bases of [13], the average velocity of a particle at a position $\xi$, $v(\xi, t)$, is operationally defined as
$$v(\xi, t) \equiv \lim_{\tau\to 0}\tau^{-1}\,\mathrm{E}\!\left[\xi_{\mathrm{strong}}(t+\tau) - \xi_{\mathrm{weak}}(t)\,\middle|\,\xi_{\mathrm{strong}} = \xi\right], \qquad (49)$$
where $\xi_{\mathrm{strong(weak)}}(t)$ is the strongly (weakly) measured position of the particle at the time $t$ and $\mathrm{E}[a\,|\,F]$ is the average of $a$ when $F$ is true. If the Hamiltonian of the observed system is $\hat{P}^2/2m + V(\xi)$, (49) is identified with Bohm's velocity [12] and is proportional to $\langle\hat{P}\rangle_{\xi,I}$. However, we know from the above discussion that $\mathrm{E}[\xi_{\mathrm{weak}}(t)\,|\,\xi_{\mathrm{strong}} = \xi]$ is not the average position at the time $t$ when the position $\xi$ is post-selected at the time $t + \tau$, and hence, we cannot interpret (49) as the average velocity. Therefore, the fact that $\langle\hat{P}\rangle_{\xi,I}$ is proportional to Bohm's velocity does not support the claim in [13].
Conclusion
The main conclusions of this study, which were drawn in the second section, are as follows. The real part of the weak value $\langle\hat{A}\rangle_{\psi_j,I}$ is not the expectation value of an operator $\hat{A}$ between an initial state $|I\rangle$ and a final state $|\psi_j\rangle$. Rather, it gives the back-action caused by the weak measurement of $\hat{A}$ for $|I\rangle$, which changes the probability of finding the state $|\psi_j\rangle$. There are so many studies based on the statistical interpretation of weak values that we cannot examine all of them. However, the studies investigated in the previous two sections are typical, and we have noted some of their essential faults. Therefore, we conclude that it is worth considering the possibility that some previous findings are open to question, though our ideas may not be applicable to all discussions concerning weak values and weak measurement.
|
v3-fos-license
|
2023-08-07T15:04:21.751Z
|
2023-08-01T00:00:00.000
|
260665008
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-393X/11/8/1324/pdf?version=1691157546",
"pdf_hash": "4fe52afe381662e1f4850cf48ec88ed0bfc3e58e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43132",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"sha1": "47299fa463d1feefbdea986f22d52d3bd0909dc6",
"year": 2023
}
|
pes2o/s2orc
|
The Burden of Streptococcus pneumoniae-Related Admissions and In-Hospital Mortality: A Retrospective Observational Study between the Years 2015 and 2022 from a Southern Italian Province
Streptococcus pneumoniae (SP) has a high worldwide incidence and related morbidity and mortality, particularly among children and geriatric patients. SP infection can manifest with pneumonia, bacteremia, sepsis, meningitis, and osteomyelitis. This was a retrospective study aimed at evaluating the incidence, comorbidities, and factors associated with in-hospital mortality of pneumococcal disease-related hospitalizations in a province in southern Italy from 2015 to 2022. The study was performed in the Local Health Authority (LHA) of Pescara. Data were collected from hospital discharge records (HDRs): this database comprises 288,110 discharge records from LHA Pescara's hospitals from 2015 to 2022. Streptococcus pneumoniae-related hospitalizations accounted for about 5% of all hospitalizations; 67% of these were without comorbidities, 21% had one comorbidity, and 13% had two or more comorbidities. Regarding mortality from SP infection, the most affected age group was older people, with the over-65s accounting for more than 50% of cases compared with the other age groups. HDRs represent a valid and useful epidemiological tool for evaluating the direct impact of pneumococcal disease on the population and, indirectly, for evaluating and guiding vaccination strategies.
Introduction
Streptococcus pneumoniae is one of the main causes of invasive and non-invasive human infectious diseases, with high worldwide incidence and related morbidity and mortality, particularly among children and geriatric patients [1]. The most common manifestation of pneumococcal disease is pneumonia, which represents one of the most frequent causes of community-acquired pneumonia (CAP). However, a wide range of clinical manifestations can occur due to Streptococcus pneumoniae infection. While some of these infections can be less serious, such as otitis, sinusitis, and bronchitis, others can be very dangerous and lead to illnesses such as bacteremia, sepsis, meningitis, and osteomyelitis. In these cases, we refer to these manifestations as invasive pneumococcal disease (IPD) [2].
For these reasons, according to the World Health Organization (WHO), pneumococcal disease is a major public health problem worldwide. It is estimated that approximately one million children die from pneumococcal disease every year [2].
In Italy, hospital discharge records (HDRs) are a useful tool to evaluate the burden of several diseases related to cost and healthcare utilization [12,13].
Each record includes information on patients' demographic characteristics, the diagnosis-related group (DRG) used to classify the admission, and patients' comorbidities, coded with ICD-9-CM codes. Despite some limitations, HDRs can also be considered a proxy for healthcare utilization. In particular, evaluating factors associated with healthcare utilization among patients affected by pneumococcal disease can help improve preventive strategies at the regional or country level.
Few studies on pneumococcal disease using HDRs have been performed in Italy, and most of them refer only to the pre-pandemic period. For these reasons, we conducted a retrospective study aimed at evaluating the incidence of pneumococcal disease-related hospitalization in a province in southern Italy from 2015 to 2022. In addition, we evaluated comorbidities and factors associated with in-hospital mortality.
Materials and Methods
This was a retrospective observational study performed in the Local Health Authority (LHA) of Pescara, a province of the Abruzzo region with about 320,000 inhabitants. It has three hospitals: a tertiary referral hub and two spoke hospitals. Data were collected from the LHA registry of hospital discharge records (HDRs). The HDRs include a large variety of data on patients' demographic characteristics (such as gender and age) and on the hospitalization itself, such as the admission and discharge dates and the discharge type, which also includes in-hospital death. The HDRs also include information about the diagnoses that led to hospitalization or that were concurrent, including complications (a maximum of six diagnoses: one principal diagnosis and up to five comorbidities), and a maximum of six procedures or interventions that the patient underwent during hospitalization. Diagnoses and procedures were coded according to the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM; National Center for Health Statistics and Centers for Medicare and Medicaid Services, Atlanta, GA, USA).
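To make the record structure concrete (a hypothetical sketch; the actual field names in the LHA registry are not specified in the text), an HDR row can be represented as follows, with one principal diagnosis, up to five secondary diagnoses, and a discharge-type field from which in-hospital death is derived.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class HospitalDischargeRecord:
    """Minimal model of one HDR row (field names are illustrative only)."""
    patient_id: str
    gender: str                      # "M" / "F"
    age: int
    admission_date: date
    discharge_date: date
    discharge_type: str              # e.g. "home", "transfer", "death"
    principal_diagnosis: str         # ICD-9-CM code, e.g. "481"
    secondary_diagnoses: List[str] = field(default_factory=list)  # up to 5 codes
    procedures: List[str] = field(default_factory=list)           # up to 6 codes

    @property
    def in_hospital_death(self) -> bool:
        return self.discharge_type == "death"

    @property
    def length_of_stay(self) -> int:
        return (self.discharge_date - self.admission_date).days

# Example record
r = HospitalDischargeRecord("P001", "F", 78, date(2019, 1, 3), date(2019, 1, 12),
                            "death", "481", ["250.00", "428.0"])
print(r.in_hospital_death, r.length_of_stay)
```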
Coding of Streptococcus pneumoniae-Hospital Admission
For the selection of admissions with or without a directly specified etiology, the following ICD-9-CM codes were used for the relative diagnoses (a minimal selection sketch based on these codes is given after the list):
• Unspecified pneumonias: 482.9 (Bacterial pneumonia, unspecified); 485 (Bronchopneumonia, organism unspecified); 486 (Pneumonia, organism unspecified).
• All-cause pneumonia:
480.x, including 480.1 (Pneumonia due to adenovirus), 480.2 (Pneumonia due to respiratory syncytial virus) and 480.9 (Pneumonia due to other virus not elsewhere classified);
481 (Pneumococcal pneumonia);
482.x, including 482.0 (Pneumonia due to Klebsiella pneumoniae); 482.1 (Pneumonia due to Pseudomonas); 482.2 (Pneumonia due to Hemophilus influenzae (H. influenzae)); 482.30 (Pneumonia due to Streptococcus, unspecified); 482.31 (Pneumonia due to Streptococcus, group A); 482.32 (Pneumonia due to Streptococcus, group B); 482.39 (Pneumonia due to other Streptococcus); 482.40 (Pneumonia due to Staphylococcus, unspecified); 482.41 (Methicillin-susceptible pneumonia due to Staphylococcus aureus); 482.42 (Methicillin-resistant pneumonia due to Staphylococcus aureus); 482.49 (Other Staphylococcus pneumonia); 482.81 (Pneumonia due to anaerobes); 482.82 (Pneumonia due to Escherichia coli (E. coli)); 482.83 (Pneumonia due to other Gram-negative bacteria); 482.84 (Pneumonia due to Legionnaires' disease); 482.89 (Pneumonia due to other specified bacteria); 482.9 (Bacterial pneumonia, unspecified);
483.0 (Pneumonia due to Mycoplasma pneumoniae); 483.1 (Pneumonia due to chlamydia); 483.8 (Pneumonia due to other specified organism);
484.6 (Pneumonia in aspergillosis).
• Unspecified bacteremia: 038.0 (Streptococcal septicemia); 038.9 (Unspecified septicemia); 790.7 (Bacteremia).
In the case of unspecified pneumonia, meningitis, and septicemia, a specific percentage of cases can be attributed to pneumococcal infection. According to the recent literature [13], the percentage attributable to SP is 36% for unspecified pneumonias, 58% for unspecified meningitis, and 20% for unspecified septicemias.
The proportion of SP-HA was calculated on the assumption that all HDRs mentioning this pathogen were SP-HA, with the attributable percentages above applied to the cases of pneumonia, meningitis, and septicemia for which no pathogen was specified.
Comorbidity Coding
Comorbidities were calculated according to Charlson through an algorithm proposed by Baldo et al. [14] which follows the ICD-9-CM codes. The comorbidities taken into account are previous myocardial infarction, peripheral vascular disease, cerebrovascular disease, dementia, chronic pulmonary disease, rheumatic disease, peptic ulcer disease, mild liver disease, diabetes without chronic complication, diabetes with chronic complication, hemiplegia or paraplegia, renal disease, any malignancy, moderate or severe liver disease, metastatic solid tumor, and AIDS/HIV.
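As an illustration of this coding step, a minimal sketch in Python is shown below; the code prefixes are a small, hypothetical subset used for illustration only, whereas the actual mapping follows the full algorithm of Baldo et al. [14].

# Minimal sketch of Charlson comorbidity flagging from ICD-9-CM codes.
# The prefixes below are an illustrative subset, not the full Baldo et al. mapping.
CHARLSON_PREFIXES = {
    "myocardial_infarction": ("410", "412"),
    "chronic_pulmonary_disease": ("490", "491", "492", "493", "494", "495", "496"),
    "renal_disease": ("582", "585", "586"),
    "diabetes_uncomplicated": ("2500", "2501", "2502", "2503"),
}

def charlson_flags(icd9_codes):
    """Return a dict of comorbidity -> True/False for one discharge record."""
    codes = [c.replace(".", "") for c in icd9_codes]  # e.g. "250.00" -> "25000"
    return {
        comorbidity: any(code.startswith(prefixes) for code in codes)
        for comorbidity, prefixes in CHARLSON_PREFIXES.items()
    }

# Example: one HDR listing pneumococcal pneumonia plus COPD and diabetes
print(charlson_flags(["481", "496", "250.00"]))
# {'myocardial_infarction': False, 'chronic_pulmonary_disease': True,
#  'renal_disease': False, 'diabetes_uncomplicated': True}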
Statistical Analysis
Qualitative variables were summarized as frequency and percentage. Annual admission rates for each SP-HA were calculated per 100,000 inhabitants using, when appropriate, the related attributable fractions, according to the most recent literature and as described previously [14].
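As a hedged illustration of this calculation, the sketch below adds the literature-based attributable fraction of unspecified cases to the cases with a specified pneumococcal etiology; all counts are invented for illustration only.

def sp_attributable_rate(sp_cases, unspecified_cases, attributable_fraction,
                         population, per=100_000):
    """Annual SP-attributable admission rate per `per` inhabitants."""
    attributable = sp_cases + attributable_fraction * unspecified_cases
    return attributable / population * per

# Illustrative numbers only: 900 SP pneumonias, 200 unspecified pneumonias
# (36% attributable to SP), in a population of 320,000.
print(round(sp_attributable_rate(900, 200, 0.36, 320_000), 1))  # ~303.8 per 100,000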
The data related to the demographic structure, sex, and age of the population were collected through free access to the database on the website of the National Institute of Statistics (ISTAT).
Hospitalization rates were standardized for age and gender according to the Abruzzo population in the first year of the study (2015). To evaluate the association between in-hospital mortality and predictors, a multivariable logistic model was implemented using the presence or absence of death as the dependent variable (type of hospital discharge: death) and, as independent variables, age expressed in categories (0-4, 5-14, 15-64, 65-79, and 80+), gender (M or F), the various invasive bacterial pathologies investigated (all pneumonia, SP pneumonia, unspecified pneumonia, SP meningitis, SP bacteremia, and unspecified bacteremia), and the presence of individual comorbidities according to Charlson (previous myocardial infarction, peripheral vascular disease, cerebrovascular disease, dementia, chronic pulmonary disease, rheumatic disease, peptic ulcer disease, mild liver disease, diabetes without chronic complication, diabetes with chronic complication, hemiplegia or paraplegia, renal disease, any malignancy, moderate or severe liver disease, metastatic solid tumor, and AIDS/HIV).
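The model itself was fitted in Stata v14.2 (see below). Purely as an illustrative sketch, assuming the analytic dataset is a table with one row per discharge record, an equivalent specification could look like the following; the variable names and the synthetic data frame are hypothetical and only show the expected structure.

# Hedged sketch of the multivariable logistic model (the actual analysis was
# performed in Stata v14.2). The synthetic data frame only illustrates structure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
hdr = pd.DataFrame({
    "death": rng.integers(0, 2, n),
    "age_class": rng.choice(["0-4", "5-14", "15-64", "65-79", "80+"], n),
    "sex": rng.choice(["M", "F"], n),
    "sp_pneumonia": rng.integers(0, 2, n),
    "unspecified_pneumonia": rng.integers(0, 2, n),
    "renal_disease": rng.integers(0, 2, n),
    "any_malignancy": rng.integers(0, 2, n),
})

model = smf.logit(
    "death ~ C(age_class, Treatment(reference='15-64')) + C(sex)"
    " + sp_pneumonia + unspecified_pneumonia + renal_disease + any_malignancy",
    data=hdr,
)
result = model.fit(disp=0)
print(np.exp(result.params))  # odds ratios, as reported in Table 2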
For all tests, a p-value less than 0.05 was considered significant. The statistical analysis was performed with STATA v14.2 software (StataCorp LLC, College Station, TX, USA).
Results
Our database comprises 288,110 discharge records from ASL Pescara's hospitals covering the period from 2015 to 2022. Streptococcus pneumoniae-related hospitalizations numbered 14,506 (5.035%), of which 7906 (2.744%) were associated with invasive diseases, including 33 cases of meningitis (0.011%) and 88 cases of bacteremia (0.031%). In contrast, unspecified invasive infections accounted for 1673 pneumonia cases (0.581%) and 5 cases of bacteremia (0.002%). There were no diagnosed cases of meningitis without a defined etiology.
Patients were categorized into different age groups; the largest proportion of admissions occurred in patients aged 15 to 64 (42%). Hospitalizations were well distributed across the study years, with a minimum of 30,166 in 2020 (10%), due to pandemic restrictions, and a maximum of 39,225 in 2015 (14%).
Regarding patients' comorbidity, we found that 192,088 had no comorbidity (67%), 59,573 had one comorbidity (21%), and 36,449 had two or more comorbidities (13%) according to the Charlson Index classification. In-hospital deaths totaled 13,434 (5%), with 3060 (1%) associated with Streptococcus pneumoniae. The sample is further detailed in Table 1. Concerning comorbidity distribution, there was a similar pattern among cardiovascular, cerebrovascular, renal, and respiratory diseases and diabetes: the percentage of infants (0-4) with these comorbidities was higher than in children (5-14), and this fraction progressively increases with age (Figure 1): for instance, there were 4520 patients between 0 and 4 with at least one cardiovascular disease (9.63%), compared to only 41 patients between 5 and 14 (0.29%). The number of patients with cardiovascular comorbidities peaked in the oldest age class (over 80) with 8457 cases (22.79%). Similarly, cerebrovascular diseases were most common among those over 80 (5928 cases, 15.98%), decreasing to a minimum in children between 5 and 14, with 109 patients (0.78%). Cardiovascular, cerebrovascular, renal, and respiratory diseases and diabetes were more common among the younger age group (0-4) than in the 15 to 64 age range.
Malignancies showed a slightly different pattern: cancer was most common between 65 and 79, with 9513 patients (13.89%). Malignancies were more frequently found in the central age class (15-64), followed by the younger one, with rates of 5.61% and 2.06%, respectively.
In-hospital mortality displayed a similar trend across all comorbidity classes (see Figure 1): it decreased from the 0-4 to the 5-14 age class and then progressively increased with age, peaking in patients over 80.
Logistic regression analysis of in-hospital mortality (Table 2) confirmed the previously described trend: being younger than 5 or older than 64 was a risk factor for in-hospital mortality, with odds ratios of 4.786 (p < 0.001) for ages 0 to 4, 2.868 (p < 0.001) for ages 65 to 79, and 6.599 (p < 0.001) for patients older than 80. Sex was not statistically significant as a risk factor (p = 0.480). All SP-related invasive infections were associated with in-hospital mortality, with the highest odds for SP pneumonia (4.528, p < 0.001), followed by SP meningitis (3.443, p = 0.048) and lastly SP bacteremia (2.201, p = 0.050). Among infections of unspecified etiology, only pneumonias were significantly related to in-hospital mortality (7.098, p < 0.001), while there was no association with bacteremia (p = 0.227). It was not possible to evaluate unspecified meningitis as a risk factor because no cases were recorded over the study period.
Most of the comorbidities included in our evaluation were significantly associated with in-hospital mortality, apart from COPD (p = 0.054), complicated diabetes (p = 0.267), and hemiplegia or paraplegia (p = 0.160).
SP bacteremia-related deaths were first recorded in 2018, at a rate of 0.3 per 100,000 (95% CI 0-0.8), with no significant change in the rate trend thereafter. In 2022, a non-significantly higher in-hospital death rate of 0.6 per 100,000 (95% CI 0-1.5) was reported, equal to the rate in 2021.
Discussion
With the present study, we analyzed the HDRs, from 2015 to 2022, of the local health authority of Pescara, a province in southern Italy with approximately 320,000 inhabitants and three hospitals (two hubs and one spoke). It was possible to evaluate the burden of hospitalization for all cases of pneumonia, pneumonia caused by Streptococcus pneumoniae, and pneumonia with non-specific causes. HDRs have already been used as an indirect source of data to measure the effectiveness of vaccination strategies [9].
Our data appear to be similar to those of studies carried out in other Italian regions, such as Sicily [9] and the northeast of the country [14], with an admission rate varying between 350 and 450 per 100,000 inhabitants. The epidemiological study of pneumococcal disease is of great importance because the disease can be effectively prevented through pneumococcal vaccination, which has demonstrated its cost-effectiveness in different age groups of the population [15,16].
The decrease in admissions observed during the years 2020-2022 can be explained by the impact of the COVID-19 pandemic on hospital admissions. Healthcare services focused their attention on COVID-19 patients during these years, which in turn caused a decrease in admissions for other diseases, as reported in previous studies [17,18].
Regarding mortality, pneumonia causes over 27,000 deaths yearly across Europe [19]. We also calculated the odds of death due to invasive Streptococcus pneumoniae disease. As expected, the most affected age group was older people, with more than 50% of cases occurring in the over-65s. The older age group is known to be the most affected and shows the highest mortality risk associated with SP infection, which can be linked to a decline in immune response and to the high frequency of comorbidities among the elderly [9]. This highlights that improving vaccination coverage among persons ≥65 may be the most cost-effective public health strategy in a community setting. Regarding factors associated with in-hospital mortality, cancers, dementia, heart failure, and kidney diseases are also known risk factors, in line with the previous literature [20]. The association of diabetes with mortality is controversial: some studies reported a negative association [20], while others reported a simply non-significant association [21]. However, a hyperglycemic state caused by infections and their treatment can worsen patients' conditions, particularly among patients transferred to the ICU [21,22]. Diabetes is also a well-known risk factor for 30- and 90-day mortality after discharge [21], but we were not able to obtain data on out-of-hospital mortality in this study. The similar age distribution of in-hospital mortality in patients with diabetes, CVDs, or renal diseases may be due to the higher prevalence of these conditions in older age classes [12,17]. In addition, all these conditions are known risk factors for in-hospital mortality in many other medical conditions, such as hip fracture, general surgery, and trauma [23][24][25].
Furthermore, from the analysis of the data from 2020, we observed a significant increase in the number of cases of pneumonia from all causes, compared with a constant or slightly decreased number of pneumonias from Streptococcus pneumoniae. This trend is likely attributable to the pandemic period, in which there was an increase in hospitalizations for COVID-19 pneumonia.
Also, the increased mortality rate reported in 2020 can be attributed to COVID-19. On the other hand, across the whole study period, mortality in the younger age class was negligible. This can be attributed to the extensive mass vaccination campaign performed in accordance with the Italian National Vaccination Plan, which strongly reduced mortality from pneumococcal diseases [9].
HDRs represent a very important source of data on invasive Streptococcus pneumoniae disease and for the evaluation of hospitalizations, comorbidities, and deaths. HDRs have already been used as an indirect source of data both to measure the effectiveness of vaccination strategies [23][24][25] and to guide them [26][27][28].
The main strength of this study is the use of official, routinely collected electronic health databases covering the entire population of an Italian province. To our knowledge, this is one of the first studies conducted in Europe covering such a long study period (2015 to 2022) and also considering the pandemic and post-pandemic periods. The evaluation of an entire stable population can be used as a proxy for the evaluation of primary prevention interventions, such as vaccination, and as a useful tool to evaluate the impact of infectious diseases on hospital admissions. In addition, this is one of the first studies performed in the Abruzzo region on this topic. Another strength is the size of the HDR dataset we analyzed, which included 288,110 records. The large sample is useful for evaluating factors associated with hospitalization, making the results more generalizable.
However, this study has several limitations. Firstly, it represents the situation of a single province in southern Italy, and the burden of pneumococcal disease, which is vaccine-preventable, is affected by the vaccination coverage that the local health authority has managed to achieve.
The second limitation is that HDRs are not compiled with epidemiological intent but rather for the purpose of reimbursing admissions. For this reason, the comorbidities reported in each record may be overestimated or underestimated.
Thirdly, HDRs do not contain patients' clinical data, such as drug therapies, blood parameters, and the clinical severity of each disease. The lack of these data can limit the power of the analysis. Finally, the vaccination status of each included patient was not available, which did not allow us to evaluate the effectiveness of vaccination against pneumococcal disease.
Nevertheless, our analysis can make an important contribution to the study of the characteristics of this disease in our region.
Conclusions
Streptococcus pneumoniae is a pathogen capable of contributing to the hospital burden in terms of both hospitalizations and intra-hospital mortality despite the existence of effective primary prevention tools such as vaccines. HDRs represent a valid and useful epidemiological tool for evaluating the direct impact of pneumococcal disease on the population and indirectly for evaluating the effectiveness of vaccination strategies and directing them.
Institutional Review Board Statement:
The study was conducted in conformity with the regulations on data management of the Regional Health Authority of Abruzzo and with the Italian Law on privacy (Art. 20-21 DL 196/2003), published in the Official Journal, n. 190, on 14 August 2004. The data were encrypted prior to the analysis at the regional statistical office, when each patient was assigned a unique identifier. The identifiers eliminated the possibility of tracing the patients' identities. According to Italian legislation, the use of administrative data does not require any written informed consent from patients.
|
v3-fos-license
|
2019-06-25T14:28:28.326Z
|
2019-06-24T00:00:00.000
|
195329708
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/s12866-019-1514-7",
"pdf_hash": "fbbaca18eaa8098c8b624b375d6110ddba999610",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43134",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "067d5bcf85fe104cb1959e4712f2206a613a672a",
"year": 2019
}
|
pes2o/s2orc
|
Bacterial microbiomes of Ixodes scapularis ticks collected from Massachusetts and Texas, USA
Background The blacklegged tick, Ixodes scapularis, is the primary vector of the Lyme disease spirochete Borrelia burgdorferi in North America. Though the tick is found across the eastern United States, Lyme disease is endemic to the northeast and upper midwest and rare or absent in the southern portion of the vector’s range. In an effort to better understand the tick microbiome from diverse geographic and climatic regions, we analysed the bacterial community of 115 I. scapularis adults collected from vegetation in Texas and Massachusetts, representing extreme ends of the vector’s range, by massively parallel sequencing of the 16S V4 rRNA gene. In addition, 7 female I. scapularis collected from dogs in Texas were included in the study. Results Male I. scapularis ticks had a more diverse bacterial microbiome in comparison to the female ticks. Rickettsia spp. dominated the microbiomes of field-collected female I. scapularis from both regions, as well as half of the males from Texas. In addition, the male and female ticks captured from Massachusetts contained high proportions of the pathogens Anaplasma and Borrelia, as well as the arthropod endosymbiont Wolbachia. None of these were found in libraries generated from ticks collected in Texas. Pseudomonas, Acinetobacter and Mycobacterium were significantly differently abundant (p < 0.05) between the male ticks from Massachusetts and Texas. Anaplasma and Borrelia were found in 15 and 63% of the 62 Massachusetts ticks, respectively, with a co-infection rate of 11%. Female ticks collected from Texas dogs were particularly diverse, and contained several genera including Rickettsia, Pseudomonas, Bradyrhizobium, Sediminibacterium, and Ralstonia. Conclusions Our results indicate that the bacterial microbiomes of I. scapularis ticks vary by sex and geography, with significantly more diversity in male microbiomes compared to females. We found that sex plays a larger role than geography in shaping the composition/diversity of the I. scapularis microbiome, but that geography affects what additional taxa are represented (beyond Rickettsia) and whether pathogens are found. Furthermore, recent feeding may have a role in shaping the tick microbiome, as evident from a more complex bacterial community in female ticks from dogs compared to the wild-caught questing females. These findings may provide further insight into the differences in the ability of the ticks to acquire, maintain and transmit pathogens. Future studies on possible causes and consequences of these differences will shed additional light on tick microbiome biology and vector competence. Electronic supplementary material The online version of this article (10.1186/s12866-019-1514-7) contains supplementary material, which is available to authorized users.
With more than 30,000 reported cases per year and an estimated 10-fold greater burden than the reported case counts, Lyme disease is the most common vector-borne illness in the U.S. [1,[17][18][19]. Despite a broad geographic distribution of I. scapularis across the eastern United States, Lyme disease cases are concentrated in the northeastern and upper midwestern states, whereas the disease is very rare or absent in the southern portion of the vector's range [9,19]. The prevalence of B. burgdorferi among I. scapularis in the northeastern U.S. has been reported to be as high as 30-50% [20][21][22], while it is rarely (< 1%) detected in the ticks from the southern United States [23][24][25]. Over the past two decades, the incidence of Lyme disease has increased in numbers and geographical area across the eastern U.S., which coincides with a significant northward range expansion of I. scapularis in the northeastern and midwestern regions [9,26].
Many factors, including the density of host-seeking B. burgdorferi-harboring ticks, the availability of B. burgdorferi-competent hosts, tick behavior, the seasonal activity of the ticks, and environmental variables, influence the risk of Lyme disease [9,[27][28][29][30][31][32][33], yet the reasons behind the regional distribution of Lyme disease are not fully understood. In the upper midwestern and northeastern U.S., all active stages of I. scapularis can be encountered by humans during the warm season of the year. In the southeastern U.S., however, human encounters occur primarily with adult I. scapularis ticks, as immature ticks rarely seek hosts in the region [34]. Recent studies have demonstrated that the resident microbial community of ixodid ticks can influence the reproductive fitness and physiological processes of the tick and the acquisition, establishment, and transmission of certain tick-borne pathogens [4,[35][36][37][38][39]. The microbial community of I. scapularis ticks has increasingly been studied in recent years [40][41][42][43]. In the U.S., the microbiome of Ixodes ticks varies with sex, species, and geography [44]. By contrast, in Canada the microbiomes of I. scapularis ticks from eastern and southern Ontario do not differ significantly with regard to geographic origin, sex, or life stage [40]. These contradicting reports highlight the need for additional studies considering the potential role that geography and related ecological and environmental factors may have in shaping the microbiome of ixodid ticks and disease transmission. More recently, we have demonstrated that the composition of the endogenous tick microbial community in colony-reared I. scapularis can be influenced by environmental temperature [45]. With that goal in mind, we investigated the bacterial microbiomes of I. scapularis adults collected from natural vegetation in Texas and Massachusetts, representing opposing ends of the vector's range and possessing distinct climates, by sequencing the hypervariable region 4 (V4) of the 16S ribosomal RNA (rRNA) gene on an Illumina MiSeq platform. Adult ticks were chosen to provide a fair comparison of the tick microbiomes from two regions with different geography and climate, and to provide ample DNA per sample without the need for pooling of multiple, smaller life stages.
Host blood meal has been shown to affect microbial diversity in I. pacificus [39], a species closely related to I. scapularis, with potential consequences for vector competence. To investigate how the host blood meal affects the microbiome, we also analysed the bacterial microbiomes of dog-fed female I. scapularis ticks in this study.
16S V4 sequencing results
From the 122 I. scapularis samples (115 questing I. scapularis adult ticks collected from Texas and Massachusetts plus 7 female ticks collected from dogs in Texas; see Table 2 in the Methods section for details), 12,204,399 quality-filtered reads (average per sample = 100,036; standard deviation = 24,411; range = 29,611-167,874) were generated. This library included 6544 reads generated from negative controls (one blank extraction control and one no-template PCR negative control). The number of reads (for a particular OTU) present in the negative controls was subtracted from the libraries of the samples. Additionally, for genus-level data analysis, OTUs present at less than 0.085% relative abundance in a given sample were set to zero to minimize putative background contamination (i.e., such OTUs were removed from the downstream analyses for that sample). All libraries generated from tick samples had adequate depth for further analysis, as evident from the mean Good's coverage of 99.9% (range = 99.9-100%). Additionally, rarefaction curves of the number of observed OTUs, plotted at depths from 1000 to 30,000 sequences and reaching a plateau at ~25,000 reads (Additional file 1: Figure S1), suggested sufficient sample coverage to proceed further.
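For reference, Good's coverage is simply one minus the fraction of reads belonging to singleton OTUs; a minimal sketch of the calculation, using a made-up OTU count vector, is shown below.

import numpy as np

def goods_coverage(otu_counts):
    """Good's coverage C = 1 - F1/N, where F1 is the number of singleton OTUs
    and N is the total number of reads in the sample."""
    counts = np.asarray(otu_counts)
    singletons = np.sum(counts == 1)
    return 1.0 - singletons / counts.sum()

# Toy example: a sample dominated by a few OTUs, with two singletons
print(goods_coverage([5000, 1200, 300, 1, 1]))  # ~0.9997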
Bacterial composition of I. scapularis
Proteobacteria dominated the I. scapularis microbiomes in both locations under study. Proteobacteria were prevalent at 87.2% mean relative abundance across field-collected ticks from Texas, with 73% (3.5-96.5%) prevalence in males and 100% (99.9-100%) prevalence in females. The prevalence of Proteobacteria in female ticks collected from dogs in Texas was 93% (86-100%). Proteobacteria were prevalent at 84.8% across ticks from Massachusetts, with 71.9% (0-99.9%) prevalence in males and 98.5% (11.1-88.9%) in females. Other common phyla in the microbiomes of the Texas ticks (both field-collected males and the females from dogs) included various proportions of Actinobacteria, Bacteroidetes, and Firmicutes. In addition to these non-proteobacterial phyla found in the Texas ticks, Massachusetts ticks were represented by the Spirochaetes, albeit with higher abundance in males (63% of them with ≥1% relative abundance) compared to that of the females (37%) (see Additional file 1: Figure S2).
Significant differences were found in the mean relative abundance of certain genera in males from Texas and Massachusetts including, Pseudomonas (Kruskal-Wallis test p = 0.0001), Acinetobacter (p = 0.006) and Mycobacterium (p = 0.004). Additionally, Anaplasma, Borrelia and Wolbachia bacteria were found in both male and female ticks from MA, but not from those in TX.
Libraries generated from female ticks collected from dogs in Texas contained many genera that were rare or absent in libraries generated from female ticks collected from foliage in Texas and Massachusetts. These actively feeding ticks did carry Rickettsia (mean abundance = 53%), but the abundances of Pseudomonas (20%), Bradyrhizobium (7%), Sediminibacterium (5%), Ralstonia (4%), and Acinetobacter (2%) were much higher than in the microbiomes of questing ticks collected from foliage ( Fig. 1).
Bacterial composition in I. scapularis females after removal of Rickettsia
Because the microbiomes of female ticks were entirely dominated by amplicons likely derived from the rickettsial endosymbiont known to occur in this species, and because this endosymbiont resides primarily in the ovaries [46], we removed Rickettsia sequences from the female data sets to further explore the inherent (predominately) gut microbiome of the female ticks, as described in Thapa et al. (2018) [45]. After in-silico removal of Rickettsia sequences from the female dataset, none of the field-collected female samples from Texas had sufficient sequences remaining to pass the inclusion criteria (as described in the beginning of the results section) needed to proceed for further analysis. Five female samples from Massachusetts also did not meet the inclusion criteria for further analysis after removing Rickettsia. Of the remaining 25 female samples, all collected from Massachusetts, only 13 had more than 1000 reads (mean = 7439, range = 1149-29,487) after deleting Rickettsia sequences. In-silico removal of Rickettsia from the Massachusetts female datasets revealed the previously hidden bacterial composition (Fig. 2), where the presence of Borrelia was prominent compared to the full female profiles (i.e. Rickettsia included). In contrast to a very low distribution of Borrelia in their full profiles (range = 0.5-11%, mean = 3%), the relative abundance of Borrelia in 11 Rickettsia-deleted female samples (two of the 13 samples under analysis were negative for these bacteria) ranged from 45 to 100% (mean = 79%) (Fig. 2).
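Conceptually, this in-silico step (performed here with mothur's remove.lineage command) amounts to dropping the Rickettsia reads from each female library and renormalizing the remaining counts, as in the following minimal sketch with invented counts.

import pandas as pd

# Toy OTU table (reads per taxon) for one female tick library
counts = pd.Series({"Rickettsia": 24000, "Borrelia": 900, "Pseudomonas": 80, "Other": 120})

without_rickettsia = counts.drop("Rickettsia")
relative_abundance = without_rickettsia / without_rickettsia.sum() * 100
print(relative_abundance.round(1))
# Borrelia       81.8
# Pseudomonas     7.3
# Other          10.9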
Alpha diversity
Prior to diversity analyses, subsampling at the minimum sequencing depth (25,059 sequences per sample) was performed to normalize the number of sequences in each sample [47]. Regardless of their geographical origin, field-collected male ticks exhibited significantly higher bacterial richness (number of OTUs observed) than the females (Wilcoxon rank-sum test p < 0.0001). However, female ticks collected from dogs in Texas had a significantly higher number of observed OTUs in comparison to the foliage-associated female ticks from either Texas or Massachusetts (FDR-corrected Wilcoxon rank-sum test p < 0.0001 for all comparisons) (see Fig. 3). Similar results were found with the ACE (abundance-based coverage estimator) value and the Chao1 estimator (see Additional file 1: Figure S3). The Shannon diversity index of male ticks from both sites was significantly higher than that of the females (Wilcoxon rank-sum test p < 0.0001 for both sites compared separately) (Fig. 3), and female ticks collected from foliage in both Texas and Massachusetts had a lower bacterial diversity in comparison to the female ticks collected from dogs in Texas. Further multiple comparison analyses revealed no significant differences in the bacterial diversity of female ticks collected from dogs compared with that of the male ticks collected from foliage in both states.
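For reference, the alpha-diversity metrics compared here can be computed directly from an OTU count vector; a minimal sketch (toy counts only, not the mothur implementation) is given below.

import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over observed OTUs."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def chao1(counts):
    """Bias-corrected Chao1 richness = S_obs + F1*(F1-1) / (2*(F2+1))."""
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)
    f1 = np.sum(counts == 1)  # singleton OTUs
    f2 = np.sum(counts == 2)  # doubleton OTUs
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

toy = [500, 200, 50, 10, 2, 1, 1, 1]
print(shannon(toy), chao1(toy))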
Beta diversity
While cluster analyses did not show a clear separation of the ticks across all samples, the majority of the male I. scapularis ticks collected from Massachusetts clustered separately from the others, as did half of the males from Texas, in an unweighted PCoA plot of axis 1 vs axis 2. PCoA of unweighted UniFrac distances of bacterial communities showed that the first two axes (PCo1 and PCo2) explained 10.5 and 3.7% of the variation in the data, respectively (Fig. 4). PERMANOVA analysis of unweighted UniFrac distances revealed a significant difference in the microbiome composition of male and female ticks from both collection sites (Adonis p = 0.001). The plot also demonstrated that the male samples from TX clustered separately when compared to the males from Massachusetts (p = 0.001), except for one outlier from MA within the cluster of TX males. Female ticks collected from dogs in Texas formed a coherent cluster close to the field-captured males from Massachusetts. No clear clustering was observed in a PCoA plot of the weighted UniFrac distance metrics based on the collection site or sex of the tick (see Additional file 1: Figure S4).
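The beta-diversity workflow was run with mothur and the vegan/phyloseq R packages; purely as an illustration of the same steps (distance matrix, PCoA ordination, PERMANOVA), a self-contained Python sketch using simulated counts is shown below. Bray-Curtis is used here only to keep the sketch self-contained, whereas the study used UniFrac distances, which additionally require a phylogenetic tree.

# Illustrative sketch only: simulated OTU counts, Bray-Curtis instead of UniFrac.
import numpy as np
from skbio.diversity import beta_diversity
from skbio.stats.ordination import pcoa
from skbio.stats.distance import permanova

rng = np.random.default_rng(1)
otu_table = rng.integers(0, 200, size=(12, 30))  # 12 ticks x 30 OTUs
ids = [f"tick{i}" for i in range(12)]
grouping = ["MA_male"] * 3 + ["MA_female"] * 3 + ["TX_male"] * 3 + ["TX_female"] * 3

dm = beta_diversity("braycurtis", otu_table, ids=ids)
ordination = pcoa(dm)
print(ordination.proportion_explained[:2])                 # variance on PCo1, PCo2
print(permanova(dm, grouping=grouping, permutations=999))  # Adonis-style test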
Co-infection of Borrelia and Anaplasma in Massachusetts ticks
Anaplasma and Borrelia were detected exclusively in libraries generated from ticks collected in Massachusetts. Anaplasma-Borrelia appeared together in 7 of 62 (11%) field-collected I. scapularis from Massachusetts ( Table 1).
Microbiome of Borrelia-positive and Borrelia-negative I. scapularis ticks
Although the distribution of individual bacterial taxa varied among male and female ticks from Massachusetts, there was no significant difference in bacterial composition between Borrelia-positive and Borrelia-negative groups in either males (PERMANOVA of UniFrac distance, Adonis p = 0.25) or females (p = 0.26) (Additional file 1: Figure S5). However, Borrelia-negative males from Massachusetts had a different bacterial community structure when compared to the Texas males based on the PERMANOVA test of the unweighted UniFrac distance metrics (Adonis p = 0.001). In terms of female ticks, Rickettsia was almost exclusively dominant in both regions, but Massachusetts females contained Borrelia, Anaplasma, and Wolbachia, while these bacteria were absent from the ticks collected in Texas.
Microbiome comparisons of the colony-reared and field-collected I. scapularis ticks
We also compared the baseline microbiome data of the colony-reared I. scapularis ticks from our previous publication [45] to the microbiome data obtained from the field-collected ticks in the present study. In the case of the male ticks, the relative abundance of Bacteroidetes and Firmicutes was significantly different in colony-reared ticks than in the wild-caught ticks from Texas or Massachusetts (BH-corrected Dunn's Kruskal-Wallis test p < 0.05 for all comparisons). In female ticks, Proteobacteria dominated the microbiome of both colony-reared and field-collected ticks (questing and dog-fed) (see Additional file 1: Figure S6). Similar to the Texas ticks, Borrelia, Anaplasma, and Wolbachia were not found in the colony-reared ticks purchased from the Tick Rearing Facility at Oklahoma State University (OSU). Bacterial diversity in colony-reared male ticks was found to differ significantly from that of the wild-caught males from TX or MA (unweighted UniFrac PERMANOVA p = 0.001 for both comparisons). The colony-reared female ticks also differed significantly in beta diversity (as measured by unweighted UniFrac distances) when compared to the female ticks collected from vegetation in TX and MA (p = 0.001). However, the colony-reared I. scapularis females were no more diverse than the female ticks collected from dogs (p = 0.06).
Discussion
The dominance of Proteobacteria in I. scapularis ticks from both Texas and Massachusetts agrees with a previous study [44] of wild-caught ticks from several U.S. states that also found > 80% of the reads could be assigned to Proteobacteria. Other phyla found in this study, including Spirochaetes, were also previously reported in wild-caught ticks [44]. Our finding of exclusive abundance (100%) of Rickettsia in all field-collected female ticks from Texas, and a very high dominance (97%) in females from Massachusetts, is consistent with previous reports for the microbiome of I. scapularis [41][42][43][44] and high prevalence of Rickettsia in larvae and nymphal I. scapularis [6,22].
The high numbers of Rickettsia likely reflect a mutualism between this endosymbiont and the host, and most likely belong to the endosymbiont Rickettsia buchneri [46]. R. buchneri has been shown to provide a source of vitamins to the tick [48]. The genus Rickettsia also contains many potentially pathogenic species, including Rickettsia rickettsii, R. japonica, R. akari [49] and R. parkeri [50], but these bacteria are not known to be vectored by I. scapularis. By contrast, high prevalence of R. buchneri endosymbionts in female ticks is generally associated with the ovaries [41,51]. As Rickettsia was also highly prevalent in male ticks, our findings suggest that Rickettsia resides in other body parts of the male ticks. This is consistent with previous reports of R. parkeri detected in male tissues of Amblyomma maculatum [52].
The complexity of the microbiomes of male ticks collected in both Massachusetts and Texas may reflect acquisition from the environment, as relatives of many genera found in the guts of male ticks are considered free-living (not host-associated) bacteria. Differences, such as the relative abundance of Pseudomonas (MA = 23% vs TX = 1%), Acinetobacter (MA = 1% vs TX = 22%), and Mycobacterium (MA = 0.4% vs TX = 23%) and the exclusivity of Borrelia and Anaplasma to MA, suggest a geographical and/or ecological variation of the microbiota in these ticks with public health consequences. Our finding of a slightly higher abundance of Borrelia or Anaplasma in male ticks from Massachusetts compared to the female ticks suggests a possible role of the underlying microbial community of the male ticks in pathogen acquisition. However, it should also be noted that some differences between males and females may be artifacts arising from differences in sequencing depth between males and Rickettsia-subtracted females. Caution is therefore warranted in the interpretation of these differences. In addition, the large variation between microbiomes of wild-caught Texas male ticks suggests the possibility of two distinct microbiomes. However, all the ticks were collected from the same habitat in Texas during 2016 and 2017, and the pattern is similar for both collection years. The variation between the microbiomes of Texas male ticks might be related to the prior host blood meal. The mean abundance of Borrelia, which could include pathogenic B. burgdorferi and B. miyamotoi, was higher in males (35%) in comparison to the females (< 2%) and in the range of previous studies [20,41,44]. Xu et al. [20] tested the ticks via qPCR, while our study is based on 16S sequencing; these methodological differences could also have contributed to the relatively high levels of Borrelia and Anaplasma detected in I. scapularis ticks collected from Massachusetts. In comparison to the traditional PCR-based approaches used previously [20], the 16S rRNA gene sequencing used here cannot discriminate between species. It is highly likely that the samples that yielded Borrelia 16S rRNA gene sequences are due to B. burgdorferi (the causative agent of Lyme disease), but they could be partially due to B. miyamotoi (a relapsing fever group bacterium). B. miyamotoi has also been identified in this area, albeit at substantially lower numbers, with 2.3% of ticks tested from Cape Cod in 2016 found positive for this bacterium [53]. Furthermore, not all I. scapularis samples yielding Borrelia reads from 16S rRNA Illumina sequencing produce amplicons in PCR testing of the B. burgdorferi-specific ospC gene [44]. Similar discordance between traditional PCR assays and Illumina MiSeq sequencing was also observed in another study on the A. americanum tick [54]. However, our finding of about 63% Borrelia in I. scapularis ticks collected from North Truro in Cape Cod, Massachusetts is in line with the findings of Xu et al. (2016) [20], who reported that 62.5% of I. scapularis ticks tested from Nantucket county in Massachusetts were B. burgdorferi positive, and is also consistent with unpublished work conducted by our laboratory using nested PCR methods (data not shown).
The Anaplasma-Borrelia co-infection rate of 11% in the ticks from Massachusetts we report was substantially higher than a previous study [20] on human-biting I. scapularis from Massachusetts, where 1.8% of the ticks were coinfected by B. burgdorferi and A. phagocytophilum. The higher rate of co-infection in this study could be attributed to the overall higher prevalence of Borrelia.
The detection of Wolbachia in more than 25% of the ticks from Massachusetts was not expected. Wolbachia are known to exhibit endosymbiotic mutualism with insects [55,56] and have been previously reported in other ticks [57,58], but not in I. scapularis. Although Wolbachia has been shown to induce resistance to dengue virus when introduced into Aedes aegypti mosquitoes [56] and other insects [55], Plantard et al. (2012) showed that Wolbachia in the I. ricinus tick, a major European vector of the Lyme disease agent, is due to the presence of the endoparasitoid wasp Ixodiphagus hookeri and is not representative of a true endosymbiont of the tick [59]. Thus, the prevalence of Wolbachia reported here likely does not reflect a true mutualism with I. scapularis, but rather may indicate the presence of an unidentified parasite.
The higher bacterial richness in the microbiome of male ticks compared to female ticks, regardless of geographical origin, reflects the dominance of Rickettsia in female ticks. Furthermore, the significantly higher Shannon diversity in male ticks suggests that the bacterial communities of male ticks were both more diverse and more even than those of the females.
The complexity of libraries generated from female I. scapularis ticks collected from dogs in Texas, in comparison to the wild-caught females from both states, suggests that the tick microbiome may shift as the result of a recent blood meal. The microbiomes of the female ticks that originated from dogs in Texas closely matches that of the male ticks from Massachusetts in terms of diversity but not in community membership, further supporting the idea that the bacterial microbiomes of female I. scapularis ticks vary with their sample source. One possible explanation could be that recent blood feeding led to increased abundance of midgut bacteria, lessening the overall relative impact of the rickettsial endosymbiont on subsequent analyses.
Differences in the composition and diversity of the microbiome of colony-reared I. scapularis ticks in comparison to the wild-caught ticks could be attributed to multiple factors, including the type of previous blood meal and environmental/ecological parameters. The differences in microbiomes of I. scapularis ticks from Massachusetts and Texas, including the ticks fed on dogs in Texas, may also reflect seasonal effects on the tick microbiome. Indeed, we have previously shown that environmental temperature can influence the endogenous tick microbial community composition in colony-reared I. scapularis [45].
Conclusions
Analyses of the microbiomes of field-collected adult I. scapularis ticks from Texas and Massachusetts demonstrated that the bacterial microbiota of the ticks varies by sex and geographic origin. The main findings of this study are that sex plays a larger role than geography in shaping the composition/diversity of the I. scapularis microbiome, but that geography affects what additional taxa are represented (beyond Rickettsia) and whether pathogens are found. In addition, the microbiome of dog-fed female I. scapularis ticks is more complex than those of the wild-caught females.
Taken together, our findings may provide further insight into the sexual and regional differences in the ability of the ticks to acquire, maintain and transmit pathogens. Future studies on functional and mechanistic aspects of the tick microbiome, including possible causes (such as the ecological factors) and consequences of these differences will help us better understand the microbiome biology of the ticks and vector competence. These efforts may ultimately aid development of strategies to control the risk and transmission of tick-borne diseases.
Tick sampling and processing
During 2016 and 2017, a total of 115 adult I. scapularis were collected in the Davy Crockett National Forest near Kennard, Texas and from the North Truro area in Barnstable county of Cape Cod, Massachusetts. Due to the difference in activity levels of ticks in different local environments, sample collection in Massachusetts was done during the late spring, while ticks from Texas were collected during autumn. I. scapularis is endemic to both Cape Cod, located in the northeastern U.S. [20,60], and Trinity county in Texas, part of the southeastern U.S. [9]. A standard flagging technique was used for tick sampling, which consisted of walking down trails dragging a 1 m² piece of white cloth, attached to a pole, gently over and around the vegetation where ticks were likely to be present. All encountered ticks were collected with fine-tipped tweezers and placed into sterile collection vials containing cotton fabric for housing. Ticks were categorized by location (TX or MA) and sex (male or female). All ticks were then preserved at −20°C until DNA extraction. In addition, seven I. scapularis females collected from dogs (pulled off with tweezers) in North Texas were included in the study. These dog-fed ticks were unengorged to partially engorged. Details of the collection sites and dates are provided in Table 2.
DNA extraction
All tick samples were treated in sequence with 10% sodium hypochlorite and molecular biology grade water to reduce surface contamination. Sterilization techniques using sodium hypochlorite solution have previously been demonstrated to significantly eliminate the bacteria and DNA on the tick surface [61]. Each whole tick was then cut into sections with a sterile scalpel on a glass microscope slide to disintegrate the thick cuticle layer and all sections were used during DNA extraction as previously described [45]. Briefly, all resultant sections of a tick were placed in a 2-ml screw-capped FastPrep tube (MP Biomedicals, LLC., Santa Ana, CA) containing 550 μl CSPL® buffer (Omega Bio-tek, Norcross, GA) and 8-10 sterile 2.8 mm ceramic beads (MoBio Laboratories Inc., Carlsbad, CA). Following pulverization (3 cycles of 7 m/s for 60s) in a FastPrep-24™ 5G Instrument (MP Biomedicals, LLC.), each sample was incubated at 56°C for 2 h. Total DNA was then extracted from 122 individual ticks using a Mag-Bind® Plant DNA Plus Kit (Omega Bio-tek) as per the manufacturer's instructions. A blank extraction control with reagents and beads was also prepared for each lot of DNA extractions. The extracted genomic DNA was quantified with a Nanodrop spectrophotometer (Invitrogen, Carlsbad, CA) and stored at − 20°C until further processing.
Tick mitochondrial 16S rRNA gene amplification
Each DNA extract was first assessed by PCR to amplify the tick mitochondrial 16S rRNA gene as a sample positive control, as previously described [45] using 16S-1 and 16S + 2 primers [62].
Bacterial 16S rRNA gene amplification
DNA was amplified in duplicate by PCR using 515F/806R primers that target the hypervariable region four (V4) of the bacterial 16S rRNA gene. The primer set (forward: 5′-GTGCCAGCMGCCGCGGTAA-3′ and reverse: 5′-GGACTACHVGGGTWTCTAAT-3′) had overhanging Illumina sequencing adaptors. The Earth Microbiome Project (EMP) 16S Illumina Amplification Protocol was followed [63] with minor modifications as described below. In brief, a master mix solution was prepared per 25-μl PCR reaction volume with 2.5 μl 10X Accuprime™ PCR Buffer II (Invitrogen, Carlsbad, CA), 2.5 μl of 1.6 mg/ml Bovine Serum Albumin (New England Biolabs, Inc., Ipswich, MA), 1 μl 50 mM MgSO4, 0.5 μl 10 μM forward primer, 0.5 μl 10 μM reverse primer, 0.1 μl of 5 U/μl Accuprime™ Taq DNA Polymerase High Fidelity, 10 μl (43-554 ng) of template DNA, and 7.9 μl molecular biology grade water. PCR was carried out in a BioRad C1000 Touch™ thermal cycler with the following cycling parameters: an initial denaturation at 94°C for 2 min followed by 30 cycles (35 cycles for all male samples, with a few exceptions at 40 cycles) consisting of denaturation at 94°C for 30 s, annealing at 55°C for 40 s, and extension at 68°C for 40 s, with a final extension at 68°C for 5 min and a 4°C indefinite hold. Amplicon quality was evaluated by visualizing under UV light after separation in a 1.5% agarose gel by electrophoresis. No-template negative controls were used during the PCR runs.
16S rRNA gene library preparation and sequencing
PCR amplicons in duplicate sets were combined for each sample. Purification of the PCR products was performed using AMPure XP magnetic beads, and 16S libraries for a total of 122 samples were prepared following the Illumina 16S metagenomic sequencing library preparation protocol [65], and chimeras were removed using the UCHIME algorithm [66]. Sequences within a 97% identity threshold were binned into operational taxonomic units (OTUs) [67], and taxonomic groups were assigned by comparison to the Greengenes reference database v13.8.99 [68,69]. Rickettsia sequences were removed from the dataset using the remove.lineage command in mothur, as described in Thapa et al. (2018) [45]. Relative abundances of bacterial taxa were then compared between groups based on location (Texas vs Massachusetts), sex (male vs female), and source (vegetation vs dogs). Taxa with < 1% relative abundance in all samples were grouped together into a '< 1% abundant taxa' category for visual representation. Alpha diversity within samples was calculated using observed OTUs, the ACE value, the Chao1 estimator, and the Shannon index [70] on the data set rarefied at the lowest sequencing depth of 25,059 reads/sample. Beta diversity between samples was quantified by weighted and unweighted UniFrac distance matrices, and the bacterial community structure was visualized using principal coordinates analysis (PCoA) plots. Statistical analyses of differentially abundant taxa among groups were performed using the Kruskal-Wallis test. Comparison between groups was performed using the Wilcoxon rank-sum test. Permutational multivariate analysis of variance (PERMANOVA) was used to determine the differences in microbial community composition within and among the groups using the 'Vegan' (v2.5.3) and 'PhyloSeq' (v1.24.2) R packages. If appropriate, a post-hoc correction using the Benjamini-Hochberg method [71], which takes into account the false discovery rate (FDR) [72], was applied for multiple comparison testing [73]. The level of significance used in these analyses was 0.05.
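As an illustration of the differential-abundance testing described above (which was performed in R), a Kruskal-Wallis test with Benjamini-Hochberg correction can be sketched as follows, using toy abundance vectors only.

# Hedged Python sketch of the per-genus Kruskal-Wallis test with
# Benjamini-Hochberg FDR correction (the published analysis used R).
from scipy.stats import kruskal
from statsmodels.stats.multitest import multipletests

# Toy relative abundances of three genera in MA males vs TX males
abundances = {
    "Pseudomonas":   ([23.1, 20.4, 25.8], [0.9, 1.2, 0.8]),
    "Acinetobacter": ([1.1, 0.8, 1.4], [21.7, 23.0, 20.9]),
    "Mycobacterium": ([0.3, 0.5, 0.4], [22.6, 24.1, 21.8]),
}

p_values = [kruskal(ma, tx).pvalue for ma, tx in abundances.values()]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for genus, p_raw, p_adj in zip(abundances, p_values, p_adjusted):
    print(genus, round(p_raw, 4), round(p_adj, 4))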
Additional file
Additional file 1: Figure S1. Rarefaction curves of the number of OTUs observed in male and female I. scapularis. Figure S2. Relative abundance of bacterial phyla in I. scapularis ticks from Texas and Massachusetts, USA. Figure S3. Bacterial richness (ACE and Chao1 estimators) in I. scapularis ticks. Figure S4. PCoA plot of weighted UniFrac distance metrics in male and female I. scapularis ticks collected from Texas and Massachusetts, USA. Figure S5. Unweighted PCoA plot of Borrelia-positive and Borrelia-negative I. scapularis males and females collected from Massachusetts, USA.
Acknowledgments
We thank Elizabeth Mitchell for laboratory and field assistance and helpful comments during the project, and Dr. Michael LaMontagne of the University of Houston-Clear Lake for assistance with collection of ticks in Cape Cod, MA, and helpful comments on the manuscript. We gratefully acknowledge Dr. Pete Teel (Texas A&M University) for many helpful discussions and the Texas DSHS Zoonosis Control Branch for providing tick samples from dogs.
Authors' contributions
ST and MSA conceived and designed the study. MSA supervised the study. ST prepared samples for sequencing, performed PCR and sequencing experiments. YZ processed the data using mothur. ST, YZ, and MSA analyzed data. ST performed statistical analyses with assistance from YZ. ST drafted the initial manuscript, and all authors provided feedback and insights into the manuscript. ST revised the manuscript, and all authors read, edited, and approved the final version of the manuscript.
Funding
Funding for this work was provided by the State of Texas and the University of North Texas Health Science Center. The funding entities had no role in the study design, data collection, interpretation, or publication of this study, and the results and conclusions expressed herein are solely those of the authors.
Availability of data and materials
All raw sequence data generated and/or analysed during this study are available in the National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA) with the accession number SRP144771 (https://www.ncbi.nlm.nih.gov/sra/SRP144771) under the BioProject PRJNA464062.
|
v3-fos-license
|
2018-12-11T09:05:34.452Z
|
2015-02-23T00:00:00.000
|
55049852
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2015/537049.pdf",
"pdf_hash": "f89b8e25a3e6ecdac3fabcab591112a5e86d4757",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43137",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "f89b8e25a3e6ecdac3fabcab591112a5e86d4757",
"year": 2015
}
|
pes2o/s2orc
|
Optimal Locations of Bus Stops Connecting Subways near Urban Intersections
1MOE Key Laboratory for Urban Transportation Complex Systems Theory and Technology, Beijing Jiaotong University, Beijing 100044, China 2Transport Planning and Research Institute, Ministry of Transport, Beijing 100028, China 3Beijing Transportation Research Center, Beijing 100073, China 4China Urban Sustainable Transport Research Center, China Academy of Transportation Sciences, Beijing 100029, China
Introduction
The demand for urban traffic in China has been increasing dramatically with the rapid development of the economy, urbanization, and motorization over the last decades. Moreover, severe traffic congestion is challenging the operation and management of urban traffic. More people have realized that urban public transit is an effective way to mitigate traffic congestion and improve the sustainability of transport development.
Urban public transit is a highly complex system in which urban railways and buses are crucial parts of sustainable social development. Due to its reliable running times, efficient transport service, and low energy consumption, urban rail transit, which can carry a great number of passengers, has been playing a very important role as the skeleton of the urban traffic system. The number of passengers served by the Beijing subway system has exceeded 10 million persons per day. The efficiency of urban rail transit depends not only on its operational performance but also on good cooperation with other transportation modes, especially with buses that provide feeder service for passengers arriving at or leaving urban rail stations.
Bus stops, as the connecting points between buses and urban rail transit services, are important spaces for passengers to alight or board [1]. Given the real situation of urban roads, the areas around intersections are suitable and convenient for passengers to arrive and leave. Thus, bus stops near intersections are often preferred by passengers transferring between urban rail transit and buses, and they have a great impact on the level of passenger service and the attractiveness of public transit.
Bus stops connecting subways have some distinctive characteristics, such as more bus routes and berths, longer bus stopping times for passengers to alight or board, larger ridership, and more complex passenger flows than common bus stops. If bus stops are not located correctly, they are likely to cause serious impacts on road traffic flow [2] and intersection capacity [3] and may even become bottlenecks of urban traffic systems [4].
Three major criteria, the efficiency, effectiveness, and equity of service, are employed to evaluate public systems [5].The efficiency of service is defined as the ratio of LOS (level of service) to the cost of resource consumed and thus usually measured by its cost.The effectiveness of service shows what comfortable service public transport can provide.The equity of service requires an indiscrimination of transport supplies that different users can obtain.The above three criteria reflect the interests of different stakeholders involved in operators (bus companies), bus users (passengers), and other users (private cars, taxis), respectively, which are imported into this study.
Literature Review
The location of feeder bus stops near intersections is a hotspot in public transit research that has been drawing the attention of researchers for decades. The resulting work has an important impact on the quality of service provided to passengers and on the operation of bus vehicles.
There are two major types of studies relating to the location optimization of bus stops. The first type investigates the layout of bus stops so as to cut down the cost of buses or of the overall transportation system. For example, by taking possible changes in demand into account, a bilevel optimization model for locating bus stops was developed to minimize the social cost of the overall transport system [6]. Hafezi and Ismail [7] analyzed the efficiency of bus operation by considering bus stop locations at three points: near-side, far-side, and mid-block. Li [8] established a model that lists the stops in the order in which they appear after organizing bus stops by routes and then transformed the problem of matching bus stops into a shortest-path problem. Similar research is found in the work of Hu et al. [9], which analyzed the characteristics of urban rail transit and conventional buses.
The second type of study focuses mainly on the impact of bus stops on the capacity of roads or intersections, as well as on the delay they incur. For instance, a weighted-least-squares regression model with an associated prediction interval was used to estimate the dwell time of buses at stops in order to reduce the negative impacts of near-side bus stops [10]. Furth and SanClemente [11] studied the impact of bus stop locations on bus delay for buses running on the near side, far side, uphill, and downhill of a road. By dividing bus delay into service delay and nonservice delay, Xu et al. [12] proposed a delay estimation model for buses at a bus-bay stop and a curbside bus stop. Lu et al. [13] concentrated on the delay of buses near a stop under mixed traffic flow using a special cellular automaton model. The relationship between delay time and the distance from a bus stop to the stop line of an intersection, the arrival rates and dwell times of buses, and the signal cycle has also been studied [14]. A kinematic-wave-theory-based model was used to determine where to place a near-side stop to achieve a target level of residual car queuing [15]. Chen et al. [16] developed a computation method for bus delay at stops in Beijing through statistical analysis.
The existing efforts that consider the benefits from any one perspective, that of passengers [17], buses [18], or cars [19], and sometimes the combination of two, such as the cost of buses and the delay time of cars [20], have provided important references for optimizing the locations of bus stops. However, there is insufficient research that accounts for several vital factors at the same time, such as the walking distance of passengers, the delay time of cars, and the travelling time of buses. It is necessary and helpful to fill this gap by implementing a multiobjective analysis.
The rest of this study is organized as follows. A multiobjective optimization model aiming at the shortest total walking distance of passengers, the minimum delay time of cars through intersections, and the least travelling time of buses is developed in Section 3. The solution method is given in Section 4. Empirical studies for the Xizhimen bus stops in Beijing and their sensitivity analyses are finally conducted in Section 5 to show that the proposed model is effective.
Model Formulation
3.1. Objective Functions. The selection of bus stop locations mainly influences the walking distance of passengers and the travelling speed of cars and buses through the affected area. The optimization of the total walking distance of passengers, the delay time of cars through intersections, and the travelling time of buses therefore plays a crucial role in determining an appropriate position for bus stops.
Total Walking Distance of Passengers.
Different from common passengers, the walking distance of passengers who transfer at the feeder bus stops connecting subways is usually classified into two parts: the transfer distance from subways to buses and the direct access distance to take buses. Thus, the total walking distance of transferring passengers can be expressed as
D = D_1 + D_2 = n_1 d_1 + n_2 d_2 + n_3 d_3,
where D_1 is the total walking distance of passengers transferring from adjacent subways to buses, D_2 is the total walking distance of passengers who take buses through the upstream or downstream intersection, n_1 is the number of passengers who transfer between subways and buses, n_2 is the number of passengers who come from the upstream intersection adjacent to the bus stop, n_3 is the number of passengers who come from the downstream intersection adjacent to the bus stop, d_1 is the distance from the bus stop to the subway entrance (with the abscissa of the bus-stop center set to x), d_2 is the distance from the bus stop to its adjacent upstream intersection, and d_3 is the distance from the bus stop to its adjacent downstream intersection. x_1 and x_2 are the abscissas of the subway entrance and of the adjacent downstream intersection, respectively. The upstream intersection is selected as the origin of the horizontal coordinate. The detailed layout, including the relative positions of the bus stop, adjacent intersections, and subway station, is shown in Figure 1.
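To make this objective concrete, the following Python sketch evaluates the total walking distance for a candidate stop abscissa x. The symbol names follow the reconstructed notation above rather than the paper's original variables, and the explicit expressions for d_1, d_2, and d_3 in terms of x are assumptions based on the layout in Figure 1.

```python
def total_walking_distance(x, x1, x2, n1, n2, n3):
    """Total walking distance of transferring passengers for a stop at abscissa x (m).

    The upstream intersection is the origin of the horizontal coordinate;
    x1 is the abscissa of the subway entrance and x2 that of the downstream
    intersection. n1, n2, n3 are the passenger counts defined in the text.
    """
    d1 = abs(x - x1)   # bus stop to subway entrance (assumed form)
    d2 = x             # bus stop to upstream intersection (assumed form)
    d3 = x2 - x        # bus stop to downstream intersection (assumed form)
    return n1 * d1 + n2 * d2 + n3 * d3
```

For example, `total_walking_distance(90, 60, 240, 20000, 5000, 3000)` would give the objective value for a hypothetical stop located 90 m from the upstream intersection.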
Delay Time of Cars through Intersections.
It will be convenient for passengers to transfer between subways and other transportation modes and to gather or disperse in all directions if the bus stops are located close to intersections; this, however, is likely to generate delay time for private cars and taxis.
The following factors are key parameters in modeling delay at intersections with a bus stop: L is the distance between the bus stop and the stop line (m); q is the traffic flow, including vehicles and buses, at the key lane (vehicles/s); λ is the average arrival rate of every bus line at the bus stop (vehicles/s); t is the average boarding and alighting time per passenger at the multiple-route feeder bus stop (s); Ω is the proportion of time during which a bus is dwelling at (blocking) the bus stop, equal to the product of λ and t; g is the effective green time (s); and C is the cycle time (s).
The delay of cars at signalized intersections and the delay of cars at unsignalized downstream intersections are expressed by the regression equations given in [21]. In these equations, u is the proportion of effective green time within a cycle, u = g/C, and u is set equal to 1 if the intersection is unsignalized; x_d is the degree of saturation of the key entrance lanes at the intersection, expressed in terms of the saturation flow S and the traffic flow volume q, including vehicles and buses, per lane (pcu/s); and x_s is the degree of saturation at the bus stop, which is described in terms of Ω, the proportion of time when buses are dwelling at the bus stop, whose value equals the product of the average arrival rate of buses at the bus stop, λ (vehicles/s), and the average boarding and alighting time of passengers at the bus stop, t (s).
λ and t are obtained from the berth-level quantities, where λ_j is the arrival rate at bus berth j, t_j is the average alighting and boarding time of passengers at bus berth j, and m_j is the number of buses dwelling at bus berth j (j = 1, 2, 3, ..., n). Moreover, L is the distance between the bus stop and the vehicle stop line of the downstream intersection (m), and a_0 to a_5 are regression coefficients whose values are 106.5, −0.09, 0.07, 1.27, −0.53, and 0.57, respectively.
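For illustration, a small sketch of how Ω could be assembled from the berth-level quantities is given below. The aggregation used here, a total arrival rate combined with an arrival-weighted mean dwell time, is an assumption, as is the function name.

```python
def stop_blocking_proportion(berth_arrival_rates, berth_dwell_times):
    """Proportion of time Omega that buses are dwelling at (blocking) the stop.

    berth_arrival_rates[j]: bus arrival rate at berth j (vehicles/s)
    berth_dwell_times[j]:   mean boarding/alighting time at berth j (s)
    The berth-level aggregation below is an assumed form.
    """
    lam = sum(berth_arrival_rates)            # total arrival rate lambda
    if lam == 0:
        return 0.0
    t_bar = sum(r * t for r, t in zip(berth_arrival_rates, berth_dwell_times)) / lam
    return lam * t_bar                        # Omega = lambda * t
```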
Travelling Time of Buses.
In this paper, the influenced area of a bus stop is defined as the road section between the upstream and downstream intersections. The process of a bus passing through this area is divided into seven stages: accelerating to pass the upstream intersection, running at a constant speed, decelerating and stopping at the bus stop, boarding or alighting of passengers at the bus stop, accelerating to leave the bus stop, running at a constant speed, and decelerating and stopping before the downstream intersection. The speed variation of buses for stops positioned differently between two adjacent intersections is illustrated in Figures 2, 3, and 4.
Based on the above analysis, the total travelling time of buses within the influenced area is computed as
T = Σ_{i=1}^{7} t_i,
where t_i is the time consumed by buses within stage i as described above.
The travelling distance corresponding to each stage introduced above can be calculated by different methods. For the stages in which buses are accelerating or decelerating, the travelling distance is obtained from
l_i = |v_{i,max}^2 − v_{i,0}^2| / (2a),
where i is the order number of the stage, as shown in Figures 2 to 4, that buses run through (i = 1, 3, 5, or 7), l_i is the travel distance of buses within stage i, v_{i,0} is the initial travel velocity of buses at the beginning of stage i, v_{i,max} is the maximum travelling velocity of buses in stage i, and a is the acceleration, set to 1.4 m·s^−2 [22].
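A minimal sketch of the stage-level kinematics is shown below. It simply applies the uniform-acceleration relations with the quoted a = 1.4 m·s^−2 and the 9.7 m·s^−1 cruising speed; the helper names are assumptions, and this is an illustration rather than the paper's implementation.

```python
def accel_stage(v0, v_max, a=1.4):
    """Time (s) and distance (m) of an acceleration or deceleration stage
    under constant acceleration a = 1.4 m/s^2 (uniform-acceleration relations)."""
    t = abs(v_max - v0) / a
    d = abs(v_max**2 - v0**2) / (2.0 * a)
    return t, d

def constant_speed_stage(distance, v=9.7):
    """Time (s) of a constant-speed stage at v = 9.7 m/s (35 km/h)."""
    return distance / v
```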
For the stage when buses dwell at the stop, the boarding and alighting time of passengers, t_4, is obtained from equation (10), following [16], where T_curbside and T_bay are the dwelling times of buses during the boarding and alighting of passengers at curbside and bay-style stops, respectively; P_1, P_2, and P_3 are the numbers of passengers boarding and alighting at the different bus doors; and LF is the load factor of the bus, usually calculated as the ratio of the passenger number to the bus capacity. For the stages when buses are running at a constant speed, the corresponding travelling times and distances are calculated by equation (12). The constant operational velocity is set to 35 km·h^−1, which is equal to 9.7 m·s^−1 [22].
3.2. Constraints. The performance of vehicles at the upstream intersection must not be affected by queuing buses. Thus, the location of the bus stop is required to be confined within a certain range, and the abscissa x of the bus-stop center must satisfy
max{N · l_bus} ≤ x ≤ x_2,  x ∈ X,
where X is the set of alternative locations of bus stops, l_bus is the vehicle length of buses, and N is the number of buses permitted to queue before the bus stop. The constraint indicates that the minimum distance from the bus stop to the upstream intersection is greater than the permitted maximum length of the bus queue, while the maximum distance is less than the road length from the upstream to the downstream intersection.
The number of buses permitted to queue, N, can be calculated by treating the service system of buses at the stop as an M/M/C queuing system, where C is the number of bus berths. The probability that no bus is at the stop, P(0), follows from the standard M/M/C formulas. The total arrival rate of buses at the bus stop, λ, is obtained by summing the arrival rates λ_r of buses arriving at the stop along each bus route r. The average service rate of buses at the stop is denoted by μ, and the service intensity of the bus stop is ρ = λ/(Cμ).
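As an illustration, the M/M/C quantities mentioned above can be computed with the textbook formulas; the sketch below is not the paper's own derivation, and the function name is an assumption.

```python
from math import factorial

def mmc_queue(lam, mu, c):
    """Standard M/M/c queue quantities for a stop with c berths.

    lam: total bus arrival rate (vehicles/s)
    mu:  average service rate per berth (vehicles/s)
    c:   number of bus berths
    Returns (rho, p0, lq): service intensity, probability of an empty stop,
    and expected number of queued buses.
    """
    rho = lam / (c * mu)          # service intensity, must be < 1 for stability
    a = lam / mu
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1.0 - rho)))
    lq = p0 * a**c * rho / (factorial(c) * (1.0 - rho) ** 2)
    return rho, p0, lq
```

The number of buses permitted to queue before the stop, N, can then be taken from the expected queue length lq (e.g., rounded up).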
Solution of the Multiobjective Optimization Model
The mathematical model for the optimal locations of bus stops connecting subways near urban intersections is proposed as
min_{x ∈ X} F(x) = [f_1(x), f_2(x), f_3(x)],
where x is a decision vector containing all continuous variables, F(x) denotes the vector of objective functions, f_i(x) (i = 1, 2, 3) gives the ith nonlinear objective function, and X denotes the set of feasible solutions.
For multiobjective problems, the set of Pareto optimal solutions forms the Pareto frontier. Decision makers (DM) usually select a particular Pareto solution based on additional preference information about the objectives.
Multiobjective Analysis.
The total walking distance of passengers, the delay time of cars through intersections, and the travel time of buses are given in different units, so they cannot be measured and compared directly. A normalization formula on the basis of the original Pareto frontier is therefore established as
f_i^norm = (f_i − f_{i,min}) / (f_{i,max} − f_{i,min}),
where f_i^norm, f_{i,min}, and f_{i,max} are the normalized, minimum, and maximum values of the ith (i = 1, 2, 3) objective, respectively. The original Pareto frontier is converted to the normalized Pareto frontier after this normalization procedure.
The best alternatives under different weights on the objectives can then be identified with a distance-based method, in which w_1, w_2, and w_3 are the weight coefficients of the three objectives and represent the preferences of managers or decision makers. The weight coefficients also denote the relative importance of the objectives; an objective is more important the closer its coefficient is to 1.
Solution Method.
There is a large variety of methods for solving multiobjective optimization problems, including exact methods, such as the ε-constraint approach and the Tchebycheff algorithm, and heuristic algorithms, such as evolutionary, tabu search, ant colony, and particle swarm algorithms.
For the multiobjective model in (20), all feasible positions of bus stops, x, should lie within the range between max{N · l_bus} and x_2. The model is thus converted into a nonlinear integer programming model, because in practice bus stops can only be set at positions expressed in whole meters, which means that x is an integer. The solution tool Lingo is employed here to obtain the Pareto frontier of the proposed model and the optimal solutions.
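Because the feasible positions form a small set of integers, the overall procedure can also be sketched by simple enumeration instead of a commercial solver. The sketch below assumes a weighted Euclidean distance to the ideal point as the distance-based measure; that particular metric, like the helper names, is an assumption on my part.

```python
import math

def select_stop_position(candidates, objectives, weights):
    """Enumerate candidate stop abscissas, normalize the three objectives,
    and return the position with the smallest weighted distance to the ideal
    (all-zero) normalized point.

    candidates: iterable of integer stop positions x (m) in the feasible range
    objectives: function x -> (walk_distance, car_delay, bus_time)
    weights:    (w1, w2, w3) preference weights
    """
    values = {x: objectives(x) for x in candidates}
    lows = [min(v[i] for v in values.values()) for i in range(3)]
    highs = [max(v[i] for v in values.values()) for i in range(3)]

    def distance(v):
        total = 0.0
        for i, w in enumerate(weights):
            span = highs[i] - lows[i]
            f_norm = (v[i] - lows[i]) / span if span > 0 else 0.0
            total += w * f_norm**2
        return math.sqrt(total)

    return min(values, key=lambda x: distance(values[x]))
```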
Case Studies
One bus stop in the Xizhimen terminal of Beijing is selected in the case studies to verify the validity and practicability of the proposed method. Moreover, a sensitivity analysis of all alternatives with respect to the weights of the three objectives is conducted to provide decision-making support for bus operators.
Set-Up.
The Xizhimen terminal is a large transportation terminal in the northwestern region of Beijing. Passengers can transfer here among five transportation modes, including railway, subway, private cars, taxis, and bicycles. The daily number of passengers served by the bus stops in the Xizhimen terminal exceeds 30,000 according to the field survey. The layout of the Xizhimen terminal and its surrounding facilities is described in Figure 5.
Bus stop 1, with 3 bus berths, is one of the busiest bus stops in the Xizhimen terminal; it is located on the eastern side of Gaoliang Bridge Street, between two intersections, as shown in Figure 5. The stop is 70 and 170 meters away from the upstream and downstream intersections along Gaoliang Bridge Street, respectively. There are two major passenger flows relating to bus stop 1: interchange passengers from/to Line 2, Line 4, and Line 13 of the subway, and those from/to other bus stops. The few passengers from/to the downstream intersections are not considered in the case studies. The bus and passenger data collected in the field survey are listed in Table 1.
The other basic parameters for the proposed model are calculated in light of the field survey data and listed in Table 2.
The three objectives of the proposed model under the current situation can be computed based on the above field data and calculated parameters. The total walking distance of passengers, the delay time of cars through the intersection, and the travelling time of buses are 93.6 kilometers, 29.7 seconds, and 38.6 seconds, respectively.
Optimization Alternative Analyses.
According to the Code for Design of Urban Road Engineering [23], the suitable position of bus stops near intersections lies within a range of 80 to 150 meters from the upstream intersection. To ensure the completeness of this research, the range of possible positions of bus stops is set from 50 to 240 meters. The original Pareto frontier obtained with the proposed model is illustrated in Figure 6. The variation trends of the three objectives with the position of the bus stop are listed in Table 3, where "↗" and "↘" denote that the values of the objectives increase and decrease, respectively, with the growth of x. The normalized Pareto frontier is shown in Figure 7.
A group of optimal solutions is obtained and listed in Table 4 under different combinations of weight coefficients reflecting the preferences of decision makers or managers.
All Pareto solutions for the 66 weight combinations are summarized into 5 groups according to the values in Table 4. The details of the grouping and the improvement rates of the solutions compared with the current position of the bus stop, that is, x = 70 m, are listed in Table 5. The results indicate that the weight combinations have a remarkable influence on the optimal solutions. For example, if decision makers pay more attention to reducing the delay of cars through the intersection or the travelling time of buses, the optimal position of the bus stop tends to be close to the upstream or downstream intersection, respectively. If the total walking distance of passengers is weighted most heavily, the optimal position is located near the subway station while keeping a certain distance from the adjacent intersections. Based on the average improvement rates of the three objectives in Table 5, the balanced optimal position of the bus stop is recommended to be 90 meters away from the upstream intersection.
Conclusion
The locations of bus stops connecting subways are crucial for improving the efficiency and level of service of bus operations. A multiobjective optimization model for determining suitable locations of bus stops is proposed in this study, considering the total walking distance of passengers, the delay time of cars through intersections, and the travelling time of buses between adjacent intersections. The case studies, including an empirical study of the bus stop at Xizhimen in Beijing, show that the proposed model is effective for determining the locations of bus stops. This study recommends that the balanced optimal location of a bus stop connecting to a subway be near its upstream intersection and the subway station, provided it meets the requirements of the Code for Design of Urban Road Engineering [23].
Figure 1: Location layout of subway entrance and bus stops.
Figure 2: Speed profile of buses when bus stops are located near the upstream intersection.
Figure 3: Speed profile of buses when bus stops are located equally away from the downstream and upstream intersections.
Figure 4: Speed profile of buses when bus stops are located near the downstream intersection.
Figure 5: Layout of the Xizhimen bus stop and its surrounding facilities.
Figure 6: Original Pareto frontier of the three objectives (axes: car delay time (s), bus travel time (s), total passenger walking distance (km)).
Table 1: Bus and passenger data for the proposed model from the field survey.
Table 2: Other basic parameters for the proposed model from the survey.
Table 3: Variation trends of the values of the three objectives in the proposed model. The maximum and minimum values of the three objectives from Table 3 are 435.24 and 84.24 km, 36.03 and 29.62 seconds, and 38.6 and 31.67 seconds, respectively.
Table 4: Optimal solutions under different weights.
Table 5: Details on the grouping and average improvement rates of the optimal solutions.
|
v3-fos-license
|
2022-08-24T06:17:56.815Z
|
2022-08-23T00:00:00.000
|
251742073
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://insight.jci.org/articles/view/158444/files/pdf",
"pdf_hash": "2d63a65ea15529e062ed23ca9bddb3c30321b799",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43138",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "ece4302543bd9b6abceecedb576ec74c8c68858b",
"year": 2022
}
|
pes2o/s2orc
|
Fcγ receptor–mediated cross-linking codefines the immunostimulatory activity of anti-human CD96 antibodies
New strategies that augment T cell responses are required to broaden the therapeutic arsenal against cancer. CD96, TIGIT, and CD226 are receptors that bind to a communal ligand, CD155, and transduce either inhibitory or activating signals. The function of TIGIT and CD226 is established, whereas the role of CD96 remains ambiguous. Using a panel of engineered antibodies, we discovered that the T cell stimulatory activity of anti-CD96 antibodies requires antibody cross-linking and is potentiated by Fcγ receptors. Thus, soluble “Fc silent” anti-CD96 antibodies failed to stimulate human T cells, whereas the same antibodies were stimulatory after coating onto plastic surfaces. Remarkably, the activity of soluble anti-CD96 antibodies was reinstated by engineering the Fc domain to a human IgG1 isotype, and it was dependent on antibody trans-cross-linking by FcγRI. In contrast, neither human IgG2 nor variants with increased Fcγ receptor IIB binding possessed stimulatory activity. Anti-CD96 antibodies acted directly on T cells and augmented gene expression networks associated with T cell activation, leading to proliferation, cytokine secretion, and resistance to Treg suppression. Furthermore, CD96 expression correlated with survival in HPV+ head and neck squamous cell carcinoma, and its cross-linking activated tumor-infiltrating T cells, thus highlighting the potential of anti-CD96 antibodies in cancer immunotherapy.
Introduction
The clinical success of agents targeting immune checkpoint receptors such as CTLA-4 and PD-1 has demonstrated that the immune system is a bona fide and key therapeutic target for the treatment of cancer. Despite the unprecedented durable anti-tumor responses seen in a subset of patients, the majority of patients fail to respond to these treatments or develop resistance after the initial response (1). This has galvanised the search for additional immune checkpoint receptors that could be targeted to extend the benefit of immunotherapy to the wider population (2). One such receptor that has recently received attention is CD96, also known as T cell activation, increased late expression (TACTILE). CD96 is a type I transmembrane protein comprising an extracellular region that consists of three immunoglobulin superfamily (IgSF) domains followed by an O-glycosylated stalk region (3,4). The cytoplasmic domain of CD96 contains a conserved short basic/proline-rich motif, which typically associates with SH3 domain-containing proteins, followed by a single immunoreceptor tyrosine-based inhibitory motif (ITIM). In addition, a YXXM motif similar to that found in CD28 and ICOS is present in human but not mouse CD96.
Expression of CD96 is limited to immune cells, primarily T cells, NK cells, and NKT cells, and is upregulated following T-cell activation (3,5). Two isoforms of CD96 that differ in the sequence of the second IgSF domain exist as a result of alternative splicing, with the shorter isoform (CD96v2) being the predominant form expressed in human primary cells (6). CD96 shares the ability to bind proteins of the nectin and nectin-like family with two other IgSF receptors, namely T-cell immunoreceptor with Ig and ITIM domains (TIGIT) and CD226 (DNAX accessory molecule 1, DNAM-1). While TIGIT and CD226 bind to CD155 (necl-5) and CD112 (nectin-2), CD155 is the only known ligand for CD96 in humans (7). CD155 is weakly expressed on a variety of cells, including immune, epithelial, and endothelial cells, and is upregulated on cancer cells (8,9). TIGIT and CD226 function as inhibitory and activating receptors, respectively, while both inhibitory and stimulatory functions have been ascribed to CD96. Initial studies demonstrated that engagement of CD96 stimulates human NK cell-mediated lysis of P815 cells in redirected killing assays, albeit less efficiently than CD226 (10,11).
Furthermore, unlike CD226, CD96 was dispensable for the killing of CD155-expressing tumor cells, suggesting that the stimulatory effect of CD226 is dominant (12,13). In contrast, studies in mice showed that CD96 deficiency results in exaggerated NK cell-mediated IFNγ production and resistance to carcinogenesis and experimental lung metastases (14), indicating that CD96 functions as an inhibitory receptor in murine NK cells. Additional studies employing anti-CD96 antibodies provided further support for targeting this pathway as a strategy to treat cancer (14,15); however, the findings were confounded by the observation that anti-CD96 antibodies need not block the CD155-CD96 interaction to exert their anti-metastatic effect (16). More recently, Chiang et al. (17) showed that genetic ablation or antibody blockade of CD96 rendered murine CD8 T cells less responsive and, conversely, that anti-CD96 antibody presented on microbeads promoted T-cell proliferation. Antibodies have the capacity to induce receptor clustering dependent on co-engagement of Fcγ receptors (FcγR), and this property has been exploited for the development of agonistic immunostimulatory antibodies that target costimulatory TNF receptor superfamily members (18)(19)(20).
Here we have addressed whether Fcγ receptor crosslinking potentiates the activity of anti-human CD96 antibodies. Through Fc domain engineering, we have identified the human IgG1 isotype as a key determinant that co-defines the activity of anti-CD96 antibodies. We show that anti-CD96 antibodies costimulate the proliferation of human peripheral CD4+ and CD8+ T cells and enhance cytokine production in an isotype- and FcγRI-dependent manner. Costimulation by anti-CD96 antibodies was effective in countering suppression by regulatory T cells and in inducing the proliferation of tumor-infiltrating T cells. RNAseq analysis following CD96 costimulation revealed upregulation of multiple gene networks associated with T-cell proliferation and effector function. These results inform the design of immunostimulatory anti-CD96 antibodies for the reinvigoration of anti-cancer T cells.
Immobilized and FcγR-crosslinked anti-CD96 antibodies promote human T-cell proliferation
We evaluated three different anti-CD96 mAbs that either fully (19-134 and 4-31) or partially inhibit the CD155-CD96 interaction (Table I Figure 2, C and D). As shown in Figure 1A and 1B, clone 19-134 did not significantly alter the proportion of dividing T cells. Similar results were obtained using two additional anti-CD96 mAbs (clones 19-14 and 4-31; Figure 1, A, C and D). Taken together, these data demonstrate that CD96 blockade does not confer a proliferative advantage to anti-CD3 stimulated T cells. Next we tested whether the activity of anti-CD96 mAbs could be potentiated through antibody immobilization on tissue culture plates, an experimental strategy used for inducing antibody-mediated receptor crosslinking. In contrast with the findings using soluble antibodies, plate-bound anti-CD96 mAbs were able to costimulate the proliferation of CD4+ and CD8+ T cells (Figure 2, A and B). We also tested if blocking the CD155-CD96 interaction with an anti-CD155 mAb can affect T-cell proliferation. As shown in Figure 2C, the addition of a blocking anti-CD155 mAb failed to enhance T-cell proliferation and did not affect the increase in cell proliferation afforded by plate-bound anti-CD96 mAb. As CD96 is expressed by T cells and NK cells in resting human PBMCs (3, 10), we examined whether anti-CD96 mAbs could costimulate purified CD3+ T cells. As shown in Figure 2D, immobilized anti-CD96 mAb significantly boosted the proliferation of isolated CD4+ and CD8+ T cells, demonstrating that (27). We evaluated two anti-CD96 clones (19-134 and 19-14) in the IgG1 V12 format, but neither mAb was active ( FcγRs coated onto plastic, together with highly purified CFSE-labelled CD3+ T cells, and showed that FcγRI was uniquely able to restore the activity of soluble anti-CD96 human IgG1 ( Figure 4B).
Collectively, our data demonstrate that soluble anti-CD96 mAbs of the IgG1 subclass enhance the proliferation of CD4+ and CD8+ T cells, dependent on mAb crosslinking through Fc domain trans-interaction with FcγRI.
Agonistic anti-CD96 mAb counters suppression by Tregs
Tregs exert a dominant role in maintaining self-tolerance and suppressing anti-tumor T-cell responses (29), but the role of CD96 on Tregs is currently unknown. Flow cytometric analysis revealed that peripheral blood Tregs expressed CD96 similarly to conventional CD4+ and CD8+ T cells (Figure 5A). To assess whether the presence of increasing numbers of Tregs would negate the costimulatory effect of anti-CD96 mAbs, highly purified, CFSE-labelled CD3+ CD25- CD127+ (98.1±0.5%) conventional/effector T cells (Tconv) were stimulated with anti-CD3 and either anti-CD96 or an isotype-matched control antibody. In some cultures, purified unlabelled CD4+ CD25+ CD127- Tregs (93.6±1.8% purity) were added to obtain a Tconv:Treg ratio of 2:1 or 3:1. Tconv proliferation and activation were determined by measurement of CFSE dilution and upregulation of CD25, respectively, after four days. As expected, the addition of purified Tregs suppressed the proliferation of CD4+ and CD8+ Tconv and reduced the expression of CD25 (Figure 5, B-E).
However, when anti-CD96 mAb was present, both Tconv proliferation and CD25 expression were restored to levels seen in the absence of Tregs ( Figure 5, B-E). These data support the notion that costimulation of Tconv by anti-CD96 mAb overcomes to a large extent the suppression exerted by Tregs.
Gene expression profiling reveals augmentation of multiple T-cell activation pathways by CD96
To gain further insights into the downstream events triggered by anti-CD96 mAbs, we were also enriched in the anti-CD96 treatment group. Consistently, the hallmark of the unfolded protein response, which is known to contribute to the regulation of T-cell proliferation and effector function (31), was significantly upregulated following anti-CD96 treatment ( Figure 6C).
Quantification of cytokine production in the supernatant of T cells stimulated for 6 or 22 hours showed that anti-CD96 significantly upregulated IL-2 production by CD3+ T cells at both time points, while IFN-γ production was augmented at 6 hours (Figure 6, D and E). Hence, increased gene transcription correlated with elevated protein levels for IL-2 and IFN-γ. Moreover, we showed that agonist anti-CD96 mAb provided direct costimulation to isolated CD4+ and CD8+ T cells, resulting in enhanced IL-2 production from each of these cell types in addition to providing independent signals for CD4+ and CD8+ T-cell proliferation (supplemental Figure 4).
Furthermore, IPA identified a broad range of upstream regulators predicted to be activated and a smaller number of regulators predicted to be inhibited by CD96 stimulation (supplemental Figure 5, A and B). TCR, CD3 and CD28 were highlighted as potential positive upstream regulators of the gene signature induced by anti-CD96 mAb, suggesting that CD96 engagement elicits signaling pathways that overlap and strengthen those emanating from the engagement of the TCR and CD28 (supplemental Figure 5A). In agreement with this, transcription factors and signaling kinases triggered by the integrated response to TCR and CD28 engagement, such as Myc, Jun, NFκB, Mek/MAP2K1/2, PI3K/Akt and p38 MAPK, were additionally identified as upstream activating regulators (supplemental Figure 5A).
Collectively our transcriptomic data indicated that CD96 engagement triggers multiple signaling pathways associated with increased T-cell proliferation and effector function and identified several candidate molecules that could mediate signaling downstream of CD96.
Agonist anti-CD96 mAb augments the proliferation of tumor-infiltrating T cells
Given that anti-CD96 mAbs were able to costimulate peripheral blood T cells, we asked if this approach could also promote the proliferation of tumor-infiltrating T cells (TIL), which are known to exist in various dysfunctional states (32). Using publicly available data from the Cancer Genome Atlas (TCGA) database through the Tumor Immune Estimation Resource (33), we found that CD96 expression correlated with survival in HPV+ head and neck squamous cell carcinoma (HNSCC). Next, we used flow cytometry to examine CD96 expression on T-cell subsets isolated from fresh HNSCC tumor biopsies (patient characteristics are included in Table III). CD96 was expressed on CD8+ T cells, CD4+ Foxp3- Tconv, and CD4+ Foxp3+ Tregs (Figure 7B). Although varying widely between patients, expression of CD96 on CD8+ T cells was on average higher than that seen on the other T-cell subsets analyzed (Figure 7B). Furthermore, we evaluated whether CD96 is co-expressed with the inhibitory receptor PD-1, typically found on chronically stimulated and/or exhausted tumor-infiltrating CD8+ T cells (36). Figure 7C shows that PD-1 expression on CD8+ T cells from HNSCC tumors varied among patients and that expression of CD96 could be detected on a significant proportion of the PD-1 bright and PD-1 dim T cells (Figure 7C).
To test whether anti-CD96 mAbs are capable of costimulating tumor-infiltrating T cells, we isolated lymphocytes from HPV+ HNSCC tumors and measured T-cell proliferation in response to plate-bound anti-CD3 and anti-CD96. On average, the percentages of tumoral CD8+ T cells, CD4+ Tconv and CD4+ Tregs out of the CD3+ T cells were 35.6 ± 5.2, 42.7 ± 5.9 and 15.9 ± 2.2, respectively. The data presented in Figure 7D show that tumor-infiltrating T cells proliferated more extensively when cultured with anti-CD3 and anti-CD96 mAb compared to incubation with anti-CD3 and a control mAb, highlighting CD96 as a potential target to reinvigorate anti-cancer T cells.
Discussion
Despite the success of targeting the PD-1/PD-L1 inhibitory axis, there remains a strong incentive to discover additional immunomodulatory targets, driven primarily by the need to extend the response rate and durability offered by current treatments. Herein we provide data suggesting that mAbs targeting human CD96, a member of the IgSF expressed at low levels on naive T cells but strongly upregulated during T-cell activation, are potent stimulators of T-cell activation and proliferation. Although earlier studies, which primarily focused on murine NK cell responses, suggested that CD96 could function as an inhibitory receptor (14,15), our data using human T cells do not support this notion. Instead, we provide evidence that CD96 is a bona fide costimulatory receptor for human T cells. First, we showed that soluble 'Fc silent' mAbs that block the interaction of CD96 with its ligand CD155 did not exert functional effects (Figure 1), whereas the same mAbs were stimulatory when coated on tissue culture plastic (Figure 2).
Second, the conversion of 'Fc silent' anti-CD96 mAbs to Fc competent mAbs of the IgG1 subclass endowed them with the capacity to costimulate T cells without the need for coating ( Figure 3). Third, we demonstrated that the T cell costimulatory effects of soluble anti-CD96 IgG1 are critically dependent on crosslinking mediated through trans-binding to FcγRI (Figures 3 and 4). We interpret these results as evidence that immobilization of anti-CD96 mAbs either by coating on synthetic surfaces, or more physiologically through co-engagement of FcγRI, results in CD96 clustering on the T-cell surface, which subsequently leads to stimulation of intracellular signaling. Our findings are consistent with a recent study demonstrating that coupling of anti-CD96 mAbs to beads provided a costimulatory signal to T cells (17). Our data extend previous findings by demonstrating the importance of the antibody Fc domain in driving the functional activity of anti-CD96 mAbs. These findings should therefore guide future development of agonist anti-CD96 mAb aimed towards enhancing sub-optimal anti-tumor responses. In this context it is well known that anti-tumor T-cell responses are hindered by Tregs and therefore our data showing that anti-CD96 mAb was highly effective in overcoming suppression by Tregs is noteworthy ( Figure 5). Therefore, we anticipate that anti-CD96 mAbs remain capable of augmenting Tconv responses in spite of the presence of increasing numbers of Tregs within the tumor microenvironment.
Mechanistically CD96 costimulation could lessen Treg-mediated suppression in a number of ways. First, by augmenting IL-2 secretion ( Figure 6 and supplemental Figure 4) and the expression of CD25 on CD4+ and CD8+ Tconv (Figure 5), the ability of Tregs to deprive responder T cells of IL-2 (29) is likely to be reduced, thus increasing the bioavailability of IL-2 to Tconv. Second, our transcriptomic data and pathway analysis suggested convergence of CD96 signaling pathways with those downstream of CD3 and CD28 ( Figure 6 and supplemental Figure 5). This is predicted to reduce the dependency of Tconv on costimulation via CD80/86-CD28 and therefore could circumvent Treg-mediated suppression exerted by CTLA-4 expressing Tregs (29). Third, our transcriptomic analysis also showed that CD96 costimulation upregulated several costimulatory receptors and ligands, including OX40, GITR, 4-1BB, CD40 ligand and CD226, which could further lower the activation threshold of Tconv and impede Treg suppression.
Although our data offer plausible mechanisms of how Tconv resist suppression, an alternative hypothesis might be that anti-CD96 antibodies modulate Tregs directly as these cells also express CD96, a possibility that will be examined in future studies.
From the perspective of developing new anti-cancer immunotherapies, the finding that CD96 costimulation is able to augment the proliferation of intratumoral T cells from HPV+ HNSCC is particularly encouraging. A recent study showed that intratumoral HPV-specific PD-1+ CD8+ T cells can be distinguished by expression of TCF-1 and TIM-3, markers that are used to identify stem cell-like and terminally differentiated T cells, respectively (36). Interestingly, the authors of that study demonstrated that it is the stem cell-like CD8+ T cells that proliferate extensively upon in vitro stimulation with the cognate HPV peptide (36). Herein we showed that fraction of PD-1 bright as well as on PD-1 dim T cells. Therefore, it would be interesting to dissect the role of CD96 further by examining how CD96 costimulation impacts on different HPV-specific CD8+ T-cell subsets. Such studies will inform of more effective strategies to reinvigorate anti-cancer T cells in patients. Tables Table I.
|
v3-fos-license
|
2020-10-05T01:00:49.610Z
|
2020-10-01T00:00:00.000
|
222124891
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.aanda.org/articles/aa/pdf/2021/02/aa39574-20.pdf",
"pdf_hash": "bca71250266cbe6b6c65556b2c2b6b2ecf120a69",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43139",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "bca71250266cbe6b6c65556b2c2b6b2ecf120a69",
"year": 2020
}
|
pes2o/s2orc
|
HOLISMOKES -- IV. Efficient Mass Modeling of Strong Lenses through Deep Learning
Modelling the mass distributions of strong gravitational lenses is often necessary to use them as astrophysical and cosmological probes. With the high number of lens systems ($>10^5$) expected from upcoming surveys, it is timely to explore efficient modeling approaches beyond traditional MCMC techniques that are time consuming. We train a CNN on images of galaxy-scale lenses to predict the parameters of the SIE mass model ($x,y,e_x,e_y$, and $\theta_E$). To train the network, we simulate images based on real observations from the HSC Survey for the lens galaxies and from the HUDF as lensed galaxies. We tested different network architectures, the effect of different data sets, and using different input distributions of $\theta_E$. We find that the CNN performs well and obtain with the network trained with a uniform distribution of $\theta_E$ $>0.5"$ the following median values with $1\sigma$ scatter: $\Delta x=(0.00^{+0.30}_{-0.30})"$, $\Delta y=(0.00^{+0.30}_{-0.29})"$, $\Delta \theta_E=(0.07^{+0.29}_{-0.12})"$, $\Delta e_x = -0.01^{+0.08}_{-0.09}$ and $\Delta e_y = 0.00^{+0.08}_{-0.09}$. The bias in $\theta_E$ is driven by systems with small $\theta_E$. Therefore, when we further predict the multiple lensed image positions and time delays based on the network output, we apply the network to the sample limited to $\theta_E>0.8"$. In this case, the offset between the predicted and input lensed image positions is $(0.00_{-0.29}^{+0.29})"$ and $(0.00_{-0.31}^{+0.32})"$ for $x$ and $y$, respectively. For the fractional difference between the predicted and true time delay, we obtain $0.04_{-0.05}^{+0.27}$. Our CNN is able to predict the SIE parameters in fractions of a second on a single CPU and with the output we can predict the image positions and time delays in an automated way, such that we are able to process efficiently the huge amount of expected lens detections in the near future.
Introduction
Strong gravitational lensing has become a very powerful tool for probing various properties of the Universe. For instance, galaxy-galaxy lensing can help to constrain the total mass of the lens and moreover, assuming a mass-to-light (M/L) ratio for the baryonic matter, also its dark matter (DM) fraction. By combining lensing with other methods like measurements of the lens' velocity dispersion (e.g., Barnabè et al. 2011, 2012; Yıldırım et al. 2020) or the galaxy rotation curves (e.g., Hashim et al. 2014; Strigari 2013), the dark matter can be better disentangled from the baryonic component and a 3D (deprojected) model of the mass density profile can be obtained. Such profiles are very helpful for probing cosmological models (e.g., Davies et al. 2018; Eales et al. 2015; Krywult et al. 2017).
Another application of strong lensing is to probe high-redshift sources thanks to the lensing magnification (e.g., Dye et al. 2018; Lemon et al. 2018; McGreer et al. 2018; Rubin et al. 2018; Salmon et al. 2018; Shu et al. 2018). In the last years, huge efforts were spent on reconstructing the surface brightness distribution of lensed extended sources. Together with redshift and kinematic measurements, these observations contain information about the evolution of galaxies at higher redshifts. If the mass profile of the lens is well constrained, the original unlensed morphology can be reconstructed (e.g., Warren & Dye 2003; Suyu et al. 2006; Nightingale et al. 2018; Rizzo et al. 2018; Chirivì et al. 2020).
In the case of a transient source like a quasar or supernova (SN), measurements of the time delay between multiple images can be used to constrain the value of the Hubble constant H_0 (e.g., Refsdal 1964; Chen et al. 2019; Rusu et al. 2020; Wong et al. 2020; Shajib et al. 2020) and thus help to assess the 4.4σ tension between the cosmic microwave background (CMB) analysis that gives H_0 = (67.36 ± 0.54) km s^−1 Mpc^−1 for flat Λ cold dark matter (ΛCDM; Planck Collaboration et al. 2018) and the local distance ladder with H_0 = (74.03 ± 1.42) km s^−1 Mpc^−1 (SH0ES programme; Riess et al. 2019).
Since these strong lens observations are very powerful, several large surveys, including the Sloan Lens ACS (SLACS) survey (Shu et al. 2017), the CFHTLS Strong Lensing Legacy Survey (SL2S; Cabanac et al. 2007; Sonnenfeld et al. 2015), the Sloan WFC Edge-on Late-type Lens Survey (SWELLS; Treu et al. 2011), the BOSS Emission-Line Lens Survey (BELLS; Brownstein et al. 2012; Shu et al. 2016; Cornachione et al. 2018), the Dark Energy Survey (DES; Dark Energy Survey Collaboration et al. 2005; Tanoglidis et al. 2020), the Survey of Gravitationally-lensed Objects in HSC Imaging (SuGOHI; Sonnenfeld et al. 2018a; Wong et al. 2018; Chan et al. 2020; Jaelani et al. 2020), and surveys in the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; e.g., Lemon et al. 2018; Cañameras et al. 2020), have been conducted to find lenses. So far, several thousand lenses have been detected, mainly in the lower redshift regime. However, based on newer upcoming surveys like the Rubin Observatory Legacy Survey of Space and Time (LSST, Ivezic et al. 2008) located in Chile, which will target around 20,000 deg^2 of the southern hemisphere in six different filters (u, g, r, i, z, y), together with the Euclid imaging survey from space operated by the European Space Agency (ESA, Laureijs et al. 2011), we expect billions of galaxy images containing on the order of a hundred thousand lenses (Collett 2015).
To deal with this huge number of images, there are ongoing efforts to develop fast and automated algorithms to find lenses in the first place. These methods are based on different identification properties, for instance geometrical quantification (Bom et al. 2017; Seidel & Bartelmann 2007), spectroscopic analysis (Baron & Poznanski 2017; Ostrovski et al. 2017), or color cuts (Gavazzi et al. 2014; Maturi et al. 2014). Moreover, Convolutional Neural Networks (CNNs) have been used extensively in gravitational lens detection (e.g., Jacobs et al. 2017; Petrillo et al. 2017; Schaefer et al. 2018; Lanusse et al. 2018; Metcalf et al. 2019; Cañameras et al. 2020; Huang et al. 2020), as these do not require any measurements of the lens' properties. Once a CNN is trained, it can classify huge amounts of images in a very short time and is thus very efficient. Nonetheless, such CNNs have limitations like completeness or accurate grading, and the performance strongly depends on the training set design, as it encodes an effective prior (in the case of supervised learning). In this regard, unsupervised or active learning might be promising future avenues for lens finding.
However, these methods only find the lenses, and a mass model is necessary for further studies. Mass models of gravitational lenses are often described by parametrized profiles, whose parameters are optimized for instance via Markov chain Monte Carlo (MCMC) sampling (e.g., Jullo et al. 2007; Suyu & Halkola 2010; Sciortino et al. 2020; Fowlie et al. 2020). Such techniques are very time and resource consuming and are thus difficult to scale up to the upcoming amount of data. With the success of CNNs in image processing, Hezaveh et al. (2017) showed the use of CNNs to estimate the mass model parameters of a Singular Isothermal Ellipsoid (SIE) profile; further work investigated error estimation, the analysis of interferometric observations (Morningstar et al. 2018), and source surface brightness reconstruction with recurrent inference machines (RIM; Morningstar et al. 2019). While they mainly considered single-band images and subtracted the lens light before processing the image with the CNN, Pearson et al. (2019) presented a CNN to model the image without lens light subtraction. However, for all deep learning approaches one needs a data set containing the images and the corresponding parameter values for training, validation, and testing of the network. As there are not that many real lensed galaxies known, both groups mock up lenses for their CNNs.
We recently initiated the Highly Optimized Lensing Investigations of Supernovae, Microlensing Objects, and Kinematics of Ellipticals and Spirals (HOLISMOKES) programme (Suyu et al. 2020, hereafter HOLISMOKES I). After presenting our lens search project (hereafter HOLISMOKES II), we present in this paper a CNN for modeling strongly lensed galaxies with ground-based imaging, taking advantage of four different filters and not applying lens light subtraction beforehand. In contrast to Pearson et al. (2019), we use a mock data set based on real observed galaxy cutouts, since the performance of the CNN on real systems will be optimal when the mock systems used for training are as close to real lens observations as possible. Our mock lens images contain, by construction, realistic line-of-sight objects as well as realistic lens and source light distributions in the image cutouts. We use the Hyper Suprime-Cam (HSC) Subaru Strategic Program (SSP) images together with redshift and velocity dispersion measurements from the Sloan Digital Sky Survey (SDSS) for the lens galaxies, and images together with redshifts from the Hubble Ultra Deep Field (HUDF) survey for the sources (Beckwith et al. 2006; Inami et al. 2017).
The outline of the paper is as follows. We describe in Sec. 2 how we simulate our training data, and we give a short introduction and overview of the network architecture in Sec. 3. The main networks are presented in Sec. 4, and we give details of further tests in Sec. 5. We also consider the image position and time delay differences in Sec. 6 for a performance test and compare to other modeling techniques in Sec. 7. We summarize and conclude our results in Sec. 8. Throughout this work, we assume a flat ΛCDM cosmology with a Hubble constant H_0 = 72 km s^−1 Mpc^−1 (Bonvin et al. 2017) and Ω_M = 1 − Ω_Λ = 0.32 (Planck Collaboration et al. 2018). Unless specified otherwise, each quoted parameter estimate is the median of its one-dimensional marginalized posterior probability density function, and the quoted uncertainties show the 16th and 84th percentiles (that is, the bounds of a 68% credible interval).
Simulation of strongly lensed images
For training a neural network one needs, depending on the network size, tens of thousands up to millions of images together with the expected network output, which in our case are the values of the SIE profile parameters corresponding to each image. Since there are far too few known lens systems, we need to mock up lens images. While previous studies are based on partly or fully generated light distributions (e.g., Hezaveh et al. 2017; Perreault Levasseur et al. 2017; Pearson et al. 2019), we aim to produce more realistic lens images by using real observed images of galaxies and only simulating the lensing effect with our own routine. We work with the four HSC filters g, r, i, z (matched to the HST filters F435W (λ = 4343.4 Å), F606W (λ = 6000.8 Å), F775W (λ = 7702.2 Å), and F850LP (λ = 9194.4 Å), respectively) to give the network color information to better distinguish between lens and source galaxies. The HSC images in these filters are very similar to the expected image quality of LSST, such that our tests and findings will also hold for LSST. Therefore, this work is a direct preparation and an important step for modeling the expected 100,000 lens systems which will be detected with LSST in the near future.
Lens galaxies from HSC
For the lenses, we use HSC SSP images from the second public data release (PDR2; Aihara et al. 2019) with a pixel size of 0.168″. For calculating the axis ratio q_light and position angle θ_light of the lens, we use the second brightness moments calculated in the i band, since redder filters follow the stellar mass better but the S/N is substantially lower in the z band compared to the i band. We crossmatch the HSC catalog with the SDSS catalog to use only images of galaxies for which we have SDSS spectroscopic redshifts and velocity dispersions. With this selection, we end up with a sample containing 145,170 galaxies that is dominated by LRGs. We show in Figure 1 in gray a histogram of the lens redshifts used for the simulation, together with the distributions of the mock samples discussed in Sec. 4.
To describe the mass distribution of the lens, we adopt a SIE profile (Barkana 1998), such that the convergence (dimensionless surface mass density) can be expressed as
κ(x, y) = θ_E / (2 r),
with elliptical radius
r = sqrt(q x^2 + y^2 / q),
where x and y are angular coordinates on the lens plane with respect to the lens center. In this equation θ_E denotes the Einstein radius and q the axis ratio. The mass distribution is rotated by the position angle θ. The Einstein radius is obtained from the velocity dispersion v_disp with
θ_E = 4π (v_disp^2 / c^2) (D_ds / D_s),
where c is the speed of light, and D_ds and D_s are the angular diameter distances between the lens (deflector) and the source and between the observer and the source, respectively. The distribution of the velocity dispersions is shown in Figure 1 (bottom panel, gray histogram). We compute the deflection angles of the SIE with the lensing software Glee (Suyu & Halkola 2010; Suyu et al. 2012). Based on the second brightness moments of the lens light distribution in the i band, the axis ratio q_light and position angle θ_light are obtained internally in our simulation code. Based on several studies (e.g., Sonnenfeld et al. 2018b; Loubser et al. 2020), the light traces the mass relatively well but not perfectly. Therefore, we add randomly drawn gaussian perturbations to the light parameters, with a gaussian width of 0.05 for the lens center, 0.05 for the axis ratio, and 0.17 radians (10 degrees) for the position angle, and adopt the resulting parameter values for the lens mass distribution. In case the axis ratio of the mass, q (i.e., with the gaussian perturbation), is above 1, we draw a second realization of the gaussian noise and otherwise set it to exactly 1.
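The two relations above can be evaluated in a few lines of Python; the sketch below uses astropy for the angular diameter distances and the cosmology stated in the introduction, and it is an illustration rather than the simulation code used in this work (the function names are mine).

```python
import numpy as np
from astropy import constants as const
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=72.0, Om0=0.32)  # cosmology adopted in this work

def einstein_radius_arcsec(v_disp_kms, z_lens, z_source):
    """theta_E = 4 pi (v_disp / c)^2 D_ds / D_s, converted to arcseconds."""
    d_s = cosmo.angular_diameter_distance(z_source)
    d_ds = cosmo.angular_diameter_distance_z1z2(z_lens, z_source)
    theta_rad = 4.0 * np.pi * (v_disp_kms * 1e3 / const.c.value) ** 2 * (d_ds / d_s).value
    return np.degrees(theta_rad) * 3600.0

def sie_convergence(x, y, theta_e, q):
    """kappa(x, y) = theta_E / (2 sqrt(q x^2 + y^2 / q)) for the SIE profile."""
    return theta_e / (2.0 * np.sqrt(q * x**2 + y**2 / q))
```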
While the simulation code assumes a parametrization in terms of the axis ratio q and position angle θ, we parametrize for our network in terms of the complex ellipticity e_c, which we define as
e_c = A e^{2iθ} = e_x + i e_y,
with
e_x = [(1 − q^2)/(1 + q^2)] cos(2θ) and e_y = [(1 − q^2)/(1 + q^2)] sin(2θ),
such that the amplitude is A = (1 − q^2)/(1 + q^2). The back transformation is given by
q = sqrt((1 − A)/(1 + A)) and θ = (1/2) arctan(e_y / e_x),
with A = sqrt(e_x^2 + e_y^2). This is in agreement with previous CNN applications to lens modeling (Pearson et al. 2019).
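A small helper pair implementing this parametrization and its inverse might look as follows; this is a sketch following the reconstruction given above, not code from this work.

```python
import numpy as np

def q_theta_to_ellipticity(q, theta):
    """Map axis ratio q and position angle theta (rad) to (e_x, e_y)."""
    amp = (1.0 - q**2) / (1.0 + q**2)
    return amp * np.cos(2.0 * theta), amp * np.sin(2.0 * theta)

def ellipticity_to_q_theta(ex, ey):
    """Back transformation from (e_x, e_y) to (q, theta)."""
    amp = np.hypot(ex, ey)
    q = np.sqrt((1.0 - amp) / (1.0 + amp))
    theta = 0.5 * np.arctan2(ey, ex)
    return q, theta
```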
Sources from HUDF
The images for the sources are taken from the HUDF, for which spectroscopic redshifts are also known (Beckwith et al. 2006; Inami et al. 2017). The cutouts are 10″ × 10″ with a pixel size of 0.03″. This survey is chosen for its high spatial resolution, and we can adopt the images without point-spread function (PSF) deconvolution. Moreover, it contains high-redshift galaxies, such that we can achieve a realistic lensing effect. The 1,323 relevant galaxies are extracted with Source Extractor (Bertin & Arnouts 1996), since the lensing effect is redshift dependent and we would otherwise lens the neighboring objects as if they were all at the same redshift, which would lead to incorrect lensing features. We show a histogram of the source redshifts in Figure 2 (gray histogram). Since we select a background source randomly (see Sec. 2.3 for details), the source galaxies can be used multiple times within one mock sample, and thus the redshift distribution varies slightly between the different samples (colored histograms, see details in Sec. 4).
Mock lens systems
For training our networks we use mock images based on real observed galaxies, generating only the lensing effect. We use HSC galaxies as lenses (see Sec. 2.1 for details) and HUDF galaxies as background objects (see Sec. 2.2) to obtain mocks that are as realistic as possible. Figure 3 shows a diagram of the simulation pipeline. The input consists of three images: the lens, the (unlensed) source, and the lens PSF image (top row). Together with the provided redshifts of source and lens, as well as the velocity dispersion for calculating the Einstein radius with equation (3), the source image can be lensed onto the lens plane (second row). For this we place a random source from our catalog at a random position in a specified region behind the lens and accept this position if we obtain a strongly lensed image. Since the source images have previously been extracted, we use the brightest pixel in the i band to center the source. We have also implemented the option to keep only one of the two strong lens configurations, either quadruply or doubly imaged galaxies, classified based on the image multiplicity of the lensed source center. We also set a peak brightness threshold for the arcs in comparison to the background noise, which is the lowest root-mean-square (RMS) value of a 10% × 10% square (rounded to an integer number of pixels) placed in the four corners of the whole HSC cutout. The reason for calculating the RMS for each corner separately and then picking the lowest value is that there might be line-of-sight objects in the corners, which would raise the RMS values. To avoid contamination of the background estimation from the lens, we use 40″ × 40″ image cutouts such that each corner is 4″ × 4″. In the next step, the lensed source image with high resolution is convolved with the sub-sampled PSF of the lens, which is provided by HSC SSP PDR2 for each image separately. After binning up the high-resolution lensed, convolved source image to the HSC pixel size and accounting for the different photometric zeropoints of the source telescope, zp_sr, and lens telescope, zp_ls, which gives a factor of 10^{0.4(zp_ls − zp_sr)}, the lensed source image is obtained as if it had been observed through the HSC instrument (third row in Figure 3), i.e., on the HSC 0.168″/pixel resolution. We neglect at this point the additional Poisson noise for the lensed arcs. Finally, the original lens and the mock lensed source images are combined, which results in the final image (fourth row) that is cropped to a size of 64 × 64 pixels (10.8″ × 10.8″). For better illustration, a color image based on the filters g, r, and i is also shown, but we generate all mock images in four bands, which we use for the network training. We show more example images based on gri filters in Figure 4.
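The convolution, rebinning, and zeropoint steps described above can be summarized in a short sketch. The function below is a hypothetical helper rather than the pipeline code, and it assumes an integer sub-sampling factor between the high-resolution grid and the HSC pixel scale.

```python
import numpy as np
from scipy.signal import fftconvolve

def lensed_source_to_hsc_pixels(lensed_hires, psf_hires, bin_factor, zp_lens, zp_source):
    """Convolve the high-resolution lensed source with the lens PSF, bin to the
    HSC pixel scale (conserving flux by summing), and rescale for the zeropoint
    difference with the factor 10**(0.4 * (zp_lens - zp_source))."""
    convolved = fftconvolve(lensed_hires, psf_hires, mode="same")
    ny, nx = convolved.shape
    ny -= ny % bin_factor
    nx -= nx % bin_factor
    binned = convolved[:ny, :nx].reshape(
        ny // bin_factor, bin_factor, nx // bin_factor, bin_factor
    ).sum(axis=(1, 3))
    return binned * 10.0 ** (0.4 * (zp_lens - zp_source))
```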
Fig. 4: Examples of strong gravitational lens systems mocked up with our simulation code, using HUDF galaxies as sources behind HSC galaxies as lenses. Each image cutout is 10.8″ × 10.8″.

We test the effect of different assumptions on the data set, such as splitting into quads-only or doubles-only, or different assumptions on the distribution of the Einstein radii, since we found this to be crucial for the network performance. For this, we generate with this pipeline new, independent mock images which are based on the same lens and source images, but different combinations and alignments. The details of the different samples and the networks trained on them will be discussed further in Sec. 4. For the quads-only set and the set with a higher lower limit on the Einstein radius of 2″, we use a modification of the conventional data augmentation in deep learning. In particular, we rotate only the lens image before adding the random lensed source image, rather than rotating the whole final image (as is normally done for data augmentation). Thus the ground-truth values are also not exactly the same, given the change in position angle and another background source with a different location and redshift.
Neural Networks and their architecture
Neural networks (NN) are extremely powerful tools for a wide range of tasks and have thus been broadly used and explored in recent years. Additionally, the computational time can be reduced notably compared to other methods. There are generally two types of NN tasks: (1) classification, where the ground truth consists of different labels to distinguish between the different classes, and (2) regression, where the ground truth consists of a set of parameters with specific values. The latter is the kind we use here, i.e., the network predicts a numerical value for each of the five different SIE parameters (x, y, e_x, e_y and θ_E).
Depending on the problem the network is meant to solve, there are several different types of networks. Since we are using images as data input, typically convolutional layers followed by fully connected (FC) layers are used (e.g., Hezaveh et al. 2017; Perreault Levasseur et al. 2017; Pearson et al. 2019). The detailed architecture depends on attributes such as the specific task, the size of the images, or the size of the data set. We have tested different architectures and found an overall good network performance with two convolutional layers followed by three FC layers, with no significant improvement from the other network architectures. A sketch of this is shown in Figure 5. The input consists of four different filter images for each lens system, each with a size of 64 × 64 pixels. The convolutional layers have a stride of 1 and a kernel size of 5 × 5 × C, with C = 4 for the first layer and C = 6 for the second layer, respectively. Each convolutional layer is followed by max pooling of size f × f = 2 × 2 and stride 2. After the two convolutional layers, we obtain a data cube of size 13 × 13 × 16, which is then passed through the FC layers after flattening to finally obtain the five output values.
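For concreteness, a minimal PyTorch sketch of this architecture is given below. The two convolutional and pooling stages reproduce the stated layer sizes; the widths of the two hidden FC layers are not specified in the text and are therefore placeholder assumptions.

```python
import torch
import torch.nn as nn

class LensModelCNN(nn.Module):
    """Sketch of the described network: two conv layers + three FC layers.
    Input: 4-band 64x64 cutouts; output: 5 SIE parameters (x, y, e_x, e_y, theta_E).
    The hidden FC widths (128, 64) are assumptions, not taken from the paper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 6, kernel_size=5, stride=1),   # 64x64x4 -> 60x60x6
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),      # -> 30x30x6
            nn.Conv2d(6, 16, kernel_size=5, stride=1),  # -> 26x26x16
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),      # -> 13x13x16
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),                                # -> 13*13*16 = 2704
            nn.Linear(13 * 13 * 16, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 5),                            # x, y, e_x, e_y, theta_E
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# Example: a batch of 32 four-band cutouts -> output of shape (32, 5)
out = LensModelCNN()(torch.randn(32, 4, 64, 64))
```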
Independent of the exact network architecture, the network can contain hundreds of thousands (or more) of neurons. While the values of the weight parameters and bias of each neuron are initially random, they are updated during the training. To assess the network performance after the training, one splits the data set into three samples: the training, the validation and the test sets. We further divide those sets into random batches of size N. In each iteration the network predicts the output values for one batch (forward propagation), and after running over all batches from the training and validation sets, one epoch is finished. The error, which is called the loss, is obtained for each batch with the loss function, for which we use the mean-square error (MSE) defined as
L = \frac{1}{N\,p} \sum_{k=1}^{N} \sum_{l=1}^{p} w_l \left( \eta^{\rm tr}_{k,l} - \eta^{\rm pred}_{k,l} \right)^2 ,
where η^tr_{k,l} and η^pred_{k,l} denote, respectively, the l-th true and predicted parameter, in our case from {x, y, e_x, e_y, θ_E}, of lens system k, and p denotes the number of output parameters. We incorporate in our loss function L weighting factors w_l, which are normalized such that \sum_{l=1}^{p} w_l = p holds. This gives a weighting factor of 1 for all parameters if they are all weighted equally.
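A minimal PyTorch sketch of this weighted MSE is given below; the renormalization of the weights so that they sum to p follows the description above, while the function name is ours.

```python
import torch

def weighted_mse_loss(pred, truth, weights):
    """Weighted MSE over a batch, as described in the text (a sketch).

    pred, truth : tensors of shape (N, p) with p = 5 SIE parameters
    weights     : tensor of shape (p,); renormalized so that sum(weights) = p,
                  which gives w_l = 1 when all weights are equal.
    """
    p = pred.shape[1]
    w = weights * p / weights.sum()          # enforce sum_l w_l = p
    return (w * (pred - truth) ** 2).mean()  # average over batch and parameters

# Example: up-weight the Einstein radius (last parameter) by a factor of 5
w = torch.tensor([1.0, 1.0, 1.0, 1.0, 5.0])
loss = weighted_mse_loss(torch.randn(32, 5), torch.randn(32, 5), w)
```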
The loss value of that batch is then propagated to the weights and biases (back propagation) for an update based on a stochastic gradient descent algorithm to minimize the loss. This procedure is repeated in each epoch, first for all batches of the training set, and an average loss is obtained for the whole training set. Afterwards those steps are repeated for all batches of the validation set, while no update of the neurons is done, and an average loss for the validation set is obtained as well. The validation loss shows whether the network improved in that epoch or whether a decreasing training loss is related to overfitting. A network is overfitting if it predicts the values for the training data better than those for the validation data. After each epoch we reshuffle our whole training data to obtain a better generalization. This concludes one epoch and is repeated iteratively to obtain a network with optimal accuracy. This whole training corresponds to one so-called cross-validation run, where several cross-validation runs are performed by exchanging the validation set with another subset of the training set. For example, if the training set and validation set form 5 subsets {A, B, C, D, E}, then we can have 5 independent training runs where in each run the validation set is one of these 5 subsets and the training set contains the remaining 4 subsets. After the multiple runs, one can determine the optimal number of epochs for training by locating the epoch with the minimal average validation loss across the multiple runs. This procedure helps to minimize potential bias towards certain types of lenses from a potentially unbalanced single split. The neural network trained on all five sets {A, B, C, D, E} up to that epoch can then be applied to the test set, which contains data the network has never seen before. In our case, we used ∼56% of the data set as training set, ∼14% as validation set, and ∼30% as test set, such that we have a 5-fold cross-validation for each network.
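The selection of the optimal epoch from the cross-validation runs can be sketched as follows; this is a minimal illustration with dummy loss values, and the helper name is ours.

```python
import numpy as np

def best_epoch_from_cv(val_losses):
    """Pick the epoch with the lowest validation loss averaged over the
    cross-validation runs (a sketch of the described procedure).

    val_losses : array of shape (n_runs, n_epochs), validation loss per run and epoch
    """
    mean_loss = np.asarray(val_losses).mean(axis=0)  # average over the CV runs
    best = int(np.argmin(mean_loss))
    return best, mean_loss[best]

# Example with 5 cross-validation runs of 300 epochs each
best_epoch, best_loss = best_epoch_from_cv(np.random.rand(5, 300))
```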
Results
For training our modeling network we mock up lensing systems based on real observed galaxies with our simulation pipeline described in Sec. 2. Each lensing system is simulated in the four different filters griz of HSC to give the network color information to better distinguish between the lens galaxy and the lensed arcs. The network architecture assumes, as described in Sec. 3 in detail, images with a size of 64 × 64 pixels, which corresponds to around 10″ × 10″.
During our network testing, we found that the distribution of Einstein radii in the training set is very important, especially as this is a key parameter of the model. Therefore we trained networks under the assumption of different underlying data sets, e.g., a lower limit on the Einstein radius for the simulations or a different distribution of Einstein radii. We further tested the network performance when limiting to a specific configuration, i.e. only doubles or quads. We give an overview of the different data set assumptions in Tab. 1.
To find the best network for our specific problem, we test the network performance with several different variations of the hyperparameters of the network. Independent of the data set, we train each cross-validation run for 300 epochs, and apart from a few checks with different values, we fix the weight decay to 0.0005 and the momentum to 0.9. For the learning rate, batch size, and the initializations of the neurons, we have done a grid search, varying the learning rate r_learn ∈ [0.1, 0.05, 0.01, 0.008, 0.005, 0.001, 0.0005, 0.0001], the batch size (32 or 64 images per batch), and exploring three different network initializations. For the weighting factors of the contributions to the loss we test mainly two options: either all parameters contribute equally (i.e. w_l = 1 ∀ l in eq. 6) or the contribution of the Einstein radius is a factor of 5 higher (w_θE = 5). The best hyperparameter values depend on the assumed data set and these values are listed in Tab. 1.
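A schematic of such a grid search is sketched below; `train_and_validate` is a hypothetical placeholder for one full cross-validation run and here only returns a dummy loss, and the seed values are assumptions.

```python
import random
from itertools import product

def train_and_validate(**hyperparams):
    """Placeholder for one full cross-validation run with the given
    hyperparameters; here it just returns a dummy validation loss."""
    return random.random()

learning_rates = [0.1, 0.05, 0.01, 0.008, 0.005, 0.001, 0.0005, 0.0001]
batch_sizes = [32, 64]
seeds = [0, 1, 2]            # three network initializations (assumed seed values)
w_theta_E_options = [1, 5]   # weighting of the Einstein radius in the loss

best = None
for lr, bs, seed, w_te in product(learning_rates, batch_sizes, seeds, w_theta_E_options):
    val_loss = train_and_validate(lr=lr, batch_size=bs, seed=seed, w_theta_E=w_te,
                                  epochs=300, weight_decay=5e-4, momentum=0.9)
    if best is None or val_loss < best[0]:
        best = (val_loss, {"lr": lr, "batch_size": bs, "seed": seed, "w_theta_E": w_te})
```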
We present in the following subsections our CNN modelling results for various data sets.
4.1. Naturally distributed Einstein radii with lower limit 0.5″

For this network we use 65,472 mock lens images simulated following the procedure described in Sec. 2. Here we assume a lower limit on the Einstein radii of 0.5″, as otherwise the lensed source is totally blended with the lens and not resolvable given the average seeing and image quality. The resulting redshift distributions are shown as the blue histograms for the lens in Figure 1 (top panel) and for the source in Figure 2. The lens redshift peaks at z_d ∼ 0.5. Concerning the possible strong-lensing configurations, the data set is dominated by doubles, as expected. In addition, systems with smaller Einstein radii are more numerous than those with larger Einstein radii, as expected given the lens mass distribution, although the velocity dispersion, which is shown in Figure 1 (bottom panel), peaks at around v_disp ∼ 280 km s−1 and thus tends to include more massive galaxies than the input catalog (gray histogram). The distribution of the Einstein radius is shown in Figure 6 on the left panel; the red histogram depicts the true Einstein radii and the blue one the predicted distribution. On the right panel we show the correlation between the true Einstein radius θ^tr_E (x-axis) and the predicted Einstein radius θ^pred_E (y-axis). The red line shows the median and the gray bands mark the 1σ (16th to 84th percentile) and 2σ (2.5th to 97.5th percentile) ranges, respectively. The dashed black line is the 1:1 line for reference. We do not show this for the other parameters (lens center and ellipticity) as their performance is very similar to that presented in Sec. 4.3, where we assume a uniform distribution of Einstein radii.

Note on Tab. 1: The first and second columns indicate if quads and/or doubles are included in the data set. The parameter θ_E,min represents the lower limit on the Einstein radius in the simulation, and w_θE is the weighting factor of the Einstein radius in the loss function. The other parameters (lens center, ellipticity) are always weighted by a factor of 1 and the sum of all five weighting factors is normalized to the number of parameters. The fifth and sixth columns give the value of the loss of the test set and the epoch with the best validation loss. This is followed by the specific hyperparameters: learning rate r_learn, batch size N, and seed for the random number generator.
We see that the network recovers the Einstein radius better for lens systems with lower image separation than with high image separation (θ_E ≳ 2″), which is at first counterintuitive. If the lensed images are further separated, they are better resolved and less strongly blended with the lens, and we would expect better recovery of the Einstein radii by the network. The worse network performance at larger Einstein radii can therefore only be explained by the relatively low number of these systems in the training data. We have more than two orders of magnitude more lens systems with θ_E ∼ 0.5″ than with θ_E ∼ 2.0″. Therefore the network is trained to predict a small Einstein radius most of the time and a larger Einstein radius only in a negligible fraction of cases. Since the lens systems with larger image separation are very interesting for a wide range of scientific applications, it is desirable to improve the network performance specifically on those lens systems. Therefore we test a network with the same data set where the Einstein radius difference contributes a factor of 5 more to the loss than the other parameters. For this weighted network, the prediction performance is very similar for the lens center and ellipticity, but slightly better for the Einstein radius. If we increase the contribution of the Einstein radius further, the performance on the other parameters worsens notably.
As a further comparison of the ground truth with the predicted values of the test set, we show in Figure 7 the differences as normalized histograms (bottom row) and the 2D probability distributions (blue), where we find no strong correlation between the five parameters. The obtained median values with 1σ uncertainties for the different parameters are, respectively, (0.00 +0.31 −0.30)″ for ∆x, (−0.01 +0.29 −0.31)″ for ∆y, 0.00 +0.08 −0.09 for ∆e_x, 0.01 +0.09 −0.08 for ∆e_y, and (0.02 +0.21 −0.18)″ for ∆θ_E, where ∆ denotes the difference between the predicted and ground-truth values. As an example, a shift of e_x = 0.3 to e_x = 0.15 with fixed e_y = 0 results in a shift from q = 0.73 to q = 0.86.
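The quoted conversion between the complex ellipticity and the axis ratio can be reproduced with the following sketch, assuming the common convention e = (1 − q²)/(1 + q²); this convention is an assumption on our part, chosen because it reproduces the quoted numbers.

```python
import math

def axis_ratio(e_x, e_y):
    """Convert the complex ellipticity (e_x, e_y) to the axis ratio q,
    assuming e = (1 - q**2) / (1 + q**2), i.e. q = sqrt((1 - e) / (1 + e))."""
    e = math.hypot(e_x, e_y)
    return math.sqrt((1.0 - e) / (1.0 + e))

print(axis_ratio(0.30, 0.0))  # ~0.73, matching the example in the text
print(axis_ratio(0.15, 0.0))  # ~0.86
```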
Finally, we show in Figure 8 the difference in Einstein radii as a function of the logarithm of the ratio between the lensed source intensity I_s and the lens intensity I_l determined in the i band, which we hereafter refer to as the brightness ratio. In the top right panel, we show the distribution of the brightness ratio. The lens intensity is defined as the sum of all the pixel values in the 64 pixel × 64 pixel cutout of the lens, such that it is slightly overestimated due to light contamination from surrounding objects. The distribution peaks around −2 in base-10 logarithm, which means that the lensed source flux is a factor of 100 below that of the lens. The bottom-left plot shows the median with 1σ values of the Einstein radius differences for each brightness ratio bin. Focusing on the blue curve for this section, we find a bias in the Einstein radius which is driven by the small lensing systems with θ_E ≲ 0.8″ (compare Figure 6). Excluding these small lensing systems, we show the corresponding plot in the lower right panel. With this limitation, we find no bias anymore and obtain a median with 1σ values of (0.00 +0.17 −0.14)″ for the Einstein radius difference. We find a slight improvement of the performance with increasing brightness ratio for both the full sample (bottom-left panel) and the sample with θ_E > 0.8″ (bottom-right panel).
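A minimal sketch of the brightness ratio computation described above is given below; the function name and the random example data are ours.

```python
import numpy as np

def log_brightness_ratio(lensed_source_img, lens_img):
    """Base-10 log of the ratio between the total lensed-source flux and the
    total lens flux in the i band, summed over the 64x64-pixel cutouts."""
    I_s = np.sum(lensed_source_img)  # lensed source intensity
    I_l = np.sum(lens_img)           # lens intensity (includes nearby objects)
    return np.log10(I_s / I_l)

# Example with random positive cutouts; a value near -2 means the source is
# about a factor of 100 fainter than the lens
ratio = log_brightness_ratio(np.random.rand(64, 64) * 0.01, np.random.rand(64, 64))
```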
To further improve the network performance for wide-separation lenses, we train separate networks for lens systems with Einstein radius θ_E > 2.0″ in Sec. 4.2, and for lens systems where we artificially boost the number of lenses at the high end of θ_E in Sec. 4.3.
4.2. Naturally distributed Einstein radii with lower limit 2.0″

Since the network presented in Sec. 4.1 cannot recover large Einstein radii (θ_E ≳ 2″) well, we test the performance of a network specialized for the high end of the distribution and set the lower limit to θ_E,min = 2″.

Fig. 7: Comparison of the performance of the three networks described in Sec. 4 (θ_E,min = 0.5″ uniformly distributed, θ_E,min = 0.5″ naturally distributed, and θ_E,min = 2.0″ naturally distributed). All samples include doubles and quads and a weighting factor of w_θE = 5, but different Einstein radius distributions or lower limits on the Einstein radius as indicated in the legend. In the lowest row we show the normalized histograms of the difference between predicted values and ground truth for the five parameters, and above them the 2D correlation distributions (1σ contours solid, 2σ contours dotted).

Because of the higher limit on the Einstein radii, the velocity dispersion (see bottom panel, orange histogram in Figure 1) is shifted towards the high end, which corresponds to more massive galaxies. We also find that the lens and source redshifts, shown as orange histograms in Figure 1 and Figure 2, respectively, tend towards slightly higher values. Since we use the natural distribution of Einstein radii as in Sec. 4.1, the image-separation distribution is again bottom-heavy and the number of mock lens systems (25,623) is smaller, as shown in Figure 9. From the blue (predicted) histogram, we see that the true distribution (red histogram) is well recovered. On the right panel in Figure 9, we show the correlation of predicted and true Einstein radii. The red line, which follows the diagonal dashed line quite well, shows the median. The gray shaded regions visualize the 1σ and 2σ ranges. We find that the network performs much better for θ_E ∼ 2″ than the network trained on the full range (Sec. 4.1). However, this is again due to the dropping number of lens systems towards θ_E ∼ 4″, and the scatter increases dramatically for the high end of the data set.
We further show the 1D and 2D probability distributions for this network in Figure 7 (orange), as well as the histogram of the brightness ratio and the difference of the Einstein radii as a function of the brightness ratio in Figure 8. While the performance for the lens center and complex ellipticity is very similar to the network presented in Sec. 4.1, we achieve an improvement for the Einstein radius. This is expected as the network is specifically trained for lens systems with large image separation. As we see from Figure 8, the larger systems do not have a higher brightness ratio on average, as one might expect. As we saw already, the network performs notably better on the Einstein radii over the whole brightness ratio range. We no longer overpredict the Einstein radius for log(I_s/I_l) ≲ −2.5, and the 1σ values are also smaller.
4.3. Uniformly distributed Einstein radii with lower limit 0.5″

Because of the extreme decrease in the number of systems towards large image separation, we test a network trained on a more uniformly distributed sample. For this, we generate more lens systems with high image separation by rotating the lens image by nπ/2 with n ∈ [0, 1, 2, 3]. Here we do not reuse the same lens in the same rotation to avoid producing multiple images of lens systems that are too similar. We note that the background source and position are always different such that the lensing effect varies (see Sec. 2 for further details on the simulation procedure). We limit to a maximum of 8,000 lens systems per 0.1″ bin, resulting in a sample of 140,812 lens systems. This results in a more uniform distribution, though the largest-image-separation bins still have fewer lens systems, since it is very difficult, and thus very seldom, to obtain a lensing configuration with an image separation above ∼2.5″ due to the mass distribution of galaxy-scale lenses. The biggest image separation within this sample is ∼4.5″, while we set an upper limit of 5″, corresponding to the size of the biggest Einstein radius so far observed in galaxy-galaxy lensing (Belokurov et al. 2007). The redshift distributions, shown as green histograms in Figure 1 and Figure 2, are similar to those of the naturally distributed sample (blue), whereas the lens velocity dispersions (Figure 1, bottom panel) tend to be higher (i.e., more massive galaxies), as expected.

Fig. 8: The upper-right panel shows the histogram of the brightness ratio of lensed source and lens. The bottom panels show, for the full sample (left) and limited to θ_E > 0.8″ (right), the difference in Einstein radius as a function of the brightness ratio with the 1σ values. We show the Einstein radius difference in the range −3 < log(I_s/I_l) < −1 (white area in the histogram) where we have enough data points, and shift the blue/orange bars slightly to the right for better visualization.
Similar to the networks trained with the natural Einstein radius distribution (see Sec. 4.1 and Sec. 4.2), we show in Figure 10 histograms (left column) and a 1:1 comparison (right column), but now for all five SIE parameters, i.e. from top to bottom the lens center x and y, the complex ellipticity e_x and e_y, and the Einstein radius θ_E. For this network we obtain median values with 1σ scatter of (0.00 +0.30 −0.30)″ for ∆x, (0.00 +0.30 −0.29)″ for ∆y, −0.01 +0.08 −0.09 for ∆e_x, 0.00 +0.08 −0.09 for ∆e_y, and (0.07 +0.29 −0.12)″ for the Einstein radius ∆θ_E. Comparing the performance on the Einstein radius to the network from Sec. 4.1 with a natural Einstein radius distribution, we see a significant improvement for the systems with larger image separation. Therefore we can confirm that the underprediction of the Einstein radius in Sec. 4.1 is due to the relatively small number of large-θ_E systems in the training data. On the other hand, based on this plot the new network seems to be slightly worse on the low-image-separation systems. It tends to overpredict the Einstein radius at θ_E ≲ 2.0″, such that when we limit to θ_E > 0.8″ as in Sec. 4.1, we get only a slight improvement in reducing the scatter and obtain ∆θ_E = (0.07 +0.25 −0.08)″. Therefore, it turns out that the performance depends sensitively on the training data distribution. If we look at the performance on the lens center, which is measured in units of pixels with respect to the image cutout center, it seems at first as if the network fails completely. However, one has to recall how we obtain the lens mass center. In the simulation, we assume the lens light center to be the image center and add a gaussian variation on top (with a standard deviation of 0.05″) to shift to the lens mass center. Thus the ground truth (red histogram in Figure 10) follows a gaussian distribution while the predicted lens center distribution (blue) is peakier. This suggests that the network does not obtain enough information from the slight shift or distortion in the lensed arcs to predict the lens mass center correctly. The network has further difficulties with this parameter because all systems have the exact same lens light center (which is at the center of the image). If we assumed that the lens mass perfectly follows the light distribution and the lens light center is always the same, the lens (mass) center ground truth would become a delta distribution, and the network would perform much better. Accordingly, in many automated lens modeling architectures (e.g., Pearson et al. 2019) the lens center is not even predicted. Since the difference in the center is smaller than ±1 pixel for nearly all lens systems, it does not affect the model noticeably. We nonetheless keep five parameters for generality and suggest investigating this direction further in future work by relaxing the strict assumption of coincident centers of the image cutout and of the lens light.
We also tested networks with a higher weighting of the lens center's contribution to the loss, which results in a better performance on these two parameters, but then the performance on the other parameters starts to deteriorate. We thus refrain from up-weighting the lens center.
If we now look at the performance on the ellipticity, it turns out that most of the lens systems are roundish, i.e. e_x ∼ e_y ∼ 0, and that the network can recover them very well. If the lens is more elliptical, the network performance starts to drop. This might be an effect of the lower number of such lens systems in the sample, especially as here the position angle becomes relevant and thus the number of systems in a particular direction is again lower. Note that e_x = ±0.3 and e_y = 0 corresponds to an axis ratio q = 0.73, i.e. quite elliptical. If the absolute value of e_x or e_y were higher, the axis ratio would be even lower, which occurs relatively seldom in nature.
With the 1D and 2D probability contours in Figure 7 (green), one can see that this network performs overall very similarly to the network trained on the naturally distributed sample (blue). For all three networks we find minimal correlation between the different parameters.
In analogy to the previously presented networks, we show in Figure 8 the histogram of the brightness ratio and the Einstein radius differences as a function of the brightness ratio for this network. While the distribution matches that of the sample with naturally distributed Einstein radii, we overpredict the Einstein radius more than before. This is related to the overprediction at smaller Einstein radii (see Figure 10), which comes from the up-weighted fraction of systems with larger image separation. We still underestimate the Einstein radius at the very high end, as already noted, but this is negligible for the overall performance compared to the amount of overestimated systems, as we still have a factor of ∼100 more of them in our sample. This is the reason why the network tends to overpredict more strongly than that trained on the naturally distributed sample (Sec. 4.1, and blue lines in Figure 7 and Figure 8).
Finally, we show the loss curve in Figure 11. The training losses (dotted lines) and validation losses (solid lines) in different colors correspond to the five different cross-validation runs. Additionally, we show the mean of the validation curves with a black solid line. This line is used to obtain the best epoch, which in this specific case is epoch 122, marked with a vertical gray line. The corresponding loss is 0.0528, obtained with Eq. 6.
From the loss curve we see that the network is not overfitting much to the training set, since the validation curves do not increase much for higher epochs, but still enough to define an optimal epoch at which to terminate the final training. This is a sign that drop-out, i.e. the omission of random neurons in every iteration, is not needed, which is supported by additional tests in Sec. 5.3.
Further network tests
In addition to the networks described in Sec. 4, where we mainly investigated the effect of the Einstein radius distribution, we discuss here further tests on the training data set.
Data set containing doubles or quads only
We consider a specialized network for one of the two strong-lensing options and limit our sample to either doubles or quads, where the image multiplicity is based on the centroid of the source (as the spatially extended parts of the source could have different image multiplicities depending on their positions with respect to the lensing caustics). In the case where we limit to doubles only, we have done our standard grid search over the different hyperparameter combinations for two samples with naturally distributed Einstein radii above 0.5″ and above 2.0″. With these networks we find no notable difference compared to the sample containing both doubles and quads (see Sec. 4.1 and Sec. 4.2), which is expected as the doubles dominate the combined sample by a factor of around 20-30 (for the different networks, depending on the lower limit of the Einstein radii).
In case we limit the sample to quads only, we have again done our grid search over the different hyperparameter combinations for both samples with naturally distributed Einstein radii above 0.5″ and above 2.0″, and also for the sample with uniformly distributed Einstein radii. Since the chance to obtain four images is smaller than the chance to observe two images, based on the necessary lensing configuration probability, the sample sizes are smaller with, respectively, 42,063, 19,176, and 28,398 lensing systems. Therefore the output has to be considered with care, as this is much lower than typically used for such a network.
It turns out that these networks perform equally well on the lens center and ellipticity, but better for the Einstein radius, as shown in Figure 12. By comparing this plot to Figure 10, we find that the main improvement is that the 1σ and 2σ scatters are substantially reduced, with a smaller bias for systems with larger θ_E. An improvement on the Einstein radius is expected as the network gets the same information on the lens but more on the lensed arcs. Even if one image is too faint to be detected or too blended with the lens, three images of the quad remain to provide information on the Einstein radius.
To increase the sample, we simulated a new quads-only batch with the source brightness boosted by one magnitude, which resulted in a sample ∼1.5 times larger than before. This is still small compared to the other doubles or mixed samples. Now we have a brightness ratio peak at log(I_s/I_l) ∼ −1.5 instead of ∼ −2.0 (compare Figure 8). The performance of this trained network (the loss is 0.0673 for the network with w_θE = 5) is still similar to that of the quads-only network without the magnitude boost (loss of 0.0688), and no significant performance difference is observed for the individual parameters.
Comparison to lens galaxy images only
As a further check of the network performance on the Einstein radius, we test how well the network is able to predict the parameters from images of only the lens galaxies, i.e. without lensed arcs. As expected, the network performs similarly well for the lens center and axis ratio, but much worse for the Einstein radius, with a 1σ value of 0.41″. This shows us that the arcs are bright enough and sufficiently deblended from the lens galaxies to be detectable by the CNN.
Different network architectures
For each of the networks presented in Sec. 4, we have done a grid search to find the best hyperparameters. We have considered eight different values for the learning rate, three different network initializations, two different batch sizes, and two different sets of weights for the loss contributions. This gives already 96 different combinations of hyperparameters, which we tested with cross-validation and early stopping. For a subset of the hyperparameter combinations, we test further possibilities. In particular, we explore the effect of drop-out with a drop-out rate p ∈ [0.1, 0.3, 0.5, 0.7, 0.9] but find no improvement. We further test different network architectures by adding an additional convolutional layer or fully connected layer, or varying the number of neurons in the different layers. We further test the effect of five different scaling options of the input images for our data set described in Sec. 4.1, but assume here the learning rate r_learn = 0.001 for simplicity. First, we boost the r band only by a factor of 10. Since the network is still able to recover the parameter values, we see that the network performance is not heavily affected by the absolute value of the images. On the other hand, in case we normalize each filter of one lens system independently of the other filters, the network has huge difficulties inferring the correct parameter values. This shows us that the network is indeed able to extract the color information and needs the different filters. In the fourth and fifth options, we normalize the images with the peak value of each filter or with the mean peak value. Lastly, we also rescale the images by shifting them by the mean value and dividing by the standard deviation, where the mean μ and the standard deviation σ are computed over all f filters and all p1 × p2 pixels of the four individual images, i.e. each image is rescaled as (I − μ)/σ; in our case we have f = 4 and p1 = p2 = 64. Since we obtained no notable improvement with any one of these scalings, we use the images without rescaling for obtaining our final networks.
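The last rescaling option, global standardization over all filters and pixels, can be sketched as follows; this is a minimal illustration and the function name is ours.

```python
import numpy as np

def standardize_cutout(images):
    """Rescale a four-band cutout by subtracting the global mean and dividing
    by the global standard deviation, both computed over all filters and pixels.

    images : array of shape (f, p1, p2), here (4, 64, 64)
    """
    mu = images.mean()
    sigma = images.std()
    return (images - mu) / sigma

# Example with a random four-band 64x64 cutout
scaled = standardize_cutout(np.random.rand(4, 64, 64))
```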
Prediction of image position(s) and time delay(s)
After obtaining a network for the different data sets (see Tab. 1), we compared the true and predicted parameter values directly. Since the main advantages of the network are the computational speed-up compared to current methods and the fully automated application, the network is very useful for planning follow-up observations, as this needs to be done relatively quickly in case there is, for instance, a supernova (SN) or a short-lived transient occurring in the background source.
Lensed SNe, in addition to lensed quasars, are very powerful cosmological probes. By measuring the time delays of a lensing system with an object that is variable in brightness, one can, among other applications, use it to constrain the Hubble constant H_0. Such applications are mainly based on lensed quasars, as the chance of a lensed SN is substantially lower. So far there are two lensed SNe known: one core-collapse SN named SN Refsdal behind the strong-lensing cluster MACS J1149.5+222.3 (Kelly et al. 2015) and one SN of type Ia behind an isolated lens galaxy (iPTF16geu; Goobar et al. 2017). Thanks to upcoming wide surveys in the next decades, like LSST, this will change. LSST is expected to detect hundreds of lensed SNe (e.g., Goldstein et al. 2019; Wojtak et al. 2019). Therefore, it is important to be prepared for such exciting transient events in a fully automated and fast way. In particular, a fast estimation of the time delay(s) is important for optimizing the observing/monitoring strategy for time-delay measurements.
Besides time-delay measurements, observing lensed SNe of type Ia can help to answer outstanding questions about their progenitor systems. The basic scenario is the single-degenerate case, where a white dwarf (WD) is stable until it reaches the Chandrasekhar mass limit (Whelan & Iben 1973; Nomoto 1982) by accreting mass from a nearby star. Today there are also alternative scenarios considered where the WD explodes before reaching the Chandrasekhar mass, the so-called sub-Chandrasekhar detonations (Sim et al. 2010). Another possibility for a SN Ia is the double-degenerate scenario, where the companion is another WD (e.g., Pakmor et al. 2010) and both merge to exceed the Chandrasekhar mass limit. It is still unclear which of these main scenarios, or whether both, correctly describes SN Ia formation. To shed light on this debate, one possibility is to observe the SN Ia spectroscopically at very early stages, which is normally difficult because SNe are typically detected close to peak luminosity, past the early phase. In case this SN is lensed, we can use the position of the first-appearing image, together with a mass model of the underlying lens galaxy, which we can obtain with our network using "reference" images taken in an earlier epoch before the SN explosion that show the lens galaxy with the lensed SN host, to predict the position and time when the next images will appear. Here it is very important to react fast, as the time delays of galaxy-galaxy strong lensing are typically on the order of weeks. Our CNN can indeed provide a mass model within seconds of analyzing the reference image. We explore below how accurately we can predict the positions and time delays of the next-appearing SN images.
To further test our model networks, we use the predicted SIE parameters from the networks to predict the image positions and time delays, and compare them to those obtained with the ground-truth SIE model parameter values. This will give us a better understanding of how well the network performs and whether the obtained accuracy is sufficient for such an application. For this comparison, we compute the image positions of the true source center based on the true SIE parameters obtained by the simulation for the systems of the test set (hereafter true image positions). After removing the "central" highly demagnified lensed image, as this would not be observable (given its demagnification and the presence of the lens galaxy in the optical/infrared), we compute the time delays for these systems (hereafter true time delays ∆t_tr) by using the known redshifts and our assumed cosmology. Based on these true image positions and time delays, we can select the first-appearing image and use its true image position to predict the source position with our predicted SIE mass model. This source position is then used to predict the image positions (hereafter predicted image positions) of the next-appearing SN images based on the SIE parameter values predicted with our modeling network. The predicted image positions are then used to predict the time delays (hereafter predicted time delays ∆t_pred) with the network-predicted SIE parameters. We compare directly the image positions and time delays that we obtain with the true and with the network-predicted SIE parameters when we have the same number of multiple images. In case the numbers of images do not match, which happened for 7.8% of the systems for the network with uniformly distributed Einstein radii containing doubles and quads, we omit the candidate in this analysis as a fair comparison is not possible. Since we always remove the central image, we obtain for a double and a quad, respectively, two and four images and one and three time delays. Since the time delays can be very different, we also compare the fractional difference between the true and predicted time delays with respect to the true time delays.
We choose again the three main networks from Sec. 4 for this comparison and show them in Figure 13. All three sets contain quads and doubles, and assume a loss weighting factor of 5 for the Einstein radius. The first set assumes a lower limit on the Einstein radius of 0.5″ (blue), the second a lower limit of 2″ (yellow), and the third a lower limit of again 0.5″ but with a uniform distribution of the Einstein radii instead of the natural distribution following the lensing probability (green). We plot the quantities as a function of the brightness ratio log(I_s/I_l), in analogy to Figure 7 and Figure 8.
In detail, Figure 13 contains in the upper row the median difference in the image position for the x coordinate (left) and y coordinate (right) with the 1σ value per brightness ratio bin, where only the additional image positions are taken into account, as the first reference image is known and thus does not need to be predicted. We obtain for all three networks a median offset of nearly zero, independent of the brightness ratio and of whether we limit further in Einstein radius or not. The 1σ values are around 0.25″, corresponding to ∼1.5 pixels. Explicitly, we find for the uniformly distributed sample applied to θ_E > 0.8″ a median image position offset of (0.00 +0.29 −0.29)″ and (0.00 +0.32 −0.31)″ for the x and y coordinate, respectively. Interestingly, the 1σ values are slightly larger for quads than for doubles, although we would have expected quads to provide more information to constrain the SIE parameter values and thus better-predicted image positions. The reason for this is probably that quads generally have higher image magnifications than doubles, and image offsets are larger at higher magnification.
The middle row of Figure 13 shows the legend (left) and a histogram of the difference between the predicted time delay ∆t_pred and the true time delay ∆t_true. The bottom row shows the difference in time delay divided by the absolute value of the true time delay per brightness ratio bin (left) and the difference of the time delays, again per brightness ratio bin (right). In terms of the time-delay difference, the network trained on the natural distribution (blue) performs better than that with the uniform distribution (green), but especially for the network trained for lens systems with large Einstein radius (orange) we obtain notable differences. In detail, we obtain for the naturally distributed sample (blue, Sec. 4.1) a median with 1σ values for the time-delay difference of 2 +18 −6 days and a fractional time-delay difference of 0.05 +0.47 −0.09. Since we find a strong correlation between the offset in the Einstein radius and the time-delay offset, as shown in Figure 14, we exclude again the very small Einstein radius systems (θ^tr_E < 0.8″) and then obtain for the time-delay difference 1 +18 −11 days and for the fractional difference 0.01 +0.19 −0.12. For the uniformly distributed sample (green, Sec. 4.3) we obtain, with θ_E > 0.5″ and θ_E > 0.8″, respectively, a time-delay difference of 7 +38 −6 and 6 +36 −8 days and a fractional time-delay difference of 0.06 +0.45 −0.05 and 0.04 +0.27 −0.05. This restriction is easily applicable in practice, since one will follow up only individual lensing systems at a given time, and one can check by looking at the image of the individual system whether the Einstein radius is >0.8″. Depending on the predicted time delay, one could also improve the model further by using traditional manual maximum-likelihood modeling methods to verify the predicted time delay.
The fractional offset in the predicted time delays of 0.04 +0.27 −0.05 that we achieve with our CNN for systems with θ_E > 0.8″ (for the uniformly distributed θ_E sample), i.e. with a symmetrized scatter of ∼16%, is close to the limit that would be achievable even with detailed/time-consuming MCMC models of ground-based images. This is because the assumption of the SIE introduces additional uncertainties on the predicted time delays in practice, even though detailed MCMC models of images would typically yield more precise and accurate estimates for the SIE parameters than our CNN. While galaxy mass profiles are close to being isothermal, the intrinsic scatter in the logarithmic radial profile slope γ (where the three-dimensional mass density ρ(r_3D) ∝ r_3D^{−γ}) is around ±0.15, translating to ∼15% scatter in the time delays (e.g., Koopmans et al. 2006; Auger et al. 2010; Barnabè et al. 2011). In other words, if a lens galaxy has a power-law mass slope of γ = 2.1, then our assumed SIE mass profile (with γ = 2.0) would predict time delays that are ∼10% too high (e.g., Wucknitz 2002; Suyu 2012). While constraining the profile slope γ for individual lenses with better precision than the intrinsic scatter is possible, this would require high-resolution imaging from space or ground-based adaptive optics (e.g., Dye & Warren 2005; Chen et al. 2016). Given the difficulties of measuring the power-law mass slope γ from seeing-limited ground-based images of lens systems (although see Meng et al. 2015, for the optimistic scenario where various inputs, such as the point spread function, are known perfectly), we conclude that our network prediction for the delays has uncertainties comparable to those due to the unknown γ. We expect these two sources of uncertainty to be the dominant ones in ground-based images.
We also find a decrease in performance with increasing brightness ratio, which is at first counterintuitive. If we consider the fractional offset in the left panel, we see a better performance for the sample with an Einstein radius lower limit of θ_E,min = 2″ (orange), especially in terms of the 1σ scatter, when compared to the other two networks. This θ_E,min = 2″ network also has minimal bias, as shown by the median line. This is understandable as the time delays are longer for systems with a bigger Einstein radius, and therefore the fractional uncertainty is smaller. The accuracy in the time-delay difference (lower right plot) is good, although the 1σ scatter is quite large, ∼20 days. With this reasoning, we can also understand the worse performance of the uniformly distributed sample (green) compared to the naturally distributed sample (blue), as it contains a much higher fraction of systems with bigger image separation. As a higher brightness ratio (log(I_s/I_l)) tends to be associated with systems with higher θ_E, the prediction of the delays thus has a larger scatter, as shown in the bottom-right panel. Moreover, we note that we find a better performance for doubles than for quads, which might be because of the smaller image separations and shorter time delays of quads.
During this evaluation of the networks we have to keep in mind that the main advantage of these networks is the run time: we need only a few seconds to estimate the SIE model parameters, the image positions, and the corresponding time delays. Therefore it is expected that we do not reach the accuracy of current modeling techniques using MCMC sampling, which can take weeks. Nonetheless, the network results can serve as input to conventional modeling and help speed up the overall modeling.
Comparison to other modeling codes
Several modeling codes have already been developed, and one can separate them into two main groups. The state-of-the-art codes which rely on MCMC sampling are widely tested and were used for most of the modeling so far. The advantage of such codes is their flexibility in image cutout size or pixel size and also in terms of the profiles used to describe the lens light or mass distribution. With the advantage of this variety of profiles comes the disadvantage that the codes require a lot of user input, which limits their applicability to a very small sample or to the specific lensing systems that are modeled. Moreover, the MCMC sampling of the parameter space is very computationally intensive and thus can take up to weeks per lens system, although some steps can be parallelized and run on multiple cores.
Since the number of known lens systems has grown in the past few years and will increase substantially with upcoming surveys like LSST and Euclid, those codes for analyzing individual lens systems will no longer be sufficient. Thus the modeling process must be more automated and a speed-up will be necessary. While some newer codes (e.g. Nightingale et al. 2018; Shajib et al. 2019; Ertl et al. in prep.) automate the modeling steps to minimize the user input, they still rely on sampling the parameter space, such that the run time remains on the order of days and some user input per lens system is still required.
The second, newer kind of modeling is based on machine learning, such as that used in this work. The first network for modeling strong lens systems was presented by Hezaveh et al. (2017). While they use Hubble Space Telescope data quality, we use images from HSC of ground-based quality, similar to Pearson et al. (2019), as most of the newly detected lens systems will at first be observed with ground-based facilities. Moreover, Hezaveh et al. (2017) suggest to first remove the lens light and then model only the arcs with the network; therefore we cannot compare the performance fairly. Pearson et al. (2019) consider modeling both with and without lens light subtraction but found no notable difference, such that we only consider modeling the lens system without an additional step to remove the lens light. Since we provide the image in four different filters, the network is able to distinguish internally between the lens galaxy and the surrounding arcs. In contrast to Pearson et al. (2019), we use the SIE profile with all five parameters, while they assume a fixed lens center. Moreover, they completely mock up their training data, assume a very conservative threshold of S/N > 20 in at least one band, and do not include neighbouring galaxies which can confuse the CNN, while we are more realistic by using real observed images as input for the simulation pipeline. This way we have more realistic lens light distributions and also include neighboring objects which the network has to learn to distinguish from the lensing system. Pearson et al. (2019) make use of the same type of network as Hezaveh et al. (2017) and us, a CNN, but they use slightly smaller input cutouts (57 × 57 pixels) and a different network architecture (6 convolutional layers and 2 FC layers) than ours. Since they investigated mostly the effect of using multiple images in different filters and whether to use lens light subtraction or not, whereas we investigate the effect of the underlying samples and a simulation with real observed images, we do not have a scenario that assumes the exact same properties. The closest comparison, between the Pearson et al. (2019) results for LSST-like gri images including lens light and our results based on HSC griz images with a natural distribution of the Einstein radii, shows that both networks are very similar in their overall performance. The reason that they do not suffer from the same biases in θ^pred_E, even with a non-flat θ^tr_E distribution in their simulations, is perhaps that they use idealised, simplistic simulations (high S/N, well-resolved systems, no neighbours).
There are also other recent publications related to strong lens modeling with machine learning. Bom et al. (2019) present a new idea by suggesting a network which predicts four parameters: the Einstein radius θ_E, the lens redshift z_d, the source redshift z_s, and the related quantity of the lens velocity dispersion v_disp. They adopt, similar to us, a SIE profile to mock up their training data with an image quality similar to that of the Dark Energy Survey. Since this code provides only the Einstein radius instead of a full SIE model, its applicability is somewhat limited.
Madireddy et al. (2019) suggest a modular network to combine lens detection and lens modeling, which have so far been done with completely independent networks. In detail, they have four steps: the first one is to reduce the background noise (so-called image denoising), followed by a lens light subtraction step (the so-called "deblending" step), before the next network decides whether this is a lens system or not. If it detects the input image to be a lens, the module is called to predict the mass model parameter values. Each module of the network is a very deep network, and both modules for detection and modeling make use of the residual neural network (ResNet) approach. They use a sample of 120,000 images, with 60,000 lenses and 60,000 non-lenses, and split this into 90% and 10% for the training and test set, respectively, without making use of the cross-validation procedure. Madireddy et al. (2019) use, similar to Pearson et al. (2019), completely mocked-up images based on a SIE profile with the centroid fixed to the image center, such that the modeling module predicts three quantities: the Einstein radius and the two components of the complex ellipticity. Based on the different assumptions, a direct comparison of the performance is not possible. However, we see that the performance is typical for the current state of CNNs based on Pearson et al. (2019).
Summary and Conclusion
In this paper, we present a Convolutional Neural Network to model, fully automatically and very quickly, the mass distribution of galaxy-scale strong lens systems by assuming a SIE profile. The network is trained on images of lens systems generated with our newly developed code that takes real observed galaxy images as input for the source galaxy (in our case from the Hubble Ultra Deep Field), lenses the source onto the lens plane, and adds it to another real observed galaxy image for the lens galaxy (in our case from the HSC SSP survey). We choose the HSC images as lenses and adopt their pixel size of 0.168″ as this is similar to the data quality expected from LSST. With this procedure we simulate different samples to train our networks, where we distinguish between the lens types (quads+doubles, doubles-only, and quads-only) as well as between different lower limits of the Einstein radius range. Since we find a strong dependence on the Einstein radius distribution, we also consider a uniformly distributed sample as well as a weighting factor of 5 for the Einstein radius' contribution to the loss. With this we obtain eight different samples for each of the two different weighting assumptions, summarized in Tab. 1.
For each sample we then perform a grid search to test different hyper-parameter combinations to obtain the best network for each sample, although we find that the CNN performance depends much more critically on the assumptions of the mock training data (like quads/doubles/both or the Einstein radius distribution) than on the fine-tuning of hyper-parameters. From the different networks presented in Tab. 1, we find a good improvement for the networks trained on quads-only compared to the networks trained on both quads and doubles. If the system type is known, we therefore recommend using the corresponding network. Since the Einstein radius is a key parameter, we weighted its loss contribution higher than that of the others and, although the minimal validation loss is higher, we advocate these networks for modeling HSC-like lenses.
After comparing the network performance on the SIE parameter level, we test the network performance on the image-position and time-delay level. For this we use the first-appearing image of the true mass model to predict the source position based on the predicted SIE parameters. From this source position and the network-predicted SIE parameters, we then predict the other image position(s) and time delay(s). For the sample with doubles and quads, a uniform distribution in Einstein radii, and a weighting factor w_θE of five, applying the network to θ_E > 0.8″ we find an average image offset of ∆θ_x = (0.00 +0.29 −0.29)″ and ∆θ_y = (0.00 +0.32 −0.31)″, while we achieve a fractional time-delay difference of 0.04 +0.27 −0.05. This is very good given that we use a simple SIE profile and need only a few seconds per lens system, in comparison to current state-of-the-art methods which require at least days and some user input per lens system. We anticipate that fast CNN modeling such as the one developed here will be crucial for coping with the vast amount of data from upcoming imaging surveys. For future work, we suggest investigating further the creation of even more realistic training data (e.g., allowing for an external shear component in the lens mass model) and also exploring the effect of deeper or more complex network architectures. The outputs of even the network presented here can be used to prune down the sample for specific scientific studies, and followed up with more detailed conventional mass modeling techniques.
|
v3-fos-license
|
2023-01-09T05:07:32.648Z
|
2023-01-01T00:00:00.000
|
255521235
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/23/1/544/pdf?version=1672740833",
"pdf_hash": "d6ac9b811e13aef29a98b0815efc55f7b7a69388",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43145",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "d6ac9b811e13aef29a98b0815efc55f7b7a69388",
"year": 2023
}
|
pes2o/s2orc
|
DSTEELNet: A Real-Time Parallel Dilated CNN with Atrous Spatial Pyramid Pooling for Detecting and Classifying Defects in Surface Steel Strips
Automatic defect inspection and classification are of significant importance for improving quality in the steel industry. This paper proposes and develops the DSTEELNet convolutional neural network (CNN) architecture to improve detection accuracy and reduce the time required to detect defects in surface steel strips. DSTEELNet includes three parallel stacks of convolution blocks with atrous spatial pyramid pooling. Each convolution block uses a different dilation rate, which expands the receptive field, increases the feature resolution, and covers square regions of the input 2D image without any holes or missing edges and without an increase in computation. This work illustrates the performance of DSTEELNet with different numbers of parallel stacks and different orders of dilation rates. The experimental results indicate significant improvements in accuracy and illustrate that DSTEELNet achieves 97% mAP in detecting defects in surface steel strips on the augmented GNEU and Severstal datasets and is able to detect defects in a single image in 23 ms.
Introduction
Quality control is a key success factor in industrial steel production [1][2][3]. Surface defect detection is an essential part of the steel production process and has significant impacts on the quality of products. Manual defect detection methods are time-consuming and subject to hazards and human errors. Therefore, several traditional automatic surface defect detection methods have been proposed to overcome the limitations of manual inspection. These include eddy current testing, infrared detection, magnetic flux leakage detection, and laser detection. These methods fail to detect all the faults, especially the tiny ones [4]. This motivated researchers [5][6][7][8] to develop computer vision systems that are able to detect and classify defects in ceramic tiles [5], textile fabrics [9,10] and the steel industry [7][8][9]11,12]. Structure-based methods extract image structure features such as texture, skeleton and edges, while other methods extract statistical features, such as the mean, difference and variance [13], from the defect surface and then apply machine learning algorithms trained on these features to recognize defective surfaces [14,15]. The combination of statistical features and machine learning achieves higher accuracy and robustness than structure-based methods [16]. Using machine learning, such as a Support Vector Machine (SVM) classifier, to classify different types of surface defects may take approximately 0.239 s to extract features from a single defect image during testing [14]. Therefore, it fails to meet real-time surface defect detection requirements. However, convolutional neural networks (CNN) provide automated feature extraction techniques that take raw defect images, predict surface defects in a short time, and lessen the need to manually extract suitable features [17][18][19]. Deep learning models for surface defect classification are more accurate than traditional image processing-based and machine learning methods. Defects in surface steel strips pose multiple challenges, such as (1) low contrast due to changes in light intensity, (2) defects that are similar to the background, (3) irregular defect shapes, (4) multiple scales of defects of the same kind, and (5) insufficient training samples. These challenges degrade the accuracy of deep learning models. Therefore, to detect and classify defects of different sizes, other research efforts integrated multi-scale features with image classification CNN networks through successive pooling and subsampling layers [20][21][22][23]. The use of multi-scale features reduces resolution until a global prediction is obtained. To recover the lost resolution, different approaches have been designed, such as using repeated up-convolutions, the atrous spatial pyramid pooling (ASPP) module, and using multiple rescaled versions of the image as input to the network while combining the predictions obtained for these multiple inputs [24][25][26][27].
The main objective of this research is to enhance the accuracy of steel strip surface defect detection and produce a significant prediction model. Therefore, in response to the above challenges, we propose a CNN called DSTEELNet for detecting and classifying defects in surface steel strips that aggregates different feature maps in parallel without losing resolution or analyzing rescaled images [28]. The proposed module is based on parallel stacks of different dilated convolutions that support exponential expansion of the receptive field without loss of coverage or resolution. A dilated convolution can capture more distinctive features by shifting the receptive field [29] and is able to gather multi-scale features. This paper investigates the performance of the proposed DSTEELNet with different numbers of parallel stacks and different dilation rates per stack. In addition, the author employs a specific order of dilated convolutions in DSTEELNet to cover square regions of the input 2D image without any holes or missing edges. The main contributions of this paper are as follows: (1) We propose and develop a novel framework called DSTEELNet that includes three parallel stacks of dilated convolution blocks with different dilation rates, which significantly enhance the inference speed and the detection accuracy of defects for surface steel strips. They are able to capture and propagate different features in parallel and cover square regions of the input 2D image without any holes or missing edges; (2) We evaluate the proposed DSTEELNet architecture and traditional CNN architectures on the NEU [3] and Severstal [30] datasets to highlight the effectiveness of DSTEELNet in detecting and classifying defects in surface steel strips; (3) We propose and develop DSTEELNet-ASPP, which adopts the atrous spatial pyramid pooling (ASPP) module [27] to enlarge the receptive field and incorporate multi-scale contextual information without sacrificing spatial resolution; and (4) We use a deep convolutional generative adversarial network (DCGAN) to extend the size of the NEU dataset and consequently improve the performance of the generated models.
The rest of this paper is organized as follows. Section 2 reviews the related works. Section 3 illustrates the training datasets, the augmentation techniques and the proposed DSTEELNet CNN framework, and demonstrates the experiment setup and performance metrics. Section 4 discusses the experimental results. Section 5 concludes the paper and provides future research directions.
Related Work
Several research efforts have developed machine vision techniques for surface defect detection. They are mainly divided into two categories: traditional image processing methods and machine learning methods. Traditional image processing methods detect and segment defects by using the primitive attributes reflected by local anomalies. They detect various defects with feature extraction techniques that fall into four different approaches [31][32][33]: structural methods [34,35], threshold methods [36][37][38], spectral methods [39][40][41], and model-based methods [42,43]. In traditional image processing methods, multiple thresholds are needed to detect various defects, and these thresholds are very sensitive to background colors and lighting conditions and must be adjusted to handle different defects. The traditional algorithms also require plenty of labor to extract handcrafted features manually [13]. Machine learning-based methods typically include two stages: feature extraction and pattern classification. The first stage analyzes the characteristics of the input image and produces a feature vector describing the defect information. These features include grayscale statistical features [44], local binary pattern (LBP) features [45], histogram of oriented gradients (HOG) features [46], and the gray level co-occurrence matrix (GLCM) [44]. Some research efforts have been developed to speed up the feature extraction process in parallel using GPUs, as in our previous research work in [47]. The second stage feeds the feature vector into a classifier model that was trained in advance to detect whether the input image has a defect or not [16]. Under complex conditions, handcrafted features or shallow learning techniques are not sufficiently discriminative. Therefore, these machine learning-based methods are typically dedicated to a specific scenario and lack adaptability and robustness.
Recently, neural network methods have achieved excellent results in many computer vision applications. Convolutional neural networks (CNN) have been used to develop several defect detection methods. Some CNN research efforts have been developed to classify defects in steel images; for example, the authors in [11] employed a sequentially structured CNN for feature extraction to improve the classification accuracy for defect inspection. They did not consider the effects of noise or the size of the training dataset. The authors in [48] developed a multi-scale pyramidal pooling network for the classification of steel defects. The authors in [49] developed a flexible multi-layered deep feature extraction framework. Both research works succeeded in classifying defects; however, they failed to localize the defects. Therefore, researchers convert the surface defect detection task into an object detection problem in computer vision to localize defects, as in [50]. In [51], the authors developed a cascaded autoencoder (CASAE) that first locates a defect and then classifies it. In the first stage, it localized and extracted the features of the defect from the input image. In the second stage, it used a compact CNN to accurately classify defects. The authors in [50] developed a defect detection network (DDN) that integrates the baseline ResNet34 and ResNet50 [52] networks and a region proposal network (RPN) for precise defect detection and localization. In addition, they proposed a multilevel feature fusion network that combines low- and high-level features. In other words, the inspection task classifies regions of defects instead of a whole defect image. The authors reported that ResNet34 and ResNet50 achieved 74.8% and 82.3% mAP, respectively, at 20 FPS (frames per second) [50]. The research work in [53] employed a traditional CNN with a sliding window to localize the defect. In [54], the authors developed a structural defect detection method based on Faster R-CNN [55] that succeeded in detecting five types of surface defects: concrete, cracks, steel corrosion, steel delamination, and bolt corrosion. Recently, the authors in [56] reconstructed the network structure of a two-stage object detector (Faster R-CNN) for small target features, replaced part of the CNN with a deformable convolution network [57] and trained the network with multi-scale feature fusion on the NEU dataset [3]. This work achieved a low mAP of 75.2% and a long inference time. These models are able to achieve high defect detection accuracy, but their low detection efficiency cannot meet the real-time detection requirements of the steel industry. In addition, researchers in [58] developed a single-stage object detection module named Improved-YOLOv5 that precisely positions the defect area, crops the suspected defect areas on the steel surface and then uses an Optimized-Inception-ResnetV2 module for defect classification. This work achieved its best performance of 83.3% mAP at 24 FPS.
In summary, the limitation of the stated research efforts is that they detect defects through one or multiple close bounding boxes but cannot identify the boundary of the defect precisely in real time. They have shown acceptable levels of precision but fail to meet the real-time defect detection requirements of the steel industry. The main aims of this paper are to (1) develop a real-time deep learning framework that accelerates defect detection and improves detection and classification precision to facilitate quality assurance in surface steel manufacturing; and (2) enlarge the training dataset to avoid overfitting. Annotating the data collected from manufacturing lines is a time-consuming task, and there has been recent interest in the research community in mitigating this issue. The next section illustrates (1) the data augmentation techniques used to enlarge the NEU dataset and (2) the proposed deep CNN architecture.
Materials and Methods
This section illustrates the training datasets, the augmentation techniques, and the proposed DSTEELNet CNN framework to classify and detect surface defects in real time. Finally, it demonstrates the experiment setup and performance metrics.
Datasets
For training and experiments, we used two steel surface datasets, NEU [3] and Severstal [30]. This section introduces the NEU dataset and the expansion techniques in detail to facilitate the training of the proposed model. Originally, the NEU dataset has 1800 grayscale steel images and includes six types of defects, as shown in Figure 1. The defect types are crazing, inclusion, patches, pitted surface, scratches, and rolled-in scale, with 300 samples for each type. To annotate the dataset, each defect that appears in a defective image is marked by a red bounding box (ground-truth box), as shown in Figure 1. Approximately 5000 ground-truth boxes have been created. These bounding boxes were used only to localize defects; they were not used to represent the defects' borders or to describe their shape. In addition, we trained the proposed model using the Severstal dataset, which includes 12,568 training steel plate images and 71,884 pixel-wise annotation masks covering four different types of steel defects. The defect types are defect 1 (pitted surface), defect 2 (inclusion), defect 3 (scratches), and defect 4 (patches), as classified in NEU.
NEU Dataset Augmentation
The NEU dataset includes a small quantity of training samples and image-level annotation labels that are not adequate to provide sufficient information for industry applications. To expand the dataset with new samples, a naive solution is simple random oversampling with small geometric transformations such as an 8° rotation or shifting the image horizontally or vertically. Other simple image manipulations, such as mixing images, color augmentations, kernel filters, and random erasing, can also be used to oversample data in the same way as geometric augmentations. This can be useful for ease of implementation and quick experimentation with different class ratios. In this paper, we used data augmentation to manually increase the size of the NEU dataset by artificially creating different versions of the images from the original training dataset. Table 1 shows the image augmentation parameters used to generate augmented images, such as flip mode, zoom range and width shift. For example, width shift was used to shift the pixels horizontally, either to the left or to the right randomly, and generate transformed images. The generated images were combined with the original NEU dataset. However, oversampling with basic image transformations may cause overfitting on the minority class that is being oversampled, since the biases present in the minority class are more prevalent post-sampling with these techniques. Therefore, this paper also used a neural augmentation network, namely a Generative Adversarial Network (GAN) [59], to generate a new dataset called GNEU. The GAN can generate synthetic defect images that are nearly identical to their ground-truth original ones. Similar to [60], we developed a deep convolutional GAN named DCGAN that includes two CNNs: a generator G (reversed CNN) and a discriminator D. The generator G takes random input and generates an image as output by up-sampling the input with transposed convolutions, while D takes the generated images and the original images and tries to predict whether a given image is generated (fake) or original (real). The GAN performs a min-max two-player game with the value function V(D, G) [59]:

min_G max_D V(D, G) = E_{ω∼S_data}[log D(ω)] + E_{τ∼S_τ}[log(1 − D(G(τ)))]   (1)

where D(ω) is the probability that ω is a real image, S_data is the distribution of the original data, τ is the random noise used by the generator G to generate the image G(τ), and S_τ is the distribution of the noise. During training, the aim of the discriminator D is to maximize the probability of assigning the correct label to both real and fake images. Since this is a binary classification problem, the model is fit by minimizing the average binary cross-entropy. The minimax GAN loss is defined as the simultaneous minimax optimization of the discriminator and generator models, as shown in Equation (1). The discriminator seeks to maximize the average of the log probability of real images and the log of the inverted probabilities of fake images; in other words, it maximizes log D(ω) + log(1 − D(G(τ))). The generator seeks to minimize the log of the inverse probability predicted by the discriminator for fake images; in other words, it minimizes log(1 − D(G(τ))).
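As an illustration of the geometric augmentation step, a minimal sketch using Keras' ImageDataGenerator is shown below. The parameter values are illustrative assumptions rather than the exact settings listed in Table 1.

```python
# Minimal sketch of the geometric oversampling step; parameter values are
# illustrative assumptions, not the exact Table 1 settings.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=8,          # small rotations (the text mentions 8 degrees)
    width_shift_range=0.1,     # shift pixels horizontally
    height_shift_range=0.1,    # shift pixels vertically
    zoom_range=0.1,            # random zoom
    horizontal_flip=True,      # flip mode
    fill_mode="nearest",
)

# Dummy grayscale steel patches scaled to [0, 1] stand in for NEU images.
images = np.random.rand(32, 200, 200, 1).astype("float32")
labels = np.random.randint(0, 6, size=(32,))   # six NEU defect classes

# Each call yields a new randomly transformed batch that can be mixed
# with the original NEU images.
batch_x, batch_y = next(augmenter.flow(images, labels, batch_size=32))
print(batch_x.shape, batch_y.shape)
```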
GAN Architecture
In this paper, we used a GAN architecture similar to the one developed in [60], as follows. The authors in [60] designed a generator G that first includes a dense layer with a ReLU activation function followed by batch normalization to stabilize the GAN, as in [59]. To prepare the number of nodes to be reshaped into a 3D volume, they added another dense layer with a ReLU activation function followed by batch normalization. Then, they added a Reshape layer to generate a 3D volume from the input shape. To increase the spatial resolution during training, they added a transposed convolution (Conv2DTranspose) with stride 2 and 32 filters, each of size 5 × 5, with a ReLU activation function, followed by batch normalization and a dropout of size 0.3 to avoid overfitting. Finally, they added five up-sampling transposed convolutions (Conv2DTranspose), each of which uses stride 2 and a tanh activation function. These convolutions increase the spatial resolution from 14 × 14 to 224 × 224, which is exactly the size of the input images. Afterward, they developed the discriminator D as follows. It includes two convolution layers (Conv2D) with stride 2 and 32 filters, each of size 5 × 5, with a Leaky ReLU activation function to stabilize training. They then added flatten and dense layers with a sigmoid activation function to capture the probability of whether the image is synthetic or real.
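The following condensed Keras sketch follows the generator/discriminator description above. Layer counts, filter widths and the latent dimension are simplified assumptions and not a line-by-line reproduction of the architecture in [60].

```python
# Simplified DCGAN sketch following the description above; not an exact
# reproduction of the architecture in [60].
from tensorflow.keras import layers, models

LATENT_DIM = 100   # assumed size of the random noise vector

def build_generator():
    model = models.Sequential(name="generator")
    model.add(layers.Dense(512, activation="relu", input_dim=LATENT_DIM))
    model.add(layers.BatchNormalization())
    model.add(layers.Dense(14 * 14 * 32, activation="relu"))
    model.add(layers.BatchNormalization())
    model.add(layers.Reshape((14, 14, 32)))                # 3D volume
    model.add(layers.Conv2DTranspose(32, 5, strides=2, padding="same",
                                     activation="relu"))   # 28 x 28
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.3))
    # Successive transposed convolutions upsample to the 224 x 224 input size.
    for filters in (32, 32, 16):
        model.add(layers.Conv2DTranspose(filters, 5, strides=2, padding="same",
                                         activation="relu"))
    model.add(layers.Conv2DTranspose(1, 5, strides=1, padding="same",
                                     activation="tanh"))   # 224 x 224 x 1
    return model

def build_discriminator():
    model = models.Sequential(name="discriminator")
    model.add(layers.Conv2D(32, 5, strides=2, padding="same",
                            input_shape=(224, 224, 1)))
    model.add(layers.LeakyReLU(0.2))
    model.add(layers.Conv2D(32, 5, strides=2, padding="same"))
    model.add(layers.LeakyReLU(0.2))
    model.add(layers.Flatten())
    model.add(layers.Dense(1, activation="sigmoid"))       # real vs. synthetic
    return model
```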
Generating GNEU
We trained the GAN to generate the synthetic images as follows. A noise vector is randomly generated from a Gaussian distribution and passed to G to generate an image. Then, authentic images from the training dataset (NEU) and the generated synthetic images are mixed. Subsequently, the discriminator D is trained using the mixed dataset, aiming to correctly label each image as either fake or real. Again, random noise vectors are generated and each is labeled as a real image. Finally, the GAN is trained using these noise vectors and real-image labels, even though they are not actual real images. In summary, at each iteration of the GAN algorithm, it first generates random images and trains the discriminator to distinguish fake from real images; second, it tries to fool the discriminator by generating more synthetic images; finally, it updates the weights of the generator based on the feedback received from the discriminator, which enables us to generate more authentic-looking images. We stopped training the GAN after 600 iterations, when the mean discriminator loss and adversarial loss converged to 0.031 and 1.617, respectively. We mixed the synthetic images with the original NEU images to generate the GNEU dataset. Figure 2 shows examples of the images generated from the NEU dataset. This paper feeds approximately 1800 images of the NEU dataset to the DCGAN framework, which generates 540 synthetic images that are added to the original NEU dataset to create a new dataset called GNEU. We divided the GNEU dataset into training, validation and testing sets. The training set includes 1260 real and synthetic images, the validation set includes 540 real and synthetic images, and the test set includes 540 real images.
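A minimal sketch of this adversarial training loop is given below. It assumes the build_generator and build_discriminator helpers from the previous sketch; the batch size, learning rate and labeling scheme are illustrative assumptions.

```python
# Sketch of one adversarial training step; assumes build_generator and
# build_discriminator from the previous sketch are in scope.
import numpy as np
from tensorflow.keras import models, optimizers

generator = build_generator()
discriminator = build_discriminator()
discriminator.compile(optimizer=optimizers.Adam(2e-4),
                      loss="binary_crossentropy")

# Combined model: generator followed by a frozen discriminator, so only
# the generator's weights are updated when the GAN model is trained.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer=optimizers.Adam(2e-4), loss="binary_crossentropy")

def train_step(real_images, batch_size=16, latent_dim=100):
    # 1) Train D on a mix of real and generated (fake) images.
    noise = np.random.normal(size=(batch_size, latent_dim))
    fake_images = generator.predict(noise)
    x = np.concatenate([real_images[:batch_size], fake_images])
    y = np.concatenate([np.ones((batch_size, 1)), np.zeros((batch_size, 1))])
    d_loss = discriminator.train_on_batch(x, y)

    # 2) Train G through the combined model: noise vectors are labelled as
    #    "real" so the generator learns to fool the discriminator.
    noise = np.random.normal(size=(batch_size, latent_dim))
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))
    return d_loss, g_loss
```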
Severstal Dataset
The Severstal dataset [30] includes approximately 12,568 steel plate training images and 71,884 pixel-wise annotation masks covering four different types of steel defects. Figure 3 shows the types of steel defects and the frequency of occurrence of each defect class in the training images. Each steel plate image is a high-resolution image of 256 × 1600 pixels. The training data has 5902 images without defects and 6666 images with defects. Furthermore, 6293 images have one label, 425 images have two labels and 2 images have three labels. Images were captured using high-frequency cameras mounted on the production line. The shape of each annotation mask is also 256 × 1600 pixels. The Severstal dataset includes four types of surface defects. To annotate defects with a small mask file size, the dataset uses run-length encoding (RLE) of the pixel values. The RLE represents pairs of values consisting of a start position and a run length. For example, '10 5' means starting at pixel 10 and running for a total of 5 pixels (10, 11, 12, 13, 14), where the pixels are numbered from top to bottom, then left to right: 1 is pixel (1,1), 2 is pixel (2,1), etc. The evaluation metric required by Severstal is the mean Dice coefficient, as shown in Equation (3), which is used to compare the pixel-wise agreement between a predicted segmentation and its corresponding ground truth:

Dice(A, B) = 2|A∩B| / (|A| + |B|)   (3)

where A is the ground truth and B is the predicted set of pixels, |A| is the total number of pixels in A, |B| is the total number of pixels in B, and |A∩B| is the total count of pixels in both A and B. When both A and B are empty, the Dice coefficient equals 1. Since the Severstal dataset provides an adequate number of images, we did not use any augmentation technique to oversample this dataset.
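The following sketch illustrates the RLE convention and the Dice coefficient of Equation (3); the helper names are ours and not part of the Severstal tooling.

```python
# Sketch of the Severstal annotation handling: decode a run-length string
# into a binary mask (column-major, 1-based positions) and compute the Dice
# coefficient of Equation (3).
import numpy as np

def rle_to_mask(rle: str, height: int = 256, width: int = 1600) -> np.ndarray:
    """Decode an RLE string such as '10 5' into a binary mask."""
    mask = np.zeros(height * width, dtype=np.uint8)
    values = list(map(int, rle.split()))
    for start, run in zip(values[0::2], values[1::2]):
        mask[start - 1:start - 1 + run] = 1        # RLE positions are 1-based
    # Pixels run top-to-bottom then left-to-right, i.e. column-major order.
    return mask.reshape((width, height)).T

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|); equals 1.0 when both are empty."""
    a, b = a.astype(bool), b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / total

ground_truth = rle_to_mask("10 5")
prediction = rle_to_mask("12 5")
print(round(dice(ground_truth, prediction), 3))     # partial overlap -> 0.6
```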
Proposed DSTEELNet Architecture
This section describes the proposed DSTEELNet CNN framework to detect and classify defects in surface steel strips. The proposed DSTEELNet aims to generate high-quality training results by capturing fine details of the input 2D images through increased feature resolution. Expanding the receptive field R_F increases the feature resolution, where R_F is the portion of the input image from which the filter extracts features and is defined by the filter size of the layer in the CNN [61,62]. To expand R_F, this paper uses dilated convolution [29] with a dilation rate larger than 1, where the dilation rate is the spacing between the pixels sampled by the convolution filter. Adding a dilation rate to the Conv2D kernel decreases the computational cost and expands R_F. Equation (4) gives the receptive field R_F, where k is the size of the kernel and d is the dilation rate:

R_F = (k − 1) × d + 1   (4)
For example, a dilation rate of 1 with a 3 × 3 kernel generates a receptive field of size 3 × 3, which is equivalent to the standard convolution, as shown in Figure 4b. The size of the output can be calculated using Equation (5) for a g × g input with a dilation factor, padding and stride of d, p and s, respectively:

o = ⌊(g + 2p − d(k − 1) − 1) / s⌋ + 1   (5)

If a dilation rate of 2 is used, then the kernel skips one pixel between each sampled input. Figure 4c shows that a 3 × 3 kernel with a dilation rate of 2 has the same field of view as a 5 × 5 kernel, with a gap of d − 1 between sampled pixels. For example, only 9 pixels out of 25 are computed around a pixel x when d = 2 and k = 3. As a result, the receptive field R_F is increased and enables the filter to capture sparse and large contextual information [63]. The use of systematic dilation expands the receptive field R_F exponentially without loss of coverage; in other words, the receptive field R_F grows exponentially while the number of parameters grows linearly. However, employing a series of dilated convolutional layers with the same dilation rate introduces the gridding effect, in which the computation of a pixel in the bottom layer is based on sparse, non-local information. To overcome the gridding effect, the authors in [64] proposed hybrid dilated convolution (HDC), which makes the final R_F of a series of convolutional operations fully cover a square region without any holes or missing edges. The HDC work developed a CNN that includes groups of dilated convolutional layers, where each group has a series of dilated convolutional layers with different dilation rates of 1, 2 and 3, respectively. The authors noted that using dilation rates with a common-factor relationship (e.g., 2, 4, 8, etc.) in the same group of layers may raise the gridding problem. This is contrary to the atrous spatial pyramid pooling (ASPP) module [27], where the dilation rates have common-factor relationships.
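A small numerical check of Equations (4) and (5), as reconstructed above, is shown below; the helper functions are illustrative and not part of the DSTEELNet code.

```python
# Numerical check of the dilated-convolution formulas in Equations (4)-(5);
# the helper names are ours, not part of the DSTEELNet code base.
def receptive_field(k: int, d: int) -> int:
    """Effective receptive field of a k x k kernel with dilation rate d."""
    return (k - 1) * d + 1

def output_size(g: int, k: int, d: int, p: int = 0, s: int = 1) -> int:
    """Spatial output size for a g x g input, kernel k, dilation d, padding p, stride s."""
    return (g + 2 * p - d * (k - 1) - 1) // s + 1

print(receptive_field(3, 1))   # 3 -> same as a standard 3 x 3 convolution
print(receptive_field(3, 2))   # 5 -> same field of view as a 5 x 5 kernel
print(output_size(200, 3, 2))  # 196 for a 200 x 200 input with no padding
```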
In this paper, we developed DSTEELNet, which includes parallel stacks of dilated convolutions with different dilation rates, activation and max-pooling layers, as shown in Figure 5. At the feature level, we added parallel layers and then performed convolution with activation on the resulting feature maps. We added a flatten layer to unstack all the tensor values into a 1-D tensor. The flattened features are used as inputs to two dense layers (multi-layer perceptron). To reduce overfitting, we applied dropout. For the classification task, we added a dense layer with a softmax activation function. Finally, the architecture generates a class activation map. Figure 5 shows the proposed DSTEELNet architecture. It includes four dilated convolution blocks in three parallel stacks. Assume each stack includes m convolution blocks CB^(i), where i ∈ {1, 2, …, m}, and the corresponding output of each CB^(i) is denoted by β_i. The input features and output features are denoted as f_in and f_out, respectively, and f_out can be obtained as follows:

β_1 = CB^(1)(f_in), β_i = CB^(i)(β_{i−1}) for i = 2, …, m, f_out = β_m

Each convolution block CB_{t=j} = conv(n = F) is followed by a max-pooling block to reduce the feature size and the computational complexity for the next layer. For efficient pooling, we used pool_size = (2,2) and strides = (2,2) [65]. Each convolution block CB_{t=j} = conv(n = F) includes two Conv2D layers followed by a ReLU activation function, where F is the total number of filters and j is the dilation rate. We used 3 × 3 filters in all convolution blocks. The total number of filters in the first convolution block is 64, and the remaining blocks use 128, 256 and 512 filters, in that order. The three parallel stacks (branches) are identical except that they have different dilation rates of j = 1, 2 and 3, respectively, as shown in Figure 5. We used dilation rates that have no common factor. Each parallel branch/stack generates features from the image at different CNN layers and thus produces different context information, as shown in Figure 6. We captured features from the input 2D image using different dilation rates that increase the receptive fields. Figure 6 visualizes 64 output feature maps of the three parallel convolutional stacks in Figure 5 with dilation rates 1, 2 and 3 at layers max_pooling2d_4, max_pooling2d_9 and max_pooling2d_14, respectively. Figure 6a–c shows the features of the 200 × 200 input image as a 200 × (200 × 64) matrix. The use of parallel stacks with dilation rates that share no common factor succeeds in covering a square region of the input 2D image without any holes or missing edges. We then concatenated the features generated from these parallel branches and handed the resulting features to the next convolution layer to produce the final low-level features. This convolution layer has 512 filters with a filter size of 3 × 3, a dilation rate of 1 and a stride of 1, followed by a ReLU activation function. To convert the square feature map into a one-dimensional feature vector, a flatten layer is added. Two perceptron (fully connected) layers of size 1024 feed the results of the flatten layer into the dense layer that performs classification. The last dense layer uses a softmax activation function to determine class scores. To reduce overfitting during training, a dropout layer is added to discard some of the weights produced by the two fully connected layers; in this paper, we used a dropout of size 0.3.
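A condensed Keras sketch of the topology described above is given below: three parallel stacks of dilated convolution blocks (dilation rates 1, 2, 3), feature concatenation, a 512-filter fusion convolution, and the classifier head. Filter counts and dropout follow the text; the remaining details are simplifying assumptions rather than the exact published implementation.

```python
# Condensed sketch of the DSTEELNet topology described above; details that
# are not stated in the text (e.g., padding) are assumptions.
from tensorflow.keras import layers, models

NUM_CLASSES = 6   # six NEU defect categories

def conv_block(x, filters, dilation_rate):
    """Two dilated Conv2D layers with ReLU, followed by 2 x 2 max pooling."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same",
                          dilation_rate=dilation_rate, activation="relu")(x)
    return layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(x)

def build_dsteelnet(input_shape=(200, 200, 1)):
    inputs = layers.Input(shape=input_shape)
    branches = []
    for rate in (1, 2, 3):                      # three parallel stacks
        x = inputs
        for filters in (64, 128, 256, 512):     # four convolution blocks
            x = conv_block(x, filters, rate)
        branches.append(x)
    x = layers.Concatenate()(branches)          # fuse the parallel branches
    x = layers.Conv2D(512, 3, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1024, activation="relu")(x)
    x = layers.Dense(1024, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs, name="DSTEELNet")

model = build_dsteelnet()
model.summary()
```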
For better multi-scale learning and to further improve the DSTEELNet architecture, we propose an updated architecture called DSTEELNet-ASPP. It replaces the Conv2D layer after the concatenation of the parallel-stack features in DSTEELNet (Figure 5) with an atrous spatial pyramid pooling (ASPP) module [27]. This module includes four Conv2D layers with different dilation rates of 4, 10, 16 and 22, respectively, to capture defects of distinct sizes, as shown in Figure 7.
We then concatenated the features generated from these Conv2D layers and handed the resulting features to the flatten layer in Figure 5 to unstack all the tensor values into a 1-D tensor. DSTEELNet-ASPP enlarges the receptive field and incorporates multi-scale contextual information without sacrificing spatial resolution, which contributes to improving the overall performance of the DSTEELNet architecture. Figure 7. The atrous spatial pyramid pooling (ASPP) module that replaces the Conv2D layer after the feature concatenation in Figure 5. It includes four Conv2D layers with dilation rates of 4, 10, 16 and 22, respectively, and the associated feature maps.
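A sketch of the ASPP replacement is shown below; the per-branch filter count and the input feature shape are assumptions, while the dilation rates follow the text.

```python
# Sketch of the ASPP module used in DSTEELNet-ASPP; the filter count and the
# example input shape are assumptions, the dilation rates follow the text.
from tensorflow.keras import layers, models

def aspp_block(x, filters=256, rates=(4, 10, 16, 22)):
    """Atrous spatial pyramid pooling: parallel dilated convolutions over x."""
    pyramid = [
        layers.Conv2D(filters, 3, padding="same",
                      dilation_rate=r, activation="relu")(x)
        for r in rates
    ]
    return layers.Concatenate()(pyramid)

# Example: apply the ASPP block to a concatenated 12 x 12 x 1536 feature map.
features = layers.Input(shape=(12, 12, 1536))
aspp_out = aspp_block(features)
print(models.Model(features, aspp_out).output_shape)   # (None, 12, 12, 1024)
```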
Experiments
The performance of DSTEELNet is evaluated on the NEU, the generated GNEU and the Severstal datasets. To demonstrate that DSTEELNet achieves a reasonable design and significant results, we compare the proposed DSTEELNet with state-of-the-art deep learning detection and classification techniques such as Yolov5, VGG16, ResNet50, and MobileNet.
Experiment Metrics
For the performance evaluation, this paper uses the following performance metrics:

Precision = TP / (TP + FP)   (8)
Recall = TP / (TP + FN)   (9)
AP = (Precision + Recall) / 2   (10)
F1 = 2 × Precision × Recall / (Precision + Recall)   (11)
mAP = (1/N) × Σ_{i=1}^{N} AP_i   (12)

where N is the number of classes, TP is the number of true positives, FN is the number of false negatives, and FP is the number of false positives. A true positive refers to a defective steel image identified as defective. A false positive refers to a defect-free steel image identified as defective. A false negative refers to a defective steel image identified as defect-free. The average precision AP is calculated as the sum of recall and precision divided by two, as seen in Equation (10). The F1 score is measured to seek a balance between recall and precision. In addition, the mean average precision (mAP) is calculated as the average of the AP of each class and is used to evaluate the overall performance.
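The sketch below computes the per-class metrics exactly as defined in Equations (8)–(12), with AP taken as the mean of precision and recall as stated above; it is written as plain Python for clarity and is not taken from the DSTEELNet code base.

```python
# Per-class metrics following Equations (8)-(12); AP is the mean of precision
# and recall, as stated in the text.
def class_metrics(tp: int, fp: int, fn: int) -> dict:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    ap = (precision + recall) / 2.0
    return {"precision": precision, "recall": recall, "f1": f1, "ap": ap}

def mean_average_precision(per_class_ap: list) -> float:
    """mAP: average of the per-class AP values over the N classes."""
    return sum(per_class_ap) / len(per_class_ap)

# Example with hypothetical confusion counts for two defect classes.
c1 = class_metrics(tp=85, fp=5, fn=5)
c2 = class_metrics(tp=77, fp=13, fn=13)
print(mean_average_precision([c1["ap"], c2["ap"]]))
```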
Experiment Setup
The experiment platform in this work is an Intel(R) Core™ i7-9700L with a clock rate of 3.6 GHz, 16 GB of DDR4 RAM and an NVIDIA GeForce RTX 2080 SUPER graphics card. All experiments in this project were conducted on the Microsoft Windows 10 Enterprise 64-bit operating system, using Keras 2.2.4 with the TensorFlow 1.14.0 backend. We trained DSTEELNet, DSTEELNet-ASPP, VGG16 [66], VGG19, ResNet50 [52], MobileNet [67], Yolov5 [68] and the modified Yolov5-SE [69] for approximately 150 epochs on both the NEU and GNEU training and validation datasets with a batch size of 32 and an image input size of 200 × 200. Similarly, we trained DSTEELNet, VGG16, VGG19, ResNet50, and MobileNet on the Severstal dataset, where the image input size is 120 × 120. We applied the Adam optimizer [70] with a learning rate of 1 × 10^−4. In addition, we applied the categorical cross-entropy loss function during training. The loss is measured between the class probability predicted by the softmax activation function and the true probability of the category. We did not use any pretrained weights such as ImageNet because ImageNet has no steel surface images. We used Equations (8)–(12) to calculate the AP per class and the mAP for the tested models.
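A minimal sketch of this training configuration is shown below. It assumes the build_dsteelnet helper from the architecture sketch and uses dummy arrays in place of the GNEU split; a recent TensorFlow/Keras version is assumed rather than the exact Keras 2.2.4/TensorFlow 1.14.0 setup.

```python
# Sketch of the training configuration (Adam, lr 1e-4, categorical
# cross-entropy, 150 epochs, batch size 32); dummy arrays stand in for the
# GNEU split and build_dsteelnet is assumed from the earlier sketch.
import numpy as np
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical

x_train = np.random.rand(1260, 200, 200, 1).astype("float32")
y_train = to_categorical(np.random.randint(0, 6, 1260), num_classes=6)
x_val = np.random.rand(540, 200, 200, 1).astype("float32")
y_val = to_categorical(np.random.randint(0, 6, 540), num_classes=6)

model = build_dsteelnet(input_shape=(200, 200, 1))
model.compile(optimizer=Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=150, batch_size=32)
```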
Results and Discussion
This section gradually illustrates the results of the proposed CNN architecture for detecting defects in surface steel strips. Table 2 demonstrates the weighted average results. It illustrates that DSTEELNet achieves the highest precision, recall and F1 scores when trained on both the NEU and GNEU datasets, as shown by the bold values in Table 2. Additionally, it shows that the use of the DCGAN improved the precision, recall and F-score of the DSTEELNet model by approximately 1%, 1.3% and 1.4%, respectively. Moreover, it shows that DSTEELNet outperforms recent CNNs for detecting single defects, such as Yolov5 and the modified Yolov5-SE [69], by 13.5% and 8.8%, respectively. Yolov5-SE employs an attention mechanism by adding a squeeze-and-excitation (SE) block between the CSP2_1 and CBL layers to dynamically adjust the characteristics of each channel according to the input. In addition, DSTEELNet outperforms traditional CNNs such as VGG16, VGG19, ResNet50, and MobileNet. Tables 3 and 4 show the class-wise classification performance metrics listed in Equations (8)–(12) and compare DSTEELNet with the state-of-the-art CNN architectures. Table 3 shows that almost all models tend to classify most categories well (such as crazing, patches, rolled-in_scale and scratches). The state-of-the-art models show poor performance in detecting defects such as inclusion and pitted_surface due to some similarities in their defect structures. However, DSTEELNet succeeded in detecting all the class categories with high accuracy. Table 3 shows that DSTEELNet achieves 97.2% mAP, which outperforms the other models, e.g., VGG16 (91.2%; DSTEELNet is 6% higher), VGG19 (90.0%; 7.2% higher), ResNet50 (93%; 4.2% higher) and MobileNet (94%; 3.2% higher). In addition, Table 3 shows that DSTEELNet delivers consistent precision, recall and F1 results for the crazing, patches, pitted_surface, rolled-in_scale and scratches defects. DSTEELNet succeeds in detecting the inclusion defect with the highest F1 score (0.91), followed by MobileNet (0.82), ResNet50 (0.79), VGG19 (0.69) and VGG16 (0.68), in that order. Similarly, DSTEELNet succeeds in detecting the pitted_surface defect with the highest F1 score (0.92), followed by MobileNet (0.84), ResNet50 (0.84), VGG16 (0.79) and VGG19 (0.76), in that order. Examples of DSTEELNet detection results are shown in Figure 8, which shows that DSTEELNet succeeds in detecting defects with significant confidence scores. Table 4 depicts a comparison of single-defect classification accuracy with Yolov5 and Yolov5-SE. The low accuracies achieved by Yolov5 and Yolov5-SE in detecting small rolled-in_scale defects badly lower their average accuracy values. Therefore, DSTEELNet outperforms Yolov5 and Yolov5-SE in classifying the six defect types. Figure 9 shows the training and validation accuracy for DSTEELNet; both training and validation accuracy start to improve from epoch 25 and then converge to their highest values. Figure 10 shows the confusion matrices for the evaluated DSTEELNet and ResNet50 models, where the test dataset includes 90 images of each surface defect class. Figure 10a shows that DSTEELNet detects all the steel surface defects perfectly except the inclusion defects: it misclassified 13 inclusion defects out of 90 as pitted_surface.
Furthermore, as shown in Figure 10b, ResNet50 misclassified 31 inclusion defects out of 90 as pitted_surface. In summary, DSTEELNet fails to detect 2.9% of defects in 540 images, whereas ResNet50, MobileNet, VGG19, and VGG16 fail to detect defects in 6.6%, …, respectively. Table 5 demonstrates the weighted average results on the Severstal dataset. It illustrates that, for steel surface defect detection, DSTEELNet achieves the highest precision, accuracy and F1 scores, as shown by the bold values in Table 5.
Dilation Rates Experiments
The proposed DSTEELNet architecture includes four dilated convolution blocks CB_{t=j} in three parallel stacks, and each stack has a different dilation rate j = 1, 2, 3. In this section, we examine different DSTEELNet architectures by varying the dilation rate per stack and the number of parallel stacks. We trained DSTEELNet with (1) one stack that includes groups of Conv2D layers with different orders of dilation rates and (2) three parallel stacks with different dilation rates per stack. Table 6 depicts the weighted average results of the different DSTEELNet architectures. In Table 6, the use of one stack of Conv2D layers with dilation rates 1,1,2,2,3 achieved better results than one stack with dilation rates 1,2,3,4,5. Table 6 and Figure 11 show that using three parallel stacks with dilation rates 1, 2, 3 achieved the highest F1-score and precision, respectively. Table 6 also shows that DSTEELNet-ASPP improved the precision, recall and F1-score by 2%, 2.2% and 2.1%, respectively, since it enlarges the receptive field and incorporates multi-scale contextual information without sacrificing spatial resolution. Table 7 shows the average inference time to detect defects in a single image for the proposed DSTEELNet and for other deep learning and traditional techniques. It reveals that the traditional methods generally are not able to meet the real-time requirements of the steel industry. In addition, Table 7 shows that the proposed DSTEELNet is the fastest at detecting defects and can meet the real-time requirements. DSTEELNet speeds up the defect detection time of the traditional techniques by approximately 20 times and outperforms the deep learning techniques. The accuracies of MobileNet and ResNet50 are higher than those of VGG16 and VGG19, but they take a longer time to detect defects. In summary, DSTEELNet achieves the highest accuracy and the shortest detection time due to its reduced computational complexity.
DSTEELNet also outperforms the recent end-to-end defect detection (EDDN) technique [71], which adds extra architecture to VGG16, including multi-scale feature maps and predictors, for detection. The authors reported that EDDN achieved 0.724 mAP and can detect defects in a single image in 27 ms. DSTEELNet outperforms EDDN and can detect defects in a single image with 0.972 mAP at 23 ms. In addition, Yolov5-SE [69] succeeded in detecting defects in a single image with 0.88 mAP at 24 ms. DSTEELNet succeeds in detecting and classifying defects at 23 ms with a higher precision than Yolov5-SE, as shown in Tables 2 and 7.
Conclusions
This paper designed and developed a CNN architecture suitable for the real-time surface steel strip defect detection task. It proposed DSTEELNet, which employs sparse receptive fields and parallel convolution stacks to generate more robust and discriminative features for defect detection. The experimental results show that the proposed DSTEELNet with three parallel stacks with dilation rates 1, 2, 3 achieved 97% mAP and outperformed state-of-the-art CNN architectures, such as Yolov5, VGG16, VGG19, ResNet50 and MobileNet, by 8.8%, 6%, 7.2%, 4.2% and 3.2% higher mAP, respectively. In addition, we developed DSTEELNet-ASPP, which further improved the precision, recall and F1-score. As future research, we will explore methods to achieve more precise defect boundaries, such as performing defect segmentation based on deep learning techniques.
Funding: This work was supported by the Vice Provost for Research at Southern Illinois University Carbondale as a startup package for the author.
Data Availability Statement: Two publicly available datasets, NEU and Severstal, were used to illustrate and evaluate the proposed architecture.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2003-03-01T00:00:00.000
|
8639030
|
{
"extfieldsofstudy": [
"Geography",
"Medicine"
],
"oa_license": "pd",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1289/ehp.111-a142a",
"pdf_hash": "b1ad786aa9a7f12e52f75010457d13d2400bd6ef",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43146",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "2ceefb01dc34f5168ba128795b5ecdf89096cc10",
"year": 2003
}
|
pes2o/s2orc
|
Importance of the Great Lakes.
The article by Knap et al., “Indicators of Ocean Health and Human Health: Developing a Research and Monitoring Framework” (Knap et al. 2002), was a welcome overview of issues that link the environmental condition of marine/ocean ecosystems and human disease. The complement to the growing concern about the connection between health and the marine environment is a corresponding emphasis on large freshwater lake ecosystems and human health. In the United States and Canada, for example, the Great Lakes basin contains a set of inland seas that are oceanographic in scale. They serve as a highway for international maritime commerce and support a $1 billion/year recreational and commercial fishing industry. In addition, they must also supply drinking water for over 15 million people. The Great Lakes hold about 20% of the world’s surface freshwater. In this context, the degradation of the Great Lakes ecosystem through chemical and biological contamination presents an enormous challenge for the future. Questions about the impact of methyl mercury, polychlorinated biphenyls, and other chemicals on the health of those who eat fish from the Great Lakes; about the role of bacterial loading of coastline beaches on disease; and about the quality of drinking water taken from the lakes are among those in need of intense study. Surprisingly, in comparison with the number of research organizations and funding opportunities that concentrate on the marine environment, there are very few governmental or academic programs that target the Great Lakes environment. In this context, it should be a priority to develop research programs that can enlarge the knowledge base so that the Great Lakes can be sustained as the centerpiece of our freshwater resources.
Appreciation for "Remembering Alice Stewart"
EHP deserves appreciation for publishing "Remembering Alice Stewart" (Mead 2002). However, I would like to address a few inaccuracies and important omissions about this scientist's contributions that warrant comment.
The Oxford Survey of Childhood Cancers (OSCC) did not limit itself to "children [who] had died of lymphatic leukemia," (Mead 2002) but it included all children who had died of any form of cancer, anywhere in the United Kingdom. Mead's statement that Stewart found that children who died of cancer had received prenatal X-rays twice as often as healthy children should read that among the children that had died of cancer, twice as many had been exposed to prenatal X-rays, as compared to the group of healthy children serving as controls.
Mead correctly stated Stewart's conclusion that radiation protection committees… had grossly underestimated the [unavoidable] number of cancers due to background radiation, but failed to refer to her pivotal study linking variations in local background exposure levels over a narrow grid all across the British Isles with variations in local childhood cancer (Knox et al. 1988). This observation led Stewart to infer that while about 7% of all childhood cancers were associated with prenatal X-rays (declining thereafter with declining doses), more than 70% were associated with unavoidable in utero exposures to natural background radiation (Knox et al. 1988). This study contradicted the popular contention that small anthropogenic increases in population exposures from radioactive fallout or environmental contamination have no detectable detrimental health consequences.
Based on her findings, Stewart developed a model of carcinogenesis that links a strongly age-dependent risk for radiation-induced cancer (highest during early fetal development, lower by at least a factor of three before birth, lowest in young adults, then rising again sharply after 40 years of age) with an age- and general-health-dependent variation in individual immune defense competence. The evidence for this relation combines the findings of the OSCC with those of nuclear worker studies. Mead's article (2002) and other reports have focused primarily on the politically explosive challenges that Stewart's work presented to official radiation risk assessments. However, for the history of pioneering scientific ideas it is far more significant to note that most of these contradictions derive from Stewart's unconventional insights into the confounding effects of selection (such as the healthy worker and healthy survivor effects) and of competition between malignant and nonmalignant causes of death in epidemiologic mortality studies. Ignoring these factors has led to dramatically different outcomes in the analysis of the same statistical database.
Nitrate and Methemoglobinemia
After they had collected extensive particulars in the Transylvania Region of Romania for an epidemiologic cohort study exploring a hypothetical relationship between high nitrate infant exposure and later neuropsychologic development, Zeman et al. (2002) tried to take advantage of these data in order to settle the question of whether infant methemoglobinemia is correlated with mean daily nitrate intakes or with diarrheal disease in the first months of life. However, we have serious reservations about their paper, especially their methodology.
In the study, proxy interviews of primary caregivers were used to reconstruct mean daily dietary nitrate exposures, but these interviews took place nearly 5 years after the clinical events. Although such data may be accurate enough for the study of a chronic disease, their reliability and accuracy are questionable in the study of an acute condition such as methemoglobinemia.
Well-water samplings were intended to evaluate water nitrate levels and reconstruct mean daily dietary nitrate exposures; they too were taken nearly 5 years after the clinical incidents. Nitrate levels in well water vary with time and season, which again opens the method to criticism.
Infant methemoglobinemia is an acute event. Usually the only clinical symptom (i.e., cyanosis) spontaneously disappears in several minutes, at the most in a few hours. We wonder what good it serves to try to correlate such an acute and transitory infant disease with mean daily nitrate intake during the first months of life. Moreover, why did interviewers (Zeman et al. 2002) ask primary caregivers to recall dietary habits of the child at both 2 and 6 months of life, when all clinical incidents regarded as methemoglobinemias appeared before (in their Table 1) or around (noted in text) 2 months of life?
Our primary criticism of the paper (Zeman et al. 2002) refers to the recruitment criteria of the infant methemoglobinemia cases: in all the cases, diagnoses are merely clinical. It is commonly known that cyanosis appears when methemoglobin levels exceed 10% (not 3%, as might be construed from the comments of the authors). Of course, methemoglobinemia is not the only cause of infant cyanosis; other pathologic conditions are quite possible. The only way to diagnose a case of infant methemoglobinemia with assurance is to measure the methemoglobin level in the blood at the time of the clinical incident. The physician will be justified in recognizing the case as infant methemoglobinemia if, and only if, the methemoglobin level exceeds 10%.
These important reservations having been stated, it is not at all surprising that Zeman et al. (2002) found the strongest association with estimated nitrate exposure, given that the infants were exposed to extremely high nitrate levels. The mean nitrate content of the drinking water in the cases was estimated at over 25 times the current U.S. drinking water maximum contaminant level (MCL) of 10 ppm nitrate nitrogen (U.S. Environmental Protection Agency 1991). In all of the cases, the nitrate content of the drinking water was at least 5 times higher than the MCL, and one case was 120 times the U.S. official limit.
Despite these extreme water nitrate concentrations, it is possible to note, as Zeman et al. (2002) did, that at lower estimated nitrate exposures diarrhea appears to promote the development of infant methemoglobinemia. We are not sure that their work really succeeds in demonstrating such a link, but a number of papers published in the last few years [cited by Avery (1999) and L'hirondel and L'hirondel (2001)] had already convinced us of it.
Methemoglobinemia Risk Factors: Response to Avery and L'hirondel
Doing and discovering are always harder than critiquing. Should one have the opportunity, or misfortune, of merely reviewing the fieldwork of others, one would have the luxury of never being criticized. Real-world field epidemiology does not provide that luxury.
In the real world of epidemiology, where individuals apply shoe leather to pavement-or to the village farmer's field, as the case may be-answers are carefully pried from conditions as they exist; there is nothing as tidy as a controlled laboratory setting.
Despite the best efforts, case-control studies are always open to criticisms and specific weaknesses. Case-control studies are subject to recall bias; they are subject to difficulties in classification of cases and controls; you cannot calculate incidence rates from this study design; and causality can be difficult to establish (which Avery and L'hirondel failed to point out) (Gordis 2000).
What do field researchers do under such conditions? They use as many sources of information as possible to assure that internal validity of study design is the best that it can be, given the circumstances in which diseased individuals are found. Careful design included several safeguards.
First, when methemoglobin levels were not available on medical charts, multiple criteria for the determination of a case were used (Zeman et al. 2002a), including exposure history and positive ascorbic acid response to dyspneic respiratory distress. We are well aware that a clinical case of methemoglobinemia does not manifest at 3% methemoglobin in the blood and are not at all sure how that was "construed" from the text.
Second, we assessed recall bias in reported feeding regimes by comparing reports from cases and controls of the amount and frequency of feeding; we found no significant differences between the groups using analysis of variance (Zeman et al. 2002a). Although we must always assume that some recall bias operates in the case of dietary recall, research has indicated that surrogate interviews are most accurate when the surrogate recalls information from salient periods of a dependent's life, such as the first 6 months (Baranowski 1991; Livingstone 1992). Further, it is important to focus on windows of exposure for the sake of recall and to clearly define exposure, another concern of the case-control study design. In this case, we used both the needs of a larger cohort study and a nested case-control study to choose windows of exposure (Zeman et al. 2002a), which is entirely defensible on the grounds that the majority of methemoglobinemia cases occur in the first 6 months of life. These windows allowed us to capture the majority of methemoglobinemia cases and, in many instances, to see how exposures changed over time. Additional methodologic reviews are cited in our article (Zeman et al. 2002b).
We correlated levels of analyzed nitrate/nitrite in well water with Sanitary Police Records made following methemoglobinemia incidents (Zeman et al. 2002a). In all cases and controls, the water source for the child had to be the same source implicated in the original incident. The average age of wells included in this study was 38.6 years, ranging from 6 years to an estimated > 100 years of continuous use. Further, the use of nitrogen-containing fertilizers has decreased since the early 1990s, and compost-based applications have gone up because of economic conditions. All current measured levels were taken in the spring of the year, the same time that the majority of methemoglobinemia cases occurred (Zeman et al. 2002a).
All epidemiologic studies are subject to weaknesses, but the determinants of causality include strength of association, biological mechanism, time series of events, dose-response relationships, and consistency of findings (Greenberg et al. 1996). It is in these basic tenets of the practice of epidemiology that we find the reasons, justification, and value of this work.
Any case-control study can be improved through the use of a prospective study design; but is that ethical, given the extremely high levels of exposure that exist in these rural communities? We prefer to focus additional work on finding ways to alleviate these exposures, and we wonder why more non-governmental organizations are not working to do the same, rather than to befuddle the practice of public health and roll back maximum contaminant levels for known environmental toxins.
Addressing Global Warming
The issue of global warming reaches beyond the question of whether the atmosphere is indeed heating up due to long-enduring emissions of greenhouse gases. Although debate may continue over whether global warming is real, another issue worth examining is whether democratic systems of government are effective in protecting society's welfare against systematic, long-enduring threats such as global warming.
Democracy appears to harness a collective intelligence of a population for the purpose of protecting the population's welfare. Candidates for political office are elected on the basis of how well they represent the views and interests of the populace.
But just what are the interests of the populace? Are society's long-enduring survival interests always aligned with the collective interests of a given population?
In the early stages of global warming, when there is no imminent threat to the population-regardless of the threat to future generations-a given populace cannot be expected to democratically elect leadership who will force the population to responsibly address the global warming issue. The single lifetime interests of citizens in the population will not be sufficiently aligned with the multiple lifetime interests of the society they live in.
The global warming issue needs to be addressed strongly in its early stages in order to safeguard life on our planet from the threat of future extinction. For this reason, it appears to me that democracy may be an inadequate means of governing all issues relevant to a society's welfare in the 21st century.
Joe Kinney Engineering Graduate
Danville, Indiana E-mail: JCK17@yahoo.com
Estimating Costs of Environmental Disease
Due to effective control programs in the industrialized world, childhood mortality from infectious diseases has decreased dramatically over the past 140 years (DiLiberti and Jackson 1999). In recent years, increased attention has focused on chronic childhood diseases such as asthma and certain neurodevelopmental disorders. Although the etiology of these diseases is complex, there is substantial evidence linking the environment to the onset or exacerbation of certain chronic conditions. While such relationships have been proposed, significant uncertainties remain, and more research is needed to assess and quantify the impact that environmental chemicals have on children's health. The authors of the study under discussion estimated a total cost to society of $54.9 billion (range: $48.8-$64.8 billion) per year resulting from disease associated with the exposure of children to environmental chemicals. Although such an exercise can be valuable for setting public health priorities, quantification of cost estimates may overstate the scientific certainty of the disease-environment relationships. This leaves the methodology open to criticism and makes the results difficult to interpret.
The well-known effect of lead exposure on neurodevelopment in children provides the largest component ($43.4 billion) of the authors' estimate. Meta-analysis of several data sets shows an inverse relationship between blood lead level and IQ in children (Schwarz 1994). This relationship has served as the basis for several estimates of the potential cost savings resulting from the reduction in environmental lead, as well as its impact on intelligence and health over the past 25 years (e.g., Salkever 1995). Decreased lead in the environment since the 1970s has reduced the average blood lead level in children by approximately 15 µg/dL, from 17.8 µg/dL in 1970 to a current level of approximately 2.7 µg/dL. The authors have suggested that this has resulted in an increase in average IQ of almost 4 points, and they based their cost estimate for blood lead on the impact of making further reductions from the current blood lead level down to zero. Even if the linear relationship between blood lead levels and IQ points is valid below current levels, a hypothesis that is not directly supported by any data, the average annual cost they estimated is on the high end of the range of other estimates based on the same data. Although the authors acknowledged the lack of data to support the blood lead-IQ correlation at low blood concentrations, the dollar figures they estimated contribute by far the most substantial portion to the overall cost estimate. The authors also estimated the costs associated with neurodevelopmental disorders, childhood cancer, and asthma caused by environmental chemicals. In this pioneering area, they estimated an environmentally attributable fraction (EAF) for each disease by convening a panel of several experts for each disease. The EAF used in the cost estimate is the mean value provided by the respondents on each panel: 30% for asthma, 5% for cancer, and 10% for neurobehavioral disorders such as dyslexia, attention-deficit hyperactivity disorder, autism, and intelligence reductions. On this basis, environmentally related neurodevelopmental disorders were estimated to cost $9 billion/year, while asthma and cancer make up the remaining $2.5 billion of the $54.9 billion estimate. There is no doubt a great deal of debate within the scientific community over what the actual EAFs may be. Some scientists may speculate that an appropriate EAF is zero, while others speculate a higher number. The "average" opinion of a small expert panel will not reflect this debate. As an example, Wallstein and Whitfield (1986) asked six experts for estimates of the amount of IQ deficit that would result in children with blood lead levels of 5 µg/dL. Three of the six experts (50%) estimated no IQ deficit; the remaining experts estimated values of less than 3 IQ points. The mean estimate from all six would be just less than 1 IQ point, which obscures the fact that one-half of the experts estimated no deficit. The authors used the estimate of $54.9 billion/year in costs related to environmental diseases in children to argue that the expense of further research in this area is comparatively minimal. This is a laudable goal, but it may not be the best approach. The fact that there is such uncertainty in the EAFs for these diseases is justification enough for further research. Cost estimates such as this must be produced and published with caution, and the cost numbers presented here have already been picked up by the mass media.
Because of the brevity of such presentations, details of how the estimate was obtained and the uncertainties in the methodology often cannot be made clear. The extremely high cost numbers could provide easy targets for critics. Instead of serving as a justification for more research, such estimates may cause researchers studying the health effects of environmental chemicals to lose credibility within the larger community of scientific and medical research.
Marc L. Rigas Clark County Health District
Las Vegas, Nevada E-mail: rigas@cchd.org
Estimated Costs of Environmental Disease: Response to Rigas
We thank Rigas for his thoughtful comments. We would be the first to agree with him that more research is needed to better define etiologic associations between pediatric disease and toxic chemicals and to further refine estimates of the costs of diseases of environmental origin in children. The field is still very much in its early stages.
That said, however, we disagree with Rigas' argument that an effort to quantify the costs of disease of environmental origin in children is not credible. We are of the opinion that this effort is, in fact, essential a) to counter one-sided and often ill-founded claims about the high costs of controlling pollution, b) to examine the costs of diseases of environmental origin in relation to the costs of other societal problems, and c) to guide the establishment of priorities for research and prevention.
In defense of our analysis, we note that the environmentally attributable fraction (EAF) methodology that we used derives directly from an Institute of Medicine report, "Costs of Environment-Related Health Effects: A Plan for Continuing Study" (Institute of Medicine 1981); the committee that developed that report was chaired by Nobel Laureate economist Kenneth Arrow, who also advised our study. This methodology has been used with great success in estimating the costs of occupational disease in American workers (Fahs et al. 1989; Leigh et al. 1997), and those estimates have been relied upon extensively by the National Institute for Occupational Safety and Health (NIOSH) and by state health agencies for over a decade. We acknowledge, of course, that uncertainty surrounds any estimate of an EAF and that the estimates will reflect the beliefs and the experience of the members of the consensus panel. That is why we indicated a range of uncertainty around each of our estimates, and why we populated our panels with nationally recognized subject matter experts who had no financial or other conflicts of interest in regard to the topics under evaluation.
Rigas challenges our analysis of the costs of current levels of exposure to lead on the grounds that it has not been proven that the linear, inverse dose-response relationship that has been observed repeatedly between blood lead level and intelligence (IQ) extends downward to a blood lead level of zero. In our report we acknowledged that limitation, but we argue that the linear calculation is biologically plausible because "to date cognitive deficits have been associated with all ranges of blood lead concentration studied, and no evidence of a threshold has been found." Indeed, the most recently conducted research (Lanphear et al. 2000) found a negative association between blood lead level and both reading ability and mathematical ability at blood lead levels as low as 5 µg/dL, with no evidence of a threshold below that mark. Moreover, our conclusions on the high costs associated with present-day exposures of American children to lead are buttressed by another report recently published in Environmental Health Perspectives, which analyzed the benefits to American society that have resulted from the removal of lead from gasoline. That study found that the gain in children's intelligence that resulted from the reduction in blood lead levels following the removal of lead from gasoline has created an increase in national economic productivity, which in each annual birth cohort amounts to $213 billion.
Finally, in counterpoint to Rigas' implied criticism that our conclusions are inflated, we note that in developing our estimates we consistently erred in the direction of conservatism. We examined only four categories of disease. We avoided consideration of disease entities for which there exist strong suspicions of environmental etiology but for which quantitative data are lacking. We chose not to estimate costs for which public data were not readily available, for example, the costs associated with the special education of children who have suffered lead poisoning. And last, we chose not to quantify the costs of pain and suffering.
Almost certainly the true annual costs of disease of environmental origin in American children are greater than our estimate of $54.9 billion. Those true costs will come to be more fully appreciated as future research elucidates additional etiologic connections between environmental exposures and pediatric illness.
Wastewater Treatment for a Coffee Processing Mill in Nicaragua: A Service-Learning Design Project
An undergraduate capstone design project team consisting of candidates for the BS degree in Civil and Environmental Engineering and faculty designed a wastewater treatment facility for a coffee processing mill in rural Nicaragua. The team visited Nicaragua to interview the client community, survey the actual site, and hold discussions with local development agencies, agricultural cooperatives, and potential contractors. Three alternative designs were developed, and the community chose one involving a settling basin and a series of three infiltration pits. The treatment system is expected to meet MARENA discharge standards for pH, turbidity, and chemical oxygen demand (COD) of discharge wastewater. The project team further secured sufficient funding from Tetra Tech, an environmental consulting firm, to build the coffee mill and the water treatment facility. On a subsequent visit, the mill and treatment facility were seen to be well constructed and functional. Several important lessons regarding international service learning design projects were learned: the need for a large cost overrun buffer; the utility of having contributors on-site during both important decision making stages and construction; the value of working with local organizations to facilitate remote work projects; the possibility of private sponsorship and partnership for charitable projects; and the need to work with the community to design and select appropriate technologies.
INTRODUCTION
Strongly fluctuating world market prices for green coffee have long posed difficulties for small-scale coffee producers in developing nations. Over the period 1970-2010, for example, world coffee prices varied greatly, with the annual average ICO Composite Indicator Price fluctuating 1,2 between an all-time high of $2.29/lb in 1977 and less than $0.46/lb in 2001. After languishing well below $0.60/lb during the early 2000s, it only rose above $2/lb during 2011 i. The period during the early 2000s, when coffee prices were very low, became known as the "Coffee Crisis". 3 One response to the widespread economic displacement and poverty of the Coffee Crisis was to emphasize production of higher quality coffee that also bore international certifications such as Fair Trade and Organic. Success in these specialty markets, drawing prices as much as 40% above those of the commodity markets, required progress in two areas: 1) adaptation of farms and methods to comply with the requirements of the desired certifications; and 2) production of coffee of sufficiently high quality that it will be purchased. Certification alone does not guarantee sale on the specialty market. At its best, the traditional Nicaraguan process (described below) is capable of producing truly fine gourmet coffees. 4 To successfully participate in the specialty market, farmers realized that they needed to raise the level of, and minimize the variation in, the quality of their coffee. Development agencies worked with the agricultural communities to improve coffee quality as a strategic way to address their poverty; the Quality Coffee Program of USAID 5 represented one of these efforts. There are a number of studies assessing the overall success of these specialty certification initiatives. 6,7,8

The design project described in this paper arose as part of Coffee for Justice, a project initiated at Seattle University by one of the authors (SCJ) in collaboration with her American and Nicaraguan partners and their students. In response to the Coffee Crisis, the mission of Coffee for Justice was to place scientific and technical expertise in service to the small-scale Nicaraguan coffee producer. A discussion of the establishment of Coffee for Justice as an effective program and a review of the outcomes achieved prior to those described here has been accepted for publication elsewhere. 9 The results of applied chemistry studies conducted in a field laboratory and by the producers on their farms have also been reported. 10,11

In order to engage the coffee farmers through their organizations and listen carefully to the questions and requests that they might have for us, a process of gradual engagement, as described in Reference 9, was undertaken that began by working through development organizations and agencies such as USAID ii and CRS-NI iii, then progressing on to regional agricultural cooperatives such as ADDAC iv and CECOSEMAC v, and finally to individual communities and farmers. Through this network of trusted contacts we were able to learn what was wanted by the farmers and their organizations and to gain access to the farms in order to study their processes.
i These historical prices are not adjusted for inflation.
Sorting, depulping, fermentation, and drying
The local practice in coffee production involves hand-picking of ripe coffee cherries, followed within a few hours by manual selection to eliminate defects and mechanical depulping, all taking place on the farm. After the exocarp (skin) and mesocarp (pulp) of the ripe coffee cherry are eliminated by depulping, the coffee beans retain a sticky, firmly attached mucilage layer that must be removed prior to drying and storage. The mucilage-coated beans are generally placed in an open tank, where natural fermentation is allowed to proceed for 10-48 hours, depending on weather, altitude, etc. At some point (termed "completion") the fermentation process has loosened the mucilage from the bean surface, such that it can then be washed away with water, a process that halts the fermentation and thoroughly cleans the beans.
The clean beans are moved to a patio on the farm, where they are sun-dried to approximately 40% water content. In most cases, they are then sacked and quickly transported to "dry mills" in hotter and drier locations at lower elevation. After further sun-drying to approximately 12% water content, the beans are sufficiently stable for storage prior to export as green coffee.
On-farm mill or beneficio
Coffee producers carry out the steps described above on the farm in a "wet mill" facility known as a beneficio vi. This well-ventilated, sheltered space, which contains the depulping machinery, fermentation tanks, and washing trough, requires a plentiful source of clean water, both to convey the cherries and beans within the mill and to wash them thoroughly. The beneficio must also provide for disposal of a large amount of wastewater, which is heavily laden with carbohydrates and other nutrients from the depulping and washing.
In small communities, immediate neighbors may share a beneficio by coordinating their picking and processing schedules. The least advantaged communities sometimes lack even this basic shared processing facility, often resorting to the use of woven plastic sacks for fermentation and improvised washing methods in tubs. Without a clean and well-ventilated beneficio, producers cannot be expected to produce coffee of the highest quality.
Wastewater Treatment
The process described above includes discharge of large amounts of wastewater 12 resulting from the various processing steps: sorting of cherries by flotation; pulp removal; and washing of the beans after fermentation is complete. Overall, this water is acidic, deoxygenated, and laden with suspended solids and organic material from the pulp and mucilage. 13 In many traditional beneficios, the wastewater is simply discharged into the nearest stream or river, with no consideration of the environmental consequences.
vi "Beneficial mill" in Spanish Modern, "ecological" designs (beneficio ecologico) both reduce the overall water requirement and provide treatment of the wastewater before discharge.The reduction in volume is largely achieved by using pulpers designed to operate with less flowing water and by designing the mill to require less water for transport of the cherries and beans between stages.The remaining water, largely from the washing stage, is then usually treated in a set of settling pits before infiltration into the soil or discharge into the surface waters.On the smaller farms, these pits are often dug without any formal design, and the water usually receives no pre-treatment before discharge into them.In the most advanced installations, generally found only on the largest of haciendas, the treatment of the wastewater may include carefully designed anaerobic digestion with capture of biogas gas for farm fuel.
The stream discharge of polluted wastewater degrades the overall environment of the farm and the water quality for downstream neighbors, which affects drinking water, sustenance crops, and livestock. Also, some desired certifications 14,15 require that wastewater receive at least minimal treatment before discharge. In our conversations, the small-holder farmers were often interested in appropriate treatment of their wastewater, provided that treatment was neither too expensive nor too difficult to implement and maintain.
DESIGN PROJECT BACKGROUND
In a spring 2007 meeting with the leadership of the agricultural cooperative CECOSEMAC, we were informed that a member community at La Suana possessed the potential for excellent coffee, but lacked the basic beneficio needed for well-controlled, high-quality, on-farm depulping and fermentation. This community possessed strong leadership, excellent coffee growing conditions, a good water supply, a site suitable for a beneficio, and a desire to treat their wastewater in an environmentally appropriate manner. This community need for a beneficio was well suited for a service-learning capstone design project for a team of environmental engineering majors in collaboration with the Coffee for Justice Project and the Seattle University Science and Engineering Project Center. The resulting project, Wastewater Treatment Design for a Coffee Processing Beneficio, 16 was accomplished as a capstone design project by a team of four BS in Civil and Environmental Engineering students during the 2007-8 academic year.
Preliminary Visit
It was decided during a preliminary visit by the faculty leadership in spring 2007 that the student team would focus only on design of the wastewater treatment facility, with the design of the beneficio building and construction of the entire facility to be accomplished by a small contractor familiar to CRS and CECOSEMAC. It became apparent that this project would be different than the ordinary Project Center capstone in a number of ways:

1. The project client was a poor rural community in Nicaragua that did not have resources to fund it. Normally, the industrial sponsor funds the project and serves as its client.

2. The client interviews and site assessment necessary for requirement gathering would require the team to visit the rural community at La Suana. This would involve international travel, the transport of field survey and test equipment, and the presence of someone with adequate language skills.

3. The student team would see the project through to implementation and evaluation. Normally, designs resulting from completed student projects are delivered to the sponsor, with subsequent implementation of the design being the prerogative of the client. By involvement in implementation, the service-learning project goes beyond the theoretical and offers a greater opportunity for professional service and development.
Funding available for the project would need to be adequate not just for the design phase, but also for the construction of the entire beneficio and water treatment facility. And, since significant funding was to be expended in Nicaragua, the team needed to establish a mechanism for responsible remote project management and expenditures. This project management was complicated by the fact that none of the team could be present on-site during the construction phase.
Full Team Visit
The faculty and the four-member student team visited Matagalpa, Nicaragua in December 2007. They toured and observed several functioning beneficios, including measurements of the volume and quality of wastewater (see Figure 1) at the ADDAC model farm La Canavalia.
FIGURE 1 WASTEWATER QUALITY MEASUREMENTS AT LA CANAVALIA
This beneficio utilized an ad hoc system of three settling pits to provide water improvement through filtration, neutralization, and microbial decomposition of organic material before infiltration to the nearby stream.
Subsequently, the student team interviewed the client community at La Suana and went to the proposed site, carefully surveying it and determining that there was suitable land for both the mill and adjacent water treatment options (see Figure 2). Working with the staff of CECOSEMAC and CRS-NI, the team identified a contractor with the necessary expertise to build the water diversion/storage structure, the wet mill, and the water treatment option that would be selected by the community. The initial construction timeline and cost estimates were established for construction to be completed prior to November 2008, the start of the next harvest season (see Figure 3).
Project Goals
Upon returning to Seattle in December 2007, the team established a set of goals: design a set of wastewater treatment options to be presented to the client community in late winter or early spring of 2008; work with the community at long range to select an option and finalize the design with the contractor; establish a mechanism both to disburse funds to the contractor as he completed phases of the project and to keep the project on track for completion prior to October 2008; and construct the beneficio prior to the 2008 harvest season, in the dry season when roads are passable. Trusted Coffee for Justice contacts in Nicaragua (staff and leadership at CRS-NI and CECOSEMAC) agreed to serve as on-site observers and as our representatives.
Project Constraints
The team identified several project constraints:

1. Capacity: The beneficio and water treatment facility must meet the processing requirements of five families and meet or exceed regional standards for high quality coffee.

2. Water Conservation: The process steps must be designed to minimize water usage, particularly in the transport and mechanical pulping of the coffee cherries.

3. Sustainable and Organic Standards: The entire design must be compatible with sustainable and organic standards as required by the local farmer's cooperative CECOSEMAC and by the Nicaragua Fair Trade program. For discharge to surface waters, the wastewater effluent must meet or exceed standards of the Ministerio Del Ambiente y Los Recursos Naturales de Nicaragua (MARENA).

4. Electrical Power: The location does not have access to electricity.

5. Limited Footprint: The wastewater treatment system adjacent to the beneficio must occupy the space available between the existing road and hill, currently a drainage route approximately 3 m wide and 70 m long.

6. Geotechnical Considerations: The road leading up to the site is in poor condition and will not permit large equipment or delivery trucks. Impact on the stability of the existing road was considered in the alternatives.

7. High Strength Wastewater: The wastewater is high strength with respect to BOD/COD content, nutrient concentration, and suspended solids. It also has a low dissolved oxygen (DO) concentration.

8. Low pH: The wastewater is very acidic and must be neutralized to avoid ecological and concrete deterioration.

9. Construction Materials: Materials commonly available in the region include but are not limited to concrete, plastic liner, gravel, PVC pipe, and wood.
Site Assessment
La Suana is located in the municipality of San Ramon (12°55' north latitude and 85°50' west longitude). The climate is sub-tropical rainforest, with average temperature between 20° and 26° C and annual rainfall between 2000 and 2400 mm. The site was surveyed using hand-held equipment, and a plan view of the area available for wastewater treatment (Figure 4) shows that it is to be located in a ditch on a hillside adjacent to a road or path. The ditch is approximately 10 feet wide (opposing arrows in Figure 4) and 220 feet long, for an area of 2200 ft² (200 m²).
In Figure 5 is a picture of the site, looking from top to bottom, an orientation opposite to that of the plan-view map. In the picture, the stream is downhill to the left, the drainage ditch is to the right of the path, and the site of the beneficio is in the field to the right.
Soil Assessment
A grab sample of soil was taken one foot below the surface and qualitatively observed to be a dark brown silty fat clay soil with a high organic content. Due to time constraints, the design team was unable to generate reproducible and reasonable soil infiltration rate data via standard percolation tests, and detailed soil survey data was unavailable for this region. Therefore, Natural Resource Conservation Service 17 data for Puerto Rico was used, with representative soil profiles selected based on the five soil-forming factors: parent material, topography, biota, time, and climate. The three representative soil series chosen (Aibonito, Lirios, and Cortada) are all clay/silt dominated soils derived from volcanic rock parent material and exist on 20-40 percent slopes in mountainous regions receiving approximately 70-90 inches of annual rainfall at an average annual temperature of 75 °F. These soils formerly supported rainforest and are now used to cultivate shade-grown coffee. The reported permeability (velocity) for each of these soils at depths less than 30 inches is 0.6 to 2 in/hr. 18,19
Wastewater Analysis
To characterize the wastewater anticipated at La Suana during its operation, measurements were done on the effluent from the operating beneficio at La Canavalia, which utilizes a system of three open infiltration pits in series. Measurements of chemical oxygen demand (COD), pH, turbidity, and dissolved oxygen (DO) were made at the following locations: the fresh water source; the river above and below the beneficio; the depulping wastewater; the rinse wastewater; the entrance to each of the three infiltration pits; and the infiltration effluent near the river. The depulping and rinsing wastewater are produced at different times, but reside in pit #1 long enough to thoroughly mix. The wastewater overflows from pit #1 in turn into pits #2 and #3.
Sampling was done at intervals throughout the rinsing process, which can take from 0.5 to 3 hours, and a representative sample was created by combining the range of interval samples. The samples for COD measurement were collected in 2 mL vials that were frozen and returned to Seattle University for analysis (Chemetrics high-range COD vials and Hach DRB200 Digital Reactor Block). The pH was determined using a battery-operated colorimetric reader for test strips (ReflectoQuant). The turbidity was determined using a turbidity meter on site (Hach DR/820 Colorimeter). The DO was determined using a DO probe (Vernier Dissolved Oxygen Probe).
The depulping and rinsing processes produce wastewater with different characteristics, with the depulping waste being higher in dissolved organics but less acidic. Being of an "ecological" design, the new beneficio at La Suana should produce relatively less depulping wastewater, and a conservative set of design parameters was determined by taking volume-weighted averages of the rinse (80%) and depulping (20%) wastewater measurements at La Canavalia (Table 1). Based on water volume measured at La Canavalia and conversations with the producers at La Suana, it was estimated that the new mill would produce a peak wastewater volume of 9 m³/day for a daily coffee bean production of 12 quintales (qq). vii Assuming 2.75 qq of cherries yields 1 qq of processed coffee beans, 13 this estimate corresponds to approximately 6 m³ of water per tonne of coffee cherry, which is at the low end of the range cited in Ref. 13 (4-20 m³/t). This estimate was considered reasonable for a modern "ecological" beneficio.
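As a rough illustration of how these design numbers can be derived, the sketch below implements the volume weighting and the water-per-tonne estimate. The rinse and depulping COD values are placeholders only (Table 1 is not reproduced here); the quintal-to-kilogram conversion and cherry-to-bean ratio follow the text.

```python
# Hypothetical rinse/depulping COD values; the paper's resulting design influent
# COD is 5850 mg/L (Table 2).
RINSE_FRACTION, DEPULP_FRACTION = 0.80, 0.20
KG_PER_QUINTAL = 45.36          # 1 quintal (qq) = 100 lb
CHERRY_PER_BEAN_QQ = 2.75       # qq of cherries per qq of processed beans

def volume_weighted(rinse_value: float, depulp_value: float) -> float:
    """Volume-weighted design parameter from rinse (80%) and depulping (20%) streams."""
    return RINSE_FRACTION * rinse_value + DEPULP_FRACTION * depulp_value

def water_per_tonne_cherry(wastewater_m3_per_day: float, beans_qq_per_day: float) -> float:
    """Specific water use in m^3 per tonne of coffee cherry."""
    cherry_tonnes = beans_qq_per_day * CHERRY_PER_BEAN_QQ * KG_PER_QUINTAL / 1000.0
    return wastewater_m3_per_day / cherry_tonnes

if __name__ == "__main__":
    # Placeholder COD values for illustration only (mg/L).
    print(f"weighted COD example : {volume_weighted(6000, 5200):.0f} mg/L")
    # Grounded in the text: 9 m^3/day and 12 qq/day give ~6 m^3 per tonne of cherry.
    print(f"specific water use   : {water_per_tonne_cherry(9.0, 12.0):.1f} m^3/t")
```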
In order to model the removal efficiency of the primary sedimentation pond in subsequent designs, water quality was measured at La Canavalia in triplicate at four depths in infiltration pit #1 (2", 7", 11", 14.5"). As shown in Figure 6, turbidity increases 6-fold from a depth of 2 to 14 in, and COD increases 2.5-fold. The DO and pH remain relatively constant (1 mg/L and 4.2, respectively) throughout pit #1.
To assess the treatment capability of a series of infiltration pits, data was collected at various stages within the wastewater treatment process. When compared, this data (Table 2) shows that pit #1 is effective in removing about 55% of the COD and turbidity, and that seeping infiltrate (through soil) further demonstrates a very large improvement in pH. There is little further improvement in COD in pits #2 and #3, in large part because the pH in those pits remains too low to facilitate the growth of the anaerobic microorganisms necessary for biodegradation.
WASTEWATER TREATMENT DESIGN ALTERNATIVES
The entire wastewater treatment system will occupy a drainage route of 200 m² (see Figures 4 and 5) that maintains a grade of approximately 10%. The designs are sized to accommodate an average daily volume of approximately 9 m³, a maximum effluent rate of 1 L/s (0.001 m³/s), and an influent COD concentration of 5850 mg/L (Table 2). The design parameters for wastewater characteristics, flow rates, and residence times of the treatment facility conservatively achieve an overall "safety factor" of approximately two. All treatment alternatives are intended to meet the 1995 MARENA standards 20 for surface water discharge.
For the purposes of using empirical design equations, the design COD was converted to BOD5. The BOD5/COD ratio of the wastewater is estimated to be 0.60. This value is obtained by weighting the BOD5/COD ratios reported by Deepa et al. 21 for rinse and depulping coffee wastewater with the same volume ratio used for the design parameters in Table 1. This yields a design parameter in terms of BOD5 of 3510 mg/L.
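As a quick check of the stated conversion, the snippet below reproduces the design BOD5 from the influent COD and the 0.60 BOD5/COD ratio; the individual ratios from Deepa et al. are not reproduced here.

```python
DESIGN_COD_MG_L = 5850          # influent COD design value (Table 2)
BOD5_TO_COD_RATIO = 0.60        # volume-weighted ratio of the rinse/depulping streams

design_bod5 = DESIGN_COD_MG_L * BOD5_TO_COD_RATIO
print(f"design BOD5 ~ {design_bod5:.0f} mg/L")  # ~3510 mg/L, matching the text
```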
The team produced three alternative designs, each with a three-stage process: primary treatment in a settling basin with pH neutralization (common to the alternatives); secondary treatment in an anaerobic process unique to each alternative; and a finishing step by either subsurface infiltration (Alt 1) or overland flow (Alts 2 & 3). Common to the three alternatives and described below are: a) a fresh water valve; b) a simple concrete pad for neutralizing wastewater and screening coffee beans from it; and c) a primary settling basin (i above).
Freshwater Valve
A 3-inch PVC pipe brings water to the site from a makeshift sandbag dam and pond at a rate of 1.3 L/sec. Water conservation can be accomplished through installation of a simple ball valve that regulates flow up to the maximum requirement of the operating beneficio (1 L/sec).
Neutralization
To provide an optimal pH range (6.5-7.7) 22 for biodegradation of the nutrients by diverse microbial populations, the wastewater will be neutralized from an initial pH of 4.2 to approximately pH 7. Neutralization of the wastewater, which is to be used for irrigation (Alternatives 2 and 3), will also promote healthy soils and robust vegetative growth. 23 Beneficio workers will be asked to throw calcium oxide (locally available as "cal") onto the concrete slab where the rinse water is channeled en route to the treatment system. Assuming the "cal" will have hydrated to Ca(OH)2 during storage, the stoichiometric amount can be provided using a container marked to hold approximately 34 cm³.
Primary Settling Basin
Common to the three alternatives is a primary treatment settling basin with the following characteristics: a volume of approximately 10.8 m³; lining to prevent direct soil infiltration; and a hydraulic retention time of approximately three hours, a value based on typical settling times for agricultural wastewaters of 3-6 hours. 24 These parameters are based on the beneficio flow rate (1 L/s) and daily operating cycle (< 3 h). In practice, since the mill is used only once each day, the wastewater will have a residence time of approximately eighteen hours before flowing from the settling basin to an anaerobic biodegradation stage for further COD/BOD reduction.
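As a consistency check on the stated retention time, the short snippet below computes the nominal hydraulic retention time of the settling basin from the basin volume and the peak mill flow rate given in the text.

```python
BASIN_VOLUME_M3 = 10.8      # primary settling basin volume
PEAK_FLOW_L_S = 1.0         # beneficio flow rate during operation

hrt_hours = BASIN_VOLUME_M3 * 1000.0 / PEAK_FLOW_L_S / 3600.0
print(f"nominal HRT at peak flow ~ {hrt_hours:.1f} h")   # ~3.0 h, as stated
```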
For mosquito control, the settling basin should be covered with staggered wire-mesh window screens. By staggering the typically available 18x16 mesh (1.0x1.2 mm squares), the hole size can be halved to less than 1 mm in both directions (0.5x0.6 mm), preventing passage of even the smallest mosquitoes. While the hole size in unstaggered 18x16 mesh is deemed acceptable by the WHO 25 for Anopheles control via bednets, a study by Bidlingmayer 26 (1959) found that the dengue vectors Aedes aegypti and Aedes taeniorhynchus passed through an 18x14 mesh copper screen 1.5% and 26% of the time, respectively, compared to only 0.1% and 0.6% when using a 22x22 mesh fiberglass screen (~0.8 mm diameter holes). Because dengue is a concern in the region, and the staggering of common window screens is easy, our recommendation is to employ them to cover the surface of the settling basin. It should be noted that even if the low-pH wastewater during harvest season precludes mosquito larvae growth, accumulation of rainwater at other times of the year still necessitates covers.
Building upon the common elements described above, the team developed three alternative designs for anaerobic secondary treatment and finishing steps.
1. Rock-media infiltration pits; subsurface infiltration.
2. Horizontal-flow attached-growth anaerobic basin; overland discharge and infiltration.
3. Upflow anaerobic sludge blanket; surface discharge and infiltration.

The first two were considered very practical for the community, and the third one was provided to illustrate the small-community implementation of a state-of-the-art technology.
Design Alternative 1
A sketch of the overall design is shown in Figure 7. The settling basin in this design is lined with HDPE plastic and has dimensions of 1.5 m (w) x 1.5 m (d) x 6 m (l), with 2:1 (V:H) side slopes to provide stability in the absence of concrete. The bottom of the basin has a 1:16 slope to encourage settling near the input pipe. 22
FIGURE 7 PRIMARY BASIN & INFILTRATION PITS (ALT #1)
After primary treatment in the settling basin, the wastewater flows out over a V-notch weir wall and is gravity fed into three parallel rock-media infiltration pits. The rock media provides a growth surface for the microorganisms, helps distribute the water throughout the pit at low flow rates, and reduces the danger from falling into the pits. Organic material will biodegrade much as it would in a subsurface flow wetland as defined by the EPA. Based on the measured COD reduction from settling in Tables 1 and 2, the BOD5 of the effluent from the settling basin to the first pit is modeled to be 45% of the design influent, or 1580 mg/L. BOD5 biodegradation in the pits is modeled via first-order kinetics (k = 1.104 d⁻¹) 28 and a time-weighted average residence of 12 h, for a further reduction of 42%. The infiltrate from these pits into the subsurface should therefore have a BOD5 of approximately 910 mg/L, a reduction of 74% before soil infiltration. In comparison, first-order biodegradation in an anaerobic fixed-film process operates with a rate constant of 10.3 d⁻¹ for wastewater containing carbohydrates and protein, 29 making the present value very conservative.
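The staged reductions quoted above follow from simple first-order arithmetic; the sketch below reproduces them using the settling fraction, rate constant, and residence time stated in the text.

```python
import math

DESIGN_BOD5 = 3510.0        # mg/L, design influent
SETTLING_REMAINING = 0.45   # settling basin leaves ~45% of the influent BOD5
K_PITS_PER_DAY = 1.104      # first-order rate constant in the rock-media pits
RESIDENCE_DAYS = 0.5        # time-weighted average residence of 12 h

bod5_to_pits = DESIGN_BOD5 * SETTLING_REMAINING                            # ~1580 mg/L
bod5_to_soil = bod5_to_pits * math.exp(-K_PITS_PER_DAY * RESIDENCE_DAYS)   # ~910 mg/L

print(f"BOD5 entering pits : {bod5_to_pits:.0f} mg/L")
print(f"BOD5 to subsurface : {bod5_to_soil:.0f} mg/L "
      f"({100 * (1 - bod5_to_soil / DESIGN_BOD5):.0f}% overall reduction)")
```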
Physical filtration by soil can remove a broad range of BOD5, reported from 30-40 percent by Gohil 30 (p. 36) to 86-100 percent for a number of rapid infiltration systems. 31 It should be noted that rapid infiltration systems typically have lower BOD loading rates than the infiltration pits, and these results use domestic wastewater. As a conservative estimate, it is assumed that 30% of the BOD is removed via soil filtration, yielding a BOD5 concentration of 640 mg/L entering the subsurface.
To estimate BOD5 reduction due to biodegradation in the subsurface, a first-order rate constant 32 of 0.0127 d⁻¹ is used. This constant applies to biodegradation of aromatic hydrocarbons in northern Minnesota, and is therefore likely to be conservative. Biodegradation time is estimated from hydraulic conductivity and soil porosity. Since the only feasible on-site analysis was a qualitative determination that the soil is a silty clay, an exact hydraulic conductivity and porosity could not be determined. The range of typical hydraulic conductivities for clay is 1x10⁻⁵ to 1x10⁻⁷ m/s. 33 Porosities of clay/silt soils typically range over 50-60%. 33,34 Combining these numbers, the expected residual BOD5 reaching the river approximately 40 m away at the system midpoint would range from 0 mg/L with K = 1x10⁻⁷ m/s to 460 mg/L with K = 1x10⁻⁵ m/s. Using the intermediate hydraulic conductivity, K = 1x10⁻⁶ m/s, results in an estimated residual BOD5 concentration of 25 mg/L. This number is used for subsequent analysis.
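The quoted range of residual BOD5 values can be reproduced if one assumes a unit hydraulic gradient, so that the seepage velocity is approximately K divided by porosity; that gradient is an inference rather than a value stated in the paper, so the sketch below is a plausible reconstruction rather than the team's exact calculation.

```python
import math

BOD5_TO_SOIL = 640.0      # mg/L entering the subsurface after soil filtration
K_SUBSURFACE = 0.0127     # 1/day, first-order biodegradation constant
DISTANCE_M = 40.0         # approximate distance to the river at the system midpoint
POROSITY = 0.55           # midpoint of the 50-60% range for clay/silt soils

def residual_bod5(hydraulic_conductivity_m_s: float) -> float:
    """Residual BOD5 at the river, assuming a unit hydraulic gradient (v = K/n)."""
    seepage_velocity = hydraulic_conductivity_m_s / POROSITY      # m/s
    travel_days = DISTANCE_M / seepage_velocity / 86400.0         # seconds -> days
    return BOD5_TO_SOIL * math.exp(-K_SUBSURFACE * travel_days)

for K in (1e-5, 1e-6, 1e-7):
    print(f"K = {K:.0e} m/s -> residual BOD5 ~ {residual_bod5(K):.0f} mg/L")
# Roughly 460, 25, and 0 mg/L, consistent with the values quoted in the text.
```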
Alternative #1, with a total cost for wastewater treatment of $787 US, meets all of the design constraints and the MARENA standards for surface water discharge. viii Although similar in its design to others used in the region, the plastic-lined settling basin and the use of "cal" for neutralization will improve performance dramatically. The operation and maintenance implications are: daily addition of "cal"; periodic removal of sludge from the settling basin; and renewal of the plastic liner, perhaps annually. The primary "negative" for alternative #1 is that it does nothing to remove nitrate from the infiltrating wastewater. The phosphorous concentrations will be reduced via soil adhesion, however.
Design Alternative 2
In this design, the three infiltration pits of Alternative 1 are replaced by a horizontal-flow anaerobic basin to provide secondary treatment of the effluent from the primary settling basin. Effluent from the anaerobic basin flows on the surface into a hand-excavated shallow sinusoidal channel, with a crop such as corn grown between the channel loops. The wastewater will infiltrate the soil from the channel and will not flow overland into the stream.
Since the anaerobic basin will be formed from concrete, the design specifies that the primary settling basin should also be of concrete and attached to it, with the following characteristics: concrete construction, so no further lining is required; dimensions of 3.0 m (w) x 3.8 m (l) x 1.0 m (d); and division into two sections by a concrete wall 1 m from the end, with three 10 cm PVC pipes passing through at 2/3 of full depth. The concrete settling basin has volume and hydraulic characteristics similar to those of Alternative #1, but with a different shape. The concrete wall dividing it ensures that both the sediment near the bottom and the floating mass of organic material near the surface remain in the first section and do not pass through the weir into the anaerobic basin.
Water flows through a simple rectangular weir from the second section of the settling basin into the adjacent anaerobic horizontal-flow attached-growth basin, which is filled with sand or gravel to enhance the interaction area between microbes and wastewater. The anaerobic basin is characterized as follows: dimensions of 2.8 m (w) x 4.2 m (l) x 1.2 m (h); a volume of approximately 14 m³; a hydraulic retention time (HRT) of approximately 36 hours; and open to the atmosphere. Literature values for up-flow and down-flow anaerobic basins indicate they can accept organic loadings in the range 3000-9300 mg/L BOD5 and can be expected to reduce them by 75-90% for HRTs of 18-30 hours. 35 The combination of unstirred horizontal flow and the extremely low DO of the influent (Table 1) assures that the basin will remain anaerobic except possibly at the surface. In this horizontal-flow case, we use a conservative estimate of 55% reduction, which in combination with the 55% reduction estimated for the settling basin allows us to estimate the BOD5 of the effluent from the anaerobic basin as only 20% of its initial value (0.20 x 3510 mg/L), or 710 mg/L.
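The retention time and effluent concentration for this alternative follow from the same simple arithmetic; the snippet below reproduces them from the basin volume, daily wastewater volume, and stage-wise reductions stated in the text.

```python
BASIN_VOLUME_M3 = 14.0        # anaerobic attached-growth basin
DAILY_WASTEWATER_M3 = 9.0     # peak wastewater volume per day
DESIGN_BOD5 = 3510.0          # mg/L
SETTLING_REMAINING = 0.45     # ~55% reduction in the settling basin
ANAEROBIC_REMAINING = 0.45    # conservative ~55% reduction in the anaerobic basin

hrt_hours = BASIN_VOLUME_M3 / DAILY_WASTEWATER_M3 * 24.0
effluent_bod5 = DESIGN_BOD5 * SETTLING_REMAINING * ANAEROBIC_REMAINING

print(f"HRT ~ {hrt_hours:.0f} h")                   # ~37 h, consistent with ~36 h stated
print(f"effluent BOD5 ~ {effluent_bod5:.0f} mg/L")  # ~710 mg/L, the conservative estimate
```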
The maximum influent BOD5 loading rate recommended 35 for slow-rate soil infiltration is 0.05 kg BOD5/m²-day, which yields a maximum of 10.0 kg/day if the entire ditch (200 m² in Figure 4) is utilized. This maximum requires that the wastewater, at a peak volume of 9 m³/day, have a BOD5 loading of no more than 1.1 kg/m³, or 1100 mg/L. The conservative estimate above of 710 mg/L for the effluent from the anaerobic reactor is well within this recommendation.
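The loading check itself is a few lines of arithmetic; the sketch below shows the calculation for the values given in the text.

```python
MAX_LOADING_KG_M2_DAY = 0.05   # recommended slow-rate infiltration limit
DITCH_AREA_M2 = 200.0
DAILY_WASTEWATER_M3 = 9.0
EFFLUENT_BOD5_MG_L = 710.0     # conservative estimate for Alternative #2

max_daily_load_kg = MAX_LOADING_KG_M2_DAY * DITCH_AREA_M2                 # 10.0 kg/day
max_concentration_kg_m3 = max_daily_load_kg / DAILY_WASTEWATER_M3         # ~1.1 kg/m^3
actual_load_kg = EFFLUENT_BOD5_MG_L / 1000.0 * DAILY_WASTEWATER_M3        # kg/day

print(f"allowable: {max_daily_load_kg:.1f} kg/day ({max_concentration_kg_m3:.1f} kg/m^3)")
print(f"actual   : {actual_load_kg:.1f} kg/day -> within the recommendation")
```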
In addition to the reduction of organic components, the phosphorous- and nitrogen-containing waste components are also of concern for surface disposal. Phosphorous, with an estimated concentration of 15.8 mg/L in coffee processing wastewater, 13 is expected to bind with organic and mineral soil components and not leach significantly through the soil profile. 36 Nitrogen removal will also occur in the rhizosphere of the overland soil infiltration step. At the daily flow rate above, for a maximum of 120 days per year, the expected wastewater nitrogen concentration (150 mg/L) 13 yields an estimated maximum actual nitrogen loading rate of 0.94 kg/m²-yr. The acceptable nitrogen level for water leaching into the ground water is set 37 at 10 mg/L in Washington State and is used as the design standard here. In conjunction with an analysis 35 for crop uptake, denitrification, and volatilization, this value yields a maximum allowable loading of 1.00 kg/m²-yr, which is 6% greater than the actual maximum loading rate estimated above. Harvest of corn, which is commonly grown in this region, would result in removal of nitrogen and phosphorous from the soil, aiding in the wastewater treatment.
Alternative #2, with a total cost for wastewater treatment of $1,506 US, meets all of the design constraints and the MARENA standards for surface water discharge. Although initially more expensive than alternative #1, it offers the farmers a simple method of wastewater treatment that would also yield irrigated corn and removal of the inorganic nutrients that are left unremediated in alternative #1. The only regular maintenance would involve removal of sludge from the primary basin and daily addition of "cal" during the coffee harvest period. The "negatives" of this design include greater cost, farmers' resistance to re-using wastewater even for irrigation, and reliance on consistent addition of "cal."
Design Alternative 3
Alternative #3 is similar to alternative #2, except that it uses an upflow anaerobic sludge blanket (UASB) reactor instead of the attached-growth anaerobic basin for secondary treatment. Primary treatment would again involve a settling basin as in the other designs, and final disposal would be through an overland channel as in alternative #2. UASB reactors demonstrate BOD5 reductions of 85-95% 22 and are used in some larger haciendas in the region. Similar to Alternative #2, this design would meet all performance criteria, including nitrogen and phosphorous removal. Construction would be simple, but would require purchasing a prefabricated UASB. The production of biogas to be used as a fuel is not a compelling attraction at La Suana, where use of the gas in the kitchen would require transporting it some distance across the farm. Also, many of these farms have adequate supplies of fuel wood that is grown for that purpose within the shade canopy of the coffee plantation. Since it can be difficult to restart a UASB reactor, the system must be maintained and supplied with biomass year-round, posing a significant operational burden for a small community with no beneficio employees. For these reasons, alternative #3 was presented, but not recommended, and was not given strong consideration by the community. Details of the design are available upon request.
EVALUATION AND RECOMMENDATION
In May 2008, the student team presented the alternatives to the small producer clients. Due to academic schedule and travel budget constraints, the report, recommendation, and discussion were transmitted electronically rather than presented in person. Table 4 outlines a comparison of the two most feasible alternatives with respect to four significant parameters.
Alternative #1 is an attractive option as it is the least expensive, is a technology familiar to the farmers in this community, and provides good treatment for BOD, pH, and TSS, which are the main pollution concerns. However, it provides no reduction of the high nitrogen content before infiltration to the soil, and relies on subsurface treatment, which can be highly variable.
Alternative #2 involves greater initial expense and poses greater construction challenges due to the materials transport and concrete work to be done on site. However, it provides full treatment of the wastewater, including nitrogen and phosphorus removal, and provides a useful crop for the farm. Although simple and easy to understand, the overland discharge approach is not as common and familiar to the clients and may not be fully "trusted." The team recommended Alternative #2 to the clients, believing that the initially greater expense would be more than compensated for by the improved water treatment quality, the low level of long-term maintenance required, and the benefit of the corn crop produced.
FINAL INSPECTION AND OVERALL ASSESSMENT
In December 2008, three of the faculty team members visited the completed beneficio and water treatment facility at La Suana. Although the coffee harvest at the altitude of La Suana had not yet started, the farmers did find a small sample of cherries in order to demonstrate the operation of the beneficio. The beneficio itself was fully functional and built to high quality standards (see Figure 8). The community had made some "modifications" to the proposed wastewater treatment plan, however. The four basins had been dug to specifications and connected properly with PVC piping. Instead of the primary settling basin being lined with a plastic sheet as specified and the three infiltration pits being unlined, all four excavations had unlined bottoms with walls constructed from bricks and mortar (Figure 9). While this change seemed like an improvement to the community, the unlined settling pit rendered it just another infiltration pit, and the impermeable walls decrease the infiltration rate from the pits to the soil.
FIGURE 8 BENEFICIO AT LA SUANA
In order to reduce the cost of delivering supplies, and because of a community concern about the infiltration pit volumes not being adequate with the gravel in place, the community chose not to fill the pits with gravel, thus significantly affecting the rate at which they are expected to reduce BOD. This is an example of the client trusting what could be seen (the larger volumes of unfilled pits) rather than what had been carefully calculated. When informed of this community decision, it was deemed acceptable by the team leader (MDM) with the knowledge that it could be reversed at any time by adding gravel to the pits. One unexpected result of this effort was that, upon hearing of the foreign investment in the community, the local government decided to improve the road to La Suana in order to facilitate the beneficio construction. This allowed trucks to get close to the site, greatly facilitating materials delivery. The community viewed the road improvement as being more valuable than the beneficio itself, since the improved road served approximately 100 farms year-round.
Additional funding for travel and construction cost overruns would have been most helpful to the overall project. The presentation of the designs and the recommendation favoring Alternative #2 might have been more effective if the team leadership had visited La Suana in April or May 2008. A visit during construction of the wastewater treatment facility might have led to an implementation in closer compliance with the original design. The costs of construction of the beneficio itself turned out to be somewhat higher than originally predicted, and the lack of sufficient funding for cost overruns made the community more conservative when considering the cost of the wastewater treatment facility.
From a technical and development point of view, the project was quite successful. The poor community at La Suana obtained a state-of-the-art beneficio and a wastewater treatment facility that is an improvement upon the general practice in their region.
From a service learning point of view, the project was an unqualified success. A team of engineering students put their capstone design project into direct service to a poor community in the developing world. These students traveled to Nicaragua, interviewed the clients, and designed options for them that were technically sound, of appropriate technology, and culturally acceptable. The students also saw their design actually built and implemented. They learned the importance of effective communication with clients across language, economic, and cultural barriers and gained experience working as a multi-disciplinary team of engineers, scientists, officials of international development organizations, and small community leaders.
FIGURE 2 SITE ASSESSMENT AT LA SUANA
FIGURE 4 PLAN VIEW OF THE DRAINAGE DITCH ALONG WITH SITE MEASUREMENTS. Dimensions 1.5 m by 1.5 m at a depth of 1 m; side slopes 3:1 (V:H); volume approx. 3.4 m³, in accord with the recommendation of the King County Surface Water Treatment Manual 2005 [27]. Rocks are 10 mm in diameter, and vertical filling pipes have smaller holes; covered with framed and staggered window screens (see Settling Basin above).
FIGURE 9 SETTLING BASIN (FOREGROUND) AND INFILTRATION PITS AT LA SUANA
United States Agency for International Development
iii Catholic Relief Services - Nicaragua
iv Asociación para la Diversificación y el Desarrollo Agrícola Comunal (Association for Diversification and Development of the Agricultural Community), Matagalpa, Nicaragua.
v Central de Cooperativas de Servicios Multiples Aroma del Café, a union of agricultural cooperatives in Matagalpa, Nicaragua
TABLE 2 COD AND TURBIDITY AS A FUNCTION OF DEPTH IN PIT #1 FROM TOP TO BOTTOM
|
v3-fos-license
|
2021-05-04T22:04:34.276Z
|
2021-05-15T00:00:00.000
|
233548249
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://zenodo.org/record/4506322/files/Paper_CompStruct_2021.pdf",
"pdf_hash": "1de50499a64b0f91ab641d2b2efa6424e8f2ddc9",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43150",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"sha1": "16081aef0f029543c7e47ad12ad3583288d261b6",
"year": 2021
}
|
pes2o/s2orc
|
Statistical Homogenization of Polycrystal Composite Materials with Thin Interfaces using Virtual Element Method
Polycrystalline materials with inter-granular phases are modern composite materials extremely relevant for a wide range of applications, including aerospace, defence and automotive engineering. Their complex microstructure is often characterized by stochastically disordered distributions, having a direct impact on the overall mechanical behaviour. In this context, within the framework of homogenization theories, we adopt a Fast Statistical Homogenization Procedure (FSHP), already developed in Pingaro et al. (2019), to reliably grasp the constitutive relations of equivalent homogeneous continua accounting for the presence of random internal structures. The approach, combined with the Virtual Element Method (VEM) used as a valuable tool to keep computational costs down, is here successfully extended to account for the peculiar microstructure of composites with polycrystals interconnected by thin interfaces. Numerical examples of cermet-like linear elastic composites complement the paper.
Introduction
Modern composite materials are conceived to fulfil specific design requirements calling for materials with increasingly high performance in terms of stiffness, strength, fracture toughness and lightness, among others. Special attention has been devoted in the last decades to advanced ceramics, namely composites with ceramic (CMC) or metal matrix (MMC) [1,2], which are gradually more used for a wide range of challenging applications, ranging from bioengineering, with the production of biocompatible ceramics for prostheses and artificial organs [3], aerospace, defence, automotive [4], up to mechanical engineering for the production of wearing parts, seals, low-weight components and fuel cells, and cutting tools with distinguished thermo-mechanical and wear properties [5]. Often ceramic composites exhibit a peculiar microstructure, characterized by polycrystals interconnected by a second-phase interphase of small thickness in comparison to the grain diameter. Consistently with [6], different combinations of materials lead to four kinds of polycrystalline materials with inter-granular phases: i) ceramic grains with brittle interphases; ii) ceramic grains and a metallic interphase; iii) metallic grains surrounded by ceramic interphases; iv) metallic grains surrounded by a layer of soft ductile material. In all cases, such materials show complex phenomena occurring at different scales of interest, ranging from the microscopic up to the macroscopic one, so that the elastic properties of the homogenized composite material are strongly influenced by the random distribution of grains, by the role played by the intergranular layer, as well as by the value of the contrast, defined as the ratio between the elastic moduli of the inclusions and the matrix [7,8,9]. The reliable evaluation of the homogenized constitutive properties to be adopted for the macroscopic investigation of their behaviour pertains to the well-established homogenization theories, and is a long-standing problem of great interest for many researchers [10,11,12,13,14,15,16], also with the purpose of designing innovative smart materials [17,18]. In this paper, we adopt a statistical homogenization approach [11,19] to derive the in-plane linear elastic constitutive properties of polycrystalline materials with thin layer interphases and to evaluate how they change in relation to the contrast.
In the case of materials with random micro-structure, the lack of periodicity in the microscopic arrangement makes the homogenization process more challenging, asking for special attention in the detection of the Representative Volume Element (RVE) along with the evaluation of the homogenized constitutive moduli.
Several approaches have been proposed [20,21,22,23,24,25,26,27,14,13,28,29], also referred to non-classical continua [30,19,9]. Among these models, we focus our attention on the possibility of approaching the RVE using finite-size scaling of intermediate control volume elements, named Statistical Volume Elements (SVEs), and proceed to homogenization (e.g. [31]). In this work we refer to [19], where a homogenization procedure consistent with the Hill-Mandel condition [32] has been coupled with a statistical approach, by which scale-dependent bounds on classical moduli are obtained using Dirichlet and Neumann boundary conditions (BCs) for solving boundary value problems (BVPs). This statistically based homogenization procedure has provided significant results [30,33,34], with particular reference to the debated problem of convergence in the presence of materials with very low (or very high) contrast [9,31,25,35].
Here, the procedure is extended to specifically account for the microstructural topology exhibited by polycrystalline composites with thin interphases, making the most of VEM to improve the computational efficiency. The result is a powerful numerical tool suitable for analyses of large portions of random microstructure, which is very important in the case of materials with very high or low contrast [37]. In particular, the strategy proposed is to use a single virtual element for each grain (polygons of any shape with n nodes), with a significant reduction of the computational burden, while the interconnecting layer is discretized with a fine mesh of triangular elements. Other advantages of VEM are: the capability of using hanging nodes, which permits local refinements and the coupling of elements of different degree; robustness to distortion of the elements; perfect coupling with FEM elements; accuracy, because the stiffness matrix is computed to machine precision; and easy implementation.
In this paper, low-order virtual elements are used: the adoption of virtual elements of order one is particularly suitable for the homogenization procedure, as shown in [36,37,58]. Moreover, the stress/strain is constant over the elements, but this approximation does not affect the homogenization results. Furthermore, by the adoption of hanging nodes we subdivide the edges of the grains in order to generate a sufficiently fine mesh in the interphase part.
Exploiting FSHP combined with VEM, several parametric analyses have been performed to characterize the overall mechanical parameters of polycrystalline composites with thin interphase layers and to investigate their sensitivity to the contrast [59,60]. Attention is paid both to the identification of the RVE size and to the evaluation of the overall mechanical properties, which vary in relation to the contrast of the material.
The outline of the paper is as follows. In Section 2 the statistical procedure is recalled and specialized to the case at hand of polycrystals with thin intergranular layers. Section 3 is devoted to the basic assumptions of the first-order virtual element formulation. Section 4 provides the results of a set of parametric analyses on cermet-like polycrystals with intergranular phases stiffer or softer than the grains. In order to check the capabilities of the proposed procedure, a successful comparison between the results obtained with VEM and those with a very fine FEM mesh is performed. In Section 5 some final remarks are presented, highlighting the expected advantages of the proposed approach with respect to more standard approaches.
Fast Statistical Homogenization Procedure
The main features of the so-called Fast Statistical Homogenization Procedure (FSHP), developed by the authors in [36,37], are detailed in this Section. First, the attention is focused on two types of composites, characterized either by circular inclusions embedded in a base matrix or by polycrystalline material with thin interfaces, for which a linear elastic constitutive behaviour is assumed at both the microscopic and the macroscopic level. Then, the key ideas of the statistical homogenization procedure are briefly recalled and specialized for the materials at hand.
Computational homogenization
We take into account a two-dimensional continuum and describe, at the microscopic scale, the heterogeneous composite as a two-phase material. We investigate both the case of composites made of circular inclusions of diameter d, randomly distributed in a matrix (Fig. 1(a)), and the case of polycrystals (considered as inclusions) bounded by thin interfaces with average dimension d (Fig. 1(b)). The former case is representative of ceramic/metal matrix composites or also concrete or rocks, while the latter one is representative of cermets.
In the present instance of non-periodic composite materials, and in view of the statistical homogenization procedure, it is useful to introduce a scale parameter δ = L/d*, defined as the ratio between the edge of a square test window, L, and the characteristic dimension d*. We furthermore define the material contrast as the ratio between the elastic moduli of the inclusions, E_i, and of the matrix, E_m, c = E_i/E_m. When 0 < c < 1 (c = 1 being the case of a homogeneous material), the inclusions are softer than the matrix and we refer to this case as low contrast materials, which are suitable to properly represent porous media [61]. On the other hand, when c > 1 the inclusions are stiffer than the matrix and we refer to high contrast materials [19].
In order to perform homogenization, we describe the material at two scales of interest: the microscopic and the macroscopic level. At the microscopic level the heterogeneous material is represented in detail, accounting for each constituent in terms of geometry and constitutive behaviour. At the macroscopic level the composite material is ideally replaced by an equivalent material whose global behaviour is representative of the actual heterogeneous material. The governing equations are formally the same as those defined at the microscopic level, except for the constitutive law, which is not 'a priori' defined at the macroscopic level, but directly descends from the lower level as a result of the homogenization procedure. In the following, lower case letters always refer to the micro-scale, while upper case letters refer to the macro-scale.
We refer to a linearised two-dimensional framework. At the lower level each material phase is characterized by linear elastic isotropic behaviour, with the stress-strain relations written as

σ_ij = λ ε_kk δ_ij + 2 μ ε_ij,

where ε_ij and σ_ij (i, j, k = 1, 2) are the components of the micro-strain and micro-stress tensors, λ and μ are the Lamé constants, and δ_ij (i, j = 1, 2) is the Kronecker symbol. At the macroscopic level, the general anisotropic stress-strain relations read

Σ_ij = C_ijhk E_hk,

where E_ij, Σ_ij (i, j = 1, 2) are the components of the macro-strain and macro-stress tensors and C_ijhk (i, j, h, k = 1, 2) are the homogenized moduli, i.e. the components of the macroscopic elastic tensor obtained via a homogenization procedure based on Hill's macro-homogeneity condition [32]

(1/|B_δ|) ∫_{B_δ} σ_ij ε_ij dV = Σ_ij E_ij,

which is enforced by applying either uniform (Dirichlet) boundary displacements or uniform (Neumann) boundary tractions to ∂B_δ (i, j = 1, 2).
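In practice, the homogenized moduli can be identified column by column by imposing independent unit macro-strains on an SVE and volume-averaging the resulting micro-stresses. The sketch below illustrates this identification step for the Dirichlet boundary conditions mentioned above; the BVP solver is abstracted behind a placeholder function, and the Voigt ordering is an assumption of ours rather than a detail taken from the paper.

```python
import numpy as np

def homogenized_C_voigt(solve_bvp, volumes):
    """Identify the 3x3 homogenized elastic matrix (Voigt notation) of one SVE.

    solve_bvp(E) is assumed to solve the Dirichlet BVP with the affine boundary
    displacement induced by the macro-strain E = [E11, E22, 2*E12] and to return
    the micro-stress [s11, s22, s12] in each element; volumes are element areas.
    """
    volumes = np.asarray(volumes, dtype=float)
    C = np.zeros((3, 3))
    for j, E in enumerate(np.eye(3)):                 # three independent unit macro-strains
        sigma = np.asarray(solve_bvp(E))              # (n_elements, 3) element stresses
        sigma_avg = volumes @ sigma / volumes.sum()   # volume average <sigma>
        C[:, j] = sigma_avg                           # <sigma> = C E, so this is column j
    return C
```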
Statistical Homogenization
FSHP is based on the statistical homogenization procedure previously developed in [19] and briefly described in Section 1. The proposed homogenization procedure is conceived both for evaluating the homogenized elastic parameters of a non-periodic heterogeneous material and for identifying the Representative Volume Element (RVE), which in the absence of a repetitive micro-structure is not known a priori.
According to the approach presented in [19], as well as in [21,11], the presented procedure requires the statistical definition of a number of realizations called Statistical Volume Elements (SVEs), representing the micro-structure, sampled in a Monte Carlo sense, which allows for determining series of scale-dependent upper and lower bounds for the overall elastic moduli and for approaching the RVE size, δ_RVE, using a statistical stopping criterion based on the variation of the average elastic moduli.
All steps of the homogenization procedure are completely integrated in the FSHP and they are described below.
Step 1 Input: in the case of disk-shaped inclusions, set the nominal volume fraction of the medium (ρ ≤ 40%) and the tolerance Tol, based on data dispersion, as defined above.
Step 2 Input: initialize the window size, L = L_0, and the number of simulations.
Step 3 Realizations: in the case of circular inclusions, the procedure automatically determines, for each window size, the number of inclusions (a Poisson random variable) via simulations exploiting Knuth's algorithm (see [59,36]). In the case of polycrystals, the realizations are generated using PolyMesher, developed by [62]. Then all edges are shifted inwards by s/2 in order to obtain the final geometry of all realizations B_δ. In Fig. 2 a schematic of the procedure is depicted for a generic grain: the initial polygon has vertices P_i; additional points P̄_i are identified as a result of the shifting procedure, so that the interface area (in light blue) and the grain area (in light red) are defined. In all cases, each realization is supposed to be independent from any previous one.
Step 4 Generate/Solve: for each SVE, generate the relative mesh and solve both the Dirichlet and Neumann (Eq. (4)) BVPs, and compute the homogenized constitutive parameters.
Step 5 Compute: evaluate the average bulk modulus, K_δ, the relative standard deviation σ(K_δ) and the variation coefficient CV(K_δ). Then compute the required number of realizations N_i = (1.96 CV(K_δ)/Tol)^2, which ensures that the confidence interval of the average homogenized constitutive parameter, set at 95% and evaluated over the standard normal distribution, is within the allowed tolerance, Tol. Repeat Steps 3-4 until N_i < N_lim.
Step 6 Checking: if the number of realizations necessary for ensuring the requirement at Step 5 is small enough, stop the procedure. We choose as the necessary number of realizations the most unfavourable number between those obtained by solving the BVPs of Neumann or Dirichlet type. Otherwise, choose an increased value of δ and go to Step 3.
The whole procedure has been schematized in the flow-chart in Fig. 4. The statistical convergence criterion adopted is based on a 95% confidence level of the standard normal distribution, which provides the number N of realizations at which it is possible to stop the simulations for a given window size δ.
When this number is small enough, the average values of the effective moduli converge and the RVE size is achieved. This circumstance also corresponds to reaching the minimum window size δ_RVE for which the estimated homogenized moduli remain constant, within a tolerance interval of less than 0.5% for both the Dirichlet and Neumann solutions. The minimum number of simulations, N_lim, and the tolerance parameter, Tol, are chosen in order to define a narrow confidence interval for the average and to obtain a reliable convergence criterion. The adopted statistical criterion allows us to detect the RVE size also when the Dirichlet and Neumann solutions do not tend to the same value. The values of the tolerance are assumed as a function of the data dispersion [19].
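A minimal sketch of this stopping rule, as we read it, is given below; the realization generation and the homogenization solves are abstracted away, and the 1.96 factor corresponds to the 95% confidence level of the standard normal distribution.

```python
import numpy as np

def fshp_converged(sample_K, tol, n_lim, z=1.96):
    """Check the FSHP stopping rule for one window size and one type of BC.

    sample_K collects the homogenized bulk moduli of the realizations solved so
    far; the rule asks that the number of realizations needed to keep the 95%
    confidence interval of the mean within the relative tolerance `tol` be
    smaller than `n_lim`. This is a sketch of our reading of the criterion.
    """
    K = np.asarray(sample_K, dtype=float)
    cv = K.std(ddof=1) / K.mean()          # coefficient of variation CV(K_delta)
    n_required = (z * cv / tol) ** 2       # realizations needed at 95% confidence
    return n_required < n_lim, n_required

# Example: decide whether to stop at the current delta or enlarge the window.
converged, n_req = fshp_converged([14.1, 13.8, 14.3, 14.0, 13.9], tol=0.005, n_lim=10)
```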
For all realizations of the micro-structure the Virtual Element Method (VEM) has been adopted as the numerical tool; it permits the adoption of a single polygonal element for each inclusion, without internal meshing, with a consequent strong reduction of the computational burden with respect to finite elements. Moreover, the adoption of the VEM allows us to create a refined mesh in the interface without meshing the granular elements, thanks to the use of so-called hanging nodes (Fig. 5). In the interface zones a triangular virtual element mesh has been adopted, generated using a code for random meshes of Delaunay type (Triangle, [63]). As demonstrated in [64], three-node virtual elements behave just like 3-noded triangular finite elements.
The computational strategies adopted are aimed at making the statistical homogenization procedure computationally efficient.
Virtual Element Framework
In this section we briefly recall the governing equations of the linear elastostatic problem and the related weak formulation, a mandatory starting point to describe the construction of the virtual elements. We restrict our investigation to virtual elements of degree 1 [38,39], also referred to as lower-order virtual elements. The Voigt notation is adopted since it is more suited to easily develop the VEM formulation.
The linear elastic problem
We consider a two-dimensional domain, Ω, with Γ being its boundary. The equilibrium equations of the linear elastostatic problem read ∂_1 σ_11 + ∂_2 σ_12 + b_1 = 0 and ∂_1 σ_21 + ∂_2 σ_22 + b_2 = 0 in Ω, complemented by the kinematic and constitutive relations and by the boundary conditions on Γ, where ∂_(·) denotes the partial derivative with respect to the (·)-coordinate and b_1, b_2 are the components of the body force.
The weak form of the linear elastostatic problem, provided by the virtual work principle, reads: find u ∈ V such that a(u, v) = F(v) for all v ∈ V, where the continuous bilinear form a(·, ·) : V × V → R, in which R is the set of the real numbers, reads a(u, v) = ∫_Ω ε(v)^T C ε(u) dΩ, in which C is the 3 × 3 elastic tensor (in Voigt notation), and the linear functional F(·) : V → R represents the virtual work of the external loads.
Virtual element formulation
In this subsection we introduce the virtual element discretization used in the homogenization procedure. In order to approximate the solution of problem (6) we consider a decomposition T_h of the domain Ω into non-overlapping polygonal elements E. In the following, we denote by e the straight edges of the mesh T_h and, for all e_i ∈ ∂E, n_i denotes the outward unit normal vector to e_i (Fig. 6(a)). The symbol n_e represents the number of edges of the polygon E.
Let k be an integer ≥ 1. Let us denote by P_k(Ω) the space of polynomials, living on the set Ω ⊆ R², of degree less than or equal to k.
By the discretization introduced, it is possible to write the bilinear form (6), as in the finite element methodology, in the following way:

a(u, v) = Σ_{E ∈ T_h} a^E(u, v), with a^E(u, v) = ∫_E ε(v)^T C ε(u) dE.

The discrete virtual element space, V_h, is

V_h := { v_h ∈ V : v_h|_E ∈ V_{h|E} for all E ∈ T_h },

where the local space V_{h|E} is the vector counterpart, [V_{h|E}]², of a scalar local space. For the virtual element of degree k = 1, the scalar space is defined as the set of functions that are harmonic in E, polynomials of degree 1 on each edge e of E, and continuous on ∂E. The dimension of the space V_{h|E} then is 2 n_e. We can observe that, in contrast to the standard finite element approach, the local space V_{h|E} is not fully explicit. Moreover, v_h is a polynomial of degree 1 on each edge e of E and globally continuous on ∂E. Problem (6) restricted to the discrete space V_h becomes: find u_h ∈ V_h such that a_h(u_h, v_h) = F(v_h) for all v_h ∈ V_h, where a_h(·, ·) : V_h × V_h → R is the discrete bilinear form approximating the continuous form a(·, ·) and F(v_h) is the term approximating the virtual work of the external load. In this work the body force is null, so no details of its implementation are reported. The discrete bilinear form is constructed element by element as

a_h(u_h, v_h) = Σ_{E ∈ T_h} a_h^E(u_h, v_h).

By the above definition of the local space (11), the following important observations can be made:
- the functions v_h ∈ V_{h|E} are explicitly known on ∂E (linear functions);
- the functions v_h ∈ V_{h|E} are not explicitly known inside the element E.
The related degrees of freedom for the space V_{h|E} are the 2 n_e point-wise values of v_h at the vertices of the polygon E.
Projection operator and construction of the stiffness matrix
In accordance with [65], we define the projection operator of the strain, Π, which maps the local virtual space onto the space P_0(E)^{2×2}_sym of constant symmetric strain fields. More specifically, the projection operator respects the following orthogonality condition:

∫_E ε_p : (Π v_h − ε(v_h)) dE = 0 for all ε_p ∈ P_0(E)^{2×2}_sym.

As in the finite element method, we define the vectors v_h ∈ V_{h|E} and ε̄ ∈ P_0(E)^{2×2}_sym using the degrees of freedom v̂_h ∈ R^{2n_e} and ε̂ ∈ R³ through the matrices N_u and N_V, whose entries involve the standard i-th shape functions φ_i. In matrix form, the projection operator is represented by a matrix Π ∈ R^{3×m}.
By putting Eqs. (17) and (19) into Eq. (16), and integrating by parts the right-hand side, the strain projection is expressed through boundary integrals involving the matrix N_E, which contains the components of the outward normal n = {n_1, n_2}^T. After some algebra, the local projection operator is finally obtained in the form Π = G^{-1} B. We can observe that the matrix G ∈ R^{3×3} is computed by knowing the area of the element, |E|, where |E| is computed using the Gauss-Green formula |E| = (1/2) ∮_{∂E+} (x dy − y dx), with ∂E+ the boundary of the element oriented anticlockwise, while the boundary integrals in the matrix B are computed using the degrees of freedom of the element. In particular, the boundary integrals are computed resorting to the Gauss-Lobatto quadrature rule with k + 1 points, since this choice has turned out to be adequate. In the case of VEM with k = 1, the entries of B reduce to combinations of the quantities l_i n_ij, where l_i = |e_i| is the length of the i-th edge of the element E and n_ij (i = 1, ..., n_e; j = 1, 2) is the j-th component of the i-th outward normal. The main difference between the Finite Element Method and the Virtual Element Method regards the approximation of the bilinear form of Eq. (14), which for the VEM is split as a_h^E(u_h, v_h) = a^E(Π u_h, Π v_h) + S^E((I − Π) u_h, (I − Π) v_h), where the first term of the right-hand side is the consistency term and the second term is the stabilization term. The stabilization part is not present in the Finite Element Method and is one of the peculiarities of the VEM. For computing the consistency term we use Eq. (19) and Eq. (7), which lead to the consistency stiffness matrix M; for a constant elastic tensor, Eq. (32) can be rearranged in the compact form M = |E| Π^T C Π. Concerning the stabilization term, we adopt the choice introduced in [38,39].
However, other authors have proposed different stabilization terms [66]. Some preliminaries are required before introducing the stabilization term. In particular, we introduce the space of the scaled monomials p ∈ (P_k(E))², since it is needed to build the stabilization matrix. In the case of VEM of degree k = 1, the space of the scaled monomials is spanned by

(1, 0), (0, 1), (ξ, 0), (η, 0), (0, ξ), (0, η), with ξ = (x − x_E)/h_E and η = (y − y_E)/h_E,

where (x_E, y_E) are the coordinates of the centre of E and h_E its diameter. The stabilization term is defined as

τ tr(M) (I − D (D^T D)^{-1} D^T),

where τ ∈ R is a coefficient defined by the user, which can be set equal to 1/2 for linear elasticity [65], and tr(·) denotes the trace operator. The matrix D ∈ R^{m×6} collects the values of the polynomials p_i at the degrees of freedom on E (the nodes of the element in the case of degree k = 1). An accurate inspection of Eq. (35) reveals that the stability term is basically a rough approximation of the internal energy associated with the difference between the VEM shape function and its projection, while τ tr(M) scales the term with respect to the consistency part, which is the part properly responsible for the convergence of the method [38].
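To make the element-level construction concrete, the following sketch assembles the k = 1 local stiffness matrix in the consistency-plus-stabilization form just described. It uses the quantities named in the text (|E| from the Gauss-Green formula, the constant-strain projector, the matrix D of scaled monomials evaluated at the nodes, and the scaling τ tr(·)), but the implementation details are our reading of a standard lowest-order VEM code, not a transcription of the authors' implementation.

```python
import numpy as np

def vem_k1_local_stiffness(verts, C, tau=0.5):
    """Lowest-order (k=1) VEM local stiffness matrix for 2D linear elasticity.

    verts: (n_e, 2) polygon vertices, counter-clockwise; C: 3x3 elastic matrix in
    engineering Voigt convention; tau: stabilization coefficient (1/2 as above).
    DOFs are ordered [u1x, u1y, u2x, u2y, ...].
    """
    verts = np.asarray(verts, dtype=float)
    n = len(verts)
    nxt = np.roll(np.arange(n), -1)
    x, y = verts[:, 0], verts[:, 1]

    # |E| via the Gauss-Green (shoelace) formula, boundary oriented anticlockwise.
    area = 0.5 * np.sum(x * y[nxt] - x[nxt] * y)

    # Constant-strain projector Pi (3 x 2n): volume-averaged strain obtained from
    # edge-by-edge boundary integrals of a function that is linear on each edge.
    Pi = np.zeros((3, 2 * n))
    for i in range(n):
        j = nxt[i]
        edge = verts[j] - verts[i]
        l_e = np.linalg.norm(edge)
        nvec = np.array([edge[1], -edge[0]]) / l_e    # outward normal for CCW polygons
        for k in (i, j):                              # each edge node carries weight l_e/2
            w = 0.5 * l_e / area
            Pi[0, 2 * k]     += w * nvec[0]           # eps_11 from u_x * n_x
            Pi[1, 2 * k + 1] += w * nvec[1]           # eps_22 from u_y * n_y
            Pi[2, 2 * k]     += w * nvec[1]           # gamma_12 from u_x * n_y
            Pi[2, 2 * k + 1] += w * nvec[0]           # gamma_12 from u_y * n_x

    Kc = area * Pi.T @ C @ Pi                         # consistency part, |E| Pi^T C Pi

    # D (2n x 6): nodal values of the scaled monomial basis of (P_1)^2.
    xc, yc = verts.mean(axis=0)                       # element centre (vertex average)
    hE = max(np.linalg.norm(p - q) for p in verts for q in verts)
    xi, eta = (x - xc) / hE, (y - yc) / hE
    D = np.zeros((2 * n, 6))
    D[0::2, 0] = 1.0;  D[1::2, 1] = 1.0
    D[0::2, 2] = xi;   D[0::2, 3] = eta
    D[1::2, 4] = xi;   D[1::2, 5] = eta

    P_D = D @ np.linalg.solve(D.T @ D, D.T)           # projector onto linear fields
    Ks = tau * np.trace(Kc) * (np.eye(2 * n) - P_D)   # stabilization part
    return Kc + Ks
```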
Local load vector
Regarding the approximation of the right-hand side of equation (13), the same assumption proposed in [65] is used, so that the body load over the element E is subdivided over all nodes υ_i of the polygonal element.
Numerical Results
In this section, the results of the parametric analyses performed using the described computational FSHP in conjunction with VEM are provided. We restrict our investigation to polycrystalline materials with thin interfaces, since the other type of composite has already been extensively studied in [67,58,68].
Comparison between VEM and FEM discretization
As a preliminary test, we consider an ideal material that mimics a cermet-like composite with grains and thin interfaces and investigate, for a given window size, the components of the homogenized elastic tensor C_ijhk, evaluated using both Dirichlet (superscript D) and Neumann (superscript N) boundary conditions. The results obtained using virtual elements are compared to those obtained using finite elements.
FSHP applied to materials A and B
In this example we take into account both material A (stiff grains) and material B (soft grains), with the elastic properties reported in Tab. 3, where the contrast c is also reported; the average dimension of the grains and the hard-core parameter have been fixed equal to d = 40 µm and s = 2 µm, respectively.
An example is reported in Fig. 8. The convergence trend for the different materials depends on the different dispersion of the results, as shown in Fig. 10 and Fig. 11, where the Coefficient of Variation, CV(K), is also plotted for several values of the window size L for the Dirichlet (blue solid line) and Neumann (red solid line) BVPs.
In this paper we present an efficient extension of the Fast Statistical Homogenization Procedure (FSHP), previously developed by the authors for the homogenization of particle composites in [36,37,69], to account for the peculiar geometry of polycrystals and composites with grains and thin interfaces. The Virtual Element Method is used as the numerical tool to solve the partial differential equations governing the elastic problem. Starting from a random polygonal mesh, a novel automatic procedure to insert thin interfaces between grains is presented.
Each grain is discretized using a single virtual element, while the interface zones present a dense triangular mesh (exploiting Delaunay triangulation). Clear advantages in terms of computational burden are found with respect to the Finite Element Method, in particular when large window domains are required, as reported in [30,19,34,9].
The results shown for a set of ideal materials, characterized by different mismatches between the elastic moduli of grains and interfaces (material contrast), demonstrate the capabilities of the proposed procedure. Extending the approach in order to better describe the behaviour of micro-structured materials is one of the future developments. Moreover, we will upgrade the procedure to take into account possible anisotropic behaviours, since this type of material is generally characterized by anisotropic properties that vary grain by grain. Another interesting application and future development will be applying the procedure to the design of new materials with prescribed mechanical characteristics.
|
v3-fos-license
|
2017-05-06T09:03:37.038Z
|
2016-11-07T00:00:00.000
|
41869374
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2071-1050/8/11/1143/pdf?version=1478508829",
"pdf_hash": "91a91f1c11c73f4fe705b6728bd5635e695acf01",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43151",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "91a91f1c11c73f4fe705b6728bd5635e695acf01",
"year": 2016
}
|
pes2o/s2orc
|
Design and Implementation of a Microgrid Energy Management System
A microgrid is characterized by the integration of distributed energy resources and controllable loads in a power distribution network. Such integration introduces new, unique challenges to microgrid management that have never been exposed to traditional power systems. To accommodate these challenges, it is necessary to redesign a conventional Energy Management System (EMS) so that it can cope with the intrinsic characteristics of microgrids. While many projects have shown excellent research outcomes, they have either tackled portions of the characteristics or validated their EMSs only via simulations. This paper proposes a Microgrid Platform (MP), an advanced EMS for efficient microgrid operations. We design the MP by taking into consideration (i) all the functional requirements of a microgrid EMS (i.e., optimization, forecast, human-machine interface, and data analysis) and (ii) engineering challenges (i.e., interoperability, extensibility, and flexibility). Moreover, a prototype system is developed and deployed in two smart grid testbeds: UCLA Smart Grid Energy Research Center and Korea Institute of Energy Research. We then conduct experiments to verify the feasibility of the MP design in real-world settings. Our testbeds and experiments demonstrate that the MP is able to communicate with various energy devices and to perform an energy management task efficiently.
Introduction
A microgrid is a low-voltage distribution network that is composed of a variety of energy components such as controllable energy loads and Distributed Energy Resources (DERs). Controllable loads include HVAC (heating, ventilation, and air conditioning) systems and EVs (Electric Vehicles), and DERs include PV (Photovoltaic), WT (Wind Turbine), CHP (Combined Heat and Power), fuel cells, and ESS (energy storage systems) [1]. By integrating DERs and controllable loads within the distribution network, the microgrid is capable of operating either in a grid-connected mode (i.e., it is connected to the power grid) or in an islanded mode (i.e., it is disconnected from the grid and uses various DERs to supply power to the loads). While such integration differentiates the microgrid from conventional power systems, it also introduces new challenges to the way of power management and control. An Energy Management System (EMS) has been responsible for the management and control operations in traditional power systems, and it is now necessary to advance the EMS so as to cope with emerging challenges.
A number of research ideas in the literature have discussed the advancement. Su and Wang examined the role of EMS in microgrid operations in detail [2]. They also listed four essential functionalities which a new EMS (say, a microgrid EMS) should support; they are forecast, optimization, data analysis, and human-machine interface. Authors in [3][4][5][6] proposed various types of EMS frameworks that can work in a microgrid environment. While previous research focuses on a list of design issues for the EMS, it hardly takes into account engineering challenges that frequently occur in the implementation of a microgrid EMS. The first type of engineering challenge relates to the operational properties of energy components in the microgrid. The operation of typical DERs like photovoltaics is characterized by intermittency and variability, and that of controllable loads by spatiotemporal uncertainty. These properties complicate the microgrid management, and the microgrid EMS must be able to handle them in an appropriate manner. Next, a microgrid operation involves running a list of energy applications including demand response and coordinated EV charging as well as running innovative control algorithms [7,8] that are not necessarily implemented in a single system. Therefore, the microgrid EMS must be able to interface with them seamlessly. Finally, various types of energy components from different vendors are deployed and interconnected in the microgrid, but most of them still use proprietary protocols, which hinders them from interoperating with each other [3]. The microgrid EMS must resolve the heterogeneity and interoperation challenges.
We believe that a microgrid EMS must be designed and implemented both to overcome engineering challenges and to satisfy aforementioned functional requirements.Unfortunately, few previous works have accomplished them simultaneously To address these two orthogonal concerns together, this paper proposes a Microgrid Platform (MP), an advanced EMS for efficient microgrid operations.We also develop and deploy its prototype and run experiments in real-world settings within two smart grid testbeds built in the UCLA Smart Grid Energy Research Center (SMERC) and Korea Institute of Energy Research (KIER).The contributions of this paper are three-fold: (1) We design a microgrid EMS with consideration of both the functional requirements and the engineering challenges.Many existing energy management systems have focused on one aspect.On the one hand, a system highlighting the functional requirements usually assumes the existence of computer systems, software, and communications and regards them as a black box.This setting, however, often uses proprietary technologies and thus is not extensible.Moreover, the system often provides predefined energy applications.It is hard to upgrade the system in order to support emerging applications.A microgrid EMS must be flexible from the software point of view to accommodate brand-new applications easily.There is an analogy in the cellular phone area.In the feature phone era, users used pre-installed applications that were very crude.Now, we observe that a user can develop any smartphone applications and sell them at APP stores.On the other hand, a system focusing on computer systems and communications usually implements specialized scheduling and control algorithms.Such algorithms are often customized to the underlying communication technologies and network topologies.In order to adopt new algorithms, the system may be rebuilt and these configurations are re-customized.
To address these challenges, we design the MP with a modular system in mind. The MP is developed as a framework in which a variety of modules (e.g., a scheduling algorithm module and a communication module) are added and/or deleted seamlessly. For instance, a specific power generation model can be added and incorporated in an existing optimization module. In this way, the MP supports the functional requirements and addresses the engineering challenges. (2) We develop the MP prototype in a resource-oriented architecture (ROA) style [9]. Most previous microgrid systems have been implemented in a multi-agent system architecture or a service-oriented architecture (SOA) style that functions well in a homogeneous, proprietary, and server-centered system environment. However, an emerging microgrid environment includes the deployment of heterogeneous energy devices using different communication technologies and the use of a variety of standard message formats. A new microgrid system, therefore, must be able to cope with heterogeneity and diversity so as to communicate with energy devices seamlessly in an interoperable manner. A plug-and-play trend would be an example: say, a new smart meter from a random third-party vendor using new technologies is added to a microgrid.
This device must be able to communicate with the microgrid system or with other energy devices (if necessary) with minimum configuration so as to be ready to be used. With traditional architecture styles, we must re-build a microgrid management system and customize it so as to communicate with the brand-new device. The MP prototype addresses this system engineering issue by adopting the ROA, which abstracts an energy device as a resource, a software counterpart of the hardware itself. Just like the concept of a Class in the Java programming language, a resource in the ROA maintains states and takes actions. Unlike Java, however, the resource makes real communications and interactions with other energy devices or the microgrid system. Because of this abstraction concept, our MP can work in a distributed environment. To implement the software part and the abstraction, we take an Energy Service Interface (ESI) technology [10]. (3) We deploy the MP prototype in our testbeds and run experiments to evaluate the performance of microgrid management and controls. A microgrid is a complicated and delicate system, and thus the development, deployment, and evaluation of its management system must be carefully designed and performed. When deploying the prototype and connected energy devices, thus building a microgrid system testbed, we must consider how much data we can obtain from the testbed. The more data we get, the more accurately we are able to run and evaluate optimization algorithms. We also take into account the diversity of energy devices. Unlike a simulation study, there are many challenges in a testbed environment. For instance, it is not trivial to install EV charging stations on a testbed because of both technical problems and administrative issues. Even if installed, we may not obtain ample information, mainly due to the low penetration of EVs in the real world. The MP, as an energy management system in a microgrid, must be able to communicate with external systems such as a demand response server. For evaluation, we must consider what external signals are delivered into the microgrid because these signals directly affect the performance of scheduling and control algorithms. This paper designs the deployment of the prototype and connected energy devices by taking into account all the major factors. As a result, we build two real-world testbeds of microgrids including the MP prototype.
A primary issue in the evaluation is how to design and run scheduling algorithms. Unlike simulations, each microgrid testbed has intrinsic properties, and thus a specifically designed algorithm may not operate well in every microgrid configuration. To address this challenge, we develop a generic system model of a microgrid and formulate the energy scheduling and demand response as optimization problems. The next question is about how well a generic model works in a real-world environment. Does the model need to be customized to every testbed? Does the model work well in a specific configuration and badly in other ones? How different would experimental results be from simulation ones? While this paper may not answer all the questions this time, we try to design and run experiments step-by-step in order to disclose clues to the answers. In particular, we discuss what we learned from our evaluation about the difference between experimental results and those from simulations in Section 4.2.
The rest of the paper is organized as follows. Section 2 reviews the functionalities of a microgrid EMS and addresses its design issues. Section 3 shows our implementation of the MP in detail. In Section 4, we deploy our MP prototype to two microgrid testbeds and conduct experiments. Section 5 concludes this paper.
Design of a Microgrid Energy Management System
In this section, we discuss two categories of design issues, functional requirements and engineering challenges, which are necessary for an EMS to work properly in an emerging microgrid environment. Figure 1 illustrates an overview of a microgrid EMS system for our discussion; the internal boxes denote its roles. We refer to [2] for details.
Forecasting Energy Activities
As generation, storage, and consumption of energy in a microgrid become more dynamic and complex, it is critical to predict such activities accurately for the purpose of energy balance. Forecasting is performed on different time scales (e.g., hour-ahead, day-ahead, etc.) and the predicted data is fed into an optimization process for microgrid operations. Forecasting has been challenging in a microgrid setting because of operational properties: inherent intermittency and variability in DERs and spatiotemporal uncertainty in controllable loads (e.g., electric vehicles). Previous studies focused on developing various forecast models of high accuracy given this randomness. They use various types of data sources, from historical data to mathematical models, weather data, and other societal data [11,12]. Zhu et al. run demand forecast and solar generation forecast from historical data, and then develop a battery (dis)charging scheduling algorithm [13]. Huang et al. propose a hybrid mathematical model that takes weather forecasts and historical data to improve the prediction accuracy of a solar panel [11].
Optimization: Making a Control Decision for Optimal Operations
An EMS must be able to make control decisions to optimize the power flows by adjusting the power imported/exported from/to the grid, the controllable loads, and the dispatchable DERs. Different optimization decisions are made for different applications (e.g., demand response and energy/power management) that are typically formulated as non-linear optimization problems with different objectives. Extensive algorithms have been proposed for them [7,8]. Given EV owners' charging profiles and real-time power prices, Mal et al. developed a V2G scheduling algorithm working at a large-scale EV charging structure [14].
Analysis on Energy Data
An EMS collects a huge amount of data from DERs, energy loads, and the energy market. The data collected must be analyzed properly, providing insights to better understand the characteristics of energy activities. This can be further used to improve the performance of the forecast and the optimization models. Bellala et al. analyze time series data of energy usage in a commercial campus [15]. Then, they detect anomalous usage periods representing unusual power consumption. Detecting and correcting the anomaly can save on the electricity bill. The Monitoring-Based Commissioning (MBCx) project exploits the measurement data and diagnostic tools in order to perform commissioning on 24 non-residential buildings throughout the state of California [16].
Human-Machine Interface
An EMS must provide a Human-Machine Interface (HMI) for real-time monitoring and controls of a microgrid. The HMI allows a microgrid operator to interact with other modules inside a microgrid system. It must be able to provide useful information and knowledge rather than raw data by means of visualization and archiving [17]. The HMI is expected to allow active customer interactions [2].
A microgrid EMS is also responsible for communicating with external systems outside the microgrid; it translates data and signals transmitted from external systems to internal protocols and semantics. Energy services instantiate such interoperation. Two pieces of literature presented use cases of energy services [22,23], and we classify the services into two categories: facility service and grid service. In the facility service, a customer facility such as a commercial building or a community microgrid provides service data to external systems sitting on a national grid, whereas it receives and consumes service data delivered from the grid in the grid service. The EMS must be able to support both services.
The communication interface in the microgrid EMS must be extensible. New energy applications and innovative algorithms will be continuously added to the microgrid, and they do not necessarily reside in a single system. It is essential that the EMS is able to connect to them seamlessly, and such a new connection must not affect the operations of existing functionalities.
Microgrid Platform
To demonstrate the feasibility of the new design discussed in the previous section, we propose the Microgrid Platform, a new microgrid EMS, and develop its prototype implementation running on top of a Linux distribution. This section also describes two algorithms that the MP runs for efficient microgrid operations. Figure 2 illustrates the overall structure of the MP.
System Architecture
We implement the MP in a Resource-Oriented Architecture (ROA) style [9], which abstracts energy components in a microgrid in the form of resources. Each resource implements well-defined interfaces that allow the MP to support plug-and-play of DERs, loads, and functionalities. As shown in previous works [3][4][5][6], the ROA has advantages over a Service-Oriented Architecture. It fits best for "linking and referring" to energy resources, thus maximizing the interactivity efficiency in the EMS. The ROA is also more lightweight, without complicated interface descriptions.
Interoperation-Energy Services from the Facility
The MP provides energy services to the grid, which makes the microgrid play an energy service provider role in the smart grid. In addition to basic energy services, it realizes facility-side forecasting that helps the grid understand the facility's energy behaviors accurately.
Energy Services
The MP provides fundamental data services that most EMSs can do. These include (1) historical energy data for individual resources as well as for the aggregated one; (2) real-time measurement of resources' status, their energy activities (consumption, generation, and storage), and power quality; (3) the MP also accepts command messages from the grid that eventually control the internal energy resources. This corresponds to a Direct Load Control (DLC) service on the grid side. In addition, the MP provides various types of future forecasting services including demand and generation forecasts.
Energy Service Interface
The MP develops the ESIs using the existing implementation model [10]. That is, the service data is represented via the open Building Information Exchange (oBIX) specification [24] and is then exchanged via the Web Service model with Representational State Transfer (RESTful) style [9]. Our security algorithm carries out access control at the action level (i.e., Read, Write, and Invoke) [25]. In addition to the oBIX, we extend the IEC 61850 specification [26] to represent data from our solar panels and energy battery.
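As an illustration of how a resource is read in this ROA/RESTful setting, the fragment below performs an HTTP GET on a hypothetical oBIX point and extracts its value; the endpoint URL and the exact payload layout are placeholders, since the MP's actual object model is not reproduced here.

```python
import xml.etree.ElementTree as ET
import requests

# Hypothetical resource URL and payload layout; the actual oBIX object model and
# endpoint naming used by the MP are not specified in this sketch.
RESOURCE_URL = "https://mp.example.edu/obix/solarPanel1/power"

def read_power_kw(url=RESOURCE_URL):
    """RESTful read of one energy resource exposed as an oBIX-style XML object."""
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    root = ET.fromstring(resp.text)   # e.g. <real name="power" val="3.2"/>
    return float(root.attrib["val"])
```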
Interoperation-Energy Services from the Grid
In addition to basic DLC services, our testbed implements two Facility-centric Load Control (FLC) types of services in which the microgrid is interested most: an Automated Demand Response (ADR) service and a Real-Time Pricing (RTP) service.
Open Automated Demand Response
We deploy an OpenADR 1.0 server [27] that provides the ADR service by exploiting the open source implementation [28]. The server issues an EventState signal (all the XML schemas for data used in OpenADR are available at http://openadr.lbl.gov/src/1.) to initiate a new demand response event. It is able to communicate with both smart and simple clients. A smart client can interpret the EventInfo information within the EventState signal. Included in the SmartClientDREventData entity, the EventInfo contains event details. For example, the eventInfoTypeID denotes an event type and takes one value out of PRICE_ABSOLUTE, PRICE_RELATIVE, LOAD_AMOUNT, etc. To communicate with a simple OpenADR client, the server translates the EventInfo information into a simpler form, named SimpleClientEventData. The entity contains two variables to describe the event state. The EventStatus element denotes the temporal state of the event (FAR, NEAR, or ACTIVE). The OperationModeValue indicates the operational state of the energy loads in the event (NORMAL, MODERATE, or HIGH). The MP implements a Message Authentication Code (MAC) to protect message integrity, addressing the security issue in OpenADR. Following the NISTIR (National Institute of Standards and Technology, Internal/Interagency Reports) 7628 guideline [29], our testbed takes a hash-based MAC (HMAC) with SHA-256.
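A minimal sketch of the HMAC-SHA256 protection mentioned above is shown below; the shared key, and the choice of signing the raw XML body, are illustrative assumptions rather than the testbed's actual key-management scheme.

```python
import hashlib
import hmac

def sign(payload: bytes, key: bytes) -> str:
    """HMAC-SHA256 tag over an OpenADR message body (hex-encoded)."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, key: bytes, tag: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(sign(payload, key), tag)

# Usage with a placeholder shared key and message body.
shared_key = b"pre-shared-key-between-DR-server-and-MP"
body = b"<EventState>...</EventState>"
tag = sign(body, shared_key)
assert verify(body, shared_key, tag)
```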
Real-Time Pricing for Retail Energy Market
To assess the feasibility of the RTP service, our testbed implements an RTP server that provides price forecasts for a retail energy market. The server, in the absence of an RTP model in the real world, exploits the wholesale market price provided by the California Independent System Operator (CAISO) (http://oasis.caiso.com/mrioasis/). More specifically, it obtains three types of price forecasts from CAISO: Day-Ahead Market (DAM); Hour-Ahead Scheduling Process (HASP); and Real-Time Market (RTM). The DAM provides an estimated power price of every hour for 24 h ahead. The HASP and RTM provide an hour-ahead/10-min-ahead price estimation every 12/5 min, respectively. Since CAISO does not provide the price forecast for the location of our campus, the server takes the price value for the city of Long Beach. The RTP server also takes inputs of demand forecast and weather forecast from CAISO, and then eventually determines three types of price forecasts (DAM, HASP, and RTM) for the retail energy market.
Consuming the Service Data
The MP implements communication counterparts of the above two energy services for interoperation. With respect to the ADR service, it implements both smart and simple clients that periodically "pull" the EventState message from the server. This PULL mode is often preferred over a PUSH mode since the OpenADR client has more control over the communications, e.g., firewalls. It then identifies when the event starts and ends and other event contexts. The MP also pulls the price forecast from the RTP server periodically. Different applications may use the three types of forecasts differently. Our testbed primarily fetches the HASP and RTM forecasts every hour and 10 min and executes scheduling algorithms according to the price changes.
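The PULL-mode client can be pictured as a simple polling loop like the one below; the fetching and parsing of the OpenADR payload, and the load-control hook, are left as placeholder callables since their concrete implementations are not detailed here.

```python
import time

# Sketch of the PULL-mode simple client loop; fetch_event_state() stands in for
# the HTTP request and XML parsing against the OpenADR server.
def run_simple_client(fetch_event_state, apply_operation_mode, period_s=60):
    """Poll the DR server and hand the operation mode to the load controller."""
    while True:
        event = fetch_event_state()   # e.g. {"EventStatus": "ACTIVE", "OperationModeValue": "HIGH"}
        if event and event["EventStatus"] in ("NEAR", "ACTIVE"):
            apply_operation_mode(event["OperationModeValue"])  # NORMAL / MODERATE / HIGH
        time.sleep(period_s)
```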
Communication Model
The MP communicates with the energy resources via Ethernet, RS-485 serial, and IEEE 802.15.4. It supports various application protocols such as Modbus, IEC 61850, IEC DLMS, BACnet, SEP 1.0 (Modbus: http://www.modbus.org/; BACnet: http://www.bacnet.org; SEP: http://www.zigbee.org/Standards/ZigBeeSmartEnergy; DLMS: http://www.dlms.com/), and several proprietary protocols. The MP collects and stores both power-related measurements and status information from the energy resources every 5 min on average. In addition, it maintains meaningful metadata regarding each resource. For instance, each mini submeter is managed with a load type, location, and the load's priority. A resource owner configures the metadata, and thus the data keeps reflecting the physical characteristics of the plugged load and user contexts. The MP provides basic scheduling functions through which a user pre-schedules the operations of energy resources. The dimmable LED lights are now reserved to be ON only during office hours, while a user can still turn them on/off at any time.
User Interface
The MP implements a web-based user interface (UI), as shown in Figure 3, for real-time monitoring and control of the microgrid. The UI also allows users to interact with the MP and eventually with energy components in a microgrid. The MP allows real-time data to flow and provides control services with which users can read real-time measurements and send control messages to the DERs and the loads. The UI includes a variety of data visualization tools such as interactive graphs and tables that illustrate energy data and derived knowledge (e.g., historical or forecast data of DERs and loads) at a glance.
Microgrid Control
We implement two algorithms in the MP that support optimal operations in a microgrid: an energy scheduling algorithm and a Demand Response (DR) algorithm. The MP also implements forecast services for optimization. In particular, it adopts three different models for forecasting: a persistence model, an auto-regressive moving average (ARMA) model, and a machine learning model for load forecasting, PV forecasting [11], and EV forecasting [12], respectively. The MP uses CAISO's forecast data to provide market forecast services. Note that the algorithms here are designed based on the configuration of the two testbeds. Other algorithms [30,31] can also be implemented in the MP for different applications.
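For illustration, the two simplest of these forecast models could look like the sketch below; the ARMA order shown is arbitrary, and the actual models used in the MP (including the machine learning EV forecaster) are not reproduced here.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def persistence_forecast(history, horizon):
    """Persistence model: the next `horizon` steps repeat the latest observation."""
    return np.repeat(history[-1], horizon)

def arma_forecast(history, horizon, order=(2, 0, 1)):
    """ARMA load/PV forecast; the (p, q) order here is illustrative only."""
    model = ARIMA(np.asarray(history, dtype=float), order=order)
    return model.fit().forecast(steps=horizon)
```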
System Model
We present the system model of a microgrid and formulate the energy scheduling and demand response as optimization problems.
Let us consider a microgrid consisting of a set of Distributed Generation (DG) units denoted by G := {g_1, g_2, ..., g_G}, Distributed Storage (DS) units denoted by B := {b_1, b_2, ..., b_B}, and controllable loads denoted by L := {l_1, l_2, ..., l_L}. We use a discrete time model with a finite horizon in this paper. We consider a time period, namely a scheduling horizon, which is divided into T equal intervals ∆t, denoted by T := {0, 1, ..., T − 1}, where t_0 is the start time.
DG Model: For each DG unit $g \in \mathcal{G}$, we assume that there is an upper bound and a lower bound on its output power:
$$p_g^{\min}(t) \le p_g(t) \le p_g^{\max}(t),$$
where $p_g^{\min}(t)$ and $p_g^{\max}(t)$ are the minimum and maximum output power, respectively. Typical DG includes PV, WT, diesel, and CHP units. We note that we do not consider specific generation models for different types of DG; they can easily be incorporated into the optimization framework. If the DG unit is dispatchable (e.g., diesel), the output power $p_g(t)$ is a decision variable. If the DG unit is non-dispatchable (e.g., PVs and WTs), the output power $p_g(t)$ cannot vary and its value is set equal to the forecasted value (i.e., $p_g^{\min}(t) = p_g^{\max}(t) = p_g^{f}(t)$, where $p_g^{f}(t)$ is the forecasted power at time $t$). We denote the generation cost of a DG unit $g \in \mathcal{G}$ by $C_g(p_g(t))$ and assume that the cost function is strictly convex. For renewable DG units such as PVs and WTs, the generation cost is zero.
DS Model:
We consider batteries as the DS units in the microgrid. Given a battery $b \in \mathcal{B}$, we assume that its output power $p_b(t)$ is positive when charging and negative when discharging. Let $E_b(t)$ denote the energy stored in the battery at time $t$. The battery can be modeled by the following constraints:
$$p_b^{\min} \le p_b(t) \le p_b^{\max}, \qquad E_b(t+1) = E_b(t) + \eta_b\, p_b(t)\,\Delta t,$$
$$E_b^{\min} \le E_b(t) \le E_b^{\max}, \qquad E_b(T) \ge E_b^{e},$$
where $p_b^{\max}$ is the maximum charging rate, $\eta_b \in (0, 1]$ captures the battery efficiency, $-p_b^{\min}$ is the maximum discharging rate, $E_b^{\min}$ and $E_b^{\max}$ are the minimum and maximum allowed energy stored in the battery, respectively, and $E_b^{e}$ is the minimum energy that the battery should maintain at the end of the scheduling horizon. We use a cost function to capture the damage to the battery caused by the charging and discharging operations. Three types of damage are considered: fast charging, frequent switching between charging and discharging, and deep discharging. Following [32], we model the cost of operating a given battery $b$ as $C_b(\mathbf{p}_b)$, where $\mathbf{p}_b := (p_b(t), t \in \mathcal{T})$ is the charging/discharging vector and $\alpha_b$, $\beta_b$, $\gamma_b$, $\delta_b$, and $c_b$ are positive constants.
The above cost function is convex when $\alpha_b > \beta_b$. Its three terms penalize fast charging, frequent charging/discharging cycles, and deep discharging, respectively. We choose $\delta_b = 0.2$.
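For illustration, the following is a minimal sketch of a three-term battery wear cost with the structure described above; the exact functional form is the one given in [32], so the expression below is only an assumed stand-in with the same three penalty terms.

```python
import numpy as np

def battery_wear_cost(p_b, E_b, E_max, alpha=1.0, beta=0.75, gamma=0.5, delta=0.2):
    """Assumed three-term battery wear cost (not the exact form of [32]):
    (i) fast charging, (ii) charge/discharge switching, (iii) deep discharging."""
    p_b = np.asarray(p_b, dtype=float)
    E_b = np.asarray(E_b, dtype=float)
    fast_charging = alpha * np.sum(np.square(p_b))
    cycling = beta * np.sum(np.square(np.diff(p_b)))               # penalize charge/discharge switching
    deep_discharge = gamma * np.sum(np.square(np.maximum(delta * E_max - E_b, 0.0)))
    return fast_charging + cycling + deep_discharge

# Example with an assumed 24-step schedule for a 50 kWh battery.
p = np.random.uniform(-5, 5, 24)
E = np.clip(12.5 + np.cumsum(p), 0, 50)
print(battery_wear_cost(p, E, E_max=50.0))
```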
Load Model: For each load, the demand is constrained by a minimum and a maximum power, denoted by $p_l^{\min}(t)$ and $p_l^{\max}(t)$, respectively:
$$p_l^{\min}(t) \le p_l(t) \le p_l^{\max}(t).$$
For deferrable loads such as EVs, the cumulative energy consumption must exceed a certain threshold in order to finish their tasks before their deadlines. Let $E_l^{\min}$ and $E_l^{\max}$ denote the minimum and maximum total energy that the load is required to consume, respectively. The constraint on the total energy consumed by a deferrable load is given by:
$$E_l^{\min} \le \sum_{t \in \mathcal{T}} p_l(t)\,\Delta t \le E_l^{\max}.$$
We use a cost function to capture the customer's loss of comfort in the scheduling. The cost function $C_l(\mathbf{p}_l)$ quantifies the customer's loss or discomfort for load $l \in \mathcal{L}$ under the demand vector $\mathbf{p}_l := (p_l(t), t \in \mathcal{T})$. We assume the cost function is convex.
Supply-Demand Matching: The net demand of the microgrid is equal to the total demand minus the total generation:
$$P(t) = \sum_{l \in \mathcal{L}} p_l(t) + \sum_{b \in \mathcal{B}} p_b(t) - \sum_{g \in \mathcal{G}} p_g(t).$$
If the microgrid is operated in islanded mode, then $P(t) = 0$. If the microgrid is operated in grid-connected mode, then $P(t)$ is the power traded between the microgrid and the main grid. We note that islanded mode also involves other control and operational issues [33]. We model the cost of energy purchased from the main grid as $C_0(t, P(t)) := \rho(t)\,P(t)\,\Delta t$, where $\rho(t)$ is the market energy price.
Note that $P(t)$ can be negative, meaning that the microgrid can sell its surplus power to the main grid. (We assume that the selling price is the same as the purchasing price; depending on the market pricing scheme, the two prices may differ in reality.)
Energy Scheduling
The objective of the energy scheduling is to schedule the day-ahead operation of the DERs and the loads in a way that (i) the total costs of generation, energy storage, load, and energy purchase are minimized; and (ii) the DER constraints, the load constraints, and the supply-demand matching constraint are satisfied. The scheduling horizon T in this problem is one day.
We define $\mathbf{p}_g := (p_g(t), t \in \mathcal{T})$, $\mathbf{p}_b := (p_b(t), t \in \mathcal{T})$, $\mathbf{p}_l := (p_l(t), t \in \mathcal{T})$, and $C_g(\mathbf{p}_g) := \sum_{t \in \mathcal{T}} C_g(p_g(t))$. The energy scheduling in the microgrid can then be formulated as a convex optimization problem [34]: minimize the weighted total cost $\xi_g \sum_{g} C_g(\mathbf{p}_g) + \xi_b \sum_{b} C_b(\mathbf{p}_b) + \xi_l \sum_{l} C_l(\mathbf{p}_l) + \xi_0 \sum_{t \in \mathcal{T}} C_0(t, P(t))$ subject to constraints (1)-(9), where $\xi_l$, $\xi_g$, $\xi_b$, and $\xi_0$ are parameters that trade off among the utility maximization and the cost minimizations.
Solving the problem gives the optimal schedules including the generation schedules p g , the battery schedules p b , and the load schedules p l .
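To make the formulation concrete, the following is a minimal sketch of a simplified day-ahead scheduling problem in CVXPY; the horizon, price profile, bounds, and quadratic battery/discomfort costs are illustrative assumptions and not the exact cost functions or parameters used in the paper.

```python
import cvxpy as cp
import numpy as np

T = 24                                    # hourly scheduling horizon (one day)
dt = 1.0                                  # interval length [h]
rho = np.random.uniform(50, 150, T)       # assumed market price profile [KRW/kWh]
p_pv = 10 * np.clip(np.sin(np.linspace(0, np.pi, T)), 0, None)   # assumed PV forecast [kW]

p_b = cp.Variable(T)                      # battery power (+charge / -discharge) [kW]
p_l = cp.Variable(T)                      # controllable load power [kW]
E_b = cp.cumsum(p_b) * dt + 12.5          # stored energy, E_b(0) = 12.5 kWh (efficiency omitted)

P = p_l + p_b - p_pv                      # net demand exchanged with the main grid

cost = cp.sum(cp.multiply(rho, P)) * dt   # energy purchase cost
cost += 1.0 * cp.sum_squares(p_b)         # assumed battery wear penalty
cost += 1.0 * cp.sum_squares(p_l - 5.0)   # assumed discomfort around a 5 kW preferred profile

constraints = [
    cp.abs(p_b) <= 5,                     # charge/discharge rate limit [kW]
    E_b >= 5, E_b <= 50,                  # stored-energy bounds [kWh]
    p_l >= 0, p_l <= 10,                  # load power bounds [kW]
    cp.sum(p_l) * dt >= 40,               # minimum daily energy for deferrable loads [kWh]
]
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("optimal daily cost:", prob.value)
```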
Demand Response
Upon receiving DR event signals from the utility, the microgrid EMS responds by coordinating the operation of energy devices in the microgrid properly.
A DR event is characterized by a time schedule $\mathcal{T}$ that specifies the start and end times and a demand limit $P^{\max}(t)$ that is determined from the event information. The DR constraint on the net demand of the microgrid is given by:
$$P(t) \le P^{\max}(t), \quad \forall t \in \mathcal{T}. \qquad (10)$$
Similar to the day-ahead scheduling problem, the DR problem can be formulated as a convex optimization problem subject to constraints (1)-(10).
In the above problems, the control variables are assumed to be continuous. However, some of them may be discrete in reality (e.g., on/off). A two-stage approach [35] can be used to address this issue: in the first stage, a solution is obtained assuming that all control variables are continuous; then, the discrete variables are rounded to the nearest discrete levels and treated as constants in the second-stage solve.
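A minimal sketch of this two-stage relax-and-round idea is shown below; the solve callback (for example, a wrapper around the CVXPY problem above) and the discrete level set are assumptions for illustration.

```python
import numpy as np

def two_stage_solve(solve, discrete_idx, levels):
    """Stage 1: solve the relaxed problem with all variables continuous.
    Stage 2: round the discrete variables to the nearest admissible level,
    fix them, and re-solve for the remaining continuous variables."""
    x_relaxed = solve(fixed={})                        # stage 1 (fully relaxed)
    fixed = {
        i: float(levels[np.argmin(np.abs(levels - x_relaxed[i]))])
        for i in discrete_idx
    }
    return solve(fixed=fixed)                          # stage 2 (discrete values fixed)

# Example: variables 0 and 3 are on/off (0 or 1); `solve` is a user-supplied callback
# that accepts a {index: value} dictionary of fixed variables and returns a solution vector.
# schedule = two_stage_solve(solve, discrete_idx=[0, 3], levels=np.array([0.0, 1.0]))
```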
Testbeds and Experiments
This section describes two microgrid testbeds in which we deploy various types of energy resources; Figure 4 presents some of these devices. For the experiments, we also develop several external energy services. On top of the testbeds, we run experiments on microgrid operations. We note that this paper omits basic experimental results, i.e., measurement of energy usage and direct resource control; instead, our experiments focus on the optimal energy scheduling and DR operations in the KIER and UCLA SMERC testbeds, respectively. We refer to [10] for our previous results.
Smart Submeter
Unlike a conventional smart meter that measures aggregated energy usage, a smart submeter provides fine-grained measurement and control. Our testbed deploys two types of submeters. We instrument a panel-level multi-submeter that simultaneously connects up to 36 single-phase circuits within a panel (http://www.satec-global.com/eng/products.aspx?product=42). Using it, we monitor two groups of energy loads: the lighting and the power outlets of an office. We also install mini submeters that are instrumented on single power lines (http://www.bspower.co.kr/en/smartmeter.do). For instance, a mini submeter can directly connect to a light switch that turns a set of fluorescent lights on/off. These submeters use current transformers to convert current to voltage, and an embedded microcontroller calculates the real, reactive, and apparent power and the energy usage. They are equipped with relays, and the microcontroller switches the power upon request.
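As an illustration of this measurement step, the following is a minimal sketch of computing real, reactive, and apparent power from sampled voltage and current waveforms; the waveform parameters in the example are illustrative.

```python
import numpy as np

def power_from_samples(v, i):
    """Estimate real (P), reactive (Q), and apparent (S) power from whole
    cycles of voltage v [V] and current i [A] sampled at the same instants."""
    v = np.asarray(v, dtype=float)
    i = np.asarray(i, dtype=float)
    p_real = np.mean(v * i)                     # average of instantaneous power
    v_rms = np.sqrt(np.mean(v ** 2))
    i_rms = np.sqrt(np.mean(i ** 2))
    s_apparent = v_rms * i_rms
    q_reactive = np.sqrt(max(s_apparent ** 2 - p_real ** 2, 0.0))
    return p_real, q_reactive, s_apparent

# Example: one 60 Hz cycle sampled at 3 kHz, with the current lagging by 30 degrees.
t = np.arange(0, 1 / 60, 1 / 3000)
v = 311 * np.sin(2 * np.pi * 60 * t)
i = 14 * np.sin(2 * np.pi * 60 * t - np.pi / 6)
print(power_from_samples(v, i))
```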
Office Appliance with Plug-Load Meter
As plug-loads, including all the office appliances, account for more than one third of the total power consumption in a building [36], it is necessary to manage them carefully. To this end, we deploy two types of plug-load meters: smart plugs and smart power strips. Office appliances such as computers, monitors, desk lamps, and network switches are plugged into them. A plug-load meter is functionally the same as a submeter, i.e., it performs energy measurement and control. It communicates with the MP using a ZigBee [37] module.
Smart Equipment
Smart equipment represents energy resources that must be accessed directly. Recent programmable thermostats and LED lights fall into this category. Each piece of equipment has its own operation cycles beyond a simple on/off control and is able to adjust its operation upon external requests. Our testbed deploys dimmable LED panel lights that adjust their brightness and color temperature in eight steps. Each light uses a ZigBee module to transmit its status to the MP and to accept control commands from it. For scalable experiments, we additionally develop a light emulator that creates 200 virtual LEDs, each of which operates exactly like the real device (brightness, color temperature, and energy consumption).
Smart Home Appliance
The home appliances are functionally the same as the smart equipment: each manages its own operation cycles and must be accessed directly. The MP connects to two types of appliances via Ethernet: a clothes dryer and a refrigerator (http://smartgrid.ucla.edu/projects_adr.html). The MP is able to change the strength of the dryer's heat (high, low, or no heat) as well as turn its operation on/off by sending signals to it. The refrigerator adjusts the operating cycles of its compressor, defroster, and fan. To measure energy usage, mini submeters are instrumented on their input power cables.
EV Charging Station
UCLA has instrumented a number of charging stations in campus parking structures [38]. Each station powers several EVs simultaneously via J1772 connectors (http://smartgrid.ucla.edu/projects_evgrid.html) and supports multiple charging levels (http://standards.sae.org/j1772_201210). It is capable of measuring charging capacity as well as charging rate. Each station sends the charging data in real time to a management server in our laboratory, which controls the stations based on subscribers' profiles and preferences. The MP communicates with the stations via this server. Because of the low penetration of EVs, however, we could not collect enough data for experiments. As complementary work, we simulate charging activities based on the measurements and obtain an ample amount of data.
Solar Panel and Battery
The MP is connected to a photovoltaic panel and a Battery Management System (BMS) that performs full monitoring of the battery system (voltage, electric current, SoC, SoH, and temperature). A 50 kW PV system is being installed on the roof and a 25 kWh BMS in the lab. Table 1 describes the parameters of the BMS. The current version of the testbed implements PV and BMS simulations whose data are generated from the real devices. The PV and BMS also implement the IEC 61850 standard to communicate with the MP. The details of the simulators can be found in our previous research [39]. KIER is a research organization that focuses on improving energy efficiency and supporting energy policy in terms of technological development. It has built an entire building-level testbed with a peak demand of 300 kW that is mainly consumed by computers, air conditioners, lighting, and EVs. Figure 5 shows the system model of the testbed, including the two subsystems that our experiments use: a power hardware-in-the-loop simulation (PHILS) and real hardware. The PHILS provides a real-time digital simulation of a hybrid power system with power hardware that can produce kW-level electrical power. It allows flexible modeling of DERs, such as PVs, EVs, and ESS, together with a real DC-AC inverter, and is expected to effectively fill the gap between analytical simulation and practical implementation. The PHILS is built with Regatron's integrated bidirectional power supply TopCon TC.GSS and the TC.G family (http://www.regatron.com/en/products-topcon/bidirectional-power-supply-gss), and Figure 6 pictures the PHILS testbed system. The other subsystem in the testbed is a real hardware system that includes three types of connected DERs: PVs, EVs, and ESS. The real hardware DER system is partially under development. Our experiments use the real-time power hardware-in-the-loop PV/ESS simulation systems for the DERs and real-hardware slow/quick EV charging systems for the controllable loads. The PHILS system is also used to verify the operation of the microgrid. Figure 7 illustrates the architecture of our testbed, and Table 2 shows the specifications of the energy devices used in the testbed. The testbed adopts IEC 61850 as the communication and control interface. The devices are all connected to two IEC 61850 gateways and one commercial gateway that also supports the IEC 61850 standard. The MP communicates with the gateways using the Manufacturing Message Specification (MMS) as well as REST/oBIX.
Energy Scheduling
The devices in the KIER testbed considered in the energy scheduling include a 16 kW PV, a 5 kW ESS, a 16 kW quick-charging EV system, two 2.5 kW slow-charging EV systems, and two hundred 63 W dimmable LEDs. The maximum energy allowed to be stored in the battery, $E_b^{\max}$, is 50 kWh, and we set $E_b^{\min}$ = 5 kWh and $E_b(0) = E_b^{e}$ = 12.5 kWh. The parameters in the battery cost function are chosen to be $\alpha_b$ = 1, $\beta_b$ = 0.75, and $\gamma_b$ = 0.5. We choose $C_l(p_l(t)) := \sum_{t\in\mathcal{T}} \eta_l (p_l(t) - p_l^{f}(t))$ for the LEDs and $C_l(p_l(t)) := \eta_l (\sum_{t\in\mathcal{T}} p_l(t)\Delta t - \sum_{t\in\mathcal{T}} p_l^{f}(t)\Delta t)$ for the EVs, where $p_l^{f}(t)$ is the forecasted load and $\eta_l$ is the priority of the load given by the customer; the higher the priority, the more important the load is to the customer. Our experiment assumes that the LEDs can be shed and the EVs can be shifted, and the maximum shedding percentage of the LEDs is 30%. The energy upper bound of the EVs, $E_l^{\max}$, is chosen randomly from [18 kWh, 23 kWh], and the energy lower bound, $E_l^{\min}$, from [13 kWh, 18 kWh]. Perfect forecasting of the DERs and loads is assumed. We use the Korean time-of-use (TOU) price shown in Figure 8, and the parameters in the algorithm are chosen as $\xi_l$ = 1, $\xi_g$ = 1, $\xi_b$ = 0.01, and $\xi_0$ = 1. Figure 9a shows the forecast of the devices, serving as the baseline consumption. We then run the algorithm in MATLAB (v8.x, MathWorks, Natick, MA, USA) to produce the optimized schedules shown in Figure 9b. Comparing Figure 9b with Figure 9a, we observe the battery charging/discharging cycles: the battery is charged when the energy price decreases and discharged when the price increases. We can also observe load shedding and load shifting in the results: the LEDs are shed, the EVs are shifted to the times when the price is low, and the total charging energy of the EVs is also reduced. The total operational cost of the testbed using the optimized schedule is 7899 KRW, compared with 12,424 KRW without scheduling, a cost saving of (12,424 - 7899)/12,424 = 36.43%. Next, we use the schedule produced by the algorithm to control the hardware-in-the-loop simulators in the testbed in order to validate the simulation in real-hardware settings. The experimental results are presented in Figure 10. Compared with the simulation results in Figure 9b, the result of using the real-time hardware simulators roughly follows the optimized schedule obtained from the simulation. The main cause of the differences between them lies in the resolution of the inputs for the hardware simulators: the hardware simulators are not able to accept inputs of arbitrary precision. The effect of communication delay can also be observed in the LED control: the total power of the LEDs changes linearly in time. This is because we have 200 LEDs, and it takes time to control all of them and wait for them to respond to the control signals. Both the input resolution and the communication delay need to be considered for optimal energy scheduling in a real microgrid system.
Demand Response Algorithm
In the experimental scenario, the MP changes the demand in response to changes in the real-time energy prices. In order to highlight the DR effect, only the dimmable LEDs participate in the DR, that is, they respond to the price changes. The DR optimization is then simplified to a problem solved at each time t, where $\mathcal{L}$ includes only the LEDs and $p_l^{f}(t)$ corresponds to the brightness preferred by the customer. In our DR experiment, $P^{\max}(t)$ is defined as a piecewise linear function that translates the real-time price $\rho(t)$ from the CAISO into the maximum allowed total power. The LEDs are assumed to be equally distributed over four offices with different priorities ($\eta_l$ = 5, 10, 15, 20). The brightness is dimmed in the range [0, 100]; the minimum, maximum, and preferred brightness are set to 20, 100, and 80, respectively. We implement the DR algorithm as a web service using JOptimizer (http://www.joptimizer.com/). Upon receiving DR signals from a DRAS (Demand Response Automation Server), the MP sends control commands to the LEDs.
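A minimal sketch of the simplified per-interval DR dispatch for the LED groups follows; the piecewise-linear price-to-power mapping and the quadratic discomfort term are assumptions that mirror the description above (the actual implementation is a Java web service built on JOptimizer).

```python
import cvxpy as cp
import numpy as np

priorities = np.array([5.0, 10.0, 15.0, 20.0])   # one group of LEDs per office
p_per_led = 0.063                                # kW drawn by one LED at full brightness
leds_per_office = 50                             # 200 LEDs split over 4 offices

def max_power_from_price(price):
    """Assumed piecewise-linear mapping from real-time price to a power cap [kW]."""
    return float(np.interp(price, [20, 60, 100], [12.6, 8.0, 4.0]))

def dr_dispatch(price, preferred=80.0, b_min=20.0, b_max=100.0):
    b = cp.Variable(4)                           # brightness per office, in [0, 100]
    total_power = cp.sum(b) / 100.0 * p_per_led * leds_per_office
    discomfort = cp.sum(cp.multiply(priorities, cp.square(b - preferred)))
    prob = cp.Problem(cp.Minimize(discomfort),
                      [b >= b_min, b <= b_max,
                       total_power <= max_power_from_price(price)])
    prob.solve()
    return b.value

print(dr_dispatch(price=75.0))   # lower-priority offices dim further below the preferred level
```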
Figure 11 shows the changes in the real-time price, the total power consumption, and the power consumption grouped by priority. The figure shows that the total power consumption is reduced as the price changes. The demand reduction rule uses priority: the devices with lower priorities reduce their power consumption more than those with higher priorities.
Conclusions
This paper proposes the Microgrid Platform, an EMS for a microgrid, designed by taking into account both the functional requirements and the engineering challenges. The MP is flexible and extensible in the sense that it supports plug-and-play of DER devices, loads, and functionalities by adopting the resource-oriented architecture style. The MP achieves interoperability via energy service interfaces. We develop and deploy a prototype system in both the UCLA and KIER testbeds and run experiments to show the feasibility of microgrid management and control in real-world settings. Our experimental results demonstrate that the MP is able to (i) manage various devices in the testbeds; (ii) interact with external systems; and (iii) perform efficient energy management. Integral parts of our future work include conducting more experiments for statistical analysis and implementing/evaluating various control algorithms. We note that this work extends our previous research [40].
Figure 1. An illustration of a microgrid energy management system.
Figure 2. System architecture of the Microgrid Platform implementation.
Figure 3. Web interface showing the overview of the microgrid.
Figure 4. Energy resource devices used in the testbed at the UCLA Smart Grid Energy Research Center: a 4-CH EV charging station, a mini submeter, a power strip, a multi-submeter in a panel, and a solar panel.
Figure 5. A system model of the entire testbed at the Korea Institute of Energy Research.
Figure 6. A capture of the power hardware-in-the-loop simulation testbed.
Table 1. Parameters of the battery management system in the testbed.
|
v3-fos-license
|
2022-05-10T16:17:23.556Z
|
2022-03-19T00:00:00.000
|
248594663
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "http://shanlaxjournals.in/journals/index.php/management/article/download/4894/4038",
"pdf_hash": "f73d1cb3a0ec7c844110dd9a02f35f2957395b03",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43152",
"s2fieldsofstudy": [
"Business"
],
"sha1": "3ca9a344bf0c01961e9bac7650e40c93569d646a",
"year": 2022
}
|
pes2o/s2orc
|
Customer Loyalty: A Case Study Involving the Three Indian Airlines IndiGo, SpiceJet, and Air India
India's aviation industry is largely untapped, with huge growth opportunities. The three largest domestic airlines by market share in India are IndiGo, SpiceJet, and Air India. Some travellers have grown disillusioned with traditional loyalty programs, citing a focus on high-value business travellers with little attention to the less-frequent leisure flyer. This research therefore examines how airlines should rethink customer loyalty. The objectives of this research are to explore the nature of consumer loyalty and its major determinants for the three airlines in India (IndiGo, SpiceJet, and Air India) and to identify the differences concerning attitude, habit, satisfaction, loyalty, and augmented services (service, safety, comfort, luggage allowance, and bonus) among the three airlines. The primary data were collected through a structured questionnaire from 600 travellers at leading airports in India. The findings show that the age and occupation of the respondents vary significantly among the three airlines, as do the augmented service factors (attitude, habit, loyalty, safety, and bonus). The conclusion, implications of the study, and suggestions for future researchers are also included.
Introduction
India has over 400 airports and airstrips. Passenger traffic at airports across India amounted to over 115 million in the financial year 2021. According to the International Air Transport Association (IATA), India has become the third-largest national aviation market in the world and is expected to be the world's third-largest air passenger market, behind China and the United States, within the next ten years, by 2030.
Figure 1. Number of passengers travelling through domestic and international airports in India.
The three major domestic airlines with the largest market share in India are IndiGo, SpiceJet, and Air India.
Problem Definition
The Indian aviation industry is largely untapped despite great growth opportunities. Industry stakeholders should engage and work with policy makers to make sound and informed decisions that can improve the Indian aviation industry. With a focus on customer loyalty, India can achieve its vision of being the third-largest airline market. The research questions for this study are as follows: • How do the customer profiles of the three airlines differ? • How does satisfaction differ among the three airlines? • How does customer loyalty differ among the three airlines? • How do augmented services such as service, safety, comfort, luggage allowance, and bonus differ among the three airlines? Kumar et al. (2011) found that customer relationships play a critical role in creating customer loyalty. Zhaohua, Yaobin, Kwok, and Jinlong (2010) explain that customer satisfaction is considered an important determinant of repurchase and customer loyalty. Nambisan and Sawhney (2007) explain that many factors in consumer behaviour influence the level of trust that consumers place in an airline. Aydin and Özer (2005) stated that service quality also improves customers' tendency to repurchase, buy more, and buy other services, reduces their price sensitivity, and leads them to tell other customers about their experiences. Loyalty programs can offer customers a wide range of "hard" benefits (e.g., discounts, coupons, or pre-purchase discounts and savings) and "soft" benefits (e.g., special invitations or special "after-hours" purchasing privileges), so that they become regular customers, increase their purchases, and become store advocates who recommend the store to family, friends, and acquaintances (Gable et al.). Chitty, Ward, and Chau (2007) point out that the behavioural element of loyalty describes routine behaviour. Habit is described as "a repetitively achieved, strong behaviour which is not actively deliberated upon at the time of the act" (Beatty and Kahle, 1988). Hallowell (1996) observed that customer satisfaction influences customer loyalty. Safety has always been a critical element of business success in the passenger airline industry. Although fatal air accidents are extremely rare compared to other transport modes, the rapid increase in the number of commercial aviation flights has resulted in aviation's increasing exposure to risk (Chang and Yeh, 2004). Comfort plays an increasingly important role in air travel. Checked luggage refers to items of luggage handed over to an airline for transportation in the hold of an aircraft. Many airlines have effectively unbundled services such as the luggage allowance (Button and Ison, 2008).
Research Objective
The objectives of this research are as follows. • To explore the nature of customer loyalty and its major determinants for the three airlines in India (IndiGo, SpiceJet, and Air India). • To identify the differences concerning attitude, habit, satisfaction, loyalty, and augmented services among the three airlines.
Research Methodology
In this study, a quantitative approach was used and the study is descriptive. Customer loyalty to airlines was identified as the dependent variable, and service, safety, comfort, habit, luggage/baggage allowance, and promotion were identified as the independent variables.
a. Measurement and Scaling
The conceptualization and development of the questionnaire were based on the existing literature. A standard 5-point Likert scale was used to measure the constructs. The survey instrument was refined during a pilot study involving 60 respondents to ensure the internal consistency of the measuring instrument.
The questionnaire contained 31 items in total. The first part of the instrument contained 5 questions about the demographics of the respondents, including age, gender, education, and occupation. The second part of the questionnaire contained seven questions about the characteristics of the respondents, and the third part included 19 items related to attitude, habit, satisfaction, loyalty, the augmented service factors (service, safety, comfort, luggage allowance, and bonus), and brand loyalty.
The questionnaires were administered by personal delivery. A convenience sampling technique was followed to collect the primary data, and the complete data collection took one month. The individuals targeted for data collection in this study were customers of IndiGo, SpiceJet, and Air India in India.
During the four-week period, 627 respondents completed the survey, and a total of 627 responses were recorded. Twenty-seven responses were discarded because of duplicate submissions or incompleteness, leaving a net sample of 600 usable questionnaires (the sample size was determined based on the sample standard deviation) for this study.
b. Tools Used
For data analysis, the Statistical Package for the Social Sciences (SPSS), version 20, was used. Statistical tests were applied to check the reliability (Cronbach's alpha) and normality (skewness and kurtosis) of the data, and ANOVA, chi-square tests, and percentage analysis were conducted to examine the impact of the independent variables on the dependent variable.
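As an illustration of this analysis pipeline, the following is a minimal sketch in Python (pandas/SciPy) rather than SPSS; the column names and the randomly generated placeholder responses are purely hypothetical and only show how Cronbach's alpha, the one-way ANOVA, and the chi-square test would be computed.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical survey data: one row per respondent, three Likert loyalty items,
# plus airline and purpose-of-travel columns (all names and values are placeholders).
df = pd.DataFrame({
    "airline": np.random.choice(["IndiGo", "SpiceJet", "Air India"], 600),
    "purpose": np.random.choice(["Tourist", "Business", "Other"], 600),
    "loyalty1": np.random.randint(1, 6, 600),
    "loyalty2": np.random.randint(1, 6, 600),
    "loyalty3": np.random.randint(1, 6, 600),
})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Scale reliability: alpha = k/(k-1) * (1 - sum(item variances)/variance(total))."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

loyalty = df[["loyalty1", "loyalty2", "loyalty3"]]
print("Cronbach's alpha:", cronbach_alpha(loyalty))

# One-way ANOVA: does the mean loyalty score differ among the three airlines?
groups = [g.mean(axis=1) for _, g in loyalty.groupby(df["airline"])]
print("ANOVA:", stats.f_oneway(*groups))

# Chi-square test of association between airline and purpose of travel.
chi2, p, dof, expected = stats.chi2_contingency(pd.crosstab(df["airline"], df["purpose"]))
print("Chi-square:", chi2, "p-value:", p)
```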
Demographic Profiles
As far as the profile of the respondents is concerned, 33.33% of the respondents are aged between 41 and 50, and 23.2% are aged between 21 and 30. This indicates that the researcher selected experienced passengers of an appropriate age for this study. As per Table 1, of the total sample, approximately 43.8% of the respondents are graduates and 32.7% are postgraduates, which shows that most of the respondents have completed higher education. Approximately 67.3% of the respondents are male and 32.7% are female. Table 1 also shows that 32% of the respondents' annual income is between INR 5,00,001 and INR 10,00,000 and another 32% is between INR 10,00,001 and INR 15,00,000. Table 2 shows the cross-tabulation of the Indian airlines and the respondents' purpose of travel. Two hundred customers were selected from each of IndiGo, SpiceJet, and Air India. In total, 242 respondents travel as visitors/tourists and 180 respondents travel for business purposes; 86 respondents from IndiGo and 78 respondents each from SpiceJet and Air India travel as visitors/tourists.
Hypothesis
• H0 (null hypothesis): There is no association between the Indian airline and the respondents' purpose of travel. • H1 (alternative hypothesis): There is an association between the Indian airline and the respondents' purpose of travel.
Level of Significance
The level of significance is fixed at 5%, and therefore the confidence level is 95%. (No cells have an expected count of less than 5; the minimum expected count is 25.33.) Table 3 presents the results of the chi-square test of association between the Indian airlines and the respondents' purpose of travel. The p-value of the test is .001, which is less than 0.05, so the null hypothesis is rejected in favour of H1, and we conclude that there is an association between the Indian airlines and the respondents' purpose of travel. Table 4 shows the descriptive statistics of the respondents' responses about customer loyalty for the three airline services. The highest agreement is observed for the statement "I will fly with this Airline company in future", with a mean value of 2.98, and the lowest agreement for the statement "I consider myself as a loyal customer to this airline Company", with a mean value of 2.75. Using the benchmark of ±2.0 for skewness and kurtosis, the reported values for the above variables lie within the benchmark; hence, the data distribution achieves normality. Table 5 shows that the highest loyalty score belongs to IndiGo and the lowest to SpiceJet, with Air India in the middle; thus, IndiGo travellers indicated 'agree' in terms of loyalty. Cronbach's alpha for the scale was .872, and the ratings of the three items were averaged to form an overall loyalty score for each traveller. A one-way ANOVA shows that the difference in loyalty among the three airlines is significant. In Table 6, IndiGo's mean value is the highest, which is the best, whereas Air India's mean value is the lowest; SpiceJet's score is in the middle. These scores indicate that IndiGo was perceived as a 'good' airline, whereas SpiceJet and Air India were rated as 'average' in terms of attitude towards the airline. This means that travellers gave Air India a low rating in terms of attitude, which needs to be improved. The standard deviations were found not to be high. Table 7 shows the descriptive statistics of the respondents' responses about customer satisfaction for the three airline services. The highest agreement is observed for the statement "This company represents the ideal I have of a perfect airline", with a mean value of 3.62, and the lowest agreement for the statement "I am satisfied with the experience that the airline company has provided", with a mean value of 3.24. Again, the skewness and kurtosis values lie within the ±2.0 benchmark, so the data distribution achieves normality. In the airline-wise comparison of customer satisfaction, IndiGo's mean value is the highest, which is the best, whereas SpiceJet's mean value is the lowest; Air India's score is in the middle. These scores indicate that IndiGo was perceived as a 'good' airline, whereas SpiceJet and Air India were rated as 'above average' in terms of customer satisfaction. This means that travellers gave SpiceJet a low rating in terms of customer satisfaction, which needs to be improved.
Table 9 presents an analysis of means among the three airlines with respect to the augmented services, namely safety, comfort, luggage allowance, and bonus.
Conclusion
In conclusion, the purpose of this study is to emphasize the importance of customer loyalty and to demonstrate the impact of various factors on customer loyalty for IndiGo, Air India, and SpiceJet. The benefits of loyalty programs have been recognized as important aspects of customer loyalty in the aviation industry. Therefore, airlines should offer good services that suit customer needs and design reliable loyalty plans to increase profits.
|
v3-fos-license
|
2022-05-11T15:21:57.647Z
|
2022-05-01T00:00:00.000
|
248687271
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/22/9/3552/pdf?version=1651908618",
"pdf_hash": "bd6192de8dadb3137615cc03ddb99372eb2ba40d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43154",
"s2fieldsofstudy": [
"Business"
],
"sha1": "896fbb0cb496eb831b43fbe8d5a5982c41d783f9",
"year": 2022
}
|
pes2o/s2orc
|
Formation Control of Automated Guided Vehicles in the Presence of Packet Loss
This paper presents the formation tracking problem for non-holonomic automated guided vehicles. Specifically, we focus on a decentralized leader–follower approach using linear quadratic regulator control. We study the impact of communication packet loss—containing the position of the leader—on the performance of the presented formation control scheme. The simulation results indicate that packet loss degrades the formation control performance. In order to improve the control performance under packet loss, we propose the use of a long short-term memory neural network to predict the position of the leader by the followers in the event of packet loss. The proposed scheme is compared with two other prediction methods, namely, memory consensus protocol and gated recurrent unit. The simulation results demonstrate the efficiency of the long short-term memory in packet loss compensation in comparison with memory consensus protocol and gated recurrent unit.
Introduction
Automated guided vehicles (AGVs) along with their formation control are a key technology for Industry 4.0 as they automate the coordinated movement of materials and components in manufacturing environments in a safe, secure, and operationally efficient manner [1,2]. In many applications, formation control refers to the process by which a group of autonomous vehicles follows a predefined trajectory while maintaining a desired spatial pattern [3]. Multi-agent formation control systems have received considerable attention in the research literature due to the inherent difficulties associated with the control and coordination strategies, especially in the absence of a central controller. AGV formation control has been extensively used in smart factories and warehouse environments in a mobile robot-based production line system [4]. Formation control has a wide range of applications, including vehicle platoon control [5], cooperative transportation of large or heavy loads carried by multiple mobile robots, or automated guided vehicles (AGVs) [6]. Formation control [7] addresses various sub-problems such as localization [8], obstacle avoidance [9], and distributed path planning [10], with numerous studies on these topics. Due to its simplicity and scalability, the leader-follower approach is a widely adopted formation control method [11]. This method selects one or more robots as leaders to guide and move along the desired trajectory, while the remaining robots are selected as followers to track the leader(s) paths. A leader-follower formation control problem can be considered a tracking problem in control systems, where the leader moves along the desired trajectory, and the followers track the leader, maintaining the required formation [3]. We consider that each robot's (e.g., leader, follower) control procedure only uses local measurements and is not based on a centralized controller. Consensus control is another type of formation control in which all robots coordinate and make decisions based on information from their neighbours to achieve consensus [12].
Generally, the AGVs/robots exchange formation control information through a wireless network [10], which plays a vital role in interconnecting the AGVs; formation control performance therefore depends on the reliability of this communication. The main contributions of this paper are as follows: • Study of decentralized leader-follower formation control for non-holonomic automated guided vehicles using a linear quadratic regulator (LQR). LQR is a simple yet popular control approach that can be easily implemented and that has not yet been used for formation control. • Analysis of the impact of packet loss on the formation control of AGVs.
• Improving the performance of a linear quadratic regulator (LQR) controller via machine learning, e.g., LSTM, to deal with packet losses, rather than using a highly non-linear and complicated controller such as a sliding mode controller. • Development of a mechanism to compensate for packet loss with LSTM and the application of the mechanism to the formation control of non-holonomic differential drive robots, which are more sensitive to network uncertainties due to non-holonomic constraints. • Comparing LSTM with GRU and MCP for the compensation of 30 and 50 percent packet loss through simulation in MATLAB/SIMULINK.
The rest of this paper is organized as follows. In Section 2, related work focused on the compensation of packet loss for AGVs is presented. Section 3 discusses how to model differential drive robots mathematically. A controller is designed in Section 4. In Section 5.1, the vulnerability of the system to packet loss is demonstrated. In Section 5.2, an LSTM network is used to counteract the negative impact of packet loss. Prediction accuracy of LSTM, GRU, and MCP are compared in Section 6.1. Simulations are used to evaluate the proposed method's performance in Section 6.2. Finally, the conclusion and suggestions for future work are presented in Section 7.
Related Work
In this section, we mainly discuss prior research that has developed algorithms and controllers to deal with packet loss in the formation control of AGVs. The authors in [36] concluded that by reducing the number of control updates and transmissions, the wireless communication channel is less congested and hence packet losses can be minimized. However, the authors did not propose any mechanisms to address the effect of packet loss, which can affect the performance of their formation control.
In [33], a networked predictive controller and two algorithms are developed to cope with consecutive packet loss and communication delay for non-linear wheeled mobile robots. The authors simulated the effect of packet loss by considering a tunnel the robot travels through that is no longer detectable with cameras and associated servers [33]. A well-quantified analysis of packet loss and its effects is missing.
A consensus-based tracking control strategy was studied for leader-follower formation control of multiple mobile robots under packet loss [37]. A novel multiple Lyapunov functional candidate and linear matrix inequality (LMIs) ensure that the robots reach consensus when packet loss and communication weight (representing the rate of information flow between agents) are taken into account. Packet loss is modelled using a Bernoulli distribution and assumed to be 20% in the majority of scenarios. It is shown that the system can achieve consensus under high packet loss, but it also takes a long time to reach consensus. Their simulation results consider that one of the agents has access to the maximum amount of information, implying that their communication weight is equal to one.
In [13], event-triggered second-order sliding mode control is designed for consensusbased formation control of non-holonomic robots. Although sliding mode control is a robust method for counteracting packet loss and delay, the second-order sliding mode controller is a difficult controller to implement in real-world industrial scenarios. Furthermore, the main focus of this article is on event-triggered control, and the highest packet loss rate considered is 20%, modelled using the Bernoulli distribution for a circular trajectory.
Among the learning methods used to address packet loss, iterative learning control (ILC) design has been used to cope with packet loss in several articles [38,39]. ILC is based on the concept of learning from previous iterations in order to improve the performance of a system that repeatedly performs the same task [40]. In [41], ILC is applied to a linear system that suffers from 15% and 25% packet loss. The authors show that after 50 iterations, the system compensates for this packet loss. ILC is also used in non-linear multi-agent systems to solve the consensus problem of a leader-follower use case with packet dropouts of 10% and 40% [42]. None of these ILC studies [38,39,41,42] were conducted with non-holonomic AGV formation control, which is a more challenging system because of the non-holonomic constraints.
It is worth noting the difference between the methods proposed in this research and the dead reckoning method. They might seem similar in definition but they have totally different approaches and functions. Dead reckoning (the "deduced reckoning" of sailing days) is a simple and basic mathematical procedure for finding the present location of a vessel by advancing some previous position through a known course and using the velocity information of a given length of time [43]. In dead reckoning, the Global Positioning System (GPS) is not available, e.g., no GPS receiver, indoor environment, etc. [44]; therefore, dead reckoning is used as a localization method that estimates the robot's position, orientation, and integrates local sensor information over time, which usually suffer from drifts [45]. In this research, we do not address localization, and it is considered that each robot knows its own position accurately (e.g., the global reference is available).
In contrast to previous research, this study focuses on the use of deep learning to compensate for packet loss while robots maintain their formation. When packet loss occurs, LSTM, GRUs, and MCP are optional methods for predicting the leader position. LSTM is used for AGVs in various fields such as path planning [46], state estimation and sensor fusion of holonomic robots [47], data fusion of the odometry and IMU [48] anomaly detection [49] and fault detection [50]; however, no research has been conducted to compensate for packet loss in the formation control of AGVs via deep learning.
Mathematical System Model
In this section, we consider the generic mathematical system model of non-holonomic robots moving in the X-Y plane [51]. We chose a non-holonomic differential drive robot, which is widely used for AGVs in industry, as detailed in Section 1. In non-holonomic robots, the number of control variables is less than the number of state variables, which complicates formation control. This section discusses the non-holonomic constraints used to derive a kinematic model of non-holonomic robots. The differential drive robot's kinematics can be simplified using the unicycle model equations [51], in which the wheel is assumed to have a desired velocity at a specified heading angle. As shown in Figure 1, the robot's pose is determined by the coordinates (x, y) and the heading angle θ relative to the X and Y axes. There are also two control inputs, denoted by v and ω, which correspond to the linear and angular velocity, respectively.
The reference trajectory followed by the leader is represented by Equation (1):
$$\dot{x}_{ref} = v_{ref}\cos\theta_{ref}, \qquad \dot{y}_{ref} = v_{ref}\sin\theta_{ref}, \qquad \dot{\theta}_{ref} = \omega_{ref}, \qquad (1)$$
where $\theta_{ref}$ is the reference heading angle (the tangent angle at each point of the path), which can be obtained from the reference positions $(x_{ref}, y_{ref})$ via Equation (2):
$$\theta_{ref} = \arctan2(\dot{y}_{ref}, \dot{x}_{ref}) + k\pi, \qquad (2)$$
where $k = 0$ represents the forward drive direction and $k = 1$ represents the reverse drive direction. The linear velocity $v_{ref}$ of the robot is obtained with Equation (3) and the reference angular velocity $\omega_{ref}$ with Equation (4):
$$v_{ref} = \pm\sqrt{\dot{x}_{ref}^{2} + \dot{y}_{ref}^{2}}, \qquad (3)$$
$$\omega_{ref} = \frac{\dot{x}_{ref}\,\ddot{y}_{ref} - \dot{y}_{ref}\,\ddot{x}_{ref}}{\dot{x}_{ref}^{2} + \dot{y}_{ref}^{2}}, \qquad (4)$$
where the sign in Equation (3) relates to the desired drive direction (+ for the forward direction and - for the reverse direction).
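For illustration, the reference heading and velocities in Equations (2)-(4) can be evaluated numerically from a sampled leader path; the finite-difference derivatives and the sampling interval below are implementation assumptions.

```python
import numpy as np

def reference_inputs(x_ref, y_ref, dt, k=0):
    """Numerically evaluate theta_ref, v_ref, and omega_ref (Eqs. (2)-(4))
    from a sampled reference path; finite differences approximate the derivatives."""
    xd, yd = np.gradient(x_ref, dt), np.gradient(y_ref, dt)               # first derivatives
    xdd, ydd = np.gradient(xd, dt), np.gradient(yd, dt)                   # second derivatives
    theta_ref = np.arctan2(yd, xd) + k * np.pi                            # Eq. (2)
    v_ref = (-1 if k else 1) * np.hypot(xd, yd)                           # Eq. (3)
    omega_ref = (xd * ydd - yd * xdd) / np.maximum(xd**2 + yd**2, 1e-9)   # Eq. (4)
    return theta_ref, v_ref, omega_ref

# Example: a circular reference path of radius 2 m traversed over 60 s, sampled at 5 ms.
t = np.arange(0, 60, 0.005)
theta, v, w = reference_inputs(2 * np.cos(0.1 * t), 2 * np.sin(0.1 * t), dt=0.005)
```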
Controller Design
In this section, we detail the design of the linear quadratic regulator (LQR) tracking controller used for controlling the non-holonomic robots, so that they follow a desired trajectory through a leader-follower strategy. A similar design is detailed in [52] for a single robot following a reference trajectory. LQR is an optimal control technique which considers the states of the system and control inputs when making optimal control decisions and computes the state feedback control gain [53]. LQR was chosen because of its simplicity and ease of implementation, while providing good accuracy, as shown in [52] for a single AGV. The designed LQR is a simple controller in comparison with non-linear controllers such as a sliding controller, which has been widely used for AGVs in recent years. As shown in Figure 2, the kinematic controller (LQR controller) generates two control signals, the angular velocity (ω cl ) and the linear velocity (v cl ), for each robot's trajectory tracking. The current position of the robot (x, y, and θ) is compared with its expected reference trajectory (x re f ,y re f and θ re f ) and the trajectory tracking errors are fed to the LQR controller after the required transformations [52].
To design the LQR controller, let us consider a linear time-invariant (LTI) system:
$$\dot{x} = Ax + Bu, \qquad y = Cx + Du, \qquad (5)$$
where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, and $D \in \mathbb{R}^{p \times m}$ are the system matrix, control matrix, output matrix, and feedforward matrix, respectively, of the state-space model. The dimensions of these matrices correspond to the $n$ state variables, $m$ inputs, and $p$ outputs; $x$ is the state vector, $u$ is the control vector, and $y$ is the output vector. The LQR controller generates the control input that minimizes the cost function [54] given by Equation (6).
$$J = \int_{0}^{\infty} \left( x^{T} Q x + u^{T} R u \right) dt, \qquad (6)$$
where $Q = Q^{T}$ is a positive semi-definite matrix that penalizes the departure of the system states from equilibrium, and $R = R^{T}$ is a positive definite matrix that penalizes the control input [55]. The feedback control law that minimizes the value of the cost function is given by Equation (7):
$$u = -Kx, \qquad (7)$$
where $K$, the optimal state feedback control gain matrix, is obtained with Equation (8):
$$K = R^{-1} B^{T} P, \qquad (8)$$
and $P$ is found by solving the algebraic Riccati equation (ARE) (9) [56]:
$$A^{T} P + P A - P B R^{-1} B^{T} P + Q = 0. \qquad (9)$$
Thus, to design an LQR controller, the trajectory tracking problem should be written in the form of Equation (5). The trajectory tracking errors are given by Equation (10):
$$\begin{bmatrix} e_{x} \\ e_{y} \\ e_{\theta} \end{bmatrix} = \begin{bmatrix} x_{ref} - x \\ y_{ref} - y \\ \theta_{ref} - \theta \end{bmatrix}, \qquad (10)$$
where $e_{x}$, $e_{y}$, and $e_{\theta}$ are the errors in x, y, and the heading angle, respectively. To transform these errors into the robot coordinate frame, a rotation matrix is applied to the system as stated in Equation (11):
$$\begin{bmatrix} e_{1} \\ e_{2} \\ e_{3} \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} e_{x} \\ e_{y} \\ e_{\theta} \end{bmatrix}. \qquad (11)$$
The transformed errors (e 1 , e 2 , e 3 ) are fed to the LQR controller and it generates the required control signals ω cl and v cl , as shown in Figure 2. Applying Equation (1) to the time derivative of Equation (11) yields the state space model given by Equation (12).
Comparing Equation (15) with the standard form in Equation (5), the system is controllable if and only if its controllability matrix $\mathcal{R} = [B,\; AB,\; A^{2}B]$ has full rank. However, rank$(\mathcal{R}) = 3$ only if either $v_{ref}$ or $\omega_{ref}$ is non-zero, which is a sufficient condition only when the reference inputs $v_{ref}$ and $\omega_{ref}$ are constant; this happens only when the trajectory is a line or a circular path. The controllability of a driftless system can instead be derived from Chow's theorem if the system is completely non-holonomic [51]. The robot model represented by Equation (1) is completely non-holonomic since it has only one non-holonomic constraint, which is represented by Equation (16):
$$\dot{y}\cos\theta - \dot{x}\sin\theta = 0. \qquad (16)$$
Therefore, the robot cannot move in the lateral direction due to its wheels, and the system is controllable [51], as shown in Figure 2, where the LQR controller is given by Equation (17) and $K_{2\times3}$ is the gain matrix for three states and two inputs.
To obtain the LQR controller gain $K_{2\times3}$, the matrices Q and R are tuned, where Q is a positive-definite/semi-definite diagonal matrix related to the state variables, and R is a positive-definite diagonal matrix related to the input variables [57]. The Q and R used for the evaluation of our tracking system were selected according to [52].
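A minimal sketch of computing the LQR gain via Equations (8) and (9) with SciPy follows; the A and B matrices use the standard unicycle tracking-error linearization and the Q/R weights are illustrative assumptions, not the exact values taken from [52].

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(v_ref, w_ref, q_diag=(10.0, 10.0, 1.0), r_diag=(1.0, 1.0)):
    """Solve the ARE (Eq. (9)) and return K = R^{-1} B^T P (Eq. (8)).
    A and B are the assumed linearized tracking-error matrices around the reference."""
    A = np.array([[0.0,   w_ref, 0.0],
                  [-w_ref, 0.0,  v_ref],
                  [0.0,   0.0,   0.0]])
    B = np.array([[1.0, 0.0],
                  [0.0, 0.0],
                  [0.0, 1.0]])
    Q = np.diag(q_diag)
    R = np.diag(r_diag)
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.inv(R) @ B.T @ P      # 2x3 state-feedback gain

K = lqr_gain(v_ref=0.2, w_ref=0.1)
print(K.shape)   # (2, 3): two control inputs, three error states
```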
Formation Control under Packet Loss
In this section, we first evaluate the controller's performance under various packet loss conditions. Following that, we discuss the application of an LSTM model to a follower in order to predict the position of the leader when packet loss occurs.
Impact of Packet Loss on Formation Control
As demonstrated in [52], the LQR controller performs admirably in tracking the trajectory of a single robot along a variety of paths. Here, we extend the LQR tracking control problem [52] to the formation control of multiple robots in various packet loss scenarios. In the leader-follower approach, the leader's position is communicated to all followers at regular intervals; the communication interval is considered to be 0.05 s and the sampling interval 0.005 s. The effects of packet loss are depicted in Figure 3, which illustrates how packet loss results in an increased follower position error of around 4 cm. The simulation results in Figure 3 were obtained by considering a memory element in each follower robot that stores the most recent position of the leader; that is, whenever packet loss occurs, a follower makes use of the last received data stored in its memory to track the reference (leader) trajectory. This approach is called MCP, as detailed in Section 2. Here, we apply an LSTM prediction model to alleviate the impact of packet loss and compare the system's performance with MCP and GRU.
Figure 3. Follower tracking error signal with 50% packet loss, 30% packet loss, and without packet loss.
Long Short-Term Memory to Cope with Packet Loss
Our objective is to enhance the optimal control system (shown in Figure 2) using LSTM rather than designing a highly non-linear tracking controller. We believe that, with the recent advancements in machine learning techniques, the existing industrial controller's performance can be improved in the presence of various network uncertainties such as packet loss and delays. We propose to use LSTM, a type of recurrent network that reuses previously stored data and its dependencies, to predict the latest position of the reference trajectory (leader). LSTM has been widely used for the prediction of time series data [58][59][60]. As we are attempting to predict the leader's trajectory, which is a time-ordered sequence of locations, this problem fits within the LSTM framework. LSTM has been addressed in numerous articles [61][62][63] for learning and remembering long-term dependencies and information. By incorporating various gates, such as an input gate, an output gate, and a forget gate, LSTM improves on traditional recurrent neural networks (RNNs). These gates enable LSTM to achieve a trade-off between the current and the previous inputs while alleviating an RNN's vanishing and exploding gradient problems [61]. We detail the LSTM model along with its gates and parameters and evaluate the system's performance in the following sections.
Architecture of LSTM Prediction and Control
A basic LSTM network for prediction begins with an input layer, followed by an LSTM layer, a fully connected layer, and finally a regression output layer. The input layer provides the position of the leader to the LSTM layer. The hidden layer is in charge of storing and remembering the position data received from the leader. The output layer provides the leader robot's predicted position. Since the position of the leader is characterized by its x and y position and heading angle, we use three independent LSTM neural network models, one for predicting each of these states.
As illustrated in Figure 4, the LSTM is equipped with a "gate" structure that enables it to add or remove cell-state information and to selectively pass information through the different gates, as detailed below.
• Forget gate: The forget gate $f_t$, given by Equation (18), decides whether the information from the previous cell state $C_{t-1}$ should be discarded or not:
$$f_t = \sigma\big(W_f \cdot [h_{t-1}, x_t] + b_f\big), \qquad (18)$$
where $f_t$ is the forget gate, $\sigma$ is the sigmoid function, $W_f$ is the weight matrix, $b_f$ is the bias term, $h_{t-1}$ is the previous hidden-layer output, and $x_t$ is the new input.
• Input gate: This gate determines the information to be stored in the cell state and includes two parts, given by Equation (19): the first part, consisting of $\sigma$, identifies which values are to be updated, and the second part, using $\tanh$, generates the new candidate values:
$$i_t = \sigma\big(W_i \cdot [h_{t-1}, x_t] + b_i\big), \qquad \tilde{C}_t = \tanh\big(W_C \cdot [h_{t-1}, x_t] + b_C\big), \qquad (19)$$
where $i_t$ is the input gate, $\tilde{C}_t$ is the candidate state of the input, and $\sigma$ and $\tanh$ are the sigmoid and hyperbolic tangent functions, respectively. $W_i$ and $W_C$ are the weight matrices, $b_i$ and $b_C$ are the bias terms, $h_{t-1}$ is the previous hidden-layer output, and $x_t$ is the new input.
• Updating the cell state: Updating the cell state combines the new candidate memory and the long-term memory, as given by Equation (20):
$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t, \qquad (20)$$
where $C_t$ and $C_{t-1}$ are the current and previous memory states, $f_t$ is the forget gate, $i_t$ is the input gate, and $\tilde{C}_t$ is the input candidate state.
• Output gate: This gate determines the output of the LSTM, given by Equation (21):
$$o_t = \sigma\big(W_o \cdot [h_{t-1}, x_t] + b_o\big), \qquad h_t = o_t \odot \tanh(C_t), \qquad (21)$$
where $o_t$ is the output gate, $W_o$ and $b_o$ are the weight matrix and bias term, respectively, $h_{t-1}$ and $h_t$ are the previous and current hidden-layer outputs, $x_t$ is the new input, and $C_t$ is the current state of the memory block. The first part of Equation (21), which includes $\sigma$, determines which part of the cell state will be output ($o_t$), and the second part processes the cell state with $\tanh$ and multiplies it by the output of the sigmoid layer.
Application of LSTM for Leader Position Prediction
As previously stated, follower robots should have access to the latest position of the leader in order to maintain accurate formation control. When packet loss occurs, followers are unaware of the leader's true position. To cope with this, we use an LSTM to predict the leader's trajectory. The LSTM network is trained using the leader trajectory, and its states are then updated.
As shown in Figure 5, when no packets are lost, network states are updated with the actual observed leader position. In the event of packet loss, network states are updated using previous LSTM predictions, as observed leader position values are unavailable.
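The following is a minimal sketch of this fallback mechanism for one coordinate (the paper trains three such models, for x, y, and the heading angle); the Keras layer sizes and window length are assumptions, and training on the recorded leader trajectory is omitted here.

```python
import numpy as np
from collections import deque
from tensorflow import keras   # assumes TensorFlow/Keras is available

WINDOW = 10   # number of past samples fed to the network (an assumed hyperparameter)

def build_predictor():
    """One-step-ahead predictor for a single coordinate; illustrative layer sizes."""
    model = keras.Sequential([
        keras.layers.LSTM(64, input_shape=(WINDOW, 1)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

class LeaderEstimator:
    """Gives the follower either the received leader position or, on packet loss,
    the LSTM prediction; predictions are fed back into the history buffer."""
    def __init__(self, model):
        self.model = model
        self.history = deque(maxlen=WINDOW)

    def update(self, received_position=None):
        if received_position is None and len(self.history) == WINDOW:   # packet lost
            window = np.array(self.history).reshape(1, WINDOW, 1)
            position = float(self.model.predict(window, verbose=0)[0, 0])
        else:
            position = received_position
        if position is not None:
            self.history.append(position)
        return position
```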
Performance Evaluation
In this section, we evaluate the performance of the different prediction schemes LSTM, GRU, and MCP. We also evaluate the performance of the leader-follower formation control system with these prediction methods in different packet loss scenarios. All the performance evaluations are carried out through MATLAB/SIMULINK simulations.
Prediction Accuracy of LSTM, GRU, and MCP
Here, we discuss the prediction performance of LSTM, GRU, and MCP for a circular trajectory with 30% and 50% packet loss. LSTM was trained with the leader's trajectory positions; 80% of these data were used for training and 20% for validation. Figure 6 shows the validation and training loss for LSTM. Over the 400 training periods, the proposed LSTM model was able to learn to predict with the desired accuracy.
The errors between the actual and the predicted positions of the leader trajectory (X, Y, and heading angle) are shown in Figure 7. As shown in the figure, LSTM provided more accurate predictions than GRU and MCP. The root mean square error (RMSE) between the actual and predicted positions is shown in Table 1. From Table 1, it is clear that LSTM provided the most accurate prediction in comparison with GRU and MCP for both the 30% and 50% packet loss scenarios, while MCP had the worst prediction performance.
Simulation Results
Here, we discuss the formation control performance of four robots, with one of them acting as the leader. For each robot, the controller diagram depicted in Figure 2 was simulated using MATLAB/SIMULINK. The followers and the leader attempted to maintain their formation as they travelled along a pre-defined path. At regular communication intervals, the leader's position was communicated to all the followers via wireless broadcast communication. In the event of packet loss, LSTM predicted the leader's position for the followers. The LSTM and GRU model settings were identical and are listed in Table 2. The LSTM and GRU model parameters were carefully chosen to maintain a balance among prediction accuracy, computing resources, and calculation time. In our use case, the follower robot was expected to maintain a predefined distance from the leader. The accuracy of the LSTM location prediction was measured using the RMSE given by Equation (22). RMSE is a frequently used measure of the difference between predicted and actually observed values.
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(x_i - \hat{x}_i\big)^2}, \qquad (22)$$
where $x_i$ is the observed value, $\hat{x}_i$ is the predicted value, and $N$ is the number of data points.
The simulations were carried out with a sampling time of 5 ms and a communication interval of 50 ms. The leader positions communicated to the followers were vulnerable to packet loss, which was modelled using a Bernoulli distribution with a probability ρ equal to 0.3 and 0.5. Packet loss had a more significant impact on the formation control when the trajectory followed was not simple in nature (e.g., a straight line or its variants). We chose circular and eight-shaped paths for our evaluations.
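A minimal sketch of the Bernoulli loss model applied to the broadcast leader positions is shown below; a follower would feed the None entries to its predictor (for example, the LSTM estimator sketched earlier).

```python
import numpy as np

rng = np.random.default_rng(0)

def transmit_leader_positions(positions, loss_prob):
    """Model the broadcast channel: each 50 ms update is dropped independently
    with probability loss_prob (Bernoulli losses); None marks a lost packet."""
    drops = rng.random(len(positions)) < loss_prob
    return [None if lost else pos for pos, lost in zip(positions, drops)]

# Example: a circular leader path sampled at the 50 ms communication interval.
t = np.arange(0, 60, 0.05)
leader_xy = np.column_stack((2 * np.cos(0.1 * t), 2 * np.sin(0.1 * t)))
received = transmit_leader_positions(leader_xy, loss_prob=0.3)
print(sum(p is None for p in received) / len(received))   # empirical loss rate, close to 0.3
```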
Circular Path
Here, the leader and the followers move along a circular path with 30% packet loss. As illustrated in Figure 8, LSTM prediction compensated for packet loss better than GRU and MCP. The distance from the leader is depicted in Figure 9, which compares the prediction performance of MCP with those of LSTM and GRU.
The RMSE values of X, Y, and the heading angle of the follower are shown in Table 3. From Figure 8, it is clear that LSTM-based prediction can provide formation control performance comparable to that of the perfect communication scenario (0% packet loss), even with 30% packet loss. This is also reflected in the RMSE: comparing the 0% and 30% packet loss scenarios in Table 3, the RMSE for X is lower and the RMSE values for Y and the heading angle are very close, which demonstrates how well LSTM prediction compensates for packet loss. Overall, LSTM performed 10% better than GRU and 148.07% better than MCP. We repeated the circular trajectory scenario with 50% packet loss. The follower performance is illustrated in Figure 10; LSTM again outperformed GRU and MCP in terms of prediction. The distance from the leader is depicted in Figure 11, where MCP's performance is compared with those of LSTM and GRU. The RMSE of the X and Y positions and the heading angle of the follower for 50% packet loss is given in Table 3. Here, LSTM again outperformed GRU and MCP; even with 50% packet loss, LSTM performance was only slightly worse than with 0% loss. Overall, LSTM performed 21.41% better than GRU and 223.17% better than MCP.
Figure 11. Distance of the follower from the leader for the circular trajectory (50% packet loss).
Eight-Shaped Trajectory
Here, we detail the formation control performance of the leader and the followers while following an eight-shaped trajectory under different packet loss scenarios. Figure 12 shows the system's performance with 30% packet loss. The LSTM prediction was very close to the perfect communication scenario when compared with GRU and MCP. The distance from the leader is depicted in Figure 13. Table 4 gives the RMSE for the X, Y, and heading angle of the followers. LSTM performance (RMSE of X, Y, and heading angle) with 30% packet loss was even better than in the perfect communication scenario (0% packet loss). This demonstrates that LSTM prediction can completely compensate for packet loss and can even compensate for the quantization error in the leader positions caused by the discrete communication intervals. Overall, LSTM performed 5.20% better than GRU and 156.14% better than MCP. The formation control experiment along the eight-shaped trajectory was repeated with 50% packet loss. As illustrated in Figure 14, LSTM again outperformed GRU and MCP in terms of prediction. The distance from the leader is depicted in Figure 15; MCP performed worse than LSTM and GRU. The RMSE for the X, Y, and heading angle of the follower in the 50% packet loss scenario is presented in Table 4. As observed earlier, LSTM outperformed GRU and MCP, and its prediction performance was comparable with that of the perfect communication scenario (0% packet loss) even when sustaining 50% packet loss. Overall, LSTM performed 14.49% better than GRU and 250.53% better than MCP.
Conclusions
This study addressed the formation control problem of non-holonomic AGVs. Decentralized formation control of multiple AGVs was realized with leader-follower formation control using an LQR controller. The performance of the LQR controller was analyzed under packet loss and with packet loss compensation using an LSTM neural network model. The LSTM forecasts the leader's position from its previously received positions. When packet loss occurs, followers rely on the LSTM-generated position predictions to maintain their formation accurately. Numerous simulations were run to compare the performance of the LSTM with those of MCP and GRU. LSTM prediction significantly helps to compensate for packet loss along a variety of trajectories. Overall, LSTM performs 12% better than GRU and 194% better than MCP. In future research, we will consider communication delay and other connectivity aspects, and we plan to implement the proposed approach on a physical robot test platform.
|
v3-fos-license
|
2018-04-03T01:28:34.055Z
|
2012-09-01T00:00:00.000
|
264602825
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://europepmc.org/articles/pmc3954357?pdf=render",
"pdf_hash": "3eb8458295529cf261425d1c1b5a552d5f36c5a5",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43156",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3eb8458295529cf261425d1c1b5a552d5f36c5a5",
"year": 2012
}
|
pes2o/s2orc
|
Transplant of kidneys with small renal cell carcinoma in incompatible, heavily immunosuppressed recipients
Renal cell carcinoma (RCC) is considered a contraindication for transplant. However, an increasing number of cases of transplant kidneys with RCC have been reported with encouraging results. We present our experience of two cases of transplanting kidneys with small RCCs. Donors and recipients were aware of the presence and possible consequences of RCC in the transplanted kidney before transplantation. Cases were discussed in the multidisciplinary team meetings. Regular, 6–12 monthly follow-up of donors and recipients was carried out with ultrasonography and/or computed tomography to detect recurrence of RCC or new tumours in the recipients' transplant kidneys or the donors' native kidneys. The outcome was recorded. There were no suspicious masses in any of the kidneys during the follow-up period. The transplant kidneys are functioning.
Due to the shortage of organ supply, dependence on live kidney donation is increasing and restrictions for accepting donors are becoming less stringent. Donor safety is nevertheless of paramount importance. Donors therefore undergo extensive evaluation prior to surgery. This includes computed tomography (CT) renography or magnetic resonance angiography. These investigations lead to increased diagnosis of unexpected pathologies in donors.1 One of these diagnoses is the presence of small (<4cm) renal cell masses and renal cell carcinomas (RCCs). This is an overview of the management of RCC discovered either during the evaluation process of kidney donors or after donor nephrectomy in one of the UK's major kidney transplantation centres.
Methods
A retrospective review of the last five years of living donor kidney transplants was carried out. Two cases with RCC in the transplanted kidneys were identified.
Case 1
In 2006 an ABO-incompatible transplant took place between a 45-year-old healthy female kidney donor and her 57-year-old husband with end stage renal disease (ESRD) secondary to Alport syndrome. The donor also had a history of hypertension and post-immunosuppression lymphoproliferative disorder. The recipient had desensitisation with rituximab and plasma exchange according to the local ABO-incompatible transplant protocol. During the donor nephrectomy, a 0.5cm cystic lesion was discovered in the lower pole of the kidney. The lesion was excised with part of the surrounding healthy-looking tissue and a frozen section procedure was performed, which proved the lesion to be clear cell RCC with a cancer-free surgical margin.
The case was discussed with the recipient, and the risk of recurrence of the RCC in the transplant kidney and consequences of this were explained thoroughly.The recipient decided to go ahead with the transplant, accepting the risks explained to him.
Case 2
In 2008 a 72-year-old healthy female potential donor was discovered to have a 14mm suspicious exophytic mass at the posterior medial aspect of the upper pole of the left kidney during the donor workup with pre-operative magnetic resonance angiography. CT of the chest, abdomen and pelvis confirmed the presence of the lesion and excluded the presence of any other suspicious lesions.
The potential recipient was the donor's 71-year-old sister with ESRD secondary to focal segmental glomerulosclerosis and a history of a right mastectomy for breast carcinoma. The mastectomy was performed five years prior to the transplant and the patient had been free of breast cancer during that period. However, the patient had a B cell positive crossmatch, and desensitisation with rituximab and plasma exchange was necessary before the transplant, according to the local protocol.
The case was discussed with the donor and the recipient, explaining the possible diagnosis of RCC in the suspicious mass as well as the possible recurrence of the RCC in the transplant kidney and its consequences. Both the donor and recipient accepted the risk and the transplant was carried out.
In these two cases, regular follow-up by ultrasonography every 3 months and CT every 6-12 months was carried out for both the donor and recipient to monitor for recurrence of the original tumour or development of a new tumour. The outcomes of the donors and recipients were recorded.
Results
Both donors are alive with no clinical or radiological evidence of local, contralateral or distant recurrence of the tumour. Neither of the recipients has developed any suspicious lesions in the transplant kidneys or distant metastasis elsewhere. However, the recipient in the first case developed a haemangiosarcoma in a right brachiocephalic arteriovenous fistula 18 months after the transplant. This required a high above-elbow amputation. Both grafts have been functioning well.
Discussion
Several studies have demonstrated a significant survival advantage with transplantation over dialysis, particularly in elderly patients and in those with co-morbidities.2 There are few reports of transplanting kidneys with small RCC. Brook et al reported 43 transplant patients with previously diagnosed small RCC.3 These patients had significant co-morbidities. Only one recurrence of RCC in the transplant kidney after nine years of follow-up was reported. Furthermore, Sener et al reported no recurrence of the RCC in 3 patients after a median follow-up duration of 15 months.4 In our study, as well as in other studies, the kidney recipients were high-risk dialysis patients in whom the morbidity of dialysis outweighed the risks of transplanting kidneys containing a small RCC. Transplantation provides a substantial survival advantage compared with dialysis in this group of patients.5 Our approach in the above cases was to engage in thorough multidisciplinary team discussion, to explain to both the donors and recipients all the possible consequences, and to follow up the donors and recipients closely with ultrasonography, CT and biopsy of any suspicious lesions. We recommend that these kidneys are used only in high-risk recipients where the risks of dialysis outweigh the risks of tumour recurrence. There is no need for a change in immunosuppression protocol in these cases.
|
v3-fos-license
|
2019-07-19T20:04:04.212Z
|
2019-06-15T00:00:00.000
|
197556709
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.ijtsrd.com/papers/ijtsrd24038.pdf",
"pdf_hash": "005ead0eb02f7bd8708a136ed26a5f6b1bead3c0",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43160",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "2516dd7a45370025aee4cb8fa50ccd742ba80b1b",
"year": 2019
}
|
pes2o/s2orc
|
Solar and Electric Powered Hybrid Vehicle
ABSTRACT: This system is designed to reduce the greenhouse effect caused by the burning of fossil fuels and to reduce environmental pollution. The design, construction and implementation of the Solar and Electric Powered Hybrid Vehicle (SEPHV) are presented in this paper. The SEPHV's power supply can be charged from solar panels or from a normal AC power source. In this system, a series combination of six 12V lead-acid batteries is used for the driving motor power supply (40~60V DC). Six 50W solar panels are used to charge the batteries. The charge controller is designed to regulate battery charging and to protect the batteries from overcharge by the solar panels as well as over-discharge by the driving motor. A wireless message display system for the SEPHV was also designed using the Arduino software. The weight of the car without load is 300kg. In tests, the SEPHV was capable of accommodating at least four persons (250kg) at an average speed of 57km/hr. This approach can reduce all kinds of pollution and improve fuel economy.
INTRODUCTION
Fossil fuels such as petrol and diesel are expensive to extract and use, and transporting these fuels to rural areas has itself become a problem. The major problem is the greenhouse effect caused by burning fossil fuels. Solar vehicles depend on PV cells to drive their BLDC motors.
Unlike solar thermal energy, which converts solar energy to heat, PV cells directly convert sunlight into electricity. The Solar and Electric Powered Hybrid Vehicle (SEPHV) contains the solar panels, a Brushless DC (BLDC) motor, a charge controller, batteries, a speed controller and an inverter. The inverter can be used to charge the batteries from the normal AC supply during sunless conditions. A zero-emission solar/electric vehicle is powered by photovoltaic/electric energy by means of solar panels and the AC supply, with electric energy stored in batteries. The PV array has a particular operating point at which it can supply the maximum power to the load, generally called the Maximum Power Point (MPP). The SEPHV can charge itself from both solar and electric power. The vehicle is altered by replacing its engine with a 1600W, 60V brushless DC (BLDC) motor. The electric supply to the motor is obtained from a six-battery set of 60V, 20AH. Six solar panels, each with a rating of 60W, 20A, are attached to the top of the vehicle to capture solar energy, which is then controlled with the help of the charge controller. This is the main source of energy for charging the battery. The household electric supply of 220V is stepped down to 12V and converted to DC with a rectifying unit to charge the battery; this is used as a backup or auxiliary source of energy. The vehicle's speed can be controlled. The Solar and Electric Powered Hybrid Vehicle (SEPHV) is thus a boon to the present world, providing a fuel-free mode of transport.
II. System Block Diagram
The Solar and Electric Powered Hybrid Vehicle (SEPHV) is altered by replacing its engine with a Brushless DC Motor (BLDC). The motor runs from a battery set which is charged by two methods. In the first method, solar panels mounted on top of the SEPHV produce a DC voltage from the available solar radiation; the amount of DC voltage developed is controlled using a charge controller. In the second method, the normal AC supply is stepped down and rectified to produce a DC voltage. These two methods are combined to charge the batteries. The charge controller controls the depth of discharge (DOD) of the battery in order to maintain the life of the battery. The motor controller can be used to control both the speed and the electrical braking. Solar charging controllers are designed to prevent the solar and electric hybrid vehicle (SEPHV) batteries from overcharging and excessive discharging, thereby protecting the investment and extending battery life. The electrical energy thus produced is fed to the batteries, which are charged and used to run the 60V BLDC motor. The batteries are initially fully charged by the PV panels and the electric supply. The batteries are directly connected to the motor through a solenoid control circuit, with the solenoids acting as the speed control switches. Initially, when the first accelerator contact is pressed, Solenoid-I activates and a single battery is connected to the motor. When the second accelerator contact is pressed, Solenoid-II activates and two sets of batteries are connected to the motor. When the third accelerator contact is pressed, Solenoid-III activates and the three sets of batteries are connected to the motor.
IV. System Components
1. Solar Panel
Polycrystalline silicon solar panels were selected for this system. Polycrystalline panels use less silicon, which makes them somewhat less efficient; however, their design, which features strips of silicon wrapped around rectangular conductive wires, allows them to function more efficiently in practice. In certain circumstances, such as rooftop use, polycrystalline silicon solar panels can yield good efficiency.
2. Charge Controller
Simple charge controllers use basic transistors and relays to control the voltage by either disconnecting or shorting the panel to the battery. Maximum Power Point Tracking (MPPT) controllers are highly efficient and provide the battery with 15-30% more power: they track the voltage of the battery and match the panel output to it, ensuring maximum charge by converting the high output from the solar panels to the lower voltage needed to charge the batteries. Pulse Width Modulated (PWM) controllers are another common design.
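To illustrate the MPPT behaviour described above, the sketch below shows a generic perturb-and-observe loop. This is a textbook illustration rather than the controller used in the SEPHV, and the hardware access functions are hypothetical placeholders.

```python
def perturb_and_observe(read_panel_voltage, read_panel_current,
                        set_duty_cycle, duty=0.5, step=0.01, iterations=1000):
    """Generic perturb-and-observe MPPT loop (illustrative only).

    The duty cycle of the DC-DC converter is nudged in the direction that
    increases panel power, so the operating point settles near the MPP.
    """
    last_power = read_panel_voltage() * read_panel_current()
    direction = 1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        set_duty_cycle(duty)
        power = read_panel_voltage() * read_panel_current()
        if power < last_power:
            direction = -direction   # overshot the maximum power point
        last_power = power
    return duty
```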
3. Battery
Lead-acid batteries can store a large amount of electrical energy, which they are capable of discharging very quickly if any form of conductor is placed across their terminals. Lead-acid batteries contain sulfuric acid, which is corrosive. They also give off hydrogen while being charged, which is explosive when mixed with air and can be ignited by a small spark.
4. BLDC Motor
In a BLDC motor, the current-carrying conductor is stationary while the permanent magnet moves. When the stator coils are electronically switched by the supply source, they become electromagnets and produce a field in the air gap. Although the supply source is DC, the switching generates an AC voltage waveform with a trapezoidal shape. Due to the force of interaction between the electromagnet stator and the permanent magnet rotor, the rotor continues to rotate.
V. Wireless Message Display System
A Dot Matrix Display (DMD) board is a matrix of LEDs connected in a 16x32 pattern. Messages can be transferred from a mobile phone to the DMD board using Bluetooth. The connection of the hardware components is highlighted below. The Bluetooth module is connected with pins 0 and 1 of the Arduino wired to the TX and RX of the Bluetooth module respectively, and it is powered by the VCC (5V) and GND of the Arduino. Pins 6 to 13 and GND of the Arduino Uno are used to control the DMD. The software used in this work consists of two parts: the first is the code for communication between the Arduino Bluetooth module and the mobile phone, and the second is the mobile application used to send the text from the mobile phone to the display.
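The mobile application itself is not listed in the paper; as a rough stand-in, the sketch below shows how a host paired with the Bluetooth module could push a message over a serial link using pyserial, which the Arduino then scrolls on the DMD. The port name, baud rate, and newline framing are assumptions.

```python
import serial  # pyserial

def send_message(text, port="/dev/rfcomm0", baud=9600):
    """Send a text message to the Bluetooth module paired with the Arduino."""
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(text.encode("ascii", errors="ignore"))
        link.write(b"\n")   # newline marks the end of the message

send_message("SOLAR AND ELECTRIC POWERED HYBRID VEHICLE")
```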
VII. Discussion
This paper discusses fossil fuels, which are considered an essential and ideal source of energy, and a solar-electric alternative to them. The main results show that the solar car can reach a velocity of 57 km/h and is stable and safe. The solar car is powered by a BLDC motor. While gasoline vehicles are noisy and pollute the air, solar electric vehicles are smooth and silent and emit no pollution while driving. The idea of the solar vehicle is new. The system is composed of solar panels, a brushless DC motor, a charge controller, 12V lead-acid batteries, and a speed controller for driving the smaller parts. The individual experience that each one of us gained is priceless, informative and valuable. We learned more about power generation and utilizing renewable energy, and we had a chance to put what we learned into a real-life project. This was achieved through dedication, passion and hard work. A few conclusions were drawn at the end of our project: implementation of electric cars is possible in Myanmar, and solar panels can be used in electric cars in Myanmar for cleaner energy, due to the abundance of sunlight throughout the year.
VIII. Further Extension
In the near future, the SEPHV can be designed and constructed with remote-control steering, allowing the SEPHV to be moved according to the driver's wishes; the driver will thus control the movement and speed of the solar car. The car can be built with a suitable chassis and advanced suspension and braking systems to be flexible and comfortable. In this SEPHV, only one BLDC motor is used to drive the car, but the performance can be improved by increasing the number and power of the motors. Solar cars will be easily incorporated with future technologies.
|
v3-fos-license
|
2019-05-11T13:06:52.129Z
|
2018-01-08T00:00:00.000
|
55028253
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ijnpt.20180401.11.pdf",
"pdf_hash": "1f30a8b0c19f89d16f44e0581d1b71804623113d",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43161",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"sha1": "6b484a4654d9a4049e838777b0b4a0ca3e6ac079",
"year": 2018
}
|
pes2o/s2orc
|
An Exploration of the Knowledge of Informal Caregivers on Ageing Related Health Conditions at Matero and Chawama Old People’s Homes, Zambia
Caring for older adults requires a multidisciplinary approach, and this includes a primary caregiver and a knowledge of the physiology and epidemiology of aging. The objective of this study was to explore the knowledge of caregivers on ageing-related health conditions at Matero and Chawama old people's homes in Lusaka. A qualitative case study design with a sample size of 12, based on the criterion of theoretical saturation, was used. The results captured caregivers' views on ageing-related health conditions, the common ageing-related health conditions seen at both nursing homes, and the clinic/hospital and nursing home management of these conditions. The challenges faced by caregivers were also revealed; they included the attitude of the elderly, inadequate equipment, lack of transport, financial challenges, work overload and participants' characteristics such as personal needs, age and gender, all of which influence caregivers' work output. Caregivers knew most of the ageing-related health conditions and are knowledgeable on the nursing management of these conditions. Various factors affect caregivers' knowledge of ageing-related health and management skills. Challenges such as financial constraints, lack of transport and inadequate equipment were also pointed out and are seen to greatly influence the work output of caregivers.
Background
Human aging is associated with a wide range of physiological changes that limit normal functions and render a human being more susceptible to a number of diseases [1,2]. Changes associated with age have an impact on the function of every body system, even in the healthiest older people [3]. Normal age-related changes may be accompanied by chronic health problems such as diabetes or heart disease, and the management of many such chronic conditions may increase the complexity of care. Caring for older adults requires a multidisciplinary approach and includes a primary caregiver who coordinates care with other team members, including physiotherapists, occupational therapists, pharmacists, nurses, and other health professionals [4]. Caregivers are essential partners in the delivery of complex health care services. Unlike professional caregivers such as physicians and nurses, informal caregivers provide care to individuals with a variety of conditions, most commonly advanced age [5]. Elderly patients with multiple comorbid conditions have intricate treatment protocols that require caregiver involvement, further complicating this already difficult care. Because better treatments have extended the life spans of most patients with chronic illnesses, caregiver involvement often is required for several years [6].
Caring for people aged 65 and older can be complicated and requires specialized knowledge of this demographic group. Knowledge of the physiology and epidemiology of aging helps manage conditions that have special significance in the elderly. This information is also vital for planning and implementing interventions that will help in further management [7]. The amount or level of knowledge possessed by those who are rendering care has the potential to either positively or negatively affect the outcome of a health condition. However, little information is available about the knowledge and skills that informal caregivers need in order to provide care, or how their knowledge and skills affect care [8].
Therefore, exploring the knowledge that caregivers have on health conditions that are mainly attributed to the process of aging is vital for the successful management of the aged.
Design and Setting
This study utilized a qualitative case study design conducted at Chawama and Matero homes for the aged. This type of design was selected because it facilitated exploration of a phenomenon within its context by using a variety of data sources.
Chawama and Matero homes for the aged are the main homes in Lusaka, Zambia caring for the aging population. Chawama Home for the aged is under the Cheshire Homes and is a Divine providence home for the aged, homeless and the orphans. It is located in Lusaka's Chawama area. It was first established in 1988 under the name St. Theresa and was officially opened in 1992. It accommodates an average of 20 aged people and 25 orphans.
Matero home for the aged on the other hand is an after care centre running under the government of the Republic of Zambia under the Department of Social Welfare. It is located in Lusaka's Matero area. Both these centres are situated close to health centres in the said areas.
Selection of Participants and Data Collection
The study utilised a sample size of twelve (12) caregivers, determined by theoretical saturation and recruited through convenience sampling. This included 6 caregivers from each centre. Of the six, two key informants were selected from each centre, making a total of 4 key informants. The remaining 4 caregivers from each centre (i.e., 4 from Chawama and 4 from Matero) were grouped into two (2) focus group discussions.
Data collection was done with the assistance of an interview guide, an audio recorder and note taking to describe the study setting and non-verbal communication.
The data was later transcribed and reported verbatim.
Analysis
Data was analyzed using thematic analysis, a method used in identifying, analyzing and reporting information in themes. The analysis was done according to the step-by-step six-stage process of thematic analysis. These are familiarization, generation of initial codes, searching for themes, reviewing themes, defining and naming themes and producing the final report [9]. Data transcribing was done as part of the familiarization stage.
Transcripts were carefully and thoroughly read and re-read after which initial coding and categorization of themes was done. Responses that were related through content and context were categorized as themes. This was continued until no new themes came up. Transcribed data was transferred to Nvivo version 10 for arrangement, coding and merging into themes. Codes were categorized according to similar contents and then developed into broader themes.
Results
Results revealed ageing related health conditions which encompassed caregivers' views on ageing-related health conditions and common ageing related health conditions seen by the caregivers at the nursing homes. They were of the view that some conditions are peculiar to aging and some are most commonly associated with the younger population. They were also knowledgeable on some of the conditions that specifically affect men and those that are most commonly associated with women. This was stated by one of the caregivers who said: I know that when people age, they come with certain conditions peculiar to aging like Alzheimer's or dementia, sexually transmitted diseases is more peculiar to the youth than the aged.
Additionally, most of the participants did not know the exact names for some of the conditions, but rather they described them based on the symptoms. Different age-related health conditions were mentioned during the interviews and discussions, although some of the conditions were not explicitly mentioned by their exact names. Terms used to describe them were collectively used to point to one particular condition, such as waist pains and backache to mean low back pain, and high blood pressure to mean hypertension. When asked which ageing-related health conditions participants had seen to be more common than others at both nursing homes, it was revealed that dementia, osteoporosis, hypertension and low back pain occurred more often than the other conditions: "…As a person grows old, there are complications… such as waist pain… the back bone as we grow, it also begins to bend... so the waist is the biggest problem followed by the legs.." Another participant narrated the following; "And then, I think the elderly…they are slowly losing their memory" Forgetfulness is a very common feature among the elderly and it is something expected; one participant mentioned this attribute during the interview and had this to say; "The other thing is forgetfulness. When they leave something, they forget were they left it..." "…mostly what is common for me is Osteoporosis and BP (Hypertension). Those are the ones I'm finding often… And also dementia, they forget a lot…"
Emerging Themes
Aside from the questions centred on the objectives, the interviews and the discussions also revealed some themes that were of relevance to the study being undertaken.
Society's Attitude and Perception Towards Ageing
Interviews with the key informants revealed that there is lack of care from society and even family towards the aging population. One of the key informants mentioned that the greatest challenge has been to learn that the family system has broken down and that society no longer has value for lineage.
"…The greatest challenge for me has been to learn that the family system has broken down. Society no longer has that value of lineage. People don't care about people when they age or when they have a health issue…" Another key informant also said that the elderly are being left and neglected by their families and society. She attributed this negligence of the aged by family and society to the fear of caring for elderly people when they are about to die.
"…what is happening also for the elderly…they are left by the family. Not only by the family, but people around also, they fear or they are afraid that someone will die, what shall we do...?"
Connotation of Aging to Witchcraft
The other point that was raised by both the participants from the FGDs and the key informants was the tendency of society to associate old age to witchcraft which was also one of the things precipitating into negligence.
"… People think when a person becomes old, then that person becomes a witch... I don't know why it is like that... So mostly, these people who we receive here, some are just dumped. Just right outside the gate…Others are left at the police station…" One of the key informants related this tendency to the cultural set up and she assumed that promoting the cultural values might help to get rid of this vice.
"…And also I think, this is a very big problem…the elderly they are suspected that they are witches...I think my assumption would be maybe improving the culture…"
Strengthening of Policies
Another important point that emerged from the interviews with the key informants was the need for government to strengthen the laws and policies that govern the aging population in Zambia. One of the key informants said there is no law which ensures the protection of the elderly when they are abused even by their own relatives.
""…There are a lot of elderly people out there who are being abused. So there needs to be a strengthening of policies and laws… there is no law, if you are abusing your grandmother today, I can report you to the police but you can defend yourself because you are the one who is related to your grandmother... There just needs to be a law that elderly people need to be taken care of…" Another key informant was of the view that the government is not playing their part in taking care of the aging population. She further added that government needs to take care of its people.
"Among the challenges I would also say that the government is also not taking care of its people. The government has a part to play but they don't…"
Factors Influencing Caregivers' Knowledge
Caregivers expressed a number of things that influence their knowledge on different age related health conditions.
(i) Work Experience
Work experience was one of the major things influencing the level of knowledge that the caregivers have on different ageing-related health conditions. During the discussions and interviews with the participants, it was revealed that most of the information they have on health conditions affecting the elderly and the management of these conditions is based on experience and the number of years they have worked at the nursing homes.
"…following the number of years we have been working here, from experience just like this, I can say the information is a lot…the time that I have worked here, and the diseases that I have seen here, they are a lot" However, other caregivers were of the view working for a long period of time as a caregiver had a tendency to make caregivers lose interest in the job further affecting their performance towards work. One of them had this to say; "…when a person does the same job over and over again, sometimes they end up getting tired...or lose interest in the job…so once the interest is gone, it means, even the performance will be poor… "
(ii) Level of Education
Most of the key informants mentioned that the level of education for caregivers was also another important aspect influencing caregivers' knowledge. They said that the level of education gave an understanding of ageing-related health conditions and the patience needed towards the elderly in caring for their conditions. "Level of education of the caregivers does influence because if you don't have knowledge, you will never understand them (the health conditions). And you will never know how to react at certain situations...And also you know having that patience and how to understand the elderly…"
(iii) Lack of Training in Caregiving Services
Another important thing that was thought to influence caregivers' knowledge was the lack of training in caregiving services. One of the key informants mentioned that there is a lack of education and training in caregiving services leading to a lack of understanding and lack of respect for work ethics. She further mentioned that lack of respect and understanding for work ethics is one of the leading causes of abuse towards the elderly.
"There is a lack of education and training. Also lack of understanding the conditions...And also lack of respecting ethics… That's why you find that there is a lot of abuse because people don't know what the ethics are..."
(iv) Lack of Background Information
The discussions and interviews also pointed out that lack of background information about the elderly clients being cared for had an influence on the knowledge that caregivers had. One of the key informants said that if an elderly person is brought to them, it is easier to understand the conditions if the caregivers know about that elderly person's past.
One of the participants said that it is sometimes hard for them to tell what is exactly wrong with the elderly because there are some who come to the homes when they are already sick. He further explained that when the elderly are taken to the hospital, it is difficult to explain exactly what is wrong with them to the medical personnel at the hospital.
"…sometimes we are found with those elderly people who were maybe just picked by the police… they come here when they are already sick. It is hard for us to tell exactly what is wrong with that person...when we take them to the hospital…the medical staff at the hospital ask us … how did the problem start ?so it becomes difficult to answer..."
Root cause of Knowledge of Informal Caregivers on Ageing-related Health Conditions
The root cause analysis diagram below illustrates how the findings of the study are linked to knowledge of caregivers on ageing-related health conditions
Discussion
The study results showed objective-based or predetermined themes and also pointed out other major themes that had emerged from FGDs and interviews with the key informants.
The objective based findings adequately showed the common ageing-related health conditions at the nursing homes including the caregivers' views on these conditions and their management. The findings also brought out what influences knowledge of caregivers regarding ageing-related health conditions and the challenges faced by caregivers.
Participants knew a number of health conditions affecting the elderly at the nursing homes. They are also knowledgeable on the conditions that specifically affect men and those that are most commonly associated with women. This was pointed out by one of the participants who said that when men reach 70 to 80 years, they develop the problem of tubes blocking (obstructive nephropathy), while women usually complain of painful legs (osteoarthritis) and waist pains (low back pain). Similarly, according to a report by WHO on gender, health and ageing, the basic diseases which affect older men and women are the same. However, rates, trends, and specific types of these diseases differ between women and men. Perhaps more importantly, the gender picture of a given society has a great bearing on the health of the aged. It is further explained that osteoarthritis is more common in older women than in older men [10]. Amongst the many conditions mentioned, there were a few that were pointed out by the participants to be common and more frequently occurring than others. These included dementia, low back pain, hypertension, and osteoporosis. A study done by Bain [11] revealed similar findings.
The study found that several factors influence caregivers' knowledge of ageing-related health conditions; one of them was level of education. Although the participants knew most of the conditions affecting the elderly at the nursing homes, one of the things affecting their ability to identify the ageing health conditions by name, and thus influencing their knowledge, was level of education. Most of the participants from the FGDs, unlike the key informants, did not know the conditions by their exact names. This is because most of the participants from the FGDs had only a secondary level of education. Although some of the participants from the FGDs felt that the level of education had little to do with the understanding and management of ageing-related health conditions, a higher level of education showed a better ability to identify the conditions, as seen from the interviews with the key informants. Furthermore, the key informants stated that a higher level of education gave a better understanding of the conditions and of the elderly themselves. Similarly, the literature indicates that having a lower level of education provides a weaker platform for understanding frail adults and increases the informal caregiver's burden [8]. However, one study [12] stated that although the caregiving literature is vast, much of it is based on cross-sectional analyses of relatively small opportunity samples, and that confounding effects such as the caregiver's level of education and health status have often not been controlled for in most study designs or statistical analyses [12]. The present study was qualitative and is therefore different from this scenario.
The other thing influencing knowledge of caregivers on ageing-related health conditions is the lack of training in caregiving services. In the management of ageing-related health conditions, Authors have stated that many caregivers focus on leg strength and balance training in the elderly [13]. However, therapy designed to improve mobility in elderly patients such as caregivers focusing on leg strength and balance has not been fully attainable because most of the caregivers from both Chawama and Matero homes for the aged are not trained in any form of medical care or physiotherapy except for one caregiver from Chawama home for the aged who has been trained in community health work and HIV/AIDS management. Due to the lack of training in medical care and management skills of ageing-related health conditions, the caregivers do not have adequate knowledge and skills needed to help ensure the optimum wellbeing of the aged. In a similar study done by Murat et al., [14] on the knowledge and attitude of caregivers of Parkinson Disease (PD) patients, it was shown that 65% of these caregivers are not experienced or trained in providing care to PD patients or to the elderly with a chronic disease [14]. In another similar study done by Yadav et al, [15] it was revealed that there is a lack of knowledge about Alzheimer's disease among Hispanic older adults and caregivers precipitating into a lack of knowledge and skills on how to handle this condition [15]. Another researcher reports also that little information is available about the knowledge and skills that informal caregivers need to provide care or how their knowledge and skills affect care [16]. Another study found that most caregivers in old people's homes in Zambia lacked even basic training in elderly care and related aspects.
Furthermore, a retrospective study by Changala et al [12] identified approach considerations of comprehensive health care that had shown the potential to improve the quality, efficiency, or health-related outcomes of care for older persons.
Conclusion
Several opportunities were found to improve the knowledge of caregivers and, consequently, the quality of health and physiotherapy care provided to the elderly population. Many of the strategies we suggest for improving service delivery of elderly care are in keeping with the emphasis on enhancing person-centred care. Through understanding the knowledge base and management of elderly patients, we hope that quality of life and other outcomes will be improved for elderly patients.
|
v3-fos-license
|
2018-01-08T18:01:20.580Z
|
2017-10-20T00:00:00.000
|
11036749
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.nature.com/articles/cdd2017177.pdf",
"pdf_hash": "dc6f77dcabc867815c860aeefff0e4014975fc93",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43162",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "dc6f77dcabc867815c860aeefff0e4014975fc93",
"year": 2017
}
|
pes2o/s2orc
|
Resolution of inflammation and sepsis survival are improved by dietary Ω-3 fatty acids
Critical conditions such as sepsis following infection or traumatic injury disturb the complex state of homeostasis and may lead to uncontrolled inflammation resulting in organ failure, shock and death. They are associated with endogenous mediators that control the onset of the acute inflammatory response, but the central problem remains the complete resolution of inflammation. Omega-3 enriched lipid emulsions (Ω-3+ LEs) have been used in experimental studies and clinical trials to establish homeostasis, yet with little understanding of their role in the resolution of inflammation and tissue regeneration. Here, we demonstrate that Ω-3 lipid emulsions (LEs) orchestrate inflammation-resolution/regeneration mechanisms during sterile peritonitis and murine polymicrobial sepsis. Ω-3+ LEs limited neutrophil infiltration, reduced pro-inflammatory mediators, reduced classical monocyte recruitment, enhanced non-classical monocyte/macrophage recruitment and increased efferocytosis in sepsis. The actions of Ω-3+ LEs were 5-lipoxygenase (5-LOX) and 12/15-lipoxygenase (12/15-LOX) dependent. Ω-3+ LEs shortened the resolution interval by 56%, stimulated the endogenous biosynthesis of the resolution mediators lipoxin A4, protectin DX and maresin 1, and contributed to tissue regeneration. Ω-3+ LEs protected against hypothermia and weight loss and enhanced survival in murine polymicrobial sepsis. We highlight a role of Ω-3+ LEs in regulating key mechanisms within the resolution terrain during murine sepsis. This might form the basis for a rational design of sepsis-specific clinical nutrition.
The initiation and resolution of inflammation are complex processes characterized by the release of mediators that control the migration and the function of immune cells. This process is essential to exert successful protection against injury and/or infection. If particularly the resolution process fails, inflammation can become chronic leading to collateral tissue destruction and loss of functional organ integrity. Newly identified bioactive resolution phase lipid mediators such as arachidonic acid (AA)-derived lipoxins, eicosapentaenoic acid (EPA)-derived resolvins and docosahexaenoic derived resolvins, protectins (PDs) and maresins (MaRs) and their bioactive peptide-conjugate pathways are biosynthesized during the resolution phase. These so-called specialized pro-resolving lipid mediators (SPMs) actively stimulate cardinal signs of resolution, namely limitation of neutrophil influx, the counterregulation of pro-inflammatory mediators, apoptosis of PMN and the active clearance of apoptotic cells and invading microorganisms. 1 Sepsis, a syndrome that is particularly marked by failed resolution of inflammation predisposes to metabolic and immunological dysfunction that causes high morbidity and mortality worldwide. 2,3 To date, however treatment for sepsis is nonspecific, focused primarily on symptomatic therapy. In recent years lipid emulsions have been tested in experimental and clinical trials in critically ill to evaluate a possible beneficial influence on inflammation. This showed a controversial beneficial role for Ω-3 supplementation in critically ill, [4][5][6] meaning that so far, treatment strategies with reduced load of Ω-6 fatty acids such as fish oil-based, olive oil-based or medium-chain triglycerid-based LEs have not been recommended for critically ill because of the insufficient data. 7 Discrepancies are still considered in the methodological bias including the optimum composition, dose and timeframes, and indication for parenteral LEs. In particular little information is available about the mechanism of LEs during the onset and the resolution of acute inflammation and the tissue regeneration.
In this report, we show that administration of Ω-3+ LEs controls inflammation-resolution mechanisms. Using a self-limited acute inflammation model and a murine polymicrobial sepsis model, we found dietary Ω-3+ LEs to stop neutrophil infiltration, reduce pro-inflammatory cytokines and enhance anti-inflammatory mediators. This was associated with a strong reduction of classical monocytes and an increase in non-classical monocyte/macrophage (MΦ) recruitment. Moreover, Ω-3+ LEs enhanced efferocytosis, whereas these phagocyte responses were lost in 12/15-LOX −/− mice, suggesting that the actions of Ω-3+ LEs were 5-LOX and 12/15-LOX dependent. Ω-3+ LEs shortened the resolution interval, stimulated the local endogenous biosynthesis of SPMs and enhanced tissue regeneration during peritonitis compared with vehicle control or the administration of Ω-3− LEs. Moreover, Ω-3+ LEs protected against hypothermia and weight loss and enhanced survival in murine polymicrobial sepsis. Together, these results demonstrate that Ω-3+ LEs control key innate protective mechanisms during the onset and resolution of acute inflammation and promote tissue repair and regeneration.
Results
Ω-3 + LE stimulates resolution of inflammation and promotes tissue regeneration. Given that inflammation and its timely resolution are held to be crucial for sufficient inflammatory responses that enable inflamed tissues to return to homeostasis we sought to determine the impact of Ω-3 + LEs, emulsions composed of long-chain, medium-chain fatty acids and fish oil (50:40:10) with Ω-6: Ω-3 ratio of 2.2:1 (Supplementary Table 1) on the dynamic of leukocytes. WT mice were pretreated with Ω-3 + LEs for 24 h prior injection of ZyA and lavages were collected at 4, 12, 24 and 48 h. Ω-3 + LE-treated mice displayed a drastic reduction in leukocyte infiltrates which was associated with a significant decrease of the PMN levels observed throughout the course of the inflammation when compared with vehicle control (Figure 1a). To directly corroborate the hypothesis that Ω-3 + LEs influence critical properties of neutrophils at the onset of acute inflammation, we sought to determine the impact of Ω-3 + LEs on the leukocyte-endothelium interactions by performing intravital microscopy of the murine cremaster. As shown in Figures 1b, Ω-3 + LEs significantly decreased the neutrophil adherence, the neutrophil migration and increased the rolling velocity in postcapillary venules. Representative microcirculation with and without Ω-3 + LE is shown in Supplementary Movies 2A and 2B. In these exudates, Ω-3 + LE also reduced IL-6 and keratinocyte chemoattractant (KC; IL-8 in humans; Figure 1c). Having demonstrated that Ω-3 + LEs impact the neutrophil recruitment in the early phase, we next focused on the resolution phase, where the recruitment of monocytes and MΦ predominate. The results showed that Ω-3 + LEs decreased the classical Ly6C hi monocytes at the site of inflammation and increased the non-classical Ly6C lo monocytes and MΦ that indicated a strong enhancement of MΦ clearance of apoptotic PMN (Figure 1d). To quantify the local kinetics of leukocyte infiltration, we determined the resolution indices (Ri), demonstrating a 56% reduction in Ri from 23 to 10 h in mice challenged with dietary Ω-3 + LE suggesting to strongly accelerate resolution of acute inflammation ( Figure 1e). After having demonstrated that Ω-3 + LE displayed pro-resolving activity, we turned our attention to the possible impact on tissue repair and regeneration. Indeed, we found significantly increased exudate IL-10 and TGF-β levels that are known to be present in the resolution phase and to be an important factor on peritoneal healing ( Figure 1f). [8][9][10] To substantiate this hypothesis we performed immunohistochemical characterization of proliferating-cell-nuclearantigen (PCNA), where Ω-3 + LE demonstrated a higher tissue regenerative response (Figure 1f). These results indicated that Ω-3 + LE might promote resolution mechanisms during peritonitis and improve tissue repair and regeneration.
Ω-3+ LEs enhance pro-resolving lipid mediator biosynthesis. The resolution of acute inflammation is regulated by lipid mediator class-switching from the production of pro-inflammatory lipid mediators in the initiation phase to the biosynthesis of SPMs such as lipoxins, resolvins, protectins and maresins in the resolution phase. To explore whether Ω-3+ LEs impact the biosynthesis of these resolution phase mediators in murine peritonitis, we performed LC-MS/MS-based profiling of peritoneal lavages. In these exudates, Ω-3+ LEs increased LXA4 and related pro-resolving mediators (Supplementary Table 2). Taken together, these results indicate that Ω-3+ LEs altered the LM profile in murine peritonitis toward a pro-resolving LM-SPM signature and as such enhanced tissue regeneration.
Ω-3− LEs display impaired pro-resolving properties. To reflect the clinical routine and evaluate generally used nutrition solutions, we further compared the impact of Ω-3+ LEs with non-enriched (Ω-3−) LEs (emulsions composed of long-chain and medium-chain fatty acids (50:50) with an Ω-6:Ω-3 ratio of 6.6:1; Supplementary Table 1) on the resolution programs in murine peritonitis. Mice treated with Ω-3− LEs displayed a lower impact on the resolution mechanism (Figure 3a), with higher levels of classical monocytes accompanied by decreased non-classical monocytes, indicating a strong reduction of MΦ clearance of apoptotic PMN compared with Ω-3+ LE-treated mice (Figure 3b). To further validate the opposing impact of Ω-3− LEs on the resolution of acute inflammation, we determined the exudate IL-10 and TGF-β levels and characterized tissue PCNA, which contribute to resolution and regenerative programs8 (Figures 3c and 1f). Here, we found a significant reduction of both cytokines and an impaired tissue regenerative response following Ω-3− LE administration compared with Ω-3+ LE-treated mice (Figure 1f). When exploring the resolution index, we found an increase in Ri from 7 to 18 h in Ω-3− LE-treated mice (Figure 3d). When exploring the LM profiles, we found significantly lower exudate levels of LXA4, MaR1, PDX, 15-HETE and 15-HEPE.
Ω-3+ LEs enhance human MΦ function, efferocytosis and phagocytosis. As mentioned above, of great importance for promoting resolution of inflammation is the successful clearance of pathogens and inflammatory cells. Having demonstrated pro-resolving properties of Ω-3+ LEs in murine peritonitis, for human translation we explored the ability of Ω-3+ and Ω-3− LEs firstly to promote human MΦ efferocytosis of apoptotic PMN and phagocytosis of ZyA particles (Figure 4a) and secondly the MΦ phagocytosis of Escherichia coli as a feature of the infection-resolving actions (Figure 4b). Consistent with the in vivo findings, Ω-3+ LEs significantly increased the capacity of primary human MΦ to take up apoptotic human PMNs, ZyA particles and E. coli bacteria. Of interest, these results were not affected by Ω-3− LEs (Figures 4a and b). Because GPCR receptors such as ALX/FPR2, DRV1/GPR32 and ERV/ChemR23 have been demonstrated to mediate pro-resolving actions at low concentrations,12 we next determined the expression of these receptors on human MΦ following stimulation with vehicle, TNF-α or Ω-3+ LEs ± TNF-α for 4 h. As expected, we found increased mRNA levels of ALX/FPR2, DRV1/GPR32 and ERV/ChemR23 when treated with Ω-3+ LEs + TNF-α compared with the control group (Figure 4c). Importantly, MΦ stimulated with Ω-3− LEs failed to increase the expression of these GPCR receptors (Figure 4d). Taken together, these data support the role of Ω-3+ LEs as activators of pro-resolving mechanisms (Figure 4f).
Ω-3+ LEs improve survival in murine sepsis. To investigate whether the observed beneficial effects of Ω-3+ LEs could decrease mortality owing to polymicrobial sepsis, we performed a survival test in the cecal ligation and puncture (CLP) model. Figure 5 shows that the administration of Ω-3+ LEs reduced the mortality rate and increased survival to up to 60%. Since hypothermia is a risk factor for increased mortality in ICU patients with infection, we determined the body (surface) temperature and the weight of the infected mice treated with Ω-3+ LEs or Ω-3− LEs (Supplementary Movies 1A-F are shown in the Supplementary Methods). Notably, Ω-3+ LEs protected mice from hypothermia and weight loss compared with the vehicle group (Figure 5a). By contrast, treatment with Ω-3− LEs neither improved survival nor protected from hypothermia and weight loss (Figure 5a). To corroborate that this improved outcome was due to the production of SPMs, we carried out additional experiments to determine the impact of Ω-3+ LEs in the CLP model. For this purpose, C57BL/6 mice were administered Ω-3+ LEs, Ω-3− LEs or vehicle 24 h prior to exposure to CLP and lavages were collected at 4 h. The results showed that mice treated with Ω-3+ LEs demonstrated a significant reduction in leukocyte infiltrates combined with a significant reduction of PMN when compared with Ω-3− LEs or vehicle (Supplementary Figure 4A). Moreover, the results showed increased non-classical monocyte levels, indicating a strong enhancement of MΦ efferocytosis of apoptotic PMN (Supplementary Figure 4B). Next, we determined the impact of Ω-3+ LEs and Ω-3− LEs on the biosynthesis of the lipid resolution phase mediators.
In the peritoneal lavages, Ω-3+ LEs significantly increased the specialized pro-resolving mediators LXA4, MaR1 and PDX and the arachidonic acid-derived product 15-hydroxyeicosatetraenoic acid (15-HETE) compared with Ω-3− LEs or vehicle (Supplementary Figure 5). In contrast, Ω-3+ LEs significantly reduced the pro-inflammatory LTB4. Taken together, these data indicate that Ω-3+ LEs also exert anti-inflammatory and pro-resolving effects during peritoneal infection compared with Ω-3− LEs or vehicle, suggesting that the improved outcome was due to the biosynthesis of SPMs.
Discussion
In the present study, we report that dietary Ω-3 LEs significantly control inflammation-resolution mechanisms. Using a peritonitis and a sepsis model, we found that Ω-3+ LEs accelerate the resolution of inflammation, shortening the Ri from 23 to 10 h. Ω-3+ LEs stopped neutrophil infiltration, reduced pro-inflammatory and enhanced anti-inflammatory mediators. Ω-3+ LEs also strongly reduced classical monocytes, increased non-classical monocyte/MΦ recruitment and enhanced efferocytosis of apoptotic PMN. These phagocyte responses were lost in 12/15-LOX−/− mice, suggesting that the actions of Ω-3+ LEs were 12/15-LOX dependent. Ω-3+ LEs stimulated the local endogenous biosynthesis of SPMs, which have been demonstrated to actively enhance resolution of inflammation and tissue regeneration, compared with peritonitis alone or peritonitis with Ω-3− LE treatment. Moreover, administration of Ω-3+ LEs protected against hypothermia and weight loss and enhanced survival in murine sepsis. Together, these results show that Ω-3+ LEs control key innate protective mechanisms during the onset and resolution of acute inflammation and promote survival in sepsis.
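The resolution interval quoted here can be made concrete with a small numerical sketch. It follows the conventional definition of Ri as the time needed for exudate PMN to fall from their maximum (at Tmax) to half of that maximum (at T50); the PMN time courses below are hypothetical illustration values, not data from this study, and the 23 h versus 10 h figures above correspond to a reduction of roughly (23 − 10)/23 ≈ 56%.

```python
import numpy as np

def resolution_interval(time_h, pmn_counts):
    """Ri = T50 - Tmax: time for exudate PMN to drop from peak to half-peak."""
    t = np.asarray(time_h, dtype=float)
    pmn = np.asarray(pmn_counts, dtype=float)
    i_max = int(np.argmax(pmn))
    t_max, p_max = t[i_max], pmn[i_max]
    post_t, post_p = t[i_max:], pmn[i_max:]
    below = np.where(post_p <= 0.5 * p_max)[0]
    if below.size == 0:
        return np.nan                # PMN never fall to half-maximum in the window
    j = below[0]
    # linear interpolation between the two samples bracketing the half-maximum
    t50 = np.interp(0.5 * p_max, [post_p[j], post_p[j - 1]], [post_t[j], post_t[j - 1]])
    return t50 - t_max

# hypothetical exudate PMN time courses (x10^6 cells), for illustration only
time_h = [0, 4, 12, 24, 48]
vehicle = [0.1, 6.0, 9.0, 5.5, 1.0]
omega3_plus = [0.1, 5.0, 2.0, 1.0, 0.3]
print(resolution_interval(time_h, vehicle))      # longer Ri (slower resolution)
print(resolution_interval(time_h, omega3_plus))  # shorter Ri (accelerated resolution)
```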
Although infection frequently underlies sepsis, this is not entirely the case: more than 40% of cases are caused by sterile/non-infective processes.2 Unresolved immunological processes are one of the key causes of persistent critical illness during sepsis and of the development of organ dysfunction. Despite improved management concepts for sepsis, mortality remains high in the absence of targeted treatment. The complex pathophysiology of sepsis is marked by two phases: the inflammatory storm, in which host- and pathogen-derived classical signals interact dangerously with each other, and the anti-inflammatory phase.14 The anti-inflammatory response is characterized by the interplay between humoral, cellular and neuronal mechanisms that potentially mitigate the detrimental effects of the pro-inflammatory response. In particular, innate cells such as monocytes and MΦ change to an anti-inflammatory phenotype that activates resolution and regeneration programs. Efficient resolution of inflammation is an active process engaging endogenous mechanisms to promote a return to tissue homeostasis.1 A newly identified genus of bioactive LMs, namely lipoxins, resolvins, protectins and maresins, known as SPMs, possesses anti-inflammatory and pro-resolving capacity.1,15,16 During the early onset phase and the resolution phase, these SPMs, biosynthesized from essential fatty acids, are produced locally and exert protective actions on leukocytes, activate efferocytosis, promote tissue regeneration and reduce pain.1,17,18 PGE2 and PGD2, in addition to their roles in the initiation of an inflammatory response, may undergo a temporal mediator class switch to produce pro-resolving mediators such as lipoxins and other SPMs, indicating that the beginning signals the termination of the acute inflammatory response.19 In this context, reduced dietary intake of Ω-3 fatty acids (EPA and DHA) could reduce the biosynthesis of SPMs, contributing to failed resolution and disease pathologies. Over the last decades, diverse strategies for nutrition therapies with LEs have been evaluated in various experimental studies and clinical trials.20 It is generally recognized that intravenous LEs composed predominantly of long-chain polyunsaturated fatty acids (e.g., soybean oil) may negatively influence inflammatory processes in the critically ill.21,22 Following the concerns raised by in vitro, in vivo and clinical studies, alternative intravenous LEs containing medium-chain triglycerides, fish oil and olive oil, with or without the addition of soybean oil, have been developed.23 Recently, in a secondary analysis of data from four International Nutrition Surveys, the effects of different classes of lipid emulsions on clinical outcomes in the critically ill were examined.24 The main finding of this study was an association of fish oil- or olive oil-based LEs with improvements in clinical outcomes and mechanical ventilation compared with soybean oil-based LEs.25 Interestingly, however, no overall impact on infections was shown. In a further meta-analysis, Pradelli et al. reported that fish oil-containing LEs reduce infections in elective surgical and ICU patients and decrease the length of stay, both in the ICU and in the hospital overall; in this meta-analysis, no statistically significant effect on mortality was found.26
Subsequently, contradictory and inconclusive results were reported in systematic reviews and subgroup analyses.27-29 Nevertheless, because the findings of clinical trials and experimental reports remain inconsistent in demonstrating clinical benefits in the ICU, the current guidelines do not make a recommendation on the types of lipids to be used in the critically ill.20 Disagreement also remains about methodological issues, including the optimum composition, dose, timeframe and indication for parenteral LEs.
On the basis of this situation, and since the current research focus has moved from inhibiting inflammation to accelerating its resolution, we set out to determine the impact of Ω-3+ and Ω-3− LEs on the biochemical mechanisms during the onset and resolution of acute inflammation. In murine sepsis, we found that Ω-3+ LEs influenced leukocyte dynamics, reducing neutrophil infiltration throughout the course of inflammation, in particular by decreasing PMN adherence and migration and increasing PMN rolling velocity in the early phase of inflammation. Ω-3+ LEs decreased the classical Ly6Chi monocytes at the site of inflammation and increased the non-classical Ly6Clo monocytes and MΦ, indicating a strong enhancement of MΦ clearance of apoptotic PMN. The Ri was reduced by 56% in mice treated with Ω-3+ LEs compared with the vehicle control. Because pro-resolution is distinct from anti-inflammation, with resolution agonists such as SPMs playing a crucial role in the non-phlogistic clearance of cells from sites of inflammation, it is noteworthy that Ω-3+ LEs significantly increased levels of LXA4, MaR1 and PDX in both sterile peritonitis and murine microbial sepsis. In addition, Ω-3+ LEs enhanced the arachidonic acid-derived product 15-HETE, the eicosapentaenoic acid-derived products 15-HEPE, 14,15-diHETE and 18-HEPE, as well as the docosahexaenoic acid-derived product 17-HDHA. Ω-3+ LEs also increased 14(15)-EET, which is known to possess anti-inflammatory and pro-resolving properties. MΦ have a crucial role in wound healing and organ regeneration. In wound healing, inflammatory monocytes accumulate in the injured tissue, and in particular the phagocytosis of tissue debris can induce mononuclear cells to switch from a pro-inflammatory to an anti-inflammatory phenotype. It is well known that M2 cells (monocytes and/or MΦ) express high levels of anti-inflammatory mediators such as IL-10 and TGF-β10 that contribute to rapid resolution and wound healing, for example by recruiting fibroblasts into the wound site to promote myofibroblast differentiation.30-32 Our data demonstrate high levels of IL-10 and TGF-β following Ω-3+ supplementation compared with vehicle control or Ω-3− LEs, suggesting a positive influence on peritoneal healing. To substantiate these data, we performed immunohistochemical characterization of PCNA, which showed a higher tissue regenerative response after Ω-3+ supplementation. To further explore the impact of Ω-3+ LEs in the presence of an underlying infectious process, we used a murine CLP sepsis model, demonstrating protection against hypothermia and weight loss and enhanced survival. Of note, phagocyte responses were lost in 12/15-LOX−/− mice, indicating that the actions of Ω-3+ LEs were 12/15-LOX dependent. Conversely, the administration of Ω-3− LEs improved neither resolution nor survival during sepsis, suggesting that the Ω-3+ enrichment is required for these protective effects. Hence, the present results implicate a critical role for Ω-3+ LEs in modulating inflammation and infection and in stimulating mechanisms of resolution and tissue regeneration, and they provide novel evidence to support future clinical investigations.
Materials and Methods
Methods and any associated references are available in the Supplementary Information.
Animals. The Institutional Review Board and the Regierungspräsidium Tübingen approved this project. 12/15-LOX-deficient mice (12/15-LOX−/−) and littermate control mice were bred and genotyped as previously described.33

Peritonitis. Zymosan A (ZyA, Invivogen, San Diego, CA, USA) was prepared as a 1 mg/ml solution in saline, and 1 mg was injected i.p. After 4, 12, 24 and 48 h, C57BL/6 mice were euthanized with pentobarbital (100 mg/kg body weight) and peritoneal lavage was performed using ice-cold PBS (without calcium or magnesium). The peritoneal cavity was gently massaged and the lavage was withdrawn. Subsequently, organs were harvested for further analysis.
Cecal ligation and puncture. The cecal ligation and puncture (CLP) procedure in C57BL/6 mice was performed as described previously.34 Following induction of anesthesia, a longitudinal midline skin incision was made and the linea alba was identified and dissected to gain access to the peritoneal cavity. The cecum was located and exteriorized with blunt forceps to prevent damage to the mesenteric blood vessels and intestine, and was ligated at 50%, which corresponds to mid-grade sepsis. Subsequently, the distal part of the cecum was perforated with a 20-gauge needle by through-and-through puncture. The cecum was relocated into the peritoneal cavity, the peritoneum and skin were closed with 5-0 sutures, and the animals were resuscitated with 1 ml of prewarmed saline.
Transcriptional analysis of SPM receptors. Transcriptional analysis was performed using the following primers: GPR32: 5′-TGG ACC GTT GCA TCT CTG TC-3′, 5′-AGT GCG TAC AGC CAT TCC AT-3′; ChemR23: 5′-AGG GAC TGA TTG GCT GAG GA-3′, 5′-ATC CTC CAT TCT CAT TCA CCG T-3′; ALX: 5′-TGT TCT GCG GAT CCT CCC ATT-3′, 5′-CTC CCA TGG CCA TGG AGA CA-3′. 18S expression was evaluated with sense 5′-GTA ACC CGT TGA ACC CCATT-3′ and antisense 5′-CCA TCC AAT CGG TAG TAG CG-3′.

LC-MS/MS. The targeted lipidomics and lipid mediator studies were performed by MG and MH in the Center for Proteomics and Metabolomics, Leiden University Medical Center (LUMC), The Netherlands. Peritoneal lavages were thawed and internal standards were added. The samples were extracted twice using MeOH and prepared for analysis according to published protocols.35 LC-MS/MS analysis was carried out using a 6500 QTrap LC-MS/MS system as described in Heemskerk et al.36

Statistics. All data are presented as mean ± S.E.M. Statistical analysis was performed with GraphPad 5.0 software (GraphPad, San Diego, CA, USA). Two-tailed Student's t-test or one-way ANOVA followed by Bonferroni's or Dunnett's multiple-comparison test was applied as appropriate, with P-values < 0.05 considered significant.
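As a rough illustration of the statistical workflow described above (two-tailed t-test and one-way ANOVA with multiple-comparison handling), the following sketch uses SciPy on simulated group values; the group sizes, means and the simple Bonferroni-style adjustment are assumptions for demonstration and do not reproduce the GraphPad analysis used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# simulated exudate PMN counts (x10^6) for three groups, illustration only
vehicle = rng.normal(9.0, 1.5, 6)
omega3_plus = rng.normal(5.0, 1.2, 6)
omega3_minus = rng.normal(8.0, 1.4, 6)

# two-tailed unpaired Student's t-test between two groups
t_val, p_val = stats.ttest_ind(vehicle, omega3_plus)

# one-way ANOVA across the three groups
f_val, p_anova = stats.f_oneway(vehicle, omega3_plus, omega3_minus)

# Bonferroni-style correction over the three pairwise comparisons
pairs = [(vehicle, omega3_plus), (vehicle, omega3_minus), (omega3_plus, omega3_minus)]
p_adj = [min(stats.ttest_ind(a, b).pvalue * len(pairs), 1.0) for a, b in pairs]

print(f"t = {t_val:.2f}, p = {p_val:.4f}, ANOVA p = {p_anova:.4f}, adjusted p = {p_adj}")
```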
|
v3-fos-license
|
2017-05-04T15:39:55.542Z
|
2016-06-09T00:00:00.000
|
2644350
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnhum.2016.00261/pdf",
"pdf_hash": "447d9ce166f80a8f16ecbbed624eb71a912496f8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43163",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "447d9ce166f80a8f16ecbbed624eb71a912496f8",
"year": 2016
}
|
pes2o/s2orc
|
Cortical Signal Analysis and Advances in Functional Near-Infrared Spectroscopy Signal: A Review
Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging modality that measures the concentration changes of oxy-hemoglobin (HbO) and de-oxy-hemoglobin (HbR) at the same time. It is an emerging cortical imaging modality with a good temporal resolution that is acceptable for brain-computer interface applications. Researchers have developed several methods in the last two decades to extract the neuronal-activation-related waveform from the observed fNIRS time series, but there is still no standard method for the analysis of fNIRS data. This article presents a brief review of existing methodologies to model and analyze the activation signal. The purpose of this review is to give a general overview of the variety of existing methodologies for extracting useful information from measured fNIRS data, including pre-processing steps, effects of the differential path length factor (DPF), variations and attributes of the hemodynamic response function (HRF), extraction of the evoked response, removal of physiological, instrumentation, and environmental noises, and resting-/activation-state functional connectivity. Finally, the challenges in the analysis of the fNIRS signal are summarized.
INTRODUCTION
Near-infrared spectroscopy (NIRS) is an emerging non-invasive brain-imaging methodology that utilizes near-infrared (NIR) light of 650-900 nm to determine cerebral oxygenation, blood flow, and the metabolic status of a localized region of the brain (Saager and Berger, 2008;Yamada et al., 2009;Khan et al., 2014;Molavi et al., 2014;Santosa et al., 2014;Naseer and Hong, 2015). Activation in a particular part of the brain causes an increase in the regional cerebral blood flow (rCBF) (Zhang et al., 2011b;Umeyama and Yamada, 2013;Kopton and Kenning, 2014). The rate of rCBF increase exceeds that of the regional cerebral oxygen metabolic rate (rCMRO2), which is the major cause of the de-oxy hemoglobin (HbR) decrease in venous blood (Sitaram et al., 2009). Thus, cortical activation causes an increase in total hemoglobin (HbT) and oxy-hemoglobin (HbO), with a corresponding decrease in HbR. The absorption of NIR light changes with changes in the concentration of HbO and HbR during activation and rest periods. The attenuation of NIR light due to the absorption change reflects, according to the modified Beer-Lambert law (MBLL), the concentrations of HbO and HbR. Among neuro-imaging modalities, functional near-infrared spectroscopy (fNIRS)'s simplicity, portability, low cost, good temporal resolution (suitable for real-time imaging), and high signal-to-noise ratio make it a favorable option (Hu et al., 2013;Chang et al., 2014;Herff et al., 2014). fNIRS also has been considered as a potential multi-modality imaging methodology (Yunjie and Blaise, 2012). One disadvantage of fNIRS, however, is its low penetration depth. Details on the pros and cons of fNIRS can be found in Gervain et al. (2011), Barati et al. (2013), and Tak and Ye (2013).
The increase in HbR at a particular area of the brain is an indicator of neuronal activity in the nearby area. The detection of neuronal activation in a particular cortical area amounts to the extraction of a specific waveform from the hemodynamic response (HR) (Ciftçi et al., 2008). In the past, the canonical hemodynamic response function (cHRF) has frequently been used as the desired impulse response in the hemodynamic signal. Of course, it can vary in its shape, time to peak, relaxation time, and full width at half maximum (FWHM). These variations in the characteristics of the HRF are observed in different brain areas, among subjects, and on repetition of trials. Such variations in the attributes of the cHRF measured by fNIRS have been observed in HbO concentration changes (Hong and Nugyen, 2014). The major cause of this phenomenon is the brain's continuous conscious/unconscious processing of several tasks at the same time: even if the subject is instructed to relax and sit comfortably during an experiment, the brain consciously or unconsciously processes many past, present and future events. Several studies have reported such findings during the analysis of fNIRS data. The difference in the dynamical shape of the HRF during event-related motor and visual paradigms revealed that the peak times of HbO, HbR, and total hemoglobin (HbT) are approximately equal for the visual paradigm, unlike for the motor paradigm (Jasdzewski et al., 2003). An additional source of such variations in the hemodynamic signal measured through fNIRS can be certain artifacts (Yamada et al., 2009;Umeyama and Yamada, 2013). The artifacts can be related to instrumentation noise, improper fixation of the NIRS optodes, and subject motion such as body tilt, breath holding, and head nodding (Yamada et al., 2009;Robertson et al., 2010;Umeyama and Yamada, 2013). Another factor that can affect the shape of the HRF is the differential path length factor (DPF), which accounts for the additional distance traveled by light photons due to the scattering behavior of brain tissue. It has been found that the wavelength-dependent DPF and age can also affect the characteristics of the HR (Duncan et al., 1996). A mismatch between these features can result in a decrease in detection performance (Ciftçi et al., 2008).
Additionally, NIRS signals include physiological noises associated with heart beats, respiration rhythms and low-frequency fluctuations. A special algorithm is therefore required that suppresses not only the physiological signals present in the optical signals measured through fNIRS, but also other unwanted signals (activation not related to the experimental paradigm) arising from continuous brain processing. In fNIRS signal analysis, most studies have addressed the reduction of physiological and instrumentation noises or the extraction of the neuronal-activation-related waveform, but recent research in this field has also turned toward the analysis of the functional connectivity of brain regions during resting states (Lu et al., 2010;Hu et al., 2013). Additionally, does this resting-state connectivity persist during task periods of a particular region (Zhang et al., 2011b; Hu et al., 2012, 2013)? Until now, fNIRS has had the limitation that the optodes cannot cover the full skull at once to study and analyze complete functional connectivity (Lu et al., 2010). Figure 1 summarizes the different subfields in the area of fNIRS signal analysis relevant to the development of a standard methodology.
BRAIN OPTICAL SIGNAL
Data Acquisition and Pre-Processing
In recent years, different types of NIRS imaging systems have been developed, which can be grouped into continuous-wave (CW), frequency-domain (FD), and time-resolved (TR) categories. A CW-fNIRS instrument measures the concentration changes of HbO, HbR, and total hemoglobin under the assumption that scattering remains constant, while FD-fNIRS and TR-fNIRS detect the absolute concentrations of HbO and HbR. TR-fNIRS is based on the principle of time-of-flight measurement and is the most expensive type of instrument. The central element differentiating these instruments is the estimation of the path length traveled by the photons due to scattering. The CW system, the least expensive, provides relative-change information in the form of the concentrations of HbO and HbR, and is the version utilized most frequently. Figure 2 shows the geometry of fNIRS signal acquisition.
The transport of light photons through tissue is a complex process. When photons of different wavelengths are incident on tissue, the characteristics of the detected light depend upon the combination of scattering, absorption, and reflection (Cope and Delpy, 1988). This can be modeled using the modified Beer-Lambert law,

OD(\lambda) = \ln\!\left(\frac{I_{in}(\lambda)}{I_{o}(\lambda)}\right) = \mu_a(\lambda)\, d\, \mathrm{DPF}(\lambda) + G(\lambda),

where I_in(λ) and I_o(λ) are the incident and detected light intensities, respectively, μ_a(λ) is the absorption coefficient, d is the source-detector separation, DPF(λ) is the DPF, and G(λ) is the geometry-dependent parameter. The first step in expressing chromophore changes from the optical signal is to compute the optical density (OD) change defined above (Cope and Delpy, 1988; Duncan et al., 1996; Kamran and Hong, 2014). Considering two chromophores, i.e., HbO and HbR, and assuming the scattering to be constant,

\Delta OD(\lambda_i, k) = \left(\varepsilon_{HbO}^{\lambda_i}\, \Delta HbO(k) + \varepsilon_{HbR}^{\lambda_i}\, \Delta HbR(k)\right) d\, \mathrm{DPF}(\lambda_i),

where λ_i is the wavelength of the incident light and ε_HbO^{λ_i} and ε_HbR^{λ_i} are the extinction coefficients of HbO and HbR, respectively. By considering two different wavelengths of light, the above equation can be rearranged as follows (Ye et al., 2009; Kamran and Hong, 2013; Santosa et al., 2013):

\begin{bmatrix} \Delta HbO_i(k) \\ \Delta HbR_i(k) \end{bmatrix} = \frac{1}{d_i} \begin{bmatrix} \varepsilon_{HbO}^{\lambda_1} & \varepsilon_{HbR}^{\lambda_1} \\ \varepsilon_{HbO}^{\lambda_2} & \varepsilon_{HbR}^{\lambda_2} \end{bmatrix}^{-1} \begin{bmatrix} \Delta OD^{\lambda_1}(k)/\mathrm{DPF}^{\lambda_1} \\ \Delta OD^{\lambda_2}(k)/\mathrm{DPF}^{\lambda_2} \end{bmatrix},

where ΔHbO_i(k) and ΔHbR_i(k) are the relative concentration changes of HbO and HbR, respectively, k is the discrete time index, i denotes the ith emitter-detector channel, λ_1 and λ_2 are the two wavelengths, ε_HbO^{λ_1}, ε_HbR^{λ_1}, ε_HbO^{λ_2} and ε_HbR^{λ_2} are the corresponding extinction coefficients, ΔOD^{λ_j}(k) is the optical density variation at the kth sample for wavelength j (j = 1, 2), d_i is the source-detector separation and DPF^{λ_j} is the DPF at wavelength j (j = 1, 2).
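A minimal numerical sketch of this two-wavelength inversion is given below. The extinction coefficients and DPF values are illustrative placeholders rather than authoritative constants, and the routine simply solves the 2 x 2 system above for each sample.

```python
import numpy as np

def mbll_concentration_changes(delta_od, ext_coeffs, d, dpf):
    """Convert optical-density changes at two wavelengths into relative
    concentration changes of HbO and HbR via the modified Beer-Lambert law.

    delta_od   : (2, K) array of Delta-OD at lambda1 and lambda2 for K samples
    ext_coeffs : (2, 2) array, rows = wavelengths, columns = (eps_HbO, eps_HbR)
    d          : source-detector separation (cm)
    dpf        : length-2 sequence of differential path length factors
    """
    delta_od = np.asarray(delta_od, dtype=float)
    path = d * np.asarray(dpf, dtype=float)      # effective path length per wavelength
    rhs = delta_od / path[:, None]               # Delta-OD / (d * DPF)
    dhbo, dhbr = np.linalg.solve(np.asarray(ext_coeffs, dtype=float), rhs)
    return dhbo, dhbr

# illustrative (placeholder) extinction coefficients at 760 and 830 nm
eps = np.array([[1.49, 3.84],     # 760 nm: (eps_HbO, eps_HbR)
                [2.32, 1.79]])    # 830 nm
delta_od = np.array([[0.010, 0.012, 0.008],
                     [0.015, 0.018, 0.011]])
dHbO, dHbR = mbll_concentration_changes(delta_od, eps, d=3.0, dpf=[6.0, 6.0])
```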
Effects of Differential Path Length Factor
The scattering behavior of human brain tissue to NIR light entails that a DPF is required to correct the readings observed through fNIRS. Initially, it was common practice to use a DPF value between 3 and 6; later, time-of-flight methodology or intensity-modulated spectroscopy was used to estimate the DPF values. Duncan et al. (1995) determined the DPF values for adult heads at four different wavelengths in a population of 100 subjects (50 males and 50 females). The values of the DPF were found to be 6.51 ± 1.13, 6.53 ± 0.99, 6.26 ± 0.88, and 5.86 ± 0.98 for 690, 744, 807, and 832 nm, respectively. The magnitude of the DPF determines the magnitude of the calculated concentration changes (Kohl et al., 1998); therefore, the DPF plays an important role in any instrument claiming accurate measurement of chromophore changes. Kohl et al. (1998) used the key idea that the DPF is proportional to the rate of change of absorbance with respect to absorption; thus, they determined the wavelength-dependent DPF as the ratio of the rate of change of absorbance to the absorption spectrum of arterial blood. Duncan et al. (1996) analyzed 283 subjects (aged between 1 day and 50 years) and developed equations expressing the DPF as a function of age at different wavelengths. Their results summarize the DPF values for the four wavelengths as follows:

DPF_690(A) = 5.38 + 0.049 A^{0.877},
DPF_744(A) = 5.11 + 0.106 A^{0.723},
DPF_807(A) = 4.99 + 0.067 A^{0.814},
DPF_832(A) = 4.67 + 0.062 A^{0.819},

where A is the age in years.
However, CW-NIRS systems operate at different wavelengths depending on the equipment; thus, a general equation was required that could be used for any wavelength and any age. Schroeter et al. (2003) analyzed fNIRS data from 14 young (23.9 ± 3.1 years old) and 14 elderly (65.1 ± 3.1 years old) subjects and suggested that the DPF is affected not only by age but also by brain region. They concluded that the hemodynamic response in the frontal association cortex during functional activation can decrease with age and proposed calculating effect sizes to analyze age-related effects in fNIRS studies. Scholkmann and Wolf (2013) addressed the need for a general correction and fitted a third-degree polynomial to the available DPF data sets, representing the DPF as a function of wavelength and age:

DPF(λ, A) = α + β A^Γ + δ λ^3 + ε λ^2 + ς λ.

The values of the unknown parameters were found using the Levenberg-Marquardt algorithm (LMA) and least absolute residuals (LAR). The resulting fit is a generalized form of DPF correction depending upon age and wavelength, and it is convenient because any researcher can easily evaluate the DPF at any wavelength and age. The published articles presenting DPF values for different ages and wavelengths are summarized in Table 1.
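To illustrate how such corrections are applied in practice, the short sketch below evaluates the age-dependent fits of Duncan et al. (1996) quoted above for the four listed wavelengths; it is only a convenience wrapper around those published equations and is not valid for other wavelengths.

```python
# Age-dependent DPF fits reported by Duncan et al. (1996); A is age in years.
DPF_FITS = {
    690: (5.38, 0.049, 0.877),
    744: (5.11, 0.106, 0.723),
    807: (4.99, 0.067, 0.814),
    832: (4.67, 0.062, 0.819),
}

def dpf_duncan(wavelength_nm: int, age_years: float) -> float:
    """DPF(lambda, A) = a + b * A**c for the four tabulated wavelengths."""
    a, b, c = DPF_FITS[wavelength_nm]
    return a + b * age_years ** c

for wl in sorted(DPF_FITS):
    print(wl, round(dpf_duncan(wl, 25.0), 2))   # adult DPF values, roughly 5-7
```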
Variations in HRF Pattern
The neural activation indication measured through fNIRS may be confounded with individual anatomical or systemic physiological sources of variance (Heinzel et al., 2013). Generally, inter-subject variability is due to individual differences in anatomical factors such as skull and cerebrospinal fluid (CSF) structure, vessel distribution, and the ratio of arteries to veins. Barati et al. (2013) observed that the variability in the stimulus condition for HbO was revealed in the slope, amplitude, and timing of the peak response. Jasdzewski et al. (2003) analyzed the difference in the dynamical shape of the HRF during event-related motor and visual paradigms. Their results revealed that the peak times of HbO, HbR, and total hemoglobin (HbT) are approximately equal for the visual paradigm, unlike for the motor paradigm (Jasdzewski et al., 2003). Additionally, their results were analyzed for different values of the source-detector separation; however, if the source-detector separation is greater than 3 cm, the results are not much affected by the DPF values (Duncan et al., 1996). Power et al. (2012) analyzed two very important questions: (1) is it possible to distinguish an activation task from baseline or from other tasks? And if so, (2) are the spatiotemporal characteristics of the response consistent across sessions? Their results indicated that mental arithmetic tasks can be distinguished from baseline but that the characteristics of the response change from session to session. Hong and Nugyen (2014) analyzed 19 subjects to characterize variations in the impulse responses in three different brain regions: the somatosensory cortex (SC), motor cortex (MC), and visual cortex (VC). Their findings suggest that the activation- and undershoot-peaks of HbO in MC are higher than those in SC and VC. Additionally, the times-to-peak of HbO in the three brain regions are almost the same (about 6.76 ± 0.2 s), and the time to the undershoot peak is largest in VC.
Constrained Basis Set
The detection of the cortical-activation-related waveform from a discrete neuroimaging signal is essentially a search for a consistent and specific wave pattern (Koray et al., 2008). This is equivalent to fitting the measured signal to a known waveform up to a certain accuracy; a mismatch in such a fit might lead to misleading results. The cHRF attributes include the magnitude of the initial dip, the time to the first peak, the time to the undershoot, the magnitude of the undershoot, etc. In the literature, a cHRF consisting of two gamma functions has been used most frequently, in which the first gamma function models the main response and the second models the undershoot after the response. Like fNIRS, functional magnetic resonance imaging (fMRI) requires flexible HRF modeling, with the HRF allowed to vary spatially, on repetition of trials, and between subjects (Woolrich et al., 2004). Thus, a constrained Bayesian framework is described in Woolrich et al. (2004) to best choose the HRF in the measured data. Koray et al. (2008) proposed constraints on the GLM model parameters: the main response (time to first peak) must lie within 3-8 s; there must be no more than one positive peak and no more than two dips; the initial dip magnitude must be less than a quarter of the onset magnitude; an undershoot must occur 2-8 s after the time to peak; and the magnitude of the post-stimulus undershoot must be less than half of the onset magnitude. A 3D volume of parameter values of the canonical basis set is then taken as the prior distribution in the Bayesian analysis, and Gibbs sampling within this volume is used to find the parameters of interest. This method is advantageous because it constrains the basis set and thereby reduces the solution space, on the basis of the physical properties of the HRF reported in past fMRI data.
Extraction of Evoked-Response
Jobsis (1977) was the first to present the idea that it is possible to detect changes of cortical oxygen using NIR light. Later, Cope and Delpy (1988) designed an NIR system with four different wavelengths (778, 813, 867, and 904 nm). It is a well-known fact that neuronal activity generates an early deoxygenation in the particular area of the brain where the activity starts. The HRF is characterized by an early rise 1-2 s after stimulation, a peak at around 5-6 s, and a subsequent return to baseline after a slight undershoot; the total duration of the HRF for an impulse stimulation is around 26-30 s. Friston et al. (1994) introduced the statistical parametric mapping (SPM) software for fMRI signal analysis, modeling the oxygen-dependent signal as a linear combination of two Gamma functions. This two-Gamma-function model is most frequently used to account for the first peak and the final undershoot of the oxygen-dependent waveform, and the standard values used to generate the shape of the cHRF are given in SPM (Friston et al., 1994, 1998). The mathematical form used to generate this type of HRF is

h(k) = \frac{\beta_1^{\alpha_1} k^{\alpha_1 - 1} e^{-\beta_1 k}}{\Gamma(\alpha_1)} - \frac{\beta_2^{\alpha_2} k^{\alpha_2 - 1} e^{-\beta_2 k}}{\Gamma(\alpha_2)}, \qquad HRF(k) = (h \otimes u)(k),

where u is the experimental paradigm, h represents the cHRF, α1 is the delay of the response, α2 is the delay of the undershoot, β1 is the dispersion of the response, β2 is the dispersion of the undershoot and Γ represents the Gamma distribution. Boynton et al. (1996) presented the idea of modeling the neuronal-activity-related hemodynamic waveform by employing only one Gamma function with two free parameters.
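The sketch below generates a two-gamma cHRF and convolves it with a boxcar paradigm u(k) to obtain the predicted response. The parameter values (peak delay of about 6 s, undershoot delay of about 16 s, unit dispersions, response-to-undershoot ratio of 6) follow commonly used SPM-style defaults and are stated here as assumptions rather than values taken from any particular study cited above.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, a1=6.0, a2=16.0, b1=1.0, b2=1.0, ratio=6.0):
    """Difference of two gamma densities: main response minus scaled undershoot."""
    h = gamma.pdf(t, a1, scale=b1) - gamma.pdf(t, a2, scale=b2) / ratio
    return h / np.max(h)

dt = 0.1                                      # sampling interval (s)
t = np.arange(0.0, 30.0, dt)
h = canonical_hrf(t)

# boxcar experimental paradigm u(k): a 10 s task block starting at t = 5 s
u = np.zeros(600)
u[50:150] = 1.0
predicted = np.convolve(u, h)[:u.size] * dt   # predicted hemodynamic response regressor
```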
In contrast with fMRI, the signal observed by fNIRS is contaminated with several physiological noises. In most studies, such signals are filtered out in the pre-processing steps using known cut-off frequencies, but Prince et al. (2003) modeled such signals as a linear combination of different sinusoids. In the past, several studies have modeled the HRF using the two-gamma cHRF model described above in a GLM-based analysis framework. Jasdzewski et al. (2003) analyzed the impulse-response attributes in the form of a linear model. Later, Koh et al. (2007) introduced the functional optical signal analysis (fOSA) software based upon the GLM methodology. Plichta et al. (2006, 2007) presented functional brain maps of the visual cortex by employing the GLM methodology with ordinary least squares estimation (OLSE) to extract the values of the activity strength parameters. Taga et al. (2007) analyzed the effects of the source-detector separation on the extraction of the neuronal-activity-related fNIRS signal. Koray et al. (2008) estimated the HRF by fitting constrained parameters of the cHRF in a Bayesian framework. NIRS-SPM (Ye et al., 2009) is an extension of SPM (Friston et al., 1994, 1998), which is frequently used in fMRI analysis; this software package employs the GLM concept with known regressors to decompose the measured NIRS time series. The brain signal model can be represented mathematically in the form of n known regressors,

y_i(k) = \beta_1 x_1(k) + \beta_2 x_2(k) + \cdots + \beta_n x_n(k) + \epsilon_i(k),

where y_i(k) is the measured signal at channel i, x_j(k) are the known regressors, β_j are the unknown activity strength parameters, and ε_i(k) is the error term. The GLM methodology has been employed quite frequently in analyzing fMRI time series. For this purpose, a basis set including the predicted HRF (pHRF) and a baseline correction has been used; furthermore, the temporal and dispersion derivatives have been added to tackle the temporal and spatial effects in the HRF (Friston et al., 1994, 1998). Like fMRI, the fNIRS instrument monitors the concentration changes of HbO/HbR; thus, a regression vector identical to that of fMRI is used in optical signal analysis. However, the fNIRS time series poses the additional challenge of physiological signals present in the measured waveform. Thus, Abdelnour and Huppert (2009) described the basis set as a linear combination of the pHRF, a baseline correction, and three sinusoids for the physiological signals. Hu et al. (2010) supposed the regression vector to be a combination of five components: the pHRF, a baseline correction, and three components forming a set of high-pass filters with a cut-off frequency of 0.0006 Hz. Zhang et al. (2011b, 2012) introduced the use of recursive algorithms for better extraction of the neuronal-related concentration changes in observed fNIRS data. Aqil et al. (2012a) considered the fNIRS signal in a standard GLM framework with estimation of the activity strength parameters using a recursive algorithm; they modeled the fNIRS time series as a linear combination of the pHRF, the first derivative of the pHRF (temporal derivative), the second derivative of the pHRF (dispersion derivative), and a baseline correction. Kamran and Hong (2013) modified the method and analyzed the measured optical data in the form of a linear parameter varying approach with a recursive technique that estimates the activity strength parameters in a Lagrangian framework. Scarpa et al. (2013) presented the idea of considering a reference channel with a source-detector separation of <0.7 cm; this reference channel contains only the physiological signals, and the useful component of a nearby channel can be extracted by subtracting the data measured through the reference channel.
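A minimal least-squares sketch of this GLM decomposition is shown below, with a regression set consisting of the predicted HRF, its temporal derivative and a baseline term; the ΔHbO measurement is simulated, so the recovered activity-strength parameter is for illustration only.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)
dt, K = 0.1, 600
t = np.arange(K) * dt

# predicted HRF regressor: boxcar paradigm convolved with a two-gamma cHRF
h = gamma.pdf(t, 6.0) - gamma.pdf(t, 16.0) / 6.0
u = np.zeros(K)
u[50:150] = 1.0
x_hrf = np.convolve(u, h)[:K] * dt
x_deriv = np.gradient(x_hrf, dt)            # temporal-derivative regressor

# design matrix: [pHRF, temporal derivative, baseline correction]
X = np.column_stack([x_hrf, x_deriv, np.ones(K)])

# simulated Delta-HbO: scaled response + offset + measurement noise
y = 0.8 * x_hrf + 0.05 + rng.normal(0.0, 0.05, K)

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print("estimated activity-strength parameters:", np.round(beta, 3))
```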
Later, Kamran and Hong (2014) modeled the cortical signal in the form of an autoregressive moving average model with exogenous signal (ARMAX); the physiological signals were treated as having known amplitudes and frequencies, and the variation in the HRF was modeled by the ARMA part of the model. Hong and Nugyen (2014) used the same basis set as described in Aqil et al. (2012b) to model the impulse response as a state-space model. Independent component analysis (ICA) is a powerful blind signal processing technique (Katura et al., 2008) that can extract independent components from a measured discrete series using statistical concepts. Morren et al. (2004) analyzed and detected the fast neuronal signal with a source-detector separation of 3 cm using the ICA technique. Zhang et al. (2013) employed ICA to explore the existence of a particular waveform, modeled as Gamma variants. Similarly, Santosa et al. (2013) used ICA to extract the pHRF from a regression vector including the pHRF, a baseline correction and physiological noises. NIRS data analysis is also performed in the medical field for the detection of different brain diseases; Machado et al. (2011) used the GLM methodology to estimate the existence of hemodynamic responses to epileptic activity. As a general comparison, most of the above methods can be compared on the basis of computational cost. For example, ICA is computationally more expensive than ordinary/recursive least squares estimation for unknown parameter estimation. The initial dip is a crucial attribute and indicator of neuronal activity, as it points to the particular location from which the activity originates. Thus, a Gamma-function basis set could be more advantageous if the initial dip is also modeled and analyzed. Table 2 summarizes the studies describing methodologies to extract the neuronal-activity-related wave pattern from the fNIRS signal.
HRF Model Using State-Space Model
The fNIRS time series is a discrete data series that can be converted into a state-space model for further analysis. Aqil et al. (2012b) summarize the state-space model of the optical time series using a standard subspace-based approach, but the numeric values of the final matrices were not displayed; the brain model used in their work is similar to that of Aqil et al. (2012a). Later, Hong and Nugyen (2014) converted the fNIRS cortical signal model (the same as in Aqil et al., 2012a) into a state-space model using a standard subspace-based approach. They summarized the mathematical derivation and used a built-in MATLAB function to extract the final state-space model and matrices of order six, and they reported the numeric values of the final matrices for different brain regions. Kamran and Hong (2013) presented the idea that a linear parameter varying model could be beneficial for tackling the time-varying characteristics of the human brain signal; in their work, the measured optical data are modeled as a state-space model whose matrices depend upon a time-varying parameter, but a final state-space model was not reported. Modeling using state-space methods could be beneficial over other estimation methodologies with recursive algorithms, because the analysis in a state-space framework would be much easier if a model were developed that treats the attributes of the HRF as variable parameters.
Physiological Noises
fNIRS data analysis faces the additional challenge of temporal correlation present in the data due to physiological signals. The physiological noises include the cardiac beat, the respiration rhythm, and low-frequency fluctuations known as Mayer waves. In most studies, the physiological signals are pre-filtered using standard signal filtering techniques. Prince et al. (2003) presented the idea of modeling the biological signals as a set of sinusoids. Zhang et al. (2007) proposed to nullify the effects of global interference by using a multi-separation probe configuration (placing a detector close to the source) and adaptive filtering. Abdelnour and Huppert (2009) included physiological signals as known regressors in their regression set. Hu et al. (2010) added a set of high-pass filters with a cut-off frequency of 0.0006 Hz to tackle the physiological signals. Zhang et al. (2011b) further analyzed multi-distance source-detector separations by decomposing the short-distance source-detector measurement into intrinsic mode functions (IMFs); an estimate of the global interference is derived by analyzing the weight coefficients of the IMFs. Cooper et al. (2012) analyzed simultaneous fNIRS and fMRI recordings to reduce the physiological effects: they calculated the variance of the residual error in a GLM of the baseline fMRI signal, and the observed variance was reduced by incorporating the NIRS signal into the model. Zhang et al. (2012) proposed the removal of physiological effects from simulated fNIRS data sets of near and far detectors using a recursive algorithm.
Table 2 | Studies describing methodologies to extract the neuronal-activity-related wave pattern from the fNIRS signal (reference: methodological details).
Jobsis, 1977: Possibility to detect changes of cortical oxygen using NIR light.
Cope and Delpy, 1988: Design of an NIR system with four wavelengths (778, 813, 867, and 904 nm), applying the modified Beer-Lambert law for data conversion.
Friston et al., 1994: Statistical parametric mapping software for fMRI, later used for fNIRS data analysis with modifications.
Boynton et al., 1996: HRF model with one Gamma function with two free parameters.
Prince et al., 2003: Biological signals modeled as a sum of sinusoids.
Jasdzewski et al., 2003: Impulse response, initial dip, and time-to-peak analysis in the fNIRS signal.
Koh et al., 2007: Functional optical signal analysis (fOSA) software introduced, based on the GLM methodology.
Plichta et al., 2006, 2007: GLM methodology with ordinary least squares estimation to generate functional maps of the visual cortex.
Taga et al., 2007: Analysis of the effect of source-detector separation on the fNIRS hemodynamic response.
Koray et al., 2008: Estimation of constrained HRF parameters in a Bayesian framework.
Abdelnour and Huppert, 2009: GLM-based methodology with Kalman filtering to estimate handedness.
Ye et al., 2009: GLM-based NIRS-SPM software package for the analysis of fNIRS data.
Hu et al., 2010: Brain functional maps using GLM and Kalman filtering.
Zhang et al., 2011b: Recursive least squares (RLS)-empirical mode decomposition for noise reduction.
Zhang et al., 2012: RLS estimation with a forgetting factor to remove physiological noise.
Aqil et al., 2012a: GLM and RLSE for estimation of brain functional maps.
Aqil et al., 2012b: Generation of the cHRF using a state-space approach.
Scarpa et al., 2013: Reference-channel-based methodology for estimation of the evoked response.
Santosa et al., 2013: ICA methodology to estimate a pre-defined cortical activation signal.
Kamran and Hong, 2014: Linear parameter varying model and adaptive filtering to estimate the HRF and functional maps of the brain.
Barati et al., 2013: Principal component analysis of continuous fNIRS data (using a spline method).
Kamran and Hong, 2014: Auto-regressive moving average with exogenous signal (ARMAX) model for cortical activation estimation.
Hong and Nugyen, 2014: State-space model for the impulse response using fNIRS.
Yamada et al. (2012) proposed that functional and systemic responses can be separated on the basis of the negative and positive linear relationships between HbO and HbR changes in the functional and systemic signals, respectively. Later, Kirilina et al. (2012) included an additional predictor to account for systemic changes in the skin in order to analyze the time course, localization and physiological origin of task-related superficial signals in fNIRS data; they found that skin blood volume depends upon the cortical state and that the task-related systemic signals in fNIRS are co-localized with the veins draining the scalp. Frederick et al. (2012) generated regressors for systemic blood flow and oxygenation fluctuation effects by applying a voxel-specific time delay to concurrently acquired fNIRS-fMRI time series. Kamran and Hong (2014) added three sinusoids with known frequencies and amplitudes as the exogenous signal in their ARMAX framework. It is very important to determine the frequencies and amplitudes of the sinusoids present in the measured data, in addition to their origin, since an experimental paradigm whose frequency coincides with a physiological noise can result in the generation of harmonics.
Most previous studies used a generic fixed frequency and amplitude for the known sinusoids to account for the physiological signals in fNIRS data. The estimation algorithm recently published by Kamran et al. (2015) is more advantageous because it allows the user to estimate the frequencies and amplitudes of the physiological signals automatically from the measured data instead of using a fixed pattern. Studies related to the removal of physiological noises from the fNIRS signal are summarized in Table 3, describing the cortical area, number of subjects, nature of the mental task, methodology used and source-detector separation.
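The "sum of sinusoids" idea can be sketched as a simple nuisance regression. The frequencies below (about 1 Hz for cardiac, 0.25 Hz for respiration and 0.1 Hz for Mayer waves) are assumed from the typical ranges quoted in this review, the signal is simulated, and in a real analysis these regressors would be fitted jointly with the task regressor rather than subtracted on their own.

```python
import numpy as np

fs, duration = 10.0, 60.0                 # sampling rate (Hz) and record length (s)
t = np.arange(0.0, duration, 1.0 / fs)
rng = np.random.default_rng(2)

# simulated Delta-HbO: slow task-related component + physiological oscillations + noise
task = 0.5 * np.sin(2 * np.pi * t / 40.0) ** 2
physio = 0.20 * np.sin(2 * np.pi * 1.00 * t) + 0.15 * np.sin(2 * np.pi * 0.25 * t + 0.5)
y = task + physio + rng.normal(0.0, 0.05, t.size)

# nuisance regressors: sine/cosine pairs at the assumed physiological frequencies
freqs = [1.00, 0.25, 0.10]                # cardiac, respiration, Mayer waves (assumed)
N = np.column_stack([f(2 * np.pi * fq * t) for fq in freqs for f in (np.sin, np.cos)])
N = np.column_stack([N, np.ones(t.size)]) # plus a constant baseline term

coef, _, _, _ = np.linalg.lstsq(N, y, rcond=None)
y_clean = y - N[:, :-1] @ coef[:-1]       # remove the fitted sinusoids, keep the mean
```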
Resting State Functional Connectivity
The human brain generates continuous low-frequency fluctuations during the resting state. These low-frequency fluctuations can be used as an informative source to understand the mechanisms of different brain regions, because they are correlated across different brain regions. These findings opened up a new line of research known as resting-state functional connectivity (RSFC). There is currently no consensus on a specific frequency range for the resting-state low-frequency fluctuations of the measured hemodynamic waveform (Fox and Raichle, 2007; Lu et al., 2010). fMRI studies have reported a high level of inter-hemispheric correlation in different brain regions (Damoiseaux et al., 2006; De Luca et al., 2006). White et al. (2009) reported an RSFC analysis of the motor and visual cortices, calculating correlations using the Pearson correlation coefficient; their results suggest that inter-hemispheric correlations exist in both motor and visual networks. Lu et al. (2010) analyzed the RSFC maps of the sensorimotor and auditory cortices using seed-based correlation analysis and data-driven cluster analysis during resting-state and motor-localizer task sessions. Their results suggested that RSFC was detected both within the ipsilateral and between the bilateral sensorimotor seed regions. Additionally, it was found that significant correlations exist within the ipsilateral and between the bilateral temporal auditory cortices, but not between temporal auditory areas. Zhang et al. (2011a) raised the issue of the reliability of RSFC maps; they analyzed test-retest reliability at three different scales (map-, cluster- and channel-wise) at the individual and group levels. Their findings suggest that one should be very careful when interpreting individual channel-wise RSFC, whereas individual- and group-level RSFC show excellent map-/cluster-wise reliability. Trial-to-trial variability (TTV) in the fNIRS signal exists even if the experimental procedure is kept constant. Hu et al. (2013) suggested reducing TTV using RSFC information; they concluded that low-frequency fluctuations are a significant source of TTV and that TTV decreases after removing the effects of bilateral connectivity. Since fNIRS optodes cannot cover the whole head surface, it is difficult to analyze the RSFC between all brain regions (Lu et al., 2010). One possible way of covering the whole skull is to increase the number of optodes, but this would affect the temporal resolution of the equipment.
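Channel-wise RSFC of the kind computed in these studies reduces to a Pearson correlation matrix over the resting-state time courses, as in the sketch below; the eight-channel HbO data are simulated around a shared low-frequency fluctuation.

```python
import numpy as np

rng = np.random.default_rng(3)
K, n_ch = 3000, 8                           # samples and fNIRS channels

# simulated resting-state HbO: a shared low-frequency fluctuation plus channel noise
shared = np.cumsum(rng.normal(0.0, 0.01, K))          # slowly drifting component
data = np.array([0.7 * shared + rng.normal(0.0, 0.05, K) for _ in range(n_ch)])

rsfc = np.corrcoef(data)                    # channel-by-channel Pearson correlation matrix

seed = 0                                    # seed-based map: correlation with one seed channel
seed_map = rsfc[seed]
print(np.round(seed_map, 2))
```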
Environmental and Instrumental Effects/Artifacts/Noises
The fNIRS data series includes hemodynamic signal components related to certain artifacts. These artifacts can be related to biological processes or to external sources. Instrumental noise is one of the major external sources, and its effect can be reduced or nullified by proper calibration of the instrument. Other artifacts can result from poor contact between the skull and the NIRS optodes: uncoupling of the optodes is a source of fluctuations in the detected intensity, leading to incorrect results. Thus, it must be ensured that the optodes make proper contact at the correct angle; removal of unnecessary hair at the contact point can also improve the quality of the signal. Another artifact is due to motion of the subject (body tilt, slight head movement, and breath holding). Such motion causes changes in blood flow, which is a major source of fluctuations in the measured hemodynamic response. A crude way to remove motion artifacts is to average a certain number of trials so that their effects are nullified, but for real-time BCI applications it is necessary to estimate the actual contribution of the motion artifacts in a single trial. Yamada et al. (2009) presented a theoretical analysis of the optical signal using Monte Carlo simulation and proposed that a multi-distance probe arrangement can reduce or eliminate artifacts in fNIRS data. Later, Robertson et al. (2010) experimented with a co-located channel configuration to analyze known motion artifacts around three axes; they found that the motion-related hemodynamic signal is detectable at co-located channels but not at a unique channel. Cui et al. (2010) studied the effect of head-motion-induced artifacts on fNIRS data and found that, although oxy- and deoxy-Hb are generally negatively correlated, head motion causes the correlation to become more positive; they proposed a correlation-based signal improvement method that maximizes the negative correlation between the oxy- and deoxy-Hb signals. Haeussinger et al. (2011) developed a method to identify channels with major extra-cranial signal contributions and subtracted the average of these channels from all channels to obtain improved fNIRS signals. It can thus be concluded that movement in different directions causes changes in the absorption of light at different brain regions.
Statistical Significance and Functional Maps
fNIRS signals are highly corrupted by several measurement noises and physiological interferences. Therefore, careful statistical analysis is required to extract the neuronal-activity-related signal from the observed optical data (Tak and Ye, 2013; Kamran and Hong, 2014). The existence of a particular response HRF(k) in the measured data is assessed by a statistical analysis known as the t-test (Hu et al., 2010; Kamran and Hong, 2013; Santosa et al., 2013, 2014; Hong and Nugyen, 2014). The basic idea is to test whether the estimated value of the activity strength parameter is significantly greater or less than a target value of zero (t-value > t_critical and p < 0.05). This is equivalent to testing the null hypothesis H0: β1 = 0 against the alternative hypothesis H1: β1 ≠ 0. Finally, the t-value is evaluated as

t = \frac{\hat{\beta}_1}{SE(\hat{\beta}_1)},

where SE is the standard error of the estimated coefficient.
However, in practice, multiple-comparison problems often need to be addressed when analyzing fNIRS data, because such analyses can otherwise inflate the inference error. Therefore, in addition to analyzing the multi-channel data gathered with the fNIRS modality, necessary checks are required so that each individual data set is analyzed with care and a strong level of evidence, reducing the inference error in subsequent steps. For example, in Plichta et al. (2006), the Bonferroni correction and the Dubey/Armitage-Parmar alpha boundary were used for statistical inference on activated channels to estimate the statistical significance of the fNIRS response during task periods. A detailed review of the statistical analysis of fNIRS data can be found in Ye et al. (2009) and Tak and Ye (2013).
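The channel-wise inference above can be sketched as follows: fit the GLM by ordinary least squares, take the standard error of the activity-strength estimate from the residual variance, and compare the resulting t-value against the Student t distribution. The regressor and data are simulated, and the crude exponential-kernel regressor stands in for a proper pHRF.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
K = 400
# crude smoothed-boxcar regressor standing in for the predicted HRF
box = np.r_[np.zeros(50), np.ones(100), np.zeros(250)]
x = np.convolve(box, np.exp(-np.arange(0.0, 10.0, 0.1)))[:K]
X = np.column_stack([x, np.ones(K)])        # [regressor, baseline]
y = 0.6 * x + rng.normal(0.0, 0.3, K)       # simulated measurement

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = K - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])   # SE of the activity-strength estimate

t_val = beta[0] / se
p_val = 2.0 * stats.t.sf(abs(t_val), dof)             # two-sided p-value
print(t_val, p_val, (p_val < 0.05) and (t_val > 0))   # declare the channel active or not
```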
Challenges
The fNIRS signal is not consistent among subjects, repeated trials, or repetitions of an experiment, even if the conditions are assumed to be similar. Therefore, modeling the optical signal constitutes an additional challenge for researchers working in this area. The fNIRS time series is a combination of several physiological signals (Hu et al., 2010): the cardiac beat (~1 Hz), the respiratory rhythm (~0.2-0.3 Hz) and low-frequency fluctuations (<0.1 Hz), among others. The first step is to estimate the frequency of each signal present in the observed time series; in addition, the amplitudes of the signals present in the data must be estimated. The next target is to determine the attributes of the HRF. There are several important characteristics, e.g., the initial dip, FWHM, time to peak, peak height, and post-stimulus undershoot; thus, a non-linear model is required to incorporate such attributes. For instance, the two-Gamma-function model is the most attractive because each Gamma function represents one peak (the actual response and the post-stimulus undershoot). Furthermore, an iterative non-linear optimization algorithm is needed to estimate the free parameters of the model with significant accuracy. Finally, more precise statistical support is required to state that the estimates in the model are statistically significant. Another important factor is the design of the experimental paradigm: the paradigm frequency must differ from the physiological frequencies to avoid harmonics. Brain functionality is a complex and coupled non-linear system, and the analysis of the coupling between different brain regions is also a fundamental step toward improving fNIRS analysis. Considering that fNIRS optodes cannot cover the full skull, the number of optodes could be increased, but this would also reduce the temporal resolution. Thus, an optimal source-detector separation needs to be established for significant and maximal surface coverage.
CONCLUSION
Brain engineering is a multi-disciplinary field focused on extracting useful information from cortical signals observed by neuroimaging equipment. In this article, recent advances in the analysis of the optical signal observed through fNIRS are summarized. It is important for new researchers to understand the importance of the pre-processing steps and the effects of the DPF and other factors during analysis; recent conclusions regarding these factors are also presented. Additionally, the different methodologies developed in the past to extract the neuronal-activation-related waveform (pre-processing steps, effects of the DPF, variations and attributes of the hemodynamic response function (HRF), extraction of the evoked response, removal of physiological, instrumentation, and environmental noises, and resting-/activation-state functional connectivity) are summarized. Since systemic, instrumentation, and environmental noises affect the measured signal and its analysis, the reduction or removal of such noises must be performed carefully, and special consideration must be given to the selection of the experimental paradigm to avoid physiological harmonics. Several methodologies reported in the past decade for noise removal are also summarized here. It is well known from fMRI studies that different brain regions are connected during resting and task periods; thus, it is very important to analyze such connections using fNIRS as well, and a brief review of RSFC is therefore included in this article.
AUTHOR CONTRIBUTIONS
MK did the literature review and wrote the paper. MN participated in the literature review and in revising the article. MJ suggested the theoretical aspects of the current study and supervised the whole process from the beginning. All authors have approved the final manuscript.
|
v3-fos-license
|
2023-08-13T15:10:15.204Z
|
2023-01-01T00:00:00.000
|
260851716
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/50/e3sconf_interagromash2023_05004.pdf",
"pdf_hash": "ee6725840b7e716992b42baa94cfff2f823dee45",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43164",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "0584eaa65e7495100690a0ff4bed361ae0f11db0",
"year": 2023
}
|
pes2o/s2orc
|
Entropy in the structure of natural and technical systems "natural environment – object of activity – population"
The purpose of this article is to evaluate the categorical concept of entropy in the structure of natural and technical systems related to the use of water resources, the quantitative and qualitative indicators of which are formed within the spatial limits of river basin geosystems. Using fundamental basic concepts (energy as a universal measure of the forms of motion of matter, time, system, and the systems approach), important concepts (irreversibility, ecological state, environmental safety) and the fundamental laws of thermodynamics (conservation and change), the aim is to give a detailed understanding of entropy in the zones of influence of the "Object of Activity", in the form of a complex of hydraulic structures and associated buildings, as part of the natural-technical system "Natural Environment – Object of Activity – Population" (NTS "NE-OA-P") concerned with the use of water resources in economic and other activities.
Introduction
The world around us is one and harmonious. The Earth's biosphere (≈10^10 km³) has existed for more than three billion years as a self-sufficient, self-regulating system that includes a variety of living organisms interacting with abiotic elements as part of the "Natural Environment" component [3].
The paradigm of the development of human civilization, from its origin to the modern stage, has been characterized by the consumer principle: to take from nature all available resources in ever-increasing quantities. Diverse economic and other activities are inherently connected with the processes of human and social life, as well as with processes of subjective transformation of forms of energy and substances that have a direct impact on the formation of the ecological state in the natural environments: the surface layers of the atmosphere, the hydrosphere in the catchment area of the river hydrographic network, the upper layers of the lithosphere, and the soil cover with its underlying rocks [2].
"Natural environment", as a natural component with biotic and abiotic elements included in it in the river basin geosystems under consideration, where quantitative and qualitative indicators of water resources used in economic and other activities are formed at any hierarchical level of the Earth's biosphere, are the most important component in the human environment and the basis of life on Earth [4].
The problems of the exhaustion of natural resources, the greenhouse effect and thermal pollution of the natural environment cover all spheres of life of modern society and are systemic, both at the level of the global system "Nature-Society-Man" and at the level of the created and operating local natural-technical systems "Natural Environment-Object of Activity-Population" related to the use of water resources and protection from the negative impact of natural waters in the catchment areas of river basin geosystems [3]. The solution of these important problems necessitates the search for new methodological approaches to the use of accumulated knowledge about nature, as well as the creation of new, and improvement of existing, technologies for the use of water resources in various spheres of economic and other activities of society [19].
The existing and newly created NTS "NE-OA-P", located within the spatial limits of river basin geosystems where the quantitative and qualitative indicators of water runoff (surface and underground) are formed as an integral element of the global moisture turnover of the Earth's biosphere (577 thousand km3) under the influence of energy flows (35.6 TW) coming from the Sun, occupy the lowest levels in the hierarchy of natural basin geosystems and are in interrelation, interaction and relationship with natural basin geosystems of a higher hierarchical level [7,9]. The NTS "NE-OA-P" belong to the class of complex open systems in which the natural processes of interrelation, interaction and relationship between the biotic and abiotic elements of the natural component "NE" dominate over the processes of interaction of the technogenic component "OA" with the natural "NE" and social "P" components [5]. It should be noted that a functioning NTS "NE-OA-P" within the spatial limits of a river basin geosystem forms a special unity with the environment of the natural component "NE" in space and time, in which the "ecological state" acts as a factor of "ecological safety" in the zones of "OA" influence.
To ensure environmental safety within the spatial limits of the zones of influence of "OA", it is important to assess the interaction, interrelation and relationship of "OA" with the natural component "NE" and the biotic and abiotic elements of the considered NTS "NE-OA-P" [6].
The interrelation, interaction and relationship with the natural biotic and abiotic elements of the natural component "NE" is, in a generalized sense, caused by the processes of transformation of forms of energy and substances that form the "ecological state" within the spatial limits of the zones of "OA" influence (Fig. 1) [7,18].
Research methodology
The concept of entropy, introduced by Rudolf Clausius (1865), plays a universal role in natural systems and, accordingly, in the NTS "NE-OA-P", and defines the basic laws both at the global level of the Earth's biosphere and at the local level of river basin geosystems. For the class of NTS "NE-OA-P", as established by the results of research, entropy reflects the thermodynamic properties characterizing the intra-system processes of interconnection, interaction and relationship between the natural ("NE"), technogenic ("OA") and social ("P") components, which function in accordance with the second law of thermodynamics [5]. The technogenic component "OA", with its structural elements, includes various types of hydraulic structures, buildings, mechanisms, devices, etc., whose functional task is to transform forms of energy in the processes of interaction with the elements of the natural component "NE" (channel water flow, river ichthyofauna, river floodplain, the regime of surface and underground water runoff, etc.). This transformation is characterized by the balance ratio of the free part of the energy (E free), which is able to perform work, and the bound part of the energy (E bnd), which is not able to perform work and eventually turns into the most stable form of energy: heat [14].
The existing and newly created NTS "NE-OA-P" within the boundaries of the catchment area of a river basin geosystem, whose functional purpose is the regulation and use of water runoff (surface and underground) in the technological processes of economic and other activities, are considered within the framework of fundamental energy indicators, namely the laws of thermodynamics: conservation and change in the zones of "OA" influence [12].
Among all the achievements of science from the second half of the XIX century to the present, one of the most important is the second law of thermodynamics, which states that our universe is becoming more and more disordered and that this process cannot be reversed. At the global level, the processes of vital activity of living matter, its interaction with inert matter and the technosphere created by man make a certain contribution to this tendency of growing disorder, both within the Earth's biosphere and in the higher hierarchical systems of the Universe [11].
For a small Universe, the concept of entropy proceeds from the thermodynamic observation that natural changes increase entropy, while "unnatural" (externally driven) changes can only reduce the rate of its growth. This concept expresses the second principle of thermodynamics in the formulations of Kelvin and Clausius, both of which can be reflected in one simple statement: natural processes are accompanied by an increase in entropy, and unnatural processes are accompanied by a decrease in the rate of entropy growth [13].
By the middle of the XIX century, the famous physicists Rudolf Clausius, Nicolas Sadi Carnot, James Joule and Lord Kelvin had laid the foundations of modern thermodynamics as a most important scientific direction in physics and technology. Thermodynamics, as a general theory of the collective properties of complex systems, describes not only the work of various machines, bacterial colonies and computer memory devices; it can also reasonably be assumed to describe the processes of interaction between the natural "NE" (biotic, abiotic), technogenic "OA" and social "P" components of the NTS "NE-OA-P" within the boundaries of the river basin geosystems where water resources are formed and used [16].
The concept of entropy, originally introduced by Rudolf Clausius in 1865, plays a universal role in such systems and, accordingly, in the NTS "NE-OA-P". Entropy in the considered natural systems and in the NTS "NE-OA-P" determines many patterns in the processes of evolution, both at the global level of the Earth's biosphere and at the local level of river basin geosystems, which constitute a small Universe of their own.
Entropy, as has been established, is one of the fundamental basic concepts standing next to the concept of energy as its figurative shadow, energy being a universal measure of the various forms of motion of matter. It should be noted that in the considered class of NTS "NE-OA-P" the concept of entropy reflects the thermodynamic properties characterizing the intra-system processes of interconnection, interaction and relationship (IIR) between the natural "NE", technogenic "OA" and social "P" components and their elements in the considered space of the river basin geosystem; these processes have a statistical (probabilistic) nature and include evolutionary phenomena in the interactions between the components. Entropy in the considered class of NTS "NE-OA-P" determines the intensity (speed) of the processes of transformation of intra-system forms of energy. Monitoring observations of the speed of these processes make it possible to judge the direction of evolutionary changes in the natural "NE" and social "P" components in the zones of influence of "OA" as part of the NTS "NE-OA-P". Such monitoring observations are carried out in the zones of influence of the technogenic component "OA" as part of the NTS "NE-OA-P" in the form of a complex of hydraulic structures (CHS): reservoir waterworks, water supply, coupling and other types of HS [17].
In a generalized sense, all complex systems, including the NTS "NE-OA-P", behave in the same way and function in accordance with the second law of thermodynamics: in the processes of interaction between the components of the NTS "NE-OA-P", forms of energy are transformed with the inevitable production of a low-quality, bound part of the energy (E bnd), which determines entropy [5,15]. The statement that any system steadily degrades can, however, be countered by the example that self-organization processes and, accordingly, the growth of orderliness are observed in nature. This is confirmed by the theoretical studies of Lars Onsager, Ilya Prigozhin [5,15] and others, which at the same time confirmed the universality of the second law of thermodynamics. In a strict sense, the second law of thermodynamics applies only to systems in a state of equilibrium, when the mass, energy and configuration of the system do not change or have ceased to change. But in reality, as applied to the considered NTS "NE-OA-P", we observe changes both in the natural environments (the atmosphere, hydrosphere, geological environment and soil cover) and in the "OA", which have been operated for more than a dozen years. Consequently, time plays an important role both for the natural environments and for the "OA".
The second law of thermodynamics states that the growth of entropy in the considered NTS "NE-OA-P" is inevitable. On the one hand, the natural environment "NE" surrounding the "OA" within the spatial limits of the river basin geosystem, together with the complex of hydraulic structures, associated buildings and roads being created as part of the "OA", accumulates disorder; on the other hand, processes of self-organization (ordering) are observed on the existing "OA" due to the regulation and management of river flow within the boundary of the catchment area. This necessitates a very cautious approach to assessing the resulting disorder (chaos) and the formation of a controlled order in the processes of regulating river flow [6].
If the second law of thermodynamics is universal, as modern research in the theory of thermodynamics claims, then it can be used in the study of the NTS "NE-OA-P", which occupy local spatial limits of the natural environment whose boundaries are determined by the patterns of the influence of "OA" on the natural environments [1,2]. The question arises: how does the second law of thermodynamics work within the NTS "NE-OA-P" [15,16]?
In open natural systems, where the biotic and abiotic elements of the natural component "NE" interact with each other under a continuous inflow of solar energy, there is an evolutionary formation and development of diverse ecosystems within the river basin geosystem under consideration [14]. The operating "OA" as part of the NTS "NE-OA-P" introduces certain changes into the natural processes of interaction of the biotic and abiotic elements of the natural component "NE", which can cause either degradation or a balanced interaction of the "OA" with the natural "NE" and social "P" components, the latter being characterized by dynamic equilibrium. The second law of thermodynamics explains why the sequence of states cannot be reversed and why the system cannot return to its original state. It also states that in the processes of transformation of forms of energy in the considered NTS "NE-OA-P", the state of the system in the space and time of the river basin geosystem within the zones of "OA" influence is determined by the energy potential, which can rise or fall; but where there is energy, there is its shadow, entropy, which determines an important systemic concept: irreversibility [15].
Entropy is a quantity whose growth rate determines the intensity of the processes of transformation of forms of energy. By observing the rate of entropy growth, it is possible to determine the direction of the evolutionary functional development of the system, which is inherently connected with the processes of change and conservation. A decrease in the rate of entropy growth in a particular system is achieved by the constant and efficient dissipation of easily used energy (solar energy, food, etc.) and its conversion into the most stable form of energy, heat [1]. Entropy is usually treated as the degree of disorder of the system, but in reality this can be misleading. Based on the results of research on the NTS "NE-OA-P", it is advisable to identify the structure-forming elements of the natural component within the zones of influence of "OA" [16]. Thus, in a complex NTS "NE-OA-P" the natural component "NE" and the technogenic component "OA" are considered in the form of individual elements (subsystems): climatic characteristics; the surface layers of the atmosphere; flora; fauna; the formed hydrographic network, within which the runoff (surface and underground) is formed; the geological environment of the upper layers of the lithosphere; the "OA" as the technogenic component of the system under consideration; and the population "P" living in the zones of influence of "OA". Consequently, the considered class of NTS "NE-OA-P" is based on three basic components, "NE", "OA" and "P", which include separate elements, the number of which in each component is chosen depending on the tasks to be solved [6].
The resulting elements of the natural "NE" and man-made "OA" components are considered as subsystems of a lower hierarchical level. The aggregate and collective properties of the components, with the elements included in them, as part of the NTS "NE-OA-P" are for convenience called integral or system-forming, and their quantitative assessment is given by the integral characteristics of the corresponding quantitative indicators [1]. An important characteristic of the NTS "NE-OA-P" is its structure, which reflects the many connections and interactions between the components and the elements included in them; these determine the most important quantities in the processes of converting forms of energy within the system, for example, the potential energy of the water flow at hydroelectric power plants.
For the considered NTS "NE-OA-P", entropy characterizes the state of the structural formations in the biotic and abiotic elements of the natural component "NE" and of the individual structural elements of the technogenic component "OA". The variability of the state of these structural formations is due to the fact that the NTS "NE-OA-P" consists of a variety of structural formations: the diversity of the plant and animal worlds, climatic indicators, the soil cover, the hydrographic network, the air environment in the surface layers of the atmosphere, the geological environment in the upper layers of the lithosphere, etc., as well as the "OA" in the form of a reservoir waterworks, a complex of hydraulic structures, water supply and sanitation systems, etc. The "OA" affects each structural formation of the natural component "NE" differently and, accordingly, the energy-entropy state of each will be different, which causes this variability [8,10].
The state of individual structural formations and their constituent elements, as well as of the NTS "NE-OA-P" as a whole, is determined by the level of entropy (S) through the balance ratio of the free (E free) and bound (E bnd) parts of the energy, which is expressed by the efficiency coefficient Ƞ in the form:

Ƞ = E free / E total,  (1)

where E total = E free + E bnd is the total flow of energy entering the system. Consequently, the higher the entropy, the greater the number of individual elements and structural formations of the NTS "NE-OA-P" that can be in a state of increasing disorder and the larger the bound part of the energy E bnd, which causes a decrease in their functional reliability, expressed by the efficiency of using the free part of the energy E free, for example the energy of a watercourse at hydroelectric power plants [13,14].
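As a rough numerical illustration of this balance ratio, the short Python sketch below computes the efficiency coefficient and the bound-energy share from hypothetical energy values; the function name and the figures are illustrative assumptions, not quantities taken from any real NTS.

# Minimal sketch of the energy balance behind equation (1):
# efficiency = E_free / E_total, with E_total = E_free + E_bnd.
# The input figures are hypothetical and only illustrate the relation.

def efficiency(e_free: float, e_bnd: float) -> float:
    """Share of the incoming energy flow that is still able to perform work."""
    e_total = e_free + e_bnd
    if e_total <= 0:
        raise ValueError("total energy flow must be positive")
    return e_free / e_total

if __name__ == "__main__":
    e_free = 70.0   # free (workable) part of the energy flow, arbitrary units
    e_bnd = 30.0    # bound (dissipated) part of the energy flow
    eta = efficiency(e_free, e_bnd)
    print(f"efficiency = {eta:.2f}")        # 0.70
    print(f"bound share = {1 - eta:.2f}")   # 0.30, the part associated with entropy growth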
The entropy of the NTS "NE-OA-P" is characterized by the number of different microstates of the considered individual elements and structural formations of the natural "NE" and man-made "OA" components that correspond to a certain macrostate of the system within the considered river basin geosystem. Mathematically, entropy is proportional to the logarithm of the number of such microstates. The changes occurring in the NTS "NE-OA-P" under the influence of the processes of transformation of forms of energy are accompanied by an increase in entropy, since the efficiency is always less than one; the previous state therefore differs from the present one, that is, the ongoing processes have a direction, which gives rise to the "arrow of time".
The change in entropy in open systems, and accordingly in the considered NTS "NE-OA-P", is determined by the well-known equation of Ilya Prigozhin [15]:

dS = d_eS + d_iS,  (2)

where dS is the total change of entropy in the system over a period of time dt; d_iS is the change in entropy caused by irreversible processes within the system (the production of entropy), which is always positive or equal to zero, i.e. d_iS >= 0; and d_eS is the entropy exchanged with (imported from) the surrounding environment, defined by the expression:

d_eS = d_eS(energy) + d_eS(matter),  (3)

where d_eS(matter) denotes the exchange of entropy caused by the flows of matter coming from the catchment area of the river basin geosystem as a result of water and chemical erosion and the drainage systems of urbanized territories.

According to the second law of thermodynamics, d_iS is always positive, while d_eS can be either positive or negative (Fig. 2). In general, irreversible changes within the zones of influence of "OA" in the spatial limits of the river basin geosystem are associated with flows J (of solar energy or matter: water, solid and ion runoff, etc.) over time dt. The change in entropy in the system caused by a single such process can then be represented as:

d_iS = X J dt,  (4)

where X is the generalized (thermodynamic) force, expressed as a function of variables such as temperature, relative humidity, the concentration of substances (bottom sediments, suspended sediment, ion runoff, etc.) and the acting pressure H m on the structures, and J is the corresponding flow.
In the considered class of NTS "NE-OA-P", the total irreversible processes are expressed as the sum of all changes caused by flows, determined by monitoring studies at the stage of the EIA procedure during the periods of design, construction and operation of "OA", and can be represented by the expression:

d_iS/dt = Σ X_k J_k >= 0,  (5)

which in a generalized form reflects the second law of thermodynamics: the entropy produced in the system by each irreversible process is determined by the product of its force X_k and flow J_k. The methodology for studying the intra-system processes of the NTS "NE-OA-P" is based on the theory of open systems, formulated in the second half of the last century by I.R. Prigozhin [5,15]. Based on the concept of irreversible processes, the theoretical foundations of nonequilibrium nonlinear thermodynamics were developed, in which the concept of closed systems was replaced by the fundamentally different basic concept of an open system that has the ability to exchange matter, energy and information with the environment (MEI).
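To make the entropy balance of equations (2)-(5) concrete, the following Python sketch sums hypothetical force-flow products to obtain the internal production term and adds an assumed exchange term; all names and numbers (the forces, flows and the value of d_e_s) are invented placeholders rather than monitoring data from any real system.

# Sketch of the Prigozhin-type entropy balance dS = d_eS + d_iS,
# with internal production d_iS/dt = sum_k X_k * J_k >= 0.
# Forces X_k, flows J_k and the exchange term are hypothetical numbers.

forces = {"temperature_gradient": 0.8, "concentration_gradient": 0.5, "pressure_head": 0.3}
flows  = {"temperature_gradient": 1.2, "concentration_gradient": 0.9, "pressure_head": 0.4}

def internal_production(x: dict, j: dict) -> float:
    """Entropy produced by irreversible processes inside the system (always >= 0)."""
    return sum(x[k] * j[k] for k in x)

d_i_s = internal_production(forces, flows)
d_e_s = -0.6          # exchange with the environment; may be positive or negative
d_s = d_i_s + d_e_s   # total change of entropy over the time step

print(f"d_iS = {d_i_s:.2f}, d_eS = {d_e_s:.2f}, dS = {d_s:.2f}")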
Based on the analysis of various natural models, it can be noted that the hierarchical structure of nature is aimed at reducing the total entropy of the Universe, while the dynamics of interactions between natural components is aimed at the natural growth of entropy. The interplay of these two trends actually determines the evolutionary development of the physical world, which is observed both in living organisms and in natural systems such as the river basin geosystem. Thus, in the biological systems of living organisms, the processes of reproduction, mutation and recombination are aimed at increasing entropy, while the processes of aggregation (molecules into cells, cells into organisms, organisms into communities) are aimed at reducing the level of entropy. In natural macrosystems such as river basin geosystems, similar trends can be noted: the interaction of the biotic and abiotic elements of the natural component "NE" causes a tendency of entropy growth towards an increasing level of disorder and the production of a bound part of the energy (E bnd), transformed into its stable form, heat.
Results and their discussion
What, then, is the concept of entropy in the considered class of NTS "NE-OA-P"? Entropy is the accumulated flow of the bound part of the energy, E bnd, based on the balance ratio of the free (E free) and bound (E bnd) parts of the energy.
With the introduction of the technogenic component "OA" into the spatial limits of the river basin geosystem in the form of a certain complex of hydraulic structures (reservoir, water intake, derivational, water-transporting, protective, etc.), there is an ordering (management) of the structural connections, interactions and relationships between the natural component "NE", with its biotic and abiotic elements, and the other components, and accordingly a decrease in the growth rate of entropy and of the bound part of the energy E bnd. The decrease in entropy growth rates in the NTS "NE-OA-P" is estimated in comparison with the background state of the spatial limits of the river basin geosystem, which is determined by system integrated environmental studies (SIES) at the stage of engineering and environmental surveys.
Thus, it can be noted that the ordering of connections, interactions and relationships in the structural formations of river basin geosystems occurs through the creation of the NTS "NE-OA-P", in which the control of natural processes (hydrological, channel-forming, hydraulic, ichthyofaunal, the transformation of solar energy, etc.) is observed; this does not contradict the second law of thermodynamics. The second law does not postulate monotonous disorder and a constant increase in the bound part of the energy E bnd, and is quite compatible with the emergence of orderliness and the complication of system-forming connections.
In systems of any hierarchical level (a living cell, a powerful ship engine, the NTS "NE-OA-P"), similar processes of transformation of the initial energy into motion occur with a certain efficiency Ƞ < 1.
Conclusions
This article has considered the definitions and main elements of the energy-entropy approach to the study of the class of NTS "NE-OA-P" functioning within river basin geosystems, taking into account modern scientific knowledge in the field of system research on the processes of interconnection, interaction and relationship between the natural "NE", technogenic "OA" and social "P" components and the elements included in them. In this approach the concepts of energy, entropy and time occupy an important place: entropy expresses a quantitative measure of the bound part of the energy E bnd relative to the ordering of the components and the elements included in them, and thereby determines the direction of the processes in the system.
|
v3-fos-license
|
2019-05-07T14:22:24.856Z
|
2015-12-07T00:00:00.000
|
146201548
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://sajce.co.za/index.php/sajce/article/download/390/103",
"pdf_hash": "b97956d732cd2563bb452a41e3ddbed9e6e2f732",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43166",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"sha1": "9bc9d747d7b520844896cdb26cf00c92fe7079d2",
"year": 2015
}
|
pes2o/s2orc
|
Examining oral reading fluency among grade 5 rural English Second Language (ESL) learners in South Africa: An analysis of NEEDU 2013
The ability to read for meaning and pleasure is arguably the most important skill children learn in primary school. One integral component of learning to read is Oral Reading Fluency (ORF), defined as the ability to read text quickly, accurately, and with meaningful expression. Although widely acknowledged in the literature as important, to date there have been no large-scale studies on ORF in English in South Africa, despite this being the language of learning and teaching for 90% of students from Grade 4 onwards. As part of the National Education Evaluation and Development Unit (NEEDU) of South Africa, we collected and here analyze data on 4667 grade 5 English Second Language (ESL) students from 214 schools across rural areas in South Africa. This included ORF and comprehension measures for a subset of 1772 students. We find that 41% of the sample were non-readers in English (<40 WCPM) and only 6% achieved comprehension scores above 60%. By calibrating comprehension levels and WCPM rates we develop tentative benchmarks and argue that a range of 90-100 WCPM in English is acceptable for grade 5 ESL students in South Africa. In addition we outline policy priorities for remedying the reading crisis in the country.
1) Introduction and background
The ability to read for meaning and pleasure is arguably the most important skill children learn in primary school. Since almost all future learning will depend on this fundamental understanding of the relation between print and spoken language, it is unsurprising that literacy, built upon a firm foundation of basic reading, is used as one of the primary measures of school efficacy. Apart from the obvious cognitive importance of learning to read, children who become novice readers within the first three years of primary school also have higher levels of socio-emotional well-being stemming from improved self-expression and communication as well as the selfconfidence that comes from cracking this difficult code (Chapman, Tunmer & Prochnow, 2000). Sadly, the opportunity of learning to read with fluency, accuracy, prosody and comprehension is one not afforded to the majority of South African children. Whether children are tested in their home language or in English the conclusions are the same; the vast majority of South African children cannot read for meaning by the end of Grade 4 -even in their home language -and almost a third are still functionally illiterate in English by the end of Grade 6 (Spaull, 2013).
The aim of the present study is to add to our understanding of the reading crisis in South Africa by focusing on the oral reading fluency (ORF) of Grade 5 English Second Language (ESL) learners in rural South Africa. To date there have been no large-scale studies focusing on oral reading fluency in English, despite this being the language of learning and teaching for 90% of students from Grade 4 onwards. There are two principal research questions that animate this study: (1) What are the levels of oral reading fluency among grade 5 ESL students in rural areas in South Africa?
(2) Is it possible to identify tentative benchmarks or thresholds of oral reading fluency that correspond to acceptable levels of comprehension?
To answer these questions we assessed a large sample of students, collecting data on oral reading fluency and comprehension for 1772 grade 5 ESL students from 214 rural schools in South Africa. As will become clear, there is an ongoing reading crisis in South African rural primary schools which, if not resolved, becomes a binding constraint to future learning at higher grades.
After a brief overview of existing research on reading outcomes and large-scale reading interventions in South Africa, we turn to a discussion of the international literature on oral reading fluency. Thereafter we explain our methodology and provide background information on the sample and assessments that were used. Section 3 contains a descriptive analysis of the data, while the final two sections develop tentative benchmarks for oral reading fluency in English for ESL students in rural South African schools. Finally Section 5 provides some policy recommendations regarding reading and reading interventions in South Africa.
An overview of South African large-scale research on reading outcomes and large-scale reading interventions
South Africa is in the fortunate position of having considerable amounts of data on educational outcomes in different subjects and at different grades. By implementing local assessments and agreeing to participate in cross-national assessments, the Department of Basic Education has ensured that there exists a solid foundation of nationally-representative data on which to make evidence-based policy. The results of these assessments are stable, consistent, reliable and sobering. As far as reading outcomes in the primary grades are concerned the three most recent and reliable assessments are the pre-Progress in International Reading Literacy Study (prePIRLS Grade 4, 2011), the Southern and Eastern African Consortium for Monitoring Educational Quality (SACMEQ, Grade 6, 2007) and the National School Effectiveness Study (NSES, Grades 3/4/5, 2007/8/9).
The NSES study assessed a nationally-representative sample of schools in South Africa (excluding Gauteng) and found that the average Grade 3 student scored 20% on a Grade 3 test conducted in English (Taylor & Taylor, 2013: p47). Given that the language of learning and teaching (LOLT) for most Grade 3 students in South Africa is still an African language (the switch to English is only in Grade 4), this is perhaps unsurprising. However, Spaull (2015: p.71) shows that the achievement of these students in their home language, while better, is still extremely low. Some students wrote both the Systemic Evaluation 2007 Grade 3, which was conducted in the LOLT of the school, and the NSES 2007 Grade 3, which was the same test conducted in English one month later. Spaull shows that this matched sample scored 34% on the Systemic Evaluation in the LOLT of the school and 23% on the same assessment one month later when it was conducted in English. While this shows that there is clearly a cost to writing the test in an unfamiliar language (particularly given that students had not yet switched to English), it also dispels the myth that students are performing acceptably in an African language before the switch to English in Grade 4.
The two cross-national assessments that focus on primary-school literacy provide complementary evidence given that prePIRLS was conducted primarily in African languages in Grade 4 (prePIRLS used the LOLT of the school in Grades 1-3), while SACMEQ assessed students in English and Afrikaans in Grade 6 after the language transition. Howie, Van Staden, Tshele, Dowse and Zimmerman (2012: p.47) show that 58% of grade 4 students did not achieve the Intermediate International Benchmark and 29% did not achieve the Low International Benchmark. That is to say that 58% of students could not interpret obvious reasons and causes and give simple explanations or retrieve and reproduce explicitly stated actions, events and feelings.
One can think of these students as those that cannot read for meaning in any true sense of the word. More disconcerting is the 29% of students that could not reach the most rudimentary level of reading: locating and retrieving an explicitly stated detail in a short and simple text. It would not be incorrect to classify these 29% of students as illiterate or non-readers in their home language 1. The SACMEQ study of 2007 tested a nationally representative sample of learners in English and Afrikaans (the LOLTs in South Africa in Grade 6). It was found that 27% of learners were functionally illiterate in English or Afrikaans in the sense that they could not read a short and simple text and extract meaning (Spaull, 2013: p.439). Among the poorest 20% of schools this figure rises to 43% of learners that are functionally illiterate.
Large-scale reading interventions in South Africa
The crisis in basic literacy in South Africa has not gone unacknowledged by the Department of Basic Education. Since the early 2000s there have been a number of national policies, strategies, campaigns, and interventions in an attempt to address this. We provide a brief overview of the most pertinent endeavors. (Footnote 1: For the majority of students the test was conducted in their home language. Only where a student's home language differed from the LOLT of the school in Grades 1-3 would this not be true.)
Early Grade Reading Assessment (EGRA)
The first major intervention was the development of the Early Grade Reading Assessment (EGRA) for South Africa which began in 2006 (Hollingsworth, 2009).
EGRA aims to measure the early reading processes including recognizing letters of the alphabet, reading simple words, and understanding sentences and paragraphs. It was developed as an individual oral assessment of students' foundation reading skills, and has been successfully used in many developing countries (Bruns, Filmer, & Patrinos, 2011). In 2007, the EGRA instruments were field tested in South Africa by the Molteno Institute for Language and Learning. This included 315 learners from 18 schools in six South African languages. The results showed that learners were not able to read at their grade level and that they performed worse than their counterparts in many other African countries. In 2012, NEEDU used the EGRA tests to assess the reading fluency of the best three Grade 2 students (selected by the teacher) in each of 215 urban and township classes. They found that 72% of the three best learners in each class were reading at or below 70 WCPM and that 22% were reading at or below 20 WCPM (NEEDU, 2013, p40). These results should be interpreted with some caution since the EGRA instruments were directly translated (rather than versioned) and they were not all piloted. Following the recommendation of NEEDU, the Department of Basic Education (DBE) resuscitated the EGRA project and in 2014-15, reading promotion was declared a Ministerial priority programme (Motshekga, 2014).
Systematic Method for Reading Success
In late 2008, the Systematic Method for Reading Success (SMRS) was developed and piloted. This is an early grade fast-track reading programme which uses a home languages approach to teaching initial reading (Piper, 2009). It is designed for teachers who do not know how to teach reading and can be seen as a scripted Teacher's Manual so that teachers with little preparation in reading instruction can teach it. SMRS is meant to be a supplementary introduction to a full literacy programme in learners' home languages. In the three provinces that participated in the pilot, the programme was deemed relatively successful (Piper, 2009).
Teaching Reading in the Early Grades -A teacher's handbook
As part of the Foundations for Learning (FFL) campaign, a teachers' handbook, Teaching Reading in the Early Grades (DBE, 2008b), was developed. The handbook was designed to help Foundation Phase teachers teach reading. It highlighted the core elements of teaching reading and writing, including shared reading and writing; guided reading and writing; independent reading and writing activities; and word-level and sentence-level work. These materials form the foundation for the current national Curriculum and Assessment Policy Statements (CAPS).
The Integrated National Literacy and Numeracy Strategy
Also in 2008, the National Reading Strategy Grades R-12 (NRS) (Department of Basic Education, 2014) was developed as a strategy to address the growing concern over illiteracy, and to promote a nation of life-long readers and life-long learners. The NRS provides an outline of curriculum requirements, reading activities and resources needed, grade by grade, in the Intermediate and Senior Phases for teachers and school managers. It gives guidance to learners, teachers, school leaders, parents and systems managers. This strategy was closely followed by the Integrated National Literacy and Numeracy Strategy (INLNS) (Department of Basic Education, 2011b), which was the department's response to the need for urgency in addressing the low achievement levels of learners in literacy and numeracy, as confirmed by the poor national Annual National Assessment (ANA) results and various international assessment results. In November 2011, the Council of Education Ministers (CEM) resolved that the INLNS should be implemented in 2012 as a national initiative. CEM further agreed that planning with provincial education departments and key stakeholders should begin in earnest, and that the strategy would target the classroom and teachers as key levers for change in learner performance and would be guided by the Department's 2012 education priorities (CAPS, ANAs and the workbooks).
The INLNS implementation plan is a high-level plan which aims to direct and integrate provincial initiatives, which in turn are expected to formulate detailed plans for districts and schools 'down to the classroom level'. The implementation plan elaborates the targets set in the DBE's Action Plan (Department of Basic Education, 2011a), prioritises areas requiring attention (teacher content knowledge, support material, quality Grade R, etc.) and lists the pre-conditions needed to implement the strategy (vacant posts filled, teacher time-on-task monitored, provisioning of districts, school nutrition, learner transport, etc.). But the INLNS stops short of recommending specific programmes for use at the classroom level, the choice of which is left to provincial departments.
2) Literature review
Reading is a highly complex phenomenon, comprising many cognitive-linguistic skills (Pretorius, 2012). The importance of learning to read for meaning by the end of the third year of primary schooling is widely acknowledged and accepted throughout the local and international education literatures (Martin, Kennedy & Foy, 2007). This is both to ensure future academic success at school and because it creates independent learners. As Good, Simmons and Smith (1998) expound: "Professional educators and the public at large have long known that reading is an enabling process that spans academic disciplines and translates into meaningful personal, social, and economic outcomes for individuals. Reading is the fulcrum of academics, the pivotal process that stabilizes and leverages children's opportunities to success and become reflective, independent learners" (Good, Simmons & Smith, 1998: p45).
One of the essential components of competent reading is Oral Reading Fluency (ORF), which is the speed at which written text is reproduced into spoken language (Adams, 1990). In the literature ORF is generally regarded as the ability to read text quickly, accurately, and with meaningful expression (Valencia et al., 2010; Fuchs et al., 2001; Rasinski & Hoffman, 2003). This skill is believed to be critical to reading comprehension, and the speed at which print is translated into spoken language has been identified as a major component of reading proficiency (NICHHD, 2000). When words cannot be read accurately and automatically, they must be analysed with conscious attention. If children use too much of their processing capacity trying to work out individual words, they are unlikely to successfully comprehend what they read (Hudson, Lane, and Pullen, 2005).
ORF can therefore be seen as a bridge between word recognition and reading comprehension. Problems in either oral fluency or reading comprehension will have a significant impact on a learner's ability to learn as they move through the phases of schooling. This has also been confirmed with longitudinal research which found high correlations between reading performance in early primary grades and reading skills later in school (Good et al., 1998; Juel, 1988). Reading fluency has also been found to be a significant variable in secondary students' reading and overall academic achievement (Rasinski et al., 2005).
ORF as a predictor of reading comprehension
At the most basic level, the Early Grade Reading Assessment (EGRA) is an oral reading assessment designed to measure the most basic foundation skills for literacy acquisition in the very early grades: recognizing letters of the alphabet, reading simple words, understanding simple sentences and paragraphs, and listening with comprehension. The EGRA tests, developed by RTI to orally assess basic literacy skills, have been used in over 40 countries (RTI International, 2008). For students in higher grades, ORF is generally measured by having an assessor ask a student to read a passage out loud for a period of time, typically one minute. A student's score is calculated as the number of words read per minute (WPM) and/or the number of words read correctly per minute (WCPM). In order to counter criticism that such an assessment does not validly measure comprehension, the passages are frequently accompanied by comprehension questions, as in the present study.
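As a simple illustration of how a WCPM score is obtained from an assessor's tally, the Python sketch below scales the number of words read correctly to a one-minute rate; the function name and the example figures are our own and are not part of the EGRA or NEEDU instruments.

# Words-correct-per-minute (WCPM) from the assessor's tally.
# Inputs: words attempted, reading errors, and the reading time in seconds.

def wcpm(words_attempted: int, errors: int, seconds: float) -> float:
    """Number of words read correctly, scaled to a one-minute rate."""
    words_correct = max(words_attempted - errors, 0)
    return words_correct * 60.0 / seconds

# Example: a learner reads 95 words with 7 errors in 75 seconds.
print(round(wcpm(95, 7, 75), 1))   # 70.4 WCPM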
In their comprehensive review of numerous studies, Fuchs et al. (2001) provide converging evidence supporting ORF's validity as an indicator of reading comprehension. They conclude that: (1) ORF corresponds better with performance on commercial, standardized tests of reading comprehension than do more direct measures of reading comprehension; (2) text fluency (words read in context) compares positively to list fluency (words read in isolation) as an indicator of reading competence; and (3) oral reading fluency measured by reading aloud functions as a better correlate of reading comprehension than does silent reading fluency. In a recent study in South Africa (Pretorius, 2012), a strong correlation was found between three measures of decoding skill and reading comprehension with oral reading fluency emerging as a strong predictor of comprehension.
One explanation for the connection between fluency and comprehension comes from LaBerge and Samuels's (1974) theory of automaticity in reading (Rasinski et al., 2005). According to this theory, readers who have not yet achieved reading fluency must consciously decode the words they have to read. This cognitive attention detracts from the more important task of comprehending the text. Poor reading fluency is thus directly linked to poor reading comprehension. As Fuchs et al. (2001: p.42) explain: "Unfortunately as poor readers rely on the conscious-attention mechanism, they expend their capacity in prediction processes to aid word recognition. Little is left over for integrative comprehension processes, which happens for readers with strong word recognition skills, whereby new knowledge is constructed or new material is integrated into existing knowledge structures".

For some languages, the practice of using WCPM as a predictor of comprehension has been criticized (Graham & van Ginkel, 2014). In a quantitative study of early grade reading in two European (English and Dutch) and two African languages (Sabaot and Pokomo), Graham & Van Ginkel (2014) analysed WCPM and comprehension scores of over 300 children in three countries and found that similar comprehension scores were associated with diverse WPM rates. This, they suggest, indicates that fluency measured as WCPM is not a reliable comparative measure of reading development since linguistic and orthographic features differ considerably between languages and are likely to influence the reading acquisition process.
Valencia & Buly's study (2004) raised concerns regarding the widespread use of WCPM measures and benchmarks to identify students at risk of reading difficulty. In their study, oral reading fluency data and standardized comprehension test scores were analyzed for students in grades 2, 4, and 6 in two Pacific Northwest school districts in America that had diverse student populations. One third of the student group spoke English as a second language. The results indicated that assessments designed to include multiple indicators of oral reading fluency provided a finer-grained understanding of oral reading fluency and fluency assessment, and a stronger predictor of general comprehension. Comparisons across grade levels also revealed developmental differences in the relation between oral reading fluency and comprehension, and in the relative contributions of oral fluency indicators to comprehension. When commonly used benchmarks were applied to WCPM scores to identify students at risk of reading difficulty, both false positives and false negatives were found. Valencia & Buly (2004) argue for a much more comprehensive assessment in order to understand the specific needs of different children. Their approach was to conduct individual reading assessments, working one-on-one with the children for approximately two hours over several days to gather information about their reading abilities. They administered a series of assessments that targeted key components of reading ability: word identification, meaning (comprehension and vocabulary), and fluency (rate and expression). Their research suggested that weak readers may not be weak in all three areas, and that there could be as many as six different profiles of readers, all needing different remedial attention. This approach may represent the 'gold standard' of reading assessment but the reality in most countries, and particularly in South Africa, is that this sort of assessment is unlikely to be realistic or practical.
Oral Reading Fluency among English Second Language Learners
The investigation of ORF for students reading in a second (or third) language is not as extensive as that for students reading in their first language. Notwithstanding the above, ORF studies on ESL students have been conducted in South Korea (Jeon, 2012), Kenya (Piper & Zuilkowski, 2015) and America (Al Otaiba et al., 2009; Jimerson et al., 2013). This does not include the numerous EGRA studies that have been conducted by RTI and USAID (Abadzi, 2011). For many second language readers, reading is a "suffocatingly slow process" (Anderson, 1999, p.1); yet developing rapid reading, an essential skill for all students, is often neglected in the classroom. Data from Segalowitz, Poulsen, and Komoda (1991) indicate that the English Second Language (ESL) reading rates of highly bilingual readers can be 30% or more slower than English First Language (EFL) reading rates. Readers who do not understand often slow down their reading rates and then do not enjoy reading because the process becomes laborious. As a result, they do not read extensively, perpetuating the cycle of weak reading (Nuttall, 1996, in Anderson, 1999). Conventional wisdom indicates that lack of oral English proficiency is the main impediment to successful literacy learning for young English Second Language learners, but recent evidence suggests that this may not be true. Conflicting data exist regarding the optimal or sufficient reading rate (Anderson, 1999). Some authorities suggest that 180 words per minute while reading silently "may be a threshold between immature and mature reading and that a speed below this is too slow for efficient comprehension or for the enjoyment of text" (Higgins and Wallace, 1989, p. 392, in Anderson, 1999). Others suggest that silent reading rates of ESL readers should approximate those of EFL readers (closer to 300 WPM), especially if the ESL is also the language of learning and teaching (LOLT), in order to come close to the reading rate and comprehension levels of EFL readers.
While research into reading in an ESL is not as extensive as its EFL counterpart, an increasing number of comparative EFL/ESL reading studies have been undertaken at different age levels. Pretorius (2012) argues that ESL reading theories tend to draw quite heavily on EFL reading theory, the assumption being that the underlying skills and processes involved in reading languages with similar writing systems are similar across languages. If these decoding processes are similar in alphabetic languages, then there is no reason why ESL reading rates should be so laborious. An area where differences between EFL and ESL LOLT readers may persistently occur is vocabulary, but decoding per se should not be a stumbling block. Jimerson et al. (2013) tracked the ORF growth of 68 students from first through fourth grade in one Southern California school district in America, and used it to predict their achievement on a reading test. They found that both ESL students with low SES and other students with low SES showed low performance in their initial first grade ORF, which later predicted fourth grade performance. The trajectory was the same for EFL students with low SES who performed poorly at the first grade level. The reading fluency trajectories (from the first grade) of the ESL and EFL students with low SES were not significantly different. Their study showed that initial pre-reading skills better explained fourth grade performance than either ESL with low SES or low SES alone.
Using ORF to set reading norms
ORF has been part of national assessments in the USA for decades and norms are well established, but the same cannot be said of most developing countries (Abadzi, 2011).
A search carried out in early 2010 showed that over 50 fluency studies have been conducted in various countries, but the studies often reported data in ways that were not easily comparable, and few had collected nationally representative data.
As early as 1992, researchers in the USA compiled norms for ORF in English based on reading data from eight geographically and demographically diverse school districts in the United States (Hasbrouck & Tindal, 2006). With the growing appreciation for the importance of reading fluency, new norms were developed in 2005 with greater detail, reporting percentiles from the 90th through the 10th percentile levels.
The use of norms in reading assessments can be categorised to match four different decision-making purposes (Kame'enui, 2002, in Hasbrouck & Tindal, 2006): screening, diagnostic, progress-monitoring and outcome measures. Diagnostic measures are assessments conducted at any time of the school year when a more in-depth analysis of a student's strengths and needs is necessary to guide instructional decisions. Progress-monitoring measures are assessments conducted a minimum of three times a year or on a routine basis (e.g., weekly, monthly, or quarterly) using comparable and multiple test forms to (a) estimate rates of reading improvement, (b) identify students who are not demonstrating adequate progress and may require additional or different forms of instruction, and (c) evaluate the effectiveness of different forms of instruction for struggling readers and provide direction for developing more effective instructional programs for those challenged learners. Outcome measures are assessments for the purpose of determining whether students achieved grade-level performance or demonstrated improvement.
Such fluency-based assessments have been proven to be efficient, reliable, and valid indicators of reading proficiency when used as screening measures (Fuchs, Fuchs, Hosp, & Jenkins, 2001). This has also been shown to be the case for ESL students, as demonstrated by the work of Al Otaiba et al. (2009). They examined American Latino students' early ORF developmental trajectories to identify differences in proficiency levels and growth rates in ORF of Latino students who were (a) proficient in English, (b) not proficient and receiving ESL services, and (c) proficient enough to have exited from ESL services. They found that ORF scores reliably distinguished between students with learning disabilities and typically developing students within each group.
Setting ESL reading norms in the South African schooling context is a new and, as yet, largely unexplored terrain. One could argue that in the initial stages of ESL reading for LOLT (perhaps Grade 4 learners), reading at 70% of the rate of EFL readers is not surprising or unexpected. However, as children go higher up the academic ladder (approaching the end of the Senior Phase), the gap between EFL and ESL reading for LOLT purposes should start narrowing, and by the end of Grade 9, ESL norms should preferably start approximating EFL norms. One may also argue for a fluency continuum, with EFL and ESL LOLT reading norms divergent in the beginning stages of reading, but converging by high school. However, all of these suggestions are speculative in nature and are not based on empirical evidence, largely because such empirical evidence does not yet exist in South Africa. It is this gap in the South African literature to which this study hopes to contribute.
3) Methodology: Test Development and Sampling Information
To assess silent reading comprehension of Grade 5 ESL students in the written mode, an appropriate Grade 5 level passage was selected. This was followed by a range of literal and inferential questions in a mixed question format. In addition, Grade 4 and Grade 5 textbooks were used to select two reading passages appropriate to Grade 5 ESL students to assess ORF. Each of the two ORF tests was accompanied by five oral comprehension questions (All test instruments, questionnaires and administrator protocols are available in the Online Appendix 2 ).
Readability
Readability refers, broadly, to the ease or difficulty with which texts are read. Since the 1940s various readability formulae have been used to quantify aspects of texts that are deemed to play a role in determining the ease with which texts are read. These readability formulae invariably incorporate word length and sentence length in relation to overall text length, the assumption being that short words and short sentences are easier to read than longer words and sentences. Examples of readability formulae include the Flesch Reading Ease (RE), the Dale-Chall and the Grammatik formulae. Although the assumptions underlying the readability formulae have been criticised for oversimplifying the reading process, since there are several text-based and reader-based factors that affect reading ease, they continue to enjoy popularity as predictors of text difficulty (Klare, 1974).
The Flesch Reading Ease formula has been used in this analysis, primarily because it is easily available and, in the educational context, serves as a useful guideline for establishing consistency across texts at specific grade levels (Hubbard). The analysis also determines the number of passive constructions used in a text.
Passives are considered slightly more difficult to read than actives. The higher the number obtained from the computation, the easier the text is regarded as being, while the lower the number, the more difficult the text. The scores have been measured in terms of readability categories, as shown in Table 1 below. Most academic/scientific texts and research articles fall into the last two categories of RE. One would expect Grade 4 and 5 textbooks to fall within the 90-70 range of scores. Using American textbooks as the database, the Flesch-Kincaid formula was used to determine the reading ease of texts written for the different grades. These scores reflect the actual grade level, e.g. a score of 6 would indicate a text appropriate for Grade 6. This readability score does not reflect aspects such as the persuasiveness or credibility of a text or its interest level. It is to be expected that the RE score drops the more abstract and complex a topic is. The use of technical terms (e.g. pollution, precipitation) as well as general academic terms (e.g. operates, features) also affects RE.
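For readers who wish to reproduce this kind of readability screening, the Python sketch below implements the published Flesch Reading Ease and Flesch-Kincaid Grade Level formulas with a crude vowel-group syllable counter; it is an approximation for illustration and not the exact tool used in this study.

# Approximate Flesch Reading Ease (RE) and Flesch-Kincaid Grade Level.
# RE   = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
# FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
# The vowel-group syllable counter is a rough heuristic.

import re

def count_syllables(word):
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(len(groups), 1)

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / max(len(sentences), 1)   # words per sentence
    spw = syllables / max(len(words), 1)        # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

sample = "The dog ran to the river. It drank the cool water and slept in the sun."
print(readability(sample))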
A selection of Grade 4 and 5 textbooks across various subjects was obtained from primary schools in two townships near Tshwane, namely Atteridgeville and Mamelodi. From each textbook, four passages were selected, one from the beginning, two from the middle and one from the end. These passages were scanned and converted into MS Word text files; all the pictures and diagrams were removed and only running text used for the readability analysis. The results are given in Table 2 and Table 3 below. The RE range of the Grade 4 textbooks was between 82 and 72, falling within the 'easy' to 'fairly easy' categories, while that of the Grade 5 textbooks was between 84 and 68, falling between the 'easy' and 'standard' categories. As is to be expected, there was a gradual decrease in RE scores from Grade 4 to Grade 5, with concomitant increases in the use of passives and more words per sentence, particularly in the content subjects. The latter textbooks also carry an increase in the use of specialist technical words as well as general academic words. It is interesting to note that across both grades the RE scores were higher (i.e. easier) in the English and Maths texts than in the other content subject texts.
The outcome of the readability analysis conducted served as a guideline for Steps 2 and 3, namely the selection of two passages appropriate to Grade 4 and 5 levels to assess oral reading fluency, and the selection of a passage appropriate to Grade 5 level to assess silent reading comprehension in the written mode.
The reading comprehension passage
Two passages were selected as the basis for the written reading comprehension test. Eleven questions were asked, five based on the first passage and six based on the second. The reliability scores of the combined comprehension passages, as well as the readability score of the questions, are shown in Tables 4 and 5 below, while the question types are shown in Table 6. Based on the learner results, a Cronbach's alpha analysis was done on the written comprehension passage. Cronbach's alpha was 0.83, which indicates good reliability of the overall test.
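The Python sketch below shows the standard Cronbach's alpha computation for a learners-by-items score matrix; the toy data are invented for illustration and are not intended to reproduce the reported value of 0.83.

# Cronbach's alpha for a learners-by-items score matrix:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))

import numpy as np

def cronbach_alpha(scores):
    """scores: rows = learners, columns = test items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 learners answering 4 items scored 0/1/2.
toy = np.array([[2, 1, 2, 1],
                [1, 1, 1, 0],
                [2, 2, 2, 2],
                [0, 1, 0, 0],
                [1, 0, 1, 1]])
print(round(cronbach_alpha(toy), 2))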
The sample of schools and students
The data used in this study come from a non-random sample of 4667 Grade 5 students from 214 rural schools. Very poor reading levels (poor letter and word recognition in the home language of learners) were identified in the first NEEDU evaluation cycle, when Grade 2 learners were assessed using the EGRA instruments in 2012. Reading was thus identified as a critical factor inhibiting improvement in the sector. In the second NEEDU evaluation cycle, NEEDU assessed Grade 5 learners' reading in terms of their ORF and reading comprehension. It is these data that form the basis for this paper.
The labour-intensive nature of the approach to systemic evaluation adopted by NEEDU (NEEDU, 2013) meant that the number of schools selected for evaluation was limited and non-random. NEEDU aimed to assess one third of districts each year, so as to cover all districts in three years. Within each district, a district official was asked to select 8 schools for inclusion in the sample. This non-random selection clearly affects the generalizability of the sample, but if anything the results are positively biased (i.e. better schools are likely to have been put forward). The sample also seems to include more schools that were closer to amenities and fewer extremely remote schools. One further limitation is that the NEEDU school visits (and therefore the ORF assessments) were conducted throughout the year, meaning that some schools were assessed earlier in the year and others later. Analysis of the results by month and province shows no relation between the month of assessment and ORF or comprehension outcomes. Consequently we do not disaggregate the results by month but treat the sample as a Grade 5 composite sample.
Notwithstanding the above, the sample of 214 schools is large by local and international standards, as is the number of students assessed on oral reading fluency (1772). Two NEEDU evaluators visited each school to conduct the NEEDU evaluation, and one of those evaluators was trained as a reading assessor. The learners selected for the ORF assessment read aloud to the reading assessor. The assessor recorded the number of words read correctly and, together with the time taken to read the passage, used this to calculate the WCPM read by each learner assessed.
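For clarity, the WCPM calculation performed by the assessors amounts to the following; the numbers in the example are hypothetical.

```python
def words_correct_per_minute(words_attempted, errors, seconds_taken):
    """Standard ORF scoring: words read correctly, scaled to a one-minute rate."""
    words_correct = words_attempted - errors
    return words_correct * 60.0 / seconds_taken

# Example: a learner reads 74 words in 80 seconds with 6 errors
print(round(words_correct_per_minute(74, 6, 80)))  # 51 WCPM
```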
The assessment was discontinued for those learners who clearly could not read the first passage, and for those learners who read at such a slow pace that they failed to complete the first paragraph (56 words) in one minute. To test their comprehension of the text, learners were asked five simple questions relating to the passage. Learners who did not read beyond the first paragraph were only asked those questions that were relevant to the sections read. Learners were allowed to refer to the passage to answer the comprehension questions. All learners who were able to read beyond the first paragraph in a minute were asked to read a second, more difficult passage; 855 learners were in this group, and a similar process was followed for the second ORF passage.

4) Descriptive analysis of oral reading fluency and comprehension data

Table 9 below provides a range of descriptive statistics on each of the three tests (silent comprehension, ORF Test 1 and ORF Test 2), reporting the number of students who completed the test, as well as the mean, standard error of the mean, minimum, maximum and standard deviation for each measure, reported by province, gender, language of learning and teaching (LOLT) in Grade 5, and grade arrangement. It is worth re-emphasizing that the sample was not randomly selected and is therefore not nationally or provincially representative. That being said, the rank order of the provinces in the silent reading comprehension test is broadly the same as the rank order of provinces in the 2007 Grade 6 SACMEQ reading test (Spaull, 2011: 21), with the exception of the Northern Cape. In the SACMEQ test the Northern Cape scored lower than the Western Cape and Gauteng, whereas here it is the province with the highest average reading comprehension score. Unsurprisingly, this provincial rank order is roughly the same for ORF Test 1 and ORF Test 2. While we do not stress the provincial results in this sample, we would argue that there are enough boys (2357) and girls (2294) to interpret results by gender with some level of confidence. The same applies to reporting results by grade arrangement, with 3701 students in monograde classes and 966 students in multigrade classes, and by language of learning and teaching (LOLT) at Grade 5 level, with 623 students in Afrikaans-medium schools and 3867 students in English-medium schools. [Footnote 4: The astute reader will notice that the two categories "Afrikaans LOLT (Gr5)" and "English LOLT (Gr5)" do not sum to the total number of students. This is because there were 46 Grade 5 students from one school in the Eastern Cape where the LOLT was recorded as isiXhosa. While this is unusual, it is possible. The reason we do not include three categories for LOLT is that the results for isiXhosa would be based on one school rather than a large number of schools, as is the case with Afrikaans LOLT (45 schools) and English LOLT (161 schools). Apart from this, the remaining differences in any of the categories are due to missing information.] Average comprehension scores in monograde classes were marginally lower (20.2%) than in multigrade classes (21.7%); however, this difference is not statistically significant (Figure 1). The largest difference between the three groupings is seen between students learning in English (19.1%) and students learning in Afrikaans (30.2%). The fact that students learning in Afrikaans do better on an English comprehension test than students learning in English requires investigation.
Firstly, the vast majority (92%) of students learning in Afrikaans in Grade 5 also spoke Afrikaans as their home language, and all of them had been learning in Afrikaans. [Footnote 5: The use of race as a form of classification and nomenclature in South Africa is still widespread in the academic literature, with the four largest race groups being Black African, Indian, Coloured (mixed-race) and White. This serves a functional (rather than normative) purpose, and any other attempt to refer to these population groups would be cumbersome, impractical or inaccurate.] The gaps between the subgroups are smaller for ORF Test 2, as one might expect when there is a selection effect determining which students proceed to ORF Test 2.
Only students that could read at least the first paragraph of ORF Test 1 proceeded to ORF Test 2. While the first paragraph contained 56 words, and therefore the minimum WCPM scores here might seem strange, a student could have completed the first paragraph with many mistakes, allowing them to proceed to ORF Test 2 while still having an extremely low WCPM score. For the ORF 1 comprehension questions the average score (out of 5) was 1.3 with a standard deviation of 1.4, and for the ORF 2 comprehension questions the average score was 1.5 with a standard deviation of 1.2.

Correlations between oral reading fluency and comprehension

Figures 2 and 3 below show the scatterplots and respective histograms of silent reading comprehension and ORF Test 1 (Figure 2) and ORF Test 2 (Figure 3). These graphs show that the distributions of both silent reading comprehension scores and words read correctly were lower for the ORF 1 sample than for the ORF 2 sample, as would be expected given that ORF Test 1 (n=1772) was representative of the schools, while ORF Test 2 (n=855) included only those students who could read at least one paragraph in ORF Test 1. Figure AB shows that a full 14% of the sample could only read 0-5 words correctly per minute.
Intra-class variation in oral reading fluency
While it is useful to understand average rates of WCPM, as well as overall standard deviations, it is also helpful to report the range of WCPM scores within a school. The ORF Test 1 results show large variation between the best performing learner and the worst performing learner within a school. If one looks at the distribution of the range (maximum WCPM minus minimum WCPM), one can see that in 50% of schools this gap is more than 78 WCPM, and in 25% of schools the gap is larger than 98 WCPM. Two plausible explanations exist for the large intra-class gap: (1) the strong impact of home literacy practices, where some students are exposed to text and encouraged to read more than others, and (2) teachers teaching to the best learner in the class, such that the best learner(s) continue to improve while students performing at the bottom end of the spectrum stagnate.
The relationship between oral reading fluency and comprehension
While the aim of the current paper is not to estimate the nature of the relationship between oral reading fluency and comprehension, it is still helpful to illustrate the broad trends between these two measures. Before this discussion it is helpful to explain two decisions: firstly which measure of comprehension is used, and secondly which measure of oral reading fluency is used.
• Measure of comprehension: Of the three measures of comprehension, we believe that the most reliable measure of comprehension is the 40-minute silent reading comprehension test that consisted of 11 questions and totalled 20 marks. Although the ORF Test 1 and ORF Test 2 comprehension questions were based on the same text as the one used for the oral reading fluency measure, there were only five one-mark questions asked after each passage.
Hence this measure is less nuanced and has less variation. Consequently we use the silent reading comprehension measure for the remainder of the paper.
• Measure of oral reading fluency: Of the two measures of oral reading fluency (ORF 1 and ORF 2), we use the ORF Test 1 measure since this included the full sample of those tested for oral reading fluency (n=1772).
Given that these students were selected from the top, middle and bottom of the class, they are broadly representative of the classes from which they came. The same cannot be said of the ORF Test 2 results, since only students that read past the first paragraph proceeded to ORF Test 2, making this a selective subsample of students in the class. Consequently we focus on ORF Test 1 as the measure of oral reading fluency. Figure 4 below shows the cumulative density functions (CDF) of words correct per minute on ORF Test 1 for three groups of students: (1) those achieving less than 30% on the silent reading comprehension test, (2) those achieving 30-59% and (3) those achieving 60% or more on the test. One can clearly see that the CDFs of the three groups differ substantially. If one looks at the 50th percentile (y-axis) together with Table 11, one can see that in group 1 half of the 1220 students were reading at 37 WCPM or lower, in group 2 half of the 445 students were reading at 63 WCPM or lower, and in group 3 half of the 107 students were reading at 87 WCPM or lower.
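A minimal sketch of how the group-wise WCPM distributions behind Figure 4 and Table 11 can be summarized is shown below; the data generated here are synthetic and serve only to illustrate the computation, not to reproduce the NEEDU results.

```python
import numpy as np

def median_wcpm_by_group(wcpm, comprehension_pct):
    """Split learners into the three comprehension bands used in Figure 4
    and report the median WCPM in each band."""
    wcpm = np.asarray(wcpm, dtype=float)
    comp = np.asarray(comprehension_pct, dtype=float)
    bands = {"<30%": comp < 30,
             "30-59%": (comp >= 30) & (comp < 60),
             "60%+": comp >= 60}
    return {name: float(np.median(wcpm[mask])) for name, mask in bands.items() if mask.any()}

# Synthetic illustration (not the NEEDU sample)
rng = np.random.default_rng(1)
comp = rng.uniform(0, 100, 500)
wcpm = 20 + comp * 0.8 + rng.normal(0, 10, 500)
print(median_wcpm_by_group(wcpm, comp))
```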
Figure 4: Cumulative density function (CDF) of words correct per minute on Oral Reading Fluency Test 1, per category of performance on the silent reading comprehension test.

If one looks at the oral reading fluency rates in Table 11 and compares these to international benchmarks such as those discussed by Abadzi (2011), one should note that such benchmarks are typically based on reading in the dominant language of instruction (typically English or French) in each country. This is obviously problematic, since it is reasonable to expect that oral reading fluency rates would differ based on text type and difficulty, on whether the text is in a student's home language or an additional language, and on whether the language is an agglutinating or fusional language. Notwithstanding the above, she recommends that as a broad rule of thumb children should be reading at 45 WCPM by the end of Grade 2 and 90-120 WCPM by the end of primary school (Abadzi, 2011: p27). Given the lack of additional information on language, sample size, grade etc., it is difficult to use these benchmarks in the South African context.
We follow Abadzi's (2011) approach and use our assessments of both oral reading fluency in English and comprehension in English (a second language for these students) to create tentative ORF benchmarks. If one specifies some minimum level of comprehension and then observes the distribution of words correct per minute associated with those students, it becomes possible to develop benchmarks that are specific to the South African rural context, and particularly to the linguistic context where students are being assessed in a second language (English) in which they have only been learning for 1-2 years.
Following this approach, one can use Figure 4 and Table 11 to help identify logical thresholds of words correct per minute for South African ESL students. If students are performing below these thresholds, teachers have reasonable cause for concern. Table 11 shows that of the 107 Grade 5 students (from 61 schools) who performed 'acceptably' (here defined as 60% or higher on the silent reading comprehension test), almost none achieved lower than 50 WCPM and the majority (75%) scored above 68 WCPM. In contrast, of those students scoring less than 30% on the test, the majority (75%) scored less than 52 WCPM. A further point of reference is the Broward County ESL classification, which groups learners into categories ranging from Non-English Speaker (A1) to English Speaker (C1). These are briefly described in Table 12 below. The 'Intermediate English (B1)' trajectory may serve as an interim guide for Grades 1-5 in South Africa, at least until more data become available on oral reading fluency benchmarks in South Africa.
In order to develop accurate benchmarks and acceptable growth trajectories that are specific to South Africa one would need a large data set of panel data on student oral reading fluency scores at successive grades, or at the very least repeated cross-sections of large samples of students at successive grades. As yet this is not available and improvised schema -such as that of Broward County -may be of value in the interim.
5) Conclusions and policy discussion
While the reading crisis in South Africa is widely acknowledged (Fleisch, 2008; Spaull, 2013; Taylor, Van der Berg & Mabogoane, 2013), almost no prior research exists on oral reading fluency in English, despite this being one of the major components of reading. The present study has begun to alleviate this paucity of information by analyzing a large data set of Grade 5 rural ESL learners assessed in English. The four major findings emerging from the analysis are as follows: 1) The English oral reading fluency of Grade 5 rural ESL learners in South Africa is exceedingly low. 41% of the sample read at cripplingly slow speeds of less than 40 words correct per minute, with an average of only 17 WCPM, and can therefore be considered non-readers. These students were reading so slowly that they could not understand what they were reading at even the most basic level. Almost all of these non-readers (88%) scored less than 20% on the comprehension test. A further 28% of the sample scored less than 30% on the comprehension test, bringing the total to 69% of Grade 5 students who could not score 30% on the comprehension test. A quarter scored between 30% and 60%, and only 6% of the sample scored above 60% on the comprehension test.
2) The ORF scores of the full sample of South African Grade 5 rural ESL students are approximately the same as those of the lowest category of Grade 2 ESL students in America (Non-English Speaker: A1). These students cannot communicate meaning orally in English. Given that the language of learning and teaching from Grade 4 is English for almost all of these students, this is of serious concern.
3) The correlations between various measures of ORF and comprehension were approximately 0.5. This is relatively low compared to most of the international literature; however, that literature relates largely to home-language speakers. More research on ESL learners is needed before one can conclude whether these correlations are low or high in international context.
4) Setting oral reading fluency benchmarks for South African ESL learners is a useful endeavor, allowing teachers to identify and track struggling readers and providing a yardstick against which teachers can compare their students' progress or lack of progress. It is not possible simply to use the Hasbrouck & Tindal (1996) norms, given that these were developed for the U.S. and primarily for students whose first language is English. We argued that a benchmark of 90-100 WCPM in English in Grade 5 for ESL students in South Africa is probably a good starting point until more data become available. Only 9% of the sample achieved these levels of oral reading fluency. We also highlighted the potential of using the Broward County ESL classification chart and following the 'Intermediate English (B1)' trajectory for South African ESL students.
From a policy perspective there are three main recommendations that we would put forward:
1) The majority of primary school teachers do not know how to teach reading, in either African languages or in English. This is evidenced by the cripplingly low ORF scores in Grade 5. These students cannot engage with the curriculum (which is now in English in Grade 5) and hence fall further and further behind as the reading material and cognitive demands become more and more complex. There is a clear need to convene a group of literacy experts to develop a course that teaches Foundation Phase teachers how to teach reading. This course should be piloted and evaluated and, if it is of sufficient quality, should become compulsory for all Foundation Phase teachers in schools where more than 50% of students do not learn to read fluently in the LOLT by the end of Grade 3.
2) The clear need for evidence-based interventions, evaluations and sustained support.
Much of the policy energy that has been expended in the last 10 years has been sporadic and haphazard. Successful programs (like the SMRS) are not pursued while new initiatives are funded (but not evaluated) without a clear understanding of how they improve on or learn from previous initiatives. Any new national literacy drive needs to be piloted, independently evaluated and only taken to scale if and when it is proved to be effective. This should be seen as a medium-to-long term goal rather than a short-term goal.
3) Reading as a unifying goal for early primary schooling. The single most important goal for the first half of primary school should be the solid acquisition of reading skills such that every child can read fluently in their home language by the end of Grade 3 and read fluently in English by the end of Grade 4. This goal is easily communicated and understood by parents, teachers and principals and is relatively easy to measure and monitor. The benefit of having a single unifying goal to focus attention, energy and resources should not be underestimated.
In addition to these three recommendations, we would propose that early literacy in African languages become a National Research Foundation (NRF) Research Priority Area. Given the scale of the reading crisis and the lack of research on African languages at South African universities (particularly relating to early literacy in African languages), the NRF should declare this a national priority and dedicate significant resources to those researchers and departments with the skills and expertise needed to understand more about how children learn to read in African languages and which interventions are most promising.
|
v3-fos-license
|
2018-10-13T13:38:10.060Z
|
2018-10-12T00:00:00.000
|
52974764
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-018-5117-8",
"pdf_hash": "bab56ddaff3bc214fed467c1bd7f147e58fa8362",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43167",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "18e2c1978840fc3eb9045a7009853381a0f24d76",
"year": 2018
}
|
pes2o/s2orc
|
Genome-wide identification of oil biosynthesis-related long non-coding RNAs in allopolyploid Brassica napus
Background Long noncoding RNAs (lncRNAs) are transcripts longer than 200 bp that do not encode proteins but nonetheless have been shown to play important roles in various biological processes in plants. Brassica napus is an important seed oil crop worldwide and the target of many genetic improvement activities. To better understand the function of lncRNAs in regulating plant metabolic activities, we carried out a genome-wide identification of lncRNAs in Brassica napus with a focus on lncRNAs involved in lipid metabolism. Twenty ribosomal RNA-depleted strand-specific RNA-seq (ssRNA-seq) datasets were generated using RNAs isolated from B. napus seeds at four developmental stages. For comparison, we also included in our analysis 30 publicly available RNA-seq datasets generated from poly(A)-enriched mRNAs isolated from various Brassica napus tissues. Results A total of 8905 lncRNA loci were identified, including 7100 long intergenic noncoding RNA (lincRNA) loci and 1805 loci generating long noncoding natural antisense transcripts (lncNATs). Many lncRNAs were identified in only one of the ssRNA-seq and poly(A) RNA-seq datasets, suggesting that B. napus has a large lncRNA repertoire and that it is necessary to use libraries prepared from different tissues and developmental stages, as well as different library preparation approaches, to capture the whole spectrum of lncRNAs. Analysis of coexpression networks revealed that among the regulatory modules are networks containing lncRNAs and protein-coding genes related to oil biosynthesis, indicating a possible role of lncRNAs in the control of lipid metabolism. One such example is that several lncRNAs are potential regulators of BnaC08g11970D, which encodes oleosin1, a protein found in oil bodies and involved in seed lipid accumulation. We also observed that the expression levels of B. napus lncRNAs are positively correlated with their conservation levels. Conclusions We demonstrated that the B. napus genome has a large number of lncRNAs and that these lncRNAs are expressed broadly across many developmental stages and in different tissue types. We also provide evidence indicating that specific lncRNAs appear to be important regulators of lipid biosynthesis, forming regulatory networks with transcripts involved in lipid biosynthesis. We also provide evidence that these lncRNAs are conserved in other species of the Brassicaceae family. Electronic supplementary material The online version of this article (10.1186/s12864-018-5117-8) contains supplementary material, which is available to authorized users.
Background
Non-coding RNAs (ncRNAs) are transcripts without clear protein-coding capacity that have been found in the transcriptomes of plants and animals at an increasing frequency in recent years [1]. The role of ncRNAs is still not fully known, but they have been suggested to be involved in the regulation of gene expression, translation, cell-cycle progression and other cellular functions [2,3]. The diverse kinds of ncRNAs have generally been grouped into housekeeping and regulatory ncRNAs. The housekeeping ncRNAs include transfer RNAs (tRNAs), small nuclear RNAs (snRNAs), small nucleolar RNAs (snoRNAs) and ribosomal RNAs (rRNAs). The regulatory ncRNAs fall into two subclasses in plants. One type is the small RNAs (sRNAs), including micro-RNAs (miRNAs) and small interfering RNAs (siRNAs), with a size of 20-24 nucleotides (nt). sRNAs achieve their functions via two main mechanisms: transcriptional gene silencing (TGS) and posttranscriptional gene silencing (PTGS). The other type is the long non-coding RNAs (lncRNAs), defined as longer than 200 nt. LncRNAs have been shown to function in response to a wide range of biotic and abiotic stresses in plants [4][5][6][7]. LncRNAs are grouped according to their genomic location and orientation relative to nearby protein-coding genes. Long intergenic noncoding RNAs (lincRNAs) are located in the intervals between genes. Long noncoding natural antisense transcripts (lncNATs) overlap with protein-coding genes in the opposite orientation. Long intronic noncoding RNAs are generated from introns of other transcripts, and sense lncRNAs partially overlap with protein-coding genes on the same strand [8,9]. LncRNAs are usually lowly expressed and tissue-specific [10]. Plant lncRNAs have been shown to be involved in transcriptional gene silencing, gene expression regulation, chromatin structure remodeling and other epigenetic mechanisms [11][12][13][14][15].
B. napus, also known as oilseed rape, is second only to soybean as an oil crop with a world production of over 60 million tons [38]. B. napus is an allotetraploid (AnAnCnCn) evolved from a spontaneous hybridization event between B. rapa (ArAr) and B. oleracea (CoCo) about 7500 to 12,500 years ago [39]. With the availability of the B. napus genome sequence [39], it is now possible to identify and characterize lncRNAs at the whole-genome level in this important oil crop.
Oil biosynthesis is one of the key biological processes in B. napus and a major focus of experimental research [40,41]. Up to now, the role of ncRNAs in lipid and fatty acid metabolism in B. napus has been investigated only to a very limited extent [42][43][44]. Some miRNAs were found to be differentially expressed in cultivars with different seed oil contents [43]. Shen et al. (2015) found that 122 lipid-related genes are potentially regulated by 158 miRNAs. Recently, Wang et al. (2017) further showed that 11 miRNAs may have regulatory relationships with 12 lipid-related genes.
To further investigate the possible role of lncRNAs in the control of oil biosynthesis in B. napus, we conducted a comprehensive analysis of lncRNAs at multiple stages of seed development. We also collected 30 publicly available RNA-seq datasets generated from different tissues of B. napus. We show that the Brassica napus genome contains a large number of tissue- and developmental stage-specific lncRNAs and that some of these form part of regulatory networks specifically involved in the control of lipid biosynthesis. We also show that some of these regulatory lncRNAs are conserved in other species of the Brassicaceae family, including the two progenitors (B. rapa and B. oleracea) of B. napus and A. thaliana.
Genome-wide identification of lncRNAs in B. napus
To identify lncRNAs related to lipid biosynthesis in B. napus, we first analyzed oil accumulation in developing seeds of the B. napus cultivar KenC-8. Developing seeds were collected from siliques every five days after flowering (DAF), up to 50 DAF (seed maturity), in two separate growing seasons. Little oil accumulation was observed in 5-20 DAF seeds, and the majority of oil accumulation occurred between 20 and 35 DAF. After 35-40 DAF, the rate of oil accumulation started to decrease (Fig. 1). Based on this oil accumulation profile, we selected four developmental stages for transcriptome analysis: 10-20 DAF (a stage when little oil accumulation was occurring) served as the baseline control, 25 DAF and 30 DAF represented two consecutive rapid oil accumulation stages, and 40 DAF samples were also included. The RNAs from the developing seed samples of each growing season were kept and analyzed separately, whereas samples from the 10-20 DAF seeds were bulked (Fig. 1; Additional file 1: Table S1). In total, 20 ssRNA-seq libraries (including two from 40 DAF samples) were created and sequenced, and more than 1.8 billion reads were generated for lncRNA identification. In addition, we collected 30 public transcriptomic datasets (including 14 from our previous study [45]) from poly(A) RNA-seq experiments using diverse tissues covering different periods of growth and development of B. napus (Additional file 1: Table S1).
We adapted a pipeline previously established in rice [11] to process all transcriptome datasets. In brief, the pipeline consists of three main steps: transcript assembly, filtering and protein-coding capacity prediction (Fig. 2a). The RNA-seq data were first mapped to the genome sequence of B. napus [39] to perform de novo transcript assembly. This step identified 113,540 and 110,664 transcripts in the ssRNA-seq and poly(A) RNA-seq datasets, respectively. Local perl scripts were then applied to filter out transcripts that were shorter than 200 nucleotides, as well as transcripts with infrequent expression, without strand information, with a single exon and very close to known transcripts, or overlapping with annotated genes. The final step was to estimate the coding potential of the remaining transcripts. In total, we identified 5899 lncRNA loci (7763 transcripts) and 4589 lncRNA loci (7308 transcripts) from the ssRNA-seq and the poly(A) RNA-seq datasets, respectively (Fig. 2a).
Combining the results from the two datasets, we identified 8905 non-redundant lncRNA loci, of which 7100 were lincRNAs and 1805 were lncNATs (Additional file 2: Table S2). In total, 13,763 transcripts were identified from the 8905 non-redundant lncRNA loci, mainly due to alternative splicing events. The numbers of lincRNAs and lncNATs identified in the Cn subgenome were higher than those in the An subgenome (lincRNAs: 4130 versus 2763, a 1.5-fold difference; lncNATs: 1076 versus 767, a 1.4-fold difference; Additional file 3: Figure S1). This difference in complexity may be due to the difference in size between the An (314.2 Mb) and Cn (525.8 Mb) subgenomes. Compared to the ssRNA-seq datasets (10.4%, 808 transcripts), the poly(A) RNA-seq datasets had a higher proportion of lncNATs (20.5%, 1501 transcripts) but a much lower proportion of single-exon transcripts (4.9%, 357 transcripts, versus 44.0%, 3417 transcripts in the ssRNA-seq datasets) (Fig. 2b). Only about 20-30% of the lncRNA loci (1561, comprising 1402 lincRNAs and 159 lncNATs) were identified in both datasets (Fig. 2c), suggesting that, to obtain a full set of potential lncRNAs, it is necessary to use both library construction and sequencing methods in lncRNA identification.
The properties of lncRNAs in allopolyploid B. napus
To gain a comprehensive understanding of the lincRNAs and lncNATs in B. napus, we compared several different features of the lincRNAs, lncNATs and mRNAs: exon numbers, transcript length, A/U content, relationship with transposable elements (TEs), and chromosome distribution.
(1) With respect to exon numbers, 32.7% and 29.8% of lincRNAs contained one and two exons, respectively, while the percentages of lncNATs containing one and two exons were 26.2% and 54.7%, respectively. Both were much higher than the corresponding percentages for protein-coding transcripts (18.5% and 18.9%; Fig. 3a, Table 1). Most lincRNAs and lncNATs identified from the poly(A) RNA-seq datasets had two exons, while most lincRNAs and lncNATs identified from the ssRNA-seq datasets had a single exon (Additional file 4: Figure S2A, S2B). (2) Transcript length: the mean transcript length of the lncRNAs was 929 bp for lincRNAs and 985 bp for lncNATs. The transcripts of lncRNAs were clearly shorter than those of protein-coding genes (1287 bp; Fig. 3b, Table 1), although most transcripts of both lncRNAs and mRNAs were shorter than 2000 bp. The average lengths of lincRNAs and lncNATs were longer in the ssRNA-seq datasets than in the poly(A) RNA-seq datasets (lincRNAs: 967 bp vs 921 bp; lncNATs: 1168 bp vs 968 bp) (Additional file 4: Figure S2C, S2D). (3) When we analyzed A/U content, we found that both the lincRNAs and lncNATs (particularly the lincRNAs) tended to be A/U-rich compared to protein-coding sequences (Fig. 3c, Table 1). Transcripts with a high A/U content are less stable [46], suggesting that lncRNAs may be more flexible in interacting with other transcripts [47]. The A/U content difference between lincRNAs and lncNATs seemed to be smaller in the ssRNA-seq datasets than in the poly(A) RNA-seq datasets (Additional file 4: Figure S2E, S2F). (4) TEs account for 34.8% of the B. napus genome, with 25.9% in the An subgenome and 40.1% in the Cn subgenome [39]. Using an overlap of ≥10 bp as the criterion, we found that 36.0% of lincRNAs overlapped with TEs, with 32.2% and 38.2% in the An and Cn subgenomes, respectively. The proportion of lncNATs overlapping with TEs (13.3% in the An subgenome and 20.4% in the Cn subgenome) was close to that of mRNAs (15.2% in the An subgenome and 17.3% in the Cn subgenome), probably due to the co-localization of lncNATs with mRNAs (Fig. 3d, Table 1). Not surprisingly, we found that the proportion of lncNATs overlapping with TEs was much higher in the ssRNA-seq datasets than in the poly(A) RNA-seq datasets (Additional file 4: Figure S2G, S2H). (5) Of the lncRNAs that could be assigned to a chromosome location, the most (690 lncRNA loci, representing 523 lincRNAs and 167 lncNATs) mapped to chromosome C03 and the fewest to chromosome A10 (201 lncRNA loci, representing 152 lincRNAs and 49 lncNATs) (Additional file 3: Figure S1).
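As an illustration of how two of these features (transcript length and A/U content) can be summarized from transcript sequences, a minimal sketch is given below; the sequences shown are toy examples, not transcripts from this study, and this is not the script used by the authors.

```python
from statistics import mean

def au_content(seq):
    # Treat DNA and RNA alike by converting T to U before counting
    seq = seq.upper().replace("T", "U")
    return (seq.count("A") + seq.count("U")) / len(seq)

def summarize(transcripts):
    # transcripts: dict of {transcript_id: sequence}
    lengths = [len(s) for s in transcripts.values()]
    au = [au_content(s) for s in transcripts.values()]
    return {"n": len(transcripts),
            "mean_length": mean(lengths),
            "mean_AU": round(mean(au), 3)}

demo = {"lnc_000001": "AUGGCUAAUUUAGCUA", "lnc_000002": "GCGCAUAUAUGGCC"}
print(summarize(demo))
```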
Coexpression analysis revealed potential function of lncRNAs in lipid biosynthesis
It was previously reported that the B. napus genome contains ~2010 genes related to lipid biosynthesis [39]. To identify potential lncRNAs related to lipid biosynthesis, we applied Weighted Gene Coexpression Network Analysis (WGCNA) [48] to establish coexpression networks involving both protein-coding genes predicted to be related to lipid biosynthesis and lncRNAs. We reasoned that lncRNAs co-expressed with such genes would also be related to lipid biosynthesis. The analysis was done using protein-coding genes and lncRNAs differentially expressed in the following three comparisons: 25 vs 10-20 DAF, 30 vs 10-20 DAF and 30 vs 25 DAF. We first identified differentially expressed lncRNAs and protein-coding genes in each comparison in each individual year, and then combined the DEGs (referring to both mRNAs and lncRNAs) from the two years in the coexpression analysis. In total, 1622 (including 104 lncRNA loci), 2528 (including 113 lncRNA loci) and 1416 (including 105 lncRNA loci) DEGs were identified in the three comparisons (Fig. 4a; Additional file 5: Figure S3A; Additional file 6: Figure S4A). A network was constructed for each comparison using the identified DEGs (Fig. 4b; Additional file 5: Figure S3B; Additional file 6: Figure S4B). The three networks (i.e., 25 vs 10-20 DAF, 30 vs 10-20 DAF and 30 vs 25 DAF) were partitioned into 9, 8 and 13 modules, respectively (Fig. 4c; Additional file 5: Figure S3C; Additional file 6: Figure S4C). The relationship between each module and the two traits (oil content and DAF) was computed. In the 25 vs 10-20 DAF comparison, only the green module was significantly correlated with both oil content and DAF (p < 0.01 and p < 2 × 10^-4; Additional file 5: Figure S3C). In the 30 vs 10-20 DAF comparison, the yellow and blue modules were significantly correlated with oil content and seven of the eight modules were significantly correlated with DAF (Fig. 4c). In the 30 vs 25 DAF comparison, the black module was the most significantly correlated with oil content (p < 2 × 10^-8) and the turquoise module was significantly correlated with both oil content and DAF (p < 0.002 and p < 1 × 10^-4; Additional file 6: Figure S4C). As examples, the significance of each individual gene is shown in scatterplots for the three selected modules that showed the most significant correlation with oil content in each comparison (Fig. 4d; Additional file 5: Figure S3D; Additional file 6: Figure S4D). We further applied Cytoscape [49] to display the lncRNA-related connections in these three modules (Fig. 5; Additional files 7 and 8: Figures S5, S6). Based on this analysis, 13 lncRNA loci were found to be correlated with 8 lipid-related genes (Additional file 9: Table S3). In the 25 vs 10-20 DAF comparison, two lncRNAs were co-expressed with three lipid-related genes, whereas in the 30 vs 10-20 DAF comparison, 10 lncRNAs were co-expressed with four lipid-related genes. Four lncRNAs were co-expressed with two lipid-related genes in the 30 vs 25 DAF comparison (Additional file 9: Table S3).
Among the eight lipid-related genes identified in our study was BnaC08g11970D, an ortholog of the Arabidopsis oleosin1 encoding gene AT4G25140. Oleosin is a protein found in oil bodies and involved in seed lipid accumulation. BnaC08g11970D is co-expressed with 9 lncRNA loci, including 8 in the 30 vs 10-20 DAF comparison and 4 in the 30 vs 25 DAF comparison. Three (lnc_008548, lnc_014257 and lnc_030111) of the 9 lncRNA loci were found to be co-expressed with BnaC08g11970D in both comparisons ( Fig. 5; Additional file 8: Figure S6; Additional file 9: Table S3).
Among the other lipid biosynthesis-related genes of note are BnaC01g01840D, BnaA09g51510D and BnaC08g46110D. BnaC01g01840D is annotated as a patatin-related phospholipase A and is co-expressed with 4 lncRNAs. BnaA09g51510D and BnaC08g46110D may have roles in acetyl-CoA biosynthesis, and are co-expressed with 7 and 2 lncRNAs, respectively. BnaC09g41580D and BnaA05g33500D are predicted to encode one of the two Δ9 palmitoyl-ACP desaturases responsible for biosynthesis of ω-7 fatty acids in the maturing endosperm (Additional file 9: Table S3). The lncRNAs co-expressed with these two lipid-related genes may be involved in regulating their expression and thus play a role in lipid biosynthesis in B. napus. [Figure legend residue (Fig. 4): ... (2015 and 2016); B, hierarchical clustering with the topological overlap matrix to identify network modules consisting of the highly correlated genes; C, the correlation between each module and each of the two traits (oil content and DAF); D, the significance of each gene on oil content in the yellow module.]
To verify the coexpression pattern of lncRNAs and lipid-related genes, we analyzed the expression changes of all 13 lncRNAs and 8 lipid-related genes at two developmental stages (10-20 DAF and 30 DAF) in four randomly selected oilseed cultivars, and were able to successfully generate results for 9 lncRNAs and 6 lipid-related genes. All 6 lipid-related genes (except BnaA09g51510D in GY605 and Zheda 622, and BnaC09g41580D in GY605) had significantly higher expression levels in 30 DAF seeds than in 10-20 DAF seeds in all four cultivars analyzed, particularly the two genes encoding putative oleosin1, BnaC08g11970D and BnaC07g39370 (Fig. 6; Additional file 10: Figure S7). Consistent with the coexpression analysis results, the expression levels of all 9 lncRNAs were also significantly higher in 30 DAF seeds than in 10-20 DAF seeds in all four cultivars analyzed (Fig. 6; Additional file 10: Figure S7).
Conservation of lncRNAs in B. napus
We estimated the conservation of B. napus lncRNAs in other members of the Brassicaceae family (B. rapa, B. oleracea and A. thaliana) based on both sequence similarity and genomic synteny (for details see Methods). For the comparison of lncRNA sequences, we collected more than 40,000 previously identified lncRNAs in A. thaliana [19][20][21][22]50], and identified 1259 and 1978 lncRNA loci in B. rapa and B. oleracea, respectively, using publicly available RNA-seq datasets (Additional file 11: Table S4; Additional file 12: Table S5). As shown in Table 2 and Additional file 13: Table S6, only a small number of B. napus genomic sequences encoding lncRNAs had corresponding lncRNA sequences in A. thaliana, B. oleracea and B. rapa (316 (3.5%), 1074 (12.1%) and 731 (8.2%), respectively). Based on synteny analysis, the positions of 3929 (44.1%), 2101 (23.6%) and 1729 (19.4%) lncRNA loci in B. napus were found to be conserved in these species (Table S7). The sequences of most B. napus lncRNAs are not well conserved in A. thaliana, B. oleracea or B. rapa. Based on searches against the whole-genome sequences of A. thaliana, B. oleracea and B. rapa, only 809 (9.1%) B. napus lncRNAs were found to be conserved in A. thaliana, but 7476 (84.0%) and 7014 (78.8%) B. napus lncRNAs were found to be conserved in B. oleracea and B. rapa, respectively (Additional file 15: Table S8). These results suggest that B. napus lncRNAs have diverged significantly from A. thaliana but are well conserved in the closely related species. The apparently low conservation of lncRNAs between B. napus and its two progenitors based on comparison of lncRNA sequences is probably because only a small portion of B. oleracea and B. rapa lncRNAs have been identified and used in the comparison.
To study the relationship between the expression level and conservation of lncRNAs, we divided the B. napus lncRNAs conserved in the other three species into four levels based on the coverage of homologous sequences (Level 1: 20-40%; Level 2: 40-60%; Level 3: 60-80%; Level 4: 80-100%). The expression level of lncRNAs appeared to be positively correlated with the coverage of homologous sequences (Fig. 7). This is similar to the phenomenon observed in animals and humans, where the evolutionary rate of protein-coding genes and lincRNAs shows a negative correlation with expression level (i.e. highly expressed genes are on average more conserved during evolution [51]). Of the 13 lncRNAs identified as co-expressed with lipid-related mRNAs, none was conserved in A. thaliana; however, all of them had conserved sequences in B. rapa or B. oleracea at the genome level (Additional file 16: Table S9), suggesting that, at least in the Brassicaceae family, oil biosynthesis-related lncRNAs are lineage-specific.
Discussion
Several studies have investigated the roles of small noncoding RNAs in lipid biosynthesis through small RNA sequencing and degradome sequencing [42][43][44], but no genome-wide study of lncRNAs had previously been carried out in B. napus. In this study, we carried out a genome-wide study of lncRNAs in B. napus based on the newly sequenced B. napus genome, rRNA-depleted ssRNA-seq datasets generated from seeds at different developmental stages, and publicly available poly(A) RNA-seq datasets generated from diverse tissues. As a result, 7100 lincRNA loci and 1805 lncNAT loci were identified. Large numbers of lncRNAs have been identified in many different plant species [11,17,19,27,32]. In Arabidopsis and rice, about half of the reported lncRNAs were un-spliced and contain only a single exon [11,17,19]. This feature was observed in the B. napus lncRNAs identified from rRNA-depleted total RNA, but not in the lncRNAs identified from poly(A)-enriched mRNA (Additional file 4: Figure S2). Most B. napus lncRNAs, particularly lncNATs, identified from the poly(A)-enriched mRNA datasets contain two exons. Consequently, the average lengths of lncRNA transcripts (929 bp for lincRNAs and 985 bp for lncNATs) were longer in B. napus than in other plants. LncNATs had a higher proportion of multiple exons than lincRNAs (72% vs 60%). Compared to lncNATs, lincRNAs are more likely to overlap with or be derived from TEs, probably owing to their genomic position. TE-derived lncRNAs also appeared more likely to generate alternative splicing events than non-TE-derived ones (18% vs 13%) (Additional file 17: Figure S8, Additional file 18: Table S10).
Two common features reported for lncRNAs are their low expression level and tissue-specific expression pattern [10,11,32]. Although we found that the expression levels of both lincRNAs and lncNATs identified from the poly(A) RNA-seq datasets were lower than those of mRNAs (Additional file 19: Figure S9A), the expression levels of both lincRNAs and lncNATs identified from the rRNA-depleted ssRNA-seq datasets were higher than those of mRNAs (Additional file 19: Figure S9B). Similar to B. napus homoeologous genes [39], on average the An subgenome homoeologous lncRNAs seemed to have a higher expression level than the Cn subgenome ones (Additional file 20: Figure S10). In addition to the difference in exon numbers, lncRNAs identified from total RNA and from mRNA also differ in their transcript length, A/U content, and degree of overlap with TEs (Additional file 4: Figure S2). These results, together with the observed low level of overlap between the lncRNAs identified from total RNA and from mRNA, suggest that, in order to capture a full set of lncRNAs and uncover as many features of the lncRNA population as possible, it is necessary to use RNAs isolated from as diverse a set of tissues and developmental stages as possible as the starting material.
Oil content is the most important agronomic trait of B. napus, and increasing seed oil content is the final objective of many rapeseed breeding programs. Identifying genes involved in the regulation of lipid biosynthesis during seed development, both protein-coding and non-coding, is an important first step towards improvement of the crop through genetic engineering. LncRNAs have been shown to play an important role in many aspects of plant development [15,[52][53][54]. Although it is now feasible to perform large-scale lncRNA identification, it is still a challenge to study the function of lncRNAs and uncover the mechanism(s) underlying lncRNA-mediated regulation. Based on the rationale that genes involved in the same pathway(s) tend to be co-expressed, we reasoned that lncRNAs co-expressed with lipid-related genes would have a potential role in the regulation of oil biosynthesis and accumulation in rapeseed. We found 13 lncRNAs whose expression patterns were significantly correlated with those of 8 lipid-related genes (Additional file 9: Table S3). Furthermore, these coexpression relationships were not related to the genomic locations of the lncRNAs and lipid-related genes. Many of the coexpression relationships were further confirmed by qRT-PCR analysis of transcript levels in randomly selected B. napus cultivars. Among the coexpression modules, the relationships between several lncRNAs and BnaC08g11970D are of particular interest. BnaC08g11970D is predicted to encode a protein homologous to oleosin1 of Arabidopsis, which contains a hydrophobic hairpin domain located at the surface of lipid droplets that stabilizes them and facilitates lipid accumulation [55]. The expression level of BnaC08g11970D is dramatically increased at the developmental stage of rapid seed oil accumulation (Figs. 1, 6), strongly suggesting a role for this gene in oil accumulation. LncRNAs co-expressed with this gene would thus be ideal candidates for further studies investigating their potential role(s) in regulating the expression and function of BnaC08g11970D. In summary, our findings point to the importance of examining lncRNAs as a possible source of novel information and tools for Brassica improvement in the future.
Plant materials and generation of RNA-seq libraries
Brassica napus L. cv. KenC-8 plants were grown in the field (Hangzhou, China) in 2015 and 2016. Flowers were tagged on the day of blooming (i.e. 0 days after flowering, DAF). Every 5 days, starting from 5 DAF and up to 50 DAF, seeds from 10 individual plants were harvested, pooled and used in oil content analysis. Based on the seed oil content profile (Fig. 1), seeds from four developmental stages (early little oil accumulation, 10-20 DAF; early rapid accumulation, 25 DAF; middle rapid accumulation, 30 DAF; and 40 DAF, with two samples at this stage) were used in transcriptome analysis. Seeds harvested at these stages were frozen immediately in liquid nitrogen and used for RNA extraction. Total RNA was isolated using BiooPure™ RNA Isolation Reagents, and rRNA was removed using the Ribo-Zero Kit (Epidemiology). RNA-seq libraries were constructed using the Illumina TruSeq Stranded RNA Kit and sequenced on the Illumina HiSeq 4000 (paired-end 150 bp).
Public datasets used in this study
In total, we downloaded 45 publicly available RNA-seq datasets from the National Center for Biotechnology Information (NCBI), including 30 poly(A) RNA-seq datasets from B. napus (accession numbers PRJEB5461, PRJEB2588, PRJNA262144 and PRJNA338132), 7 poly(A) RNA-seq datasets from B. oleracea (accession number PRJNA183713), and 8 poly(A) RNA-seq datasets from B. rapa (accession number PRJNA185152).
Identification of lncRNAs
All of the raw reads from transcriptome sequencing were processed using Trimmomatic (Version 3.0) [56] with the default parameters for quality control. The clean data were then mapped to the B. napus genome using Tophat (Version 2.1.1) [57]. For each mapping result, Cufflinks (Version 2.1.1) [58] was used for transcript assembly. For strand-specific RNA-seq datasets, the parameter "--library-type fr-firststrand" was employed. All transcriptomes were merged with the annotation file of the reference genome to generate a final transcriptome using Cuffmerge. Cuffdiff was used to estimate the abundance of all transcripts based on the final merged transcriptome. We then used the following six filters to shortlist bona fide lncRNAs from the final transcriptome assembly: (1) transcripts without strand information were removed; (2) all single-exon transcripts within a 500-bp flanking region of known transcripts and in the same direction as the known transcripts were discarded; (3) transcripts overlapping with mRNAs annotated in the reference genome were deleted; (4) transcripts with FPKM scores < 0.5 (2 for single-exon transcripts) or shorter than 200 bp were discarded; (5) the coding potential of each transcript was calculated using CPC [59] and transcripts with CPC scores > 0 were discarded; (6) the remaining transcripts were searched against the Pfam database [60] using HMMER [61] to remove transcripts containing known protein domains. The transcripts that remained were regarded as expressed candidate lncRNAs.
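A minimal sketch of how filters (1)-(4) could be expressed programmatically is shown below; the transcript record fields are hypothetical, this is not the perl script used by the authors, and the CPC and Pfam/HMMER steps (filters 5 and 6) are not reproduced here.

```python
def passes_lncRNA_filters(tx):
    """tx: dict describing one assembled transcript with the fields used below.
    Mirrors filters 1-4 above; coding-potential (CPC) and Pfam checks would be
    applied afterwards to the survivors."""
    if tx["strand"] not in ("+", "-"):                     # (1) strand required
        return False
    if tx["n_exons"] == 1 and tx["near_known_same_strand_500bp"]:
        return False                                       # (2) single exon near known gene
    if tx["overlaps_annotated_mRNA"]:
        return False                                       # (3) overlaps annotated mRNA
    if tx["length"] < 200:
        return False                                       # (4) minimum length 200 bp
    min_fpkm = 2.0 if tx["n_exons"] == 1 else 0.5
    return tx["fpkm"] >= min_fpkm                          # (4) expression threshold

candidate = {"strand": "+", "n_exons": 2, "near_known_same_strand_500bp": False,
             "overlaps_annotated_mRNA": False, "length": 930, "fpkm": 1.1}
print(passes_lncRNA_filters(candidate))  # True
```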
Analysis of seed oil content
Seeds harvested at each developmental stage were dried in an incubator at 70°C until their weight became stable. Isolation and GC analysis of seed lipids for total oil content and fatty acid composition (expressed as μg/mg of total seed weight) were performed as previously described [62,63].
The value of expression chosen for boxplot
The maximum FPKM values of lncRNAs and mRNAs across all samples were selected as the expression values and used to generate their expression distributions as boxplots [10].
Coexpression network construction
Weighted gene coexpression network analysis (WGCNA) [45] was used to predict the potential roles of lncRNAs in lipid biosynthesis. First, we defined gene coexpression similarity using the Pearson correlation. Second, an adjacency function was employed to convert the coexpression similarity into connection strengths using a soft-thresholding power in each comparison. Third, hierarchical clustering with the topological overlap matrix was used to identify network modules consisting of highly correlated gene expression patterns. Finally, a summary profile (eigengene) for each module was used to correlate eigengenes with the traits (oil content and DAF) and to calculate the correlation between each gene and the traits by defining Gene Significance (GS). The software Cytoscape was employed to visualize the networks [49].
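To make the adjacency step concrete, the sketch below illustrates the core WGCNA idea (Pearson correlation raised to a soft-thresholding power). It is a simplified stand-in for the R WGCNA package actually used; the choice of a signed network and β = 6 are assumptions made here for illustration only.

```python
import numpy as np

def signed_adjacency(expr, beta=6):
    """expr: genes x samples expression matrix.
    Returns a WGCNA-style signed adjacency matrix:
    a_ij = ((1 + cor_ij) / 2) ** beta, where beta is the soft-thresholding power."""
    cor = np.corrcoef(expr)                  # Pearson correlation between genes (rows)
    return ((1.0 + cor) / 2.0) ** beta

# Toy expression matrix: 4 genes (rows) x 6 samples (columns)
rng = np.random.default_rng(2)
expr = rng.normal(size=(4, 6))
print(np.round(signed_adjacency(expr, beta=6), 2))
```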
Positional synteny of lncRNAs
The synteny or co-linearity of lncRNAs among the four species (B. napus, B. rapa, B. oleracea and A. thaliana) was detected using MCScanX [64]. BLASTp was employed to determine synteny by pairwise comparison with the parameters E-value < 1e-5 and max_target_seqs < 6. For each lncRNA, its 10 flanking protein-coding loci were retrieved from the annotation of each genome. Homology tests of lncRNAs and flanking genes among the four species were performed with BLASTn, and the top 5 hits of each B. napus lncRNA were chosen for comparison of its flanking genes. A syntenic lncRNA pair among B. napus, B. rapa, B. oleracea and A. thaliana was defined by having at least one identical upstream or downstream flanking protein-coding gene [42,65].
Sequence conservation of lncRNAs
To analyze the sequence conservation of lncRNAs, all lncRNAs derived from B. napus were used as the query dataset and searched against the lncRNAs of B. rapa, B. oleracea and A. thaliana and against their genome sequences with BLASTn. The cutoff threshold for significant hits was an E-value < 1e-5, coverage > 40% and identity > 50% for the matched regions [65].
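As an illustration of how these cutoffs could be applied in practice, the sketch below filters a BLASTn tabular output. The output format string (including the qcovs query-coverage column) and the file name are assumptions, since the text does not specify how the BLAST results were post-processed.

```python
def conserved_lncRNAs(blast_tab_path, min_identity=50.0, max_evalue=1e-5, min_coverage=40.0):
    """Parse a BLASTn tabular file produced with
    -outfmt '6 qseqid sseqid pident evalue qcovs' and return the set of
    query lncRNAs with at least one hit passing the cutoffs described above."""
    conserved = set()
    with open(blast_tab_path) as fh:
        for line in fh:
            qseqid, sseqid, pident, evalue, qcovs = line.rstrip("\n").split("\t")[:5]
            if (float(pident) > min_identity
                    and float(evalue) < max_evalue
                    and float(qcovs) > min_coverage):
                conserved.add(qseqid)
    return conserved

# Usage (hypothetical file name):
# hits = conserved_lncRNAs("bnapus_vs_brapa.blastn.tab")
# print(len(hits), "B. napus lncRNAs conserved in B. rapa")
```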
Quantitative reverse transcription (qRT)-PCR analysis
Total RNA isolated from seed samples of four cultivars at two stages (10-20 DAF and 30 DAF) was used for first-strand cDNA synthesis with a HiScript II 1st Strand cDNA Synthesis Kit (Vazyme) according to the manufacturer's protocol. The cDNA was used as template in qRT-PCR with ChamQ SYBR qPCR Master Mix-Q311 (Vazyme). Real-time PCR was performed using the LightCycler 96 (Roche). The reactions were performed at least in triplicate in three independent experiments, and the data were analyzed by the 2^-ΔΔCt method. The primers used in our study are listed in Additional file 21: Table S11, including those for the reference gene (EF-1α). All values are presented as fold changes of 30 DAF relative to 10-20 DAF. Student's t-test was performed to determine significant changes (P < 0.05).
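A minimal sketch of the 2^-ΔΔCt calculation is given below; the Ct values are hypothetical, with EF-1α as the reference gene as in the text.

```python
def fold_change_ddct(ct_target_test, ct_ref_test, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method:
    ΔCt = Ct(target) - Ct(reference, e.g. EF-1α) in each condition,
    then fold change = 2^-(ΔCt_test - ΔCt_control)."""
    d_ct_test = ct_target_test - ct_ref_test
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(d_ct_test - d_ct_control))

# Hypothetical Ct values: 30 DAF sample (test) vs 10-20 DAF sample (control)
print(round(fold_change_ddct(22.1, 18.0, 25.4, 18.2), 2))  # ~8.6-fold increase
```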
Conclusions
In this study, a total of 8905 lncRNA loci were identified, including 7100 lincRNA loci and 1805 loci generating lncNATs. We demonstrated that the B. napus genome has a large number of lncRNAs and that these lncRNAs are expressed broadly across many developmental stages and in different tissue types. We also provide evidence indicating that specific lncRNAs appear to be important regulators of lipid biosynthesis, forming regulatory networks with transcripts involved in lipid biosynthesis. We also provide evidence that these lncRNAs are conserved in other species of the Brassicaceae family. Taken together, our data provide a basis for further study of the roles of lncRNAs in oil biosynthesis in B. napus.
|
v3-fos-license
|
2020-12-24T09:11:21.080Z
|
2020-01-01T00:00:00.000
|
235056906
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/82/e3sconf_daic2020_06024.pdf",
"pdf_hash": "3e310435cc4fa909b48ed5cc2a606a4c338f2173",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43168",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Business"
],
"sha1": "e5f23b153cbaa8aef7c0a6e543d76ed7169a1c6b",
"year": 2020
}
|
pes2o/s2orc
|
Personnel marketing as a direction of personnel development in the agricultural complex
The article considers the ambivalent (two-sided) nature of personnel marketing as a modern management technology that makes it possible to balance, on the principle of mutual agreement, the interests of employers and employees in a situation of choice in a labor market segment. The problem addressed by the article lies in the need to find tools for effectively building (developing) the labor market of agricultural enterprises; personnel marketing technologies are proposed as such tools. The research methods are methods of theoretical generalization, logical modeling, induction, and other general scientific methods. The article defines the main directions and mechanisms for the development of personnel marketing as a modern social management technology that ensures the tactical and strategic development of company personnel. The possibility of promoting the attractiveness of jobs at agricultural enterprises through the use of external and internal personnel marketing tools is considered.
Introduction
Personnel marketing issues are quite relevant in the modern Russian literature and affect the interests of a large number of people. There is a common understanding of the importance of personnel marketing in supporting the effective management of a modern organization. Analyzing the works of authors whose full-text works are available on the Elibrary.ru resource, we note that there are two different approaches to the issue of personnel marketing. The first is represented by A. Mikhailova and co-authors, who in their works [13; 14; 1] consider personnel marketing (HR marketing) as "a type of managerial activity based on the application of the marketing methodology in the human resources management system and aimed at creating and developing intellectual capital with maximum consideration for the needs of the enterprise in personnel and the situation on the labor market" [14].
A. Makhmetova adheres to the same logic [11, p. 112]: she considers personnel marketing as a direction for analyzing the situation on the labor market in order to cover staffing requirements effectively and realize the organization's goals. A similar logic is presented in the textbook edited by A. Kibanov [6, p. 249], as well as by A. Pikhach, who considers personnel marketing a means of avoiding a crisis caused by a lack of staff [15].
The second direction in the understanding of personnel marketing is represented by the work of M. Menshikova and co-authors, who believe that "the fundamental task of personnel marketing is to create the maximum possible attractive image of an enterprise as an employer to provide itself with human resources with optimal quantitative and qualitative parameters" [10]. Some authors consider personnel marketing a means of evaluating and modeling careers [5]; others, a means of building staff profiles [19].
Some authors consider the marketing approach the basis of the organization's personnel policy [2]. Common to these approaches is the recognition of the need for the most efficient use of personnel. It should be noted that staff can be considered not only as an internal resource of the organization: they can also be used outside the organization, for example by being subcontracted or outsourced. Both of these approaches, in our opinion, are quite relevant for agricultural enterprises.
Without going into a deep terminological analysis, we consider it necessary to clarify the methodological approaches to personnel marketing in the conditions of agricultural enterprises. Among modern scientific interpretations of marketing, four concepts can be distinguished [10, p. 7]:
- marketing as a management concept;
- marketing as a system for achieving certain goals;
- marketing as a method;
- marketing as a philosophy, a market-oriented style of thinking.
Given the shortage of jobs, marketing approaches are applied simultaneously by employers (to search for employees) and by employees themselves (to search for jobs). These technologies are less common in agricultural enterprises. The paradox of agricultural enterprises is that the employer and the employee are often, in a sense, monopolists in the local labor market. Such monopolism frequently leads not to unconditional employment but to other forms of competition (refusal of employment, departure, dumping of the offer price), which in turn negatively affects the prospects of this segment of the labor market.
Method
The above allows us to formulate the problem of the article: there is a need for tools to develop an effective labor market for agricultural enterprises. We see personnel marketing technologies as such a tool.
The research methods are theoretical generalization, logical modeling, induction, and other general scientific methods.
In our opinion, the conditions for effective marketing activity in personnel management have already formed in the labor market of agricultural enterprises:
- the market of buyers of labor services is relatively stable, as a result of which an internal labor market has emerged in most enterprises;
- there is competition both between employees and between employers (for employees with high or specific skills);
- the employee and the employer form long-term motivation in the field of employment;
- the employee has the opportunity to refuse employment through out-migration;
- the conditions for the free movement of capital have been formed [20], which further increases mobility in the labor market.
At the same time, data from the Federal State Statistics Service [17] for 2005-2013 indicate that the share of costs other than labor compensation is much higher in non-agricultural sectors than in agriculture. The results are shown in Table 1 (source: compiled by the authors).
The observed dynamics indicate that the expenditures of agricultural enterprises are oriented towards monetary payments to employees, while spending on cultural, domestic, professional, and other development is below average. According to data for 2013-2019, the share of the rural population decreased from 26% to 25.41% [18]. All of this gives reason to argue that, in modern conditions, workers of agricultural enterprises can act relatively autonomously in pursuit of their own benefit. The significant influence of stakeholders on the activities of these organizations should also be noted. Current labor market conditions for agricultural professions point to the emergence of mutual competition, which gives market participants freedom of choice. Consequently, the strategic prospects for the development of agricultural enterprises become problematic against the background of a general outflow of the rural population.
Because marketing activity in the personnel management of agricultural enterprises lacks a clear structure, interaction between the labor market (both internal and external) and the carriers of labor resources (workers) remains chaotic, without a clearly defined concept of interaction. As studies by domestic and foreign authors show, modern processes in various labor markets are characterized by a violation of the psychological contract with the employee, expressed in the absence of (1) distributive justice, (2) procedural justice, and (3) interactional justice [3]. In our opinion, these results are entirely relevant for agricultural enterprises.
In the absence of such a concept, not all active and high-quality labor resources can find full (adequate) use in their places of residence. The share of unused (or unproductively and off-profile used) labor resources creates negative factors for the development of agricultural enterprises and adversely affects the lives of the resource owners themselves, as a result of which the labor resource is lost. It is possible to maximally effectively satisfy the entire set
Study detail and result
Marketing-oriented social technologies require a targeted impact on the social space in order to organize and maintain effective and balanced exchanges (both individual and systemic). The classic scenario of marketing activity implies influencing the behavior of the buyer in the process of making a purchasing decision [9].
The peculiarity of personnel marketing is that each participant in the marketing process acts both as an object and as a subject of marketing activity (in the enterprise, in the labor market, in economic life). In classical marketing, the enterprise interested in marketing its products acts as the object, and potential consumers of the products act as the subject [16, p. 9].
For personnel marketing this chain acquires a binary character: in the process of "buying" an employee, the company "sells" a specific workplace that a particular worker "buys". For both participants in this interaction, the result of marketing actions is therefore the determination of the price of the employment contract. The parallel presence of two oppositely directed processes, based on common social mechanisms, determines the specific requirements for personnel marketing. Adapting the thesis of B. Golodets on the hallmarks of marketing technology [10] to personnel marketing, we can state the following:
- time constraints: clear information about the start and end of the activity and about the period of maximum effectiveness of the planned measures, which makes it possible to plan the actions of the staff;
- optional repeatability of the full set of marketing tools (repeatability of individual tools is required);
- particularity or uniqueness, implying different instrumental approaches to different areas of marketing;
- variability, since technology always assumes the existence of alternative methods of activity [10, p. 249].
The specific marketing approaches also differ: first of all, between the technologies implemented by the parties during the initial "sale" of the workplace and those implemented to maintain its attractiveness. The technologies used by the employee to compete in the external and internal labor markets will likewise change.
In their book "Labor Marketing" [20], V.V. Tomilov and L.N. Semerkova propose the following methodological approaches to this type of marketing activity:
- firstly, market activity should be oriented towards the consumers of the labor force, the employers; customer orientation means studying not the employers' production capabilities but the needs of the market and, on that basis, developing a plan to satisfy them;
- secondly, orientation towards the other subjects of market relations should be taken into account, in particular the ability of employees to adapt to changing demand for labor, as well as their requirements for wages, working conditions, work, and rest;
- thirdly, a systematic approach should be implemented: all activities related to the sale of labor services under marketing conditions should be coordinated and operate synchronously;
- fourthly, the basic principle of marketing should be a long-term orientation [20, p. 7].
In our opinion, these principles do not sufficiently take into account the dual nature of marketing interaction in the labor process. Moreover, they proceed from the resource concept of staff. Modern qualified personnel are no longer a classic workforce; they possess a set of unique (sometimes irreplaceable) knowledge. In this case, it is advisable to use the project approach, since a specific set of competencies is determined by specific (project) goals.
Agribusinesses do not always take the opportunity to build up competencies, fearing a possible outflow of personnel. Studies by colleagues have revealed the institutional barriers most significant for the demand for competencies: rigid internal planning, disclosure of information, the strategic behavior of the company, excessive preference for the same professions, etc. [8]. For agricultural enterprises, we would add to this list the difficulty of forecasting conditions in the agricultural market. The use of low-skilled personnel also poses significant risks for agricultural enterprises. The logic of personnel marketing can be applied to resolve this contradiction. The main thesis is that the internal labor market should remain constantly more attractive to the employee than the external market. This logic is tied to satisfying the needs of the employee, and not only in material terms; we describe these needs using A. Maslow's pyramid of needs. A set of tools for meeting possible needs is given in Table 2. In Table 2, we assume that external tools are also the result of the organization's activities aimed at meeting employees' needs. The satisfaction of these needs occurs outside the organization and is directed both at satisfying needs and at forming them (including among those not yet in labor relations). In this way, it becomes possible to implement the systematic approach proposed by V. Tomilov and L. Semerkova [20].
Discussion
The main functional specificity of personnel marketing as a social technology is the presence of counter-directed marketing flows, both informational and material, from each participant in the social interaction. Creating a holistic system of personnel marketing requires building instrumental social mechanisms of managerial influence. When implementing the concept of personnel marketing at agricultural enterprises, special attention should be paid to modern technologies of network interaction, which give existing communication processes the character of informal ties. Such relationships exist in many agricultural enterprises, but often mainly at the lower level of social interaction.
A set of social influence methods aimed at meeting the needs indicated in Table 2 may include various methods of social regulation, moral incentives, and so on. The logic of personnel management based on marketing technologies that we propose requires the development of specific tools that reflect both the characteristics of the particular branch of an agricultural company and the needs of the people who make up the core of its team.
|
v3-fos-license
|
2018-06-07T12:41:42.800Z
|
2018-05-24T00:00:00.000
|
44128910
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0196968&type=printable",
"pdf_hash": "3bd251443e90415bb42120eb1a8dc24bf4a04efc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43170",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3bd251443e90415bb42120eb1a8dc24bf4a04efc",
"year": 2018
}
|
pes2o/s2orc
|
The impact of conjunctival flap method and drainage cannula diameter on bleb survival in the rabbit model
Purpose To examine the effect of cannula diameter and conjunctival flap method on bleb survival in rabbits undergoing cannula-based glaucoma filtration surgery (GFS). Methods Twelve New Zealand White rabbits underwent GFS in both eyes. The twenty-four eyes were divided into four groups. Two of the four groups (N = 12) received limbus-based conjunctival flaps (LBCF), and the other two (N = 12) received fornix-based conjunctival flaps (FBCF). Six FBCF rabbit eyes were implanted with 22-gauge drainage tubes, and the other six were implanted with 26-gauge tubes. Likewise, six LBCF rabbits received 22-gauge drainage tubes and six received 26-gauge tubes. Filtration blebs were evaluated every three days by a masked observer. Bleb failure was defined as the primary endpoint in this study and was recorded after two consecutive flat bleb evaluations. Results Group 1 (LBCF, 22- gauge cannula) had a mean bleb survival time (Mean ± SD) of 18.7 ± 2.9 days. Group 2 (LBCF, 26-gauge cannula) also had a mean bleb survival time of 18.7 ± 2.9 days. Group 3 (FBCF, 22-gauge cannula) had a mean bleb survival time of 19.2 ± 3.8 days. Group 4 (FBCF, 26-gauge cannula) had a mean bleb survival time of 19.7 ± 4.1 days. A 2-way analysis of variance showed that neither surgical approach nor cannula gauge made a statistically significant difference in bleb survival time (P = 0.634 and P = 0.874). Additionally, there was no significant interaction between cannula gauge and conjunctival flap approach (P = 0.874), suggesting that there was not a combination of drainage gauge and conjunctival flap method that produced superior bleb survival. Conclusion Limbus and fornix-based conjunctival flaps are equally effective in promoting bleb survival using both 22 and 26-gauge cannulas in the rabbit model. The 26-gauge drainage tube may be preferred because its smaller size facilitates the implantation process, reducing the risk of corneal contact.
Introduction
Glaucoma is a leading cause of blindness throughout the world and is estimated to affect 76 million people by 2020 [1]. Although elevated intraocular pressure (IOP) is often associated with this disease, glaucoma is chiefly characterized by optic nerve deterioration and visual field loss. In the initial stages of treatment, glaucoma is generally managed with medication. When pharmaceuticals fail to reduce IOP to appropriate levels, trabeculectomy (glaucoma filtration surgery; GFS) and tube-shunt surgery are the two mainstay surgical procedures. The concept of the surgeries is similar: both reroute aqueous from the anterior chamber of the eye to the subconjunctival space, forming a filtration bleb. Although there is debate as to which is more efficacious, the success of both procedures is largely dependent on bleb formation and survival. Surgical fashioning of a conjunctival flap is a key element of both GFS and tube-shunt surgery. This conjunctival flap can be either limbal-based (LBCF) in which an incision is made in the conjunctival and Tenon's capsule tissues several millimeters behind the limbus or fornixbased (FBCF) in which the conjunctival and Tenon's capsule incision is made at the limbus. Clinicians report several advantages and disadvantages to each surgical approach. While LBCFs are considered more difficult and time consuming to perform, some suggest that they have a lower risk of conjunctival wound leakage. In contrast, FBCFs are viewed as less technically demanding but with an increased risk of leakage [2].
Although the general trend has shifted towards the use of fornix-based flaps due to reports of increased rates of cystic blebs associated with LBCFs [3][4][5], a 2017 Cochrane systematic review found no significant differences in bleb survival or IOP control between FBCF and LBCF groups at 12 and 24 months follow-up in trabeculectomy patients. However, the group noted that LBCF eyes were more prone to be complicated by shallowing of the anterior chamber [6]. Compared to trabeculectomy, there is relatively little research regarding conjunctival flap method for tube-shunt surgery. One retrospective study by Suhr et al. found no significant differences in IOP control, overall success and changes in visual acuity between LBCF and FBCF tube-shunt eyes [7].
The rabbit model of GFS has been used to gain a better understanding of surgical techniques and has proven important in the development of drugs that reduce the likelihood of bleb failure, such as 5-fluorouracil and mitomycin C [8,9]. Initially, a full-thickness sclerostomy was adapted for use in the rabbit model [10]. The sclerostomy model sometimes produced inconsistent results, with not infrequent closure of the internal ostium by the iris or occasionally by vitreous. This led to uncertainty in determining the bleb survival endpoint, as scarring of the internal fistula is difficult to detect during clinical evaluation in this model. Later, Cordeiro et al. developed the 22-gauge angiocatheter model for rabbit GFS [11]. In this surgical procedure, a 22-gauge cannula is used to maintain a patent fistula between the anterior chamber and the subconjunctival space. This method decreased the risk of internal occlusion by allowing the surgeon to place the tip of the cannula beyond the iris, in direct slit lamp view. The angiocath model of GFS produced more consistent results than sclerostomy, eliminating the uncertainty associated with internal occlusion.
Inserting the 22-gauge angiocath in a rabbit eye can prove difficult due to the shallowness of the rabbit anterior chamber (AC). The rabbit AC is 2.9 ± 0.36 mm deep on average compared to 3.5 ± 0.35 mm in humans [12]. Even when viscoelastic is used to deepen the AC, inserting the 22-gauge is challenging and may lead to significant peripheral iris and/or corneal contact. A smaller angiocath would be easier to insert and less likely to cause ocular damage. To our knowledge, there are no published studies analyzing the effect of drainage cannula diameter in the rabbit model of GFS.
Although it appears that the conjunctival flap method has little effect on long term IOP control, the effect on bleb survival in the rabbit model is yet to be determined. Here we analyze the effects of conjunctival flap method or GFS drainage cannula diameter on filtration bleb survival, using standard 22-gauge and smaller 26-gauge cannulas.
Materials and methods
The study was performed using twelve New Zealand White rabbits, each weighing between 2 kg and 4 kg. The rabbits were sourced from Charles River Laboratories and housed in the AAALAC-certified Animal Care Services facility at the University of Florida in Gainesville, Florida. The University of Florida Institutional Animal Care and Use Committee approved the experimental protocol prior to initiation of the study (study number #201106599). Throughout the study, our protocol adhered to the Association for Research in Vision and Ophthalmology resolution statement for the use of animals in research.
Study design
Twelve rabbits (a total of 24 eyes) were randomized to one of four treatment groups with six eyes in each group. All rabbits underwent glaucoma filtration surgery (GFS) in each eye from a single surgeon (MBS).
The four treatment groups were based on drainage tube gauge and conjunctival flap approach (Table 1):
Surgical operation
The rabbits were anesthetized using a combination intramuscular injection: 50 mg/kg ketamine ("Ketaject", Phoenix, MO) and 10 mg/kg xylazine ("Xyla-ject", Phoenix, MO). Local anesthesia was also provided prior to surgery using topical administration of 0.1% proparacaine eye drops (Bausch & Lomb, Tampa, FL). The surgical technique used for the cannula-based glaucoma filtration surgeries was similar to that described in previous publications by this group [13][14][15][16][17]. All rabbits received the same surgical procedure aside from the conjunctival incision method and drainage cannula diameter. In brief, the surgeon retracted the eyelids with the use of an eyelid speculum. A partial-thickness corneal suture was then placed in the superior cornea as a traction suture, allowing rotation of the globe inferonasally. Surgical variations between experimental groups occurred at this step: Groups 1 and 2 (N = 12) received LBCFs, while Groups 3 and 4 (N = 12) received FBCFs.
For LBCF rabbits, Westcott scissors were used to make a posterior incision 6-7 mm from the limbus in the superotemporal quadrant. After the surgeon incised the conjunctiva, Tenon's capsule was opened. The conjunctiva and Tenon's capsule were then undermined toward the limbus, taking care not to create any buttonholes in the superficial tissues.
In the FBCF rabbit groups, a standard 5 mm long incision was made at the limbus. The incision was extended around the limbus so that a scleral flap could be formed. Blunt dissection then separated the conjunctiva and Tenon's capsule from the underlying sclera.
After the conjunctival flaps were fashioned, a #75 Beaver blade (Becton Dickinson & Co., Franklin Lakes, NJ) was used to form a corneal paracentesis tract in the superonasal quadrant and a cohesive viscoelastic agent was injected into the anterior chamber. Approximately 1 mm posterior to the limbus, a full thickness scleral tract through the anterior chamber was fashioned using a 27-gauge needle, taking care not to engage either the peripheral iris or cornea.
In twelve of the rabbits (six LBCF rabbits and six FBCF rabbits) a 22-gauge, IV cannula (Insyte Becton Dickinson Vascular Access, Sandy, UT) was inserted into the anterior chamber along the needle tract. A 26-gauge cannula was inserted in the other twelve (six LBCF rabbits and six FBCF rabbits). The needle of the cannula was retracted, and the cannula itself was placed inside of the pupillary margin to prevent occlusion by the iris. The scleral end of the drainage tube was trimmed so that it would protrude less than 1 mm from the insertion point. The cannula was anchored to the sclera using a 10-0 nylon suture (Ethicon Inc., Somerville, NJ).
In the FBCF group, Tenon's capsule and the conjunctiva were closed in one layer using absorbable 8-0 polyglactin suture material (Vicryl, Ethicon Inc., Somerville, NJ) to form a watertight seal at the limbus. In the LBCF group, a single-layer running closure of Tenon's capsule and the conjunctiva was performed with the same 8-0 polyglactin (Vicryl, Ethicon Inc., Somerville, NJ) suture. After inflating the bleb with BSS via the AC paracentesis tract, a Seidel test was performed to check for bleb leakage. Following surgery, a topical ointment containing neomycin and dexamethasone was applied to control inflammation and prevent infection. Rabbits received an oral analgesic for two days post-operatively.
Postoperative clinical evaluation
Post-surgically, rabbits were briefly anesthetized with isoflurane and examined by an experienced observer every three days. The observer assessed bleb elevation and area and evaluated the eyes for surgical complications such as hemorrhage, infection, and shallowing of the anterior chamber. The bleb was judged flat when there was no separation of the conjunctiva and Tenon's tissues from the sclera and angiocath. Bleb failure was recorded after the observer judged the bleb to be flat on two consecutive occasions, and the first of these two evaluation days was designated the bleb endpoint. If a bleb that had been declared flat on only one occasion was later noted to be elevated, that isolated flat observation was, by pre-specified study design, not counted toward failure.
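As a concrete illustration of this endpoint rule, the sketch below scans a series of visit records and returns the failure day; the function name and data layout are illustrative assumptions, not taken from the study protocol.

```python
def bleb_endpoint(observations):
    """Return the bleb failure day from a list of (day, is_flat) visit records.

    Failure is recorded after two consecutive flat evaluations, and the
    endpoint is the first of those two days; a single isolated flat reading
    does not count (matching the rule described above).
    """
    for (day, flat), (_, next_flat) in zip(observations, observations[1:]):
        if flat and next_flat:
            return day
    return None  # bleb still surviving at the last evaluation

# Example: visits every 3 days; flat on day 15 only (ignored), then flat on
# days 21 and 24, so the endpoint is day 21.
visits = [(3, False), (6, False), (9, False), (12, False),
          (15, True), (18, False), (21, True), (24, True)]
print(bleb_endpoint(visits))  # -> 21
```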
Statistical analysis
A two-way analysis of variance (ANOVA) was performed using GraphPad Prism 5.0 software to examine the effects of cannula gauge and conjunctival flap method, as well as any interaction between the two factors. With this number of eyes, a difference in endpoint of at least 2.5 days would have been required to achieve 80% statistical power for either comparison (22-gauge versus 26-gauge, or fornix-based versus limbus-based).
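The authors ran the analysis in GraphPad Prism; an equivalent two-factor ANOVA with interaction can be sketched in Python with statsmodels, as below. The data frame, column names, and survival values here are placeholders for illustration, not the study data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical layout: one row per eye, with flap method, cannula gauge,
# and bleb survival time in days (values are placeholders).
df = pd.DataFrame({
    "flap":     ["LBCF"] * 12 + ["FBCF"] * 12,
    "gauge":    (["22"] * 6 + ["26"] * 6) * 2,
    "survival": [18, 19, 21, 15, 20, 19, 17, 22, 18, 16, 21, 18,
                 19, 23, 15, 20, 18, 20, 24, 17, 19, 21, 16, 21],
})

# Two-way ANOVA with interaction, analogous to the Prism analysis.
model = ols("survival ~ C(flap) * C(gauge)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```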
Results
Blebs in rabbits receiving LBCF survived an average (mean ± SD) of 18.7 ± 2.9 days (Table 2). FBCF blebs survived fractionally longer, at 19.4 ± 3.95 days; however, this difference was not statistically significant (P = 0.634). No significant surgical complications were noted in any eye on day 1 or throughout the post-operative follow-up.
Eyes that were implanted with 22-gauge cannulas had an average bleb survival of 18.9 ± 3.4 days. This was not significantly different from 26-gauge eyes, which had an average bleb survival of 19.2 ± 3.6 days (P = 0.874).
As depicted in Figs 1 and 2, LBCF rabbits had an average bleb survival of 18.7 ± 2.9 days when implanted with either 22 or 26-gauge drainage cannulas. Blebs in rabbits operated on with an FBCF approach survived an average of 19.2 ± 3.8 days when implanted with 22-gauge cannulas and 19.7 ± 4.1 with 26-gauge cannulas.
A two-way ANOVA revealed that there was no significant interaction between cannula gauge and surgical approach (P = 0.874), meaning that no particular combination of cannula gauge and conjunctival flap method produced significantly better survival results than the others.
Discussion
Our study showed that conjunctival flap method did not have a significant impact on bleb survival in the rabbit model. The results of our study fall in line with previous retrospective human studies [6,[18][19][20] which showed no major differences in efficacy between the two methods. It appears that surgical skill and preference are the main factors that should determine conjunctival flap approach in the rabbit model.
In this study, bleb survival was chosen as the primary outcome measure rather than IOP. Intraocular pressure is known to be an unreliable indicator of bleb function in the rabbit model; there may be reductions in IOP even without patency between the AC and subconjunctival space [10]. This rabbit model is a model designed to study subconjunctival scarring, not glaucoma, as the aqueous outflow pathways are normal and baseline IOP is within the normal range. Therefore, bleb failure has generally been defined as the primary endpoint of GFS in the rabbit [21].
In 1973, Anthony Molteno developed the concept of draining aqueous away from the anterior chamber into a drainage plate via a long silicone drainage tube [22,23]. Later in the early 1980's, Stanley Schocket described another technique using readily available, inexpensive operating room materials. In this procedure, Schocket used an inverted retinal encircling band and 23-gauge Silastic tubing (N-5941-1, Storz) with an external diameter of 0.64 mm and an internal diameter of 0.34 mm to fashion a GDD [24,25]. Since then, all of the more commonly used drainage implants including the Ahmed, Baerveldt and Molteno have adopted these tube dimensions.
Although 0.64 mm has been the default external drainage tube diameter for GDDs, there is a lack of research regarding alternative proportions. The cannula implanted during GFS can be considered a surrogate for the GDD drainage tube. The 22-gauge angiocath has a slightly larger diameter than the standard tube used in a GDD, but the 26-gauge is a smaller diameter. Usually, tube diameter does not present a problem for surgeons; humans have an AC that is sufficiently deep and can be easily expanded with viscoelastic solution. However, some patients have narrow drainage angles, particularly those with hypermetropia or those of East-Asian descent [26][27][28], where the iris is closer to the cornea, limiting space for tube placement.
Twenty-six and 22-gauge drainage cannulas were equally effective at promoting filtration bleb survival. Twenty-two gauge cannulas have an outer diameter of 0.67 mm, leaving a very small margin for insertion error. The American Academy of Ophthalmology has stated that corneal endothelial cell failure is the primary long-term problem associated with tube-shunt surgery [29]. Patients diagnosed with glaucoma may already have limited numbers of endothelial cells [30] and tube-endothelial contact may further damage the endothelium, leading to corneal edema and an increased likelihood of vision loss [31,32]. The multicentered, prospective Tube versus Trabeculectomy study reported that persistent corneal edema was the most prevalent late post-operative complication associated with tube-shunt surgery, with 16% of tube eyes exhibiting this condition [33]. Similarly, the Ahmed versus Baerveldt study also reported a high rate of corneal complications with 11% of eyes complicated with persistent corneal edema [34].
Assuming an average rabbit AC depth of 2.9 ± 0.36 mm, there are between 0.95 and 1.31 millimeters of space on either side of the cannula once it is placed [12,35]. The smaller 26-gauge cannula has an outer diameter of 0.404 mm, giving the surgeon improved clearance for implantation. A smaller diameter angiocatheter is easier to insert, decreasing the risk of complications from iris or corneal contact. Our results showed no statistically significant differences in bleb survival using 22 and 26-gauge drainage cannulas, suggesting that a 26-gauge drainage angiocatheter may be equally good for this glaucoma model and that GDD designers could consider using smaller gauge drainage tubes for patients.
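As a rough check on the clearance figures quoted above, the per-side clearance can be recomputed from the stated AC depth range and the cannula outer diameters. The short sketch below is purely illustrative, assumes a centrally placed cannula, and rounds slightly differently from the values in the text.

```python
# Per-side clearance between cannula and ocular surfaces, assuming the
# cannula sits centrally in the anterior chamber (a simplification).
AC_MEAN, AC_SD = 2.9, 0.36                   # rabbit AC depth, mm
OUTER_DIAMETER = {"22-gauge": 0.67, "26-gauge": 0.404}  # mm

for gauge, od in OUTER_DIAMETER.items():
    lo = (AC_MEAN - AC_SD - od) / 2
    hi = (AC_MEAN + AC_SD - od) / 2
    print(f"{gauge}: {lo:.2f}-{hi:.2f} mm clearance per side")
# 22-gauge: roughly 0.93-1.30 mm per side (close to the 0.95-1.31 mm quoted)
# 26-gauge: roughly 1.07-1.43 mm per side
```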
In summary, drainage cannula diameter and conjunctival flap method produced no notable differences with respect to bleb survival in the rabbit model. The data presented support the use of 26-gauge cannulas in rabbit GFS in order to facilitate implantation and reduce postoperative complications. Further research is needed to examine the efficacy of smaller diameter GDD tubes in humans, especially in those patients with anatomically narrow drainage angles.
|
v3-fos-license
|
2017-08-06T10:02:47.114Z
|
2015-10-20T00:00:00.000
|
205356451
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/290/49/29202.full.pdf",
"pdf_hash": "85b7fb25583b49672fa5322866bfdbdf3e7e3d9e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43171",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "2e17933aac70e00cdd36f99603a761bb1fee1e02",
"year": 2015
}
|
pes2o/s2orc
|
Normal Fertility Requires the Expression of Carbonic Anhydrases II and IV in Sperm*
HCO3− is a key factor in the regulation of sperm motility. High concentrations of HCO3− in the female genital tract induce an increase in sperm beat frequency, which speeds progress of the sperm through the female reproductive tract. Carbonic anhydrases (CA), which catalyze the reversible hydration of CO2 to HCO3−, represent potential candidates in the regulation of the HCO3− homeostasis in sperm and the composition of the male and female genital tract fluids. We show that two CA isoforms, CAII and CAIV, are distributed along the epididymal epithelium and appear with the onset of puberty. Expression analyses reveal an up-regulation of CAII and CAIV in the different epididymal sections of the knockout lines. In sperm, we find that CAII is located in the principal piece, whereas CAIV is present in the plasma membrane of the entire sperm tail. CAII and CAIV single knockout animals display an imbalanced HCO3− homeostasis, resulting in substantially reduced sperm motility, swimming speed, and HCO3−-enhanced beat frequency. The CA activity remaining in the sperm of CAII- and CAIV-null mutants is 35% and 68% of that found in WT mice. Sperm of the double knockout mutant mice show responses to stimulus by HCO3− or CO2 that were delayed in onset and reduced in magnitude. In comparison with sperm from CAII and CAIV double knockout animals, pharmacological loss of CAIV in sperm from CAII knockout animals, show an even lower response to HCO3−. These results suggest that CAII and CAIV are required for optimal fertilization.
Post-testicular sperm undergo a multitude of maturation processes to acquire fertility in terms of penetrating the egg and generating a new unique individual. Sperm are transcriptionally and translationally silent. Therefore, the multiple physiological and biochemical modifications (1-5) must result from their interaction with the different environments through which they migrate (6 -8). In the epididymis, including the caput, corpus, and cauda epididymides, a low bicarbonate (HCO 3 Ϫ ) concentration and an acidic pH of the luminal fluid are important for sperm maturation, storage, and fertility (9 -11). Segment-specific gene expression patterns of acid base transport proteins in epithelial cells (12)(13)(14) modulate the distinct fluid composition, which undergoes considerable changes along the epididymal duct (15). Several ion channels and transporters, such as cystic fibrosis transmembrane conductance regulator, SLC26A3/A6, and sodium bicarbonate exchanger, as well as V-ATPases have been proposed to produce the acidic and low HCO 3 Ϫ -concentrated fluid composition (7,(15)(16)(17). Carbonic anhydrases (CAs) 2 have also been identified for HCO 3 Ϫ resorption in the epididymis (18 -21). Extracellular CA isoforms convert HCO 3 Ϫ into CO 2 which can then diffuse into the epididymal epithelial cells. Upon entry, CO 2 can be reconverted into HCO 3 Ϫ by intracellular CAs (22,23). At the basolateral membrane, HCO 3 Ϫ is then extruded by either AE2 and or sodium bicarbonate cotransporters (24,25). Disturbance in the epididymal endothelium of the acid base homeostasis impairs sperm maturation processes and potentially causes male infertility (26).
Ejaculated sperm are not able to penetrate and fertilize the egg in vivo. They have to mature in the female genital tract. Chang (27) and Austin (28) discovered this essential process, termed capacitation, which includes strictly regulated and complex biochemical changes (4,5). Female genital tract fluids are rich in HCO3− and Ca2+ and exhibit an alkaline pH, supporting capacitation (8,29). With regard to sperm motility, the capacitation process comprises major changes of the sperm beating pattern (30). Early HCO3−-mediated events (31) produce a fast, symmetrical flagellar beat and rapid progressive movement that allows sperm to travel the long distance through the uterus and the oviduct. Later events include the hyperactivated motility pattern characterized by high amplitude and asymmetrical flagellar beating, representing an increased torsional force (11,(32)(33)(34). Earlier work has proposed that hyperactivation may be the decisive factor in the release of sperm from the oviductal reservoir through detachment from the epithelium (35,36). Our own findings indicate that sperm attachment and release may be independent of hyperactivation (37). The pathway for HCO3−-evoked signaling in sperm is relatively well understood. This work focuses on the early HCO3−-induced signaling pathway. The downstream effects of HCO3− have already been well described. Intracellular HCO3− directly activates the soluble adenylyl cyclase, therefore increasing cAMP levels and activating PKA. This leads to increased protein tyrosine phosphorylation and an accelerated beat frequency (38-41). The mechanisms that initiate responses to HCO3− in sperm have remained elusive. It has been proposed, but not shown, that the Na+/HCO3− co-transporter NBC and the anion transporter SLC26A3/A6 are involved in the import of HCO3− into sperm (42)(43)(44). It has also been suggested that CAs are involved in HCO3− homeostasis because these enzymes catalyze the reversible reaction of CO2 to HCO3− (45,46). This work focuses on the earliest events in HCO3− signaling in sperm.
So far, 16 CA isoforms have been identified (47), of which CAII and CAIV have been shown to be up-regulated during spermatogenesis (12,48). CAIV is an extracellular glycosylphosphatidylinositol-anchored protein (49) whose involvement in the generation of HCO 3 Ϫ in murine spermatozoa we have already established (50). That work showed that sperm of CAIV knockout mice display an HCO 3 Ϫ disequilibrium at the cell surface and show a delayed and reduced increase of beat frequency in response to stimulation with HCO 3 Ϫ and CO 2 . In this work, we determine the distribution and functional aspects of the CAII isoform as an intracellular counterpart to CAIV.
We show the presence of CAII in the epithelium lining, the epididymal duct, and the sperm tail. Specifically, we show that, in sperm lacking CAII, acceleration of the flagellar beat by HCO 3 Ϫ and CO 2 is delayed and reduced in magnitude. CAII and CAIV together contribute to almost 100% of total CA activity in sperm. CAII/CAIV double knockout mice display subfertility as well as reduced sperm motility. Our studies further reveal that, to uphold the HCO 3 Ϫ -induced rise in beat frequency, genetic double knockout sperm develop a compensatory mechanism. In conclusion, CAII and CAIV are key enzymes in the regulation of sperm motility and, therefore, essential for male fertility.
Animals, Phenotyping, and Fertility Analysis of CAII CAIV Double Knockout Mice-WT C57BL6/J and CAII knockout B6.D2-Car2 n /J (CAII Ϫ/Ϫ ) mice were obtained from The Jackson Laboratory (Bar Harbor, ME). CAIV knockout B6.129S1-Car4 tm1Sly/J (CAIV Ϫ/Ϫ ) animals were provided by the laboratory of William S. Sly (Department of Biochemistry and Molecular Biology, St. Louis University School of Medicine, St. Louis, MO). Because of different chromosomal locations of the CAII (chromosome 8) and CAIV (chromosome 17) genes (51), CAII/CAIV double knockout (CAII Ϫ/Ϫ CAIV Ϫ/Ϫ ) animals were generated in accordance with approved protocols (no. 02/2011) by intercrossing individual heterozygous mice. According to Mendelian law, the probability of obtaining double knockout offspring is 6.25% at the F2 generation. For phenotype analysis of double knockout offspring, mutant mice were weighed once per week from day 21 on, body size was measured at the adult life stage, and organ weight of kidney and testis was determined and compared with WT mice. For further analysis, WT and double knockout testes were combined, embedded in paraffin, and used to study germ cell epithelia. For hematoxylin and eosin-stained testis, slices were examined with a bright-field microscope (Diaphot 300, Zeiss, Jena, Germany), and individual tubuli seminiferi contorti were documented. The thickness of germ cell epithelia was determined with Adobe Photoshop CS4 (Adobe Systems, San Jose, CA), whereby one tubule was calibrated orthogonally four times from the basal membrane to the tubule lumen, and advanced pixel lengths were converted into micrometer units. Results from three independently embedded testes for double knockout and WT mice with a total tubulus count Ն130 are shown as mean Ϯ S.E.
The fertility of double knock-out mice was studied in longterm mating experiments. Double knockout mice were housed as individual mating pairs for 16 weeks. For comparison, other pairs included double knockout mice with a WT partner. The numbers and sizes of litters were recorded, and offspring per week of mating was calculated. Pure WT matings served as a control.
Sperm Preparation and Motility Analysis-Sperm were isolated from the cauda epididymidis and vasa deferentia after animals were sedated with isoflurane (Baxter, Unterschleißheim, Germany), followed by a cervical dislocation as described before (50). Sperm were allowed to swim out in HS buffer for 20 min at 37°C and 5% CO 2 . Released sperm were washed twice with HS buffer (3 min at 300 ϫ g) and resuspended in a final concentration of 1-2 ϫ 10 7 cells/ml in HS buffer. The sperm samples, washed and stored in HS, were used for all subsequent experiments. For further analysis of the effects of phospholipase C (PLC, Life Technologies) on sperm, 1 ϫ 10 6 cells/ml were incubated in 500 l of HS buffer containing 2 units of PLC for 90 min at 37°C in a shaking water bath. Sperm were sedimented (3 min at 300 ϫ g) and resuspended in 250 l of fresh HS buffer.
CA Activity in Sperm-CA enzyme activity experiments were performed on a quadrupole mass spectrometer (OmniStar GSD 320, Pfeiffer Vacuum, Asslar, Germany). Analysis was carried out as described previously (50). The loss of doubly labeled 13C18O2 was followed, and CA activity was expressed in units as defined previously (74). From this definition, 1 unit corresponds to 100% stimulation of the non-catalyzed 18O depletion of doubly labeled 13C18O2. For the experiment, 6 ml of HS buffer was filled into a cuvette, and the non-catalyzed reaction was started by adding 6 µl of the doubly labeled 13C18O2 for 8 min. 4 × 10^6 sperm cells were then added to measure the CA-catalyzed reaction for 10 min. Results are shown as mean ± S.E. of three independent experiments.
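Under the unit definition quoted above (1 unit corresponds to 100% stimulation of the non-catalyzed 18O depletion), activity can be expressed from the measured depletion rates roughly as in the sketch below. This is only an illustrative calculation with placeholder rate values, not the authors' analysis pipeline.

```python
def ca_units(k_catalyzed, k_uncatalyzed):
    """Units of CA activity from 18O-depletion rates.

    With 1 unit defined as 100% stimulation of the non-catalyzed reaction,
    units = (k_catalyzed - k_uncatalyzed) / k_uncatalyzed.
    """
    return (k_catalyzed - k_uncatalyzed) / k_uncatalyzed

# Placeholder depletion rate constants (arbitrary 1/min values).
k_uncat = 0.010   # 18O depletion before adding sperm
k_cat = 0.062     # 18O depletion after adding sperm
print(f"{ca_units(k_cat, k_uncat):.1f} units")  # -> 5.2 units
```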
Immunoblotting-For CAII and CAIV protein detection, we prepared WT and knockout tissues from mice in the same way as described before (50). 60 g of total protein and 30 l of sperm sample were separated in a NuPAGE 4 -12% BisTris gel (Invitrogen) and blotted on nitrocellulose membranes (Invitrogen). Air-dried membranes were blocked with TBS/5% Slim Fast TM for 1 h before primary antibody (rb anti-CAII IgG or/and gt anti-CAIV IgG, 1:1000, in 10% Roti Block (Roth, Karlsruhe, Germany)) incubation overnight at 4°C. Membranes were washed twice and incubated first with HRP-conjugated anti-gt IgG (1:1000 in TBS-T) and then with HRP-conjugated anti-rb IgG (1:1000 in TBS-T), each for 1 h at room temperature. CAII and CAIV detection was carried out with ECL reagent (GE Healthcare) on a Chemie-Doc TM XRS apparatus (Bio Rad).
To assess capacitation of sperm from WT and CAII Ϫ/Ϫ CAIV Ϫ/Ϫ mice, sperm from the cauda epididymidis and vasa deferentia were incubated for 180 min either in HS medium (37°C, air atmosphere) or in capacitation medium (37°C, 5% CO 2 ). Protein extraction and blotting were carried out according to a protocol published previously (31). Membranes were incubated with anti-phosphotyrosine IgG (diluted 1:1000 in Roti-Block) overnight at 4°C. After washing three times with TBS-T, the membranes were incubated with HRP-conjugated anti-mouse (diluted 1:10,000 in TBS-T) for 1 h at room temperature. Protein bands were detected with ECL reagent (GE Healthcare) on a Chemie-Doc TM XRS apparatus (Bio Rad).
Immunohistochemistry-Organs were isolated from mice, fixed, and cut as described previously (50). Isolated sperm were air-dried and fixed in methanol for 15 min at room temperature.
qRT-PCR-Tissue homogenization, RNA isolation, and cDNA synthesis were performed as described previously (49). cDNA served as the template for the subsequent RT-PCR, and RNA expression of CAII, CAIV, and CAXIV in tissues from the male reproductive tract was assessed by relative quantification with the ΔΔCt method (54). 18S rRNA served as the endogenous standard and kidney as the reference tissue. Amplification and detection were carried out according to an Applied Biosystems protocol, with each cDNA template run in triplicate with the respective TaqMan gene expression assay (Applied Biosystems, Darmstadt, Germany) on a StepOnePlus cycler and software (Applied Biosystems). Results are presented as mean real-time quantitative values or relative amounts ± S.E., each from three independent experiments.
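For readers unfamiliar with the ΔΔCt method cited here, relative expression is computed as 2^(−ΔΔCt) against the endogenous standard (18S rRNA) and the reference tissue (kidney). The minimal sketch below uses invented Ct values purely for illustration.

```python
def relative_expression(ct_gene, ct_18s, ct_gene_ref, ct_18s_ref):
    """Relative quantification by the 2^(-ddCt) method.

    ct_gene/ct_18s: Ct values of the target gene and 18S rRNA in the tissue
    of interest; *_ref: the same values in the reference tissue (kidney).
    """
    d_ct_sample = ct_gene - ct_18s
    d_ct_ref = ct_gene_ref - ct_18s_ref
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)

# Made-up Ct values for illustration only (not data from the paper).
print(relative_expression(ct_gene=24.0, ct_18s=12.0,
                          ct_gene_ref=23.0, ct_18s_ref=12.0))  # -> 0.5
```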
Determination of Flagellar Beat Frequency-Beat frequency was determined as described previously (50). In brief, flagellar beat frequency was observed on an inverted microscope (Diaphot 300, Nikon, Tokyo, Japan) and recorded at 300 Hz in a 1200 ϫ 1400 pixel region with an IDT M3 high-speed camera (IDT Inc., Tallahassee, FL) and Motion Studio 64 software (Imaging Solutions, Regensburg, Germany). Determination of single sperm beat frequency was performed as described previously (55). Single sperm sequences were cut, arranged, and contrasted by ImageJ v1.37 software. Images of maximum amplitudes were merged into one sum file with MetaMorph v7.1 (Molecular Devices, Sunnyvale, CA) and analyzed by a semiautomated algorithm written in Igor Pro TM v6.04 (Wavemetrics, Lake Oswego, OR). Data are shown as mean Ϯ S.E., calculated with SigmaPlot v11.0 (Systat Software) and with a minimum of 17 single sperm for each waveform experiment (for exact sperm numbers, see the figure legends).
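The beat frequency itself was extracted with the authors' in-house Igor Pro routine. A generic way to estimate a dominant beat frequency from a 300 Hz recording is to take the peak of the power spectrum of the flagellar displacement trace, as in the sketch below; this is an illustrative stand-in, not the published algorithm.

```python
import numpy as np

def beat_frequency(displacement, fs=300.0):
    """Dominant flagellar beat frequency (Hz) from a displacement trace
    sampled at fs Hz, estimated as the peak of its power spectrum."""
    trace = np.asarray(displacement, dtype=float)
    trace -= trace.mean()                      # remove DC offset
    spectrum = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin

# Synthetic 7 Hz beat recorded for 2 s at 300 Hz, plus a little noise.
t = np.arange(0, 2, 1 / 300.0)
trace = np.sin(2 * np.pi * 7 * t) + 0.1 * np.random.randn(t.size)
print(beat_frequency(trace))  # ~7.0 Hz
```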
Dye Loading and pH i Measurements-To measure the steady-state pH i , sperm were loaded and measured as described previously (53). In brief, 250 l of HS buffer was spiked with 0.1 M Pluronic-Fl27 (Invitrogen) and 0.1 M pH-sensitive 2Ј,7Јbis-(2-carboxyethyl)-5-(and-6)-carboxyfluorescein, acetoxymethyl ester (Invitrogen) and mixed with 250 l of HS-stored sperm (3 ϫ 10 6 cells/ml) suspension. After three washing steps, cells were measured on a Nikon Eclipse TE2000-U microscope equipped with a monochromator (Till Photonics, Munich, Germany). An intracellular calibration was performed by suspension of the dye-loaded sperm in K ϩ -based medium variously buffered at pH 5.0, 7.0, or 9.0 and treatment with the K ϩ -selective ionophore nigericin (Sigma Chemicals). For pH i equilibration, the fluorescence ratio of 436/488 was transferred to a cellspecific pH i (56). To measure the kinetics of changes in the pH i dye loading, the experimental procedures were carried out the same way as described above, with the following exceptions. 250 l of the sperm solution (3 ϫ 10 6 cells/ml) was mixed with an equal volume of HS buffer containing 0.1% PowerLoad TM and 0.5 M pHrhodo TM Red acetoxymethyl ester (Invitrogen). Cells were incubated for 30 min in the dark at room temperature, washed two times with fresh HS buffer, and subsequently used to measure the pH i . Fluorescence was sampled during 50 ms of excitation, applied at 1 Hz. Changes in fluorescence were normalized to resting fluorescence (F/F0) with SigmaPlot 11.0 (Systat Software).
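Converting the 436/488 fluorescence ratio to pHi relies on the three-point nigericin calibration described above. One simple way to apply such a calibration is monotonic interpolation of measured ratios against the calibration points; the sketch below uses invented calibration values, not the authors' calibration data.

```python
import numpy as np

# Hypothetical calibration: mean 436/488 ratios measured in nigericin-clamped
# sperm at pH 5.0, 7.0, and 9.0 (values invented for illustration).
cal_ph = np.array([5.0, 7.0, 9.0])
cal_ratio = np.array([0.45, 0.80, 1.30])   # must be monotonic for np.interp

def ratio_to_ph(ratio):
    """Interpolate an intracellular pH from a 436/488 fluorescence ratio."""
    return float(np.interp(ratio, cal_ratio, cal_ph))

print(ratio_to_ph(0.76))  # ~6.8, near the resting pHi reported in the Results
```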
CAII and CAIV Distribution in the Male Reproductive Tract and in Sperm-Double immunostaining (Fig. 1A) was performed to localize CAII and CAIV in the testis, epididymis, and sperm. CAII was detected with DAB (brown) and CAIV with Texas Red (red). WT testis (Fig. 1A, a) shows a specific CAII signal in elongated sperm, whereas early germ cell stages are CAII-negative. In the epididymis, CAII is present in single epithelial cells of the caput (Fig. 1A, c) and the cauda epididymidis (Fig. 1A, g) and in nearly all cells of the corpus epididymidis (Fig. 1A, e). CAIV signals are not detectable in WT testis (Fig. 1A, a) and the caput epididymidis (Fig. 1A, c). However, a specific immunoreaction is visible in the apically located stereocilia network of epithelial cells in the corpus epididymidis (Fig. 1A, e) as well as in the cauda epididymidis (Fig. 1A, g). Luminally located sperm show a specific CAII signal in the testis and in all parts of the epididymis. In contrast to CAII, CAIV is not detectable in luminal sperm of the testis and caput epididymidis; CAIV is only present in sperm from the corpus and cauda epididymidis. Tissue from CAII−/− CAIV−/− mice, which served as a negative control, does not show any specific CAII or CAIV immunoreaction (Fig. 1A, b, d, f, and h). The same results were obtained from Western blot analyses, as shown in Fig. 1B. Protein extracts from WT and CAII−/− CAIV−/− mice were stained for the presence of CAII and CAIV. In the WT testis and caput epididymidis, only a CAII signal (28 kDa) is detectable, whereas the WT corpus and cauda epididymidis display signals for CAII and CAIV (38 kDa). Protein extracts from isolated WT sperm show a prominent immunoreactive CAII band and a weaker CAIV band. No signal was detected in any tissue or sperm of CAII−/− CAIV−/− animals (Fig. 1B, −/−).
Figure 1 legend: CAII is present in elongated spermatids, epididymal spermatozoa, single epithelial cells of the caput and the cauda epididymidis, and nearly all epithelial cells of the corpus epididymidis. CAIV is localized in the stereocilia network of the corpus and the cauda epididymidis as well as in luminal sperm after passing the corpus region. In Western blot analyses, CAII (28 kDa) is detectable in the testis, all parts of the epididymis, and cauda sperm. A specific CAIV band at 38 kDa is only present in the corpus and the cauda epididymidis as well as in sperm. Neither CAII nor CAIV is detectable in any of the double knockout tissues or in the protein samples, which served as controls. Scale bars = 50 μm (a-h) and 10 μm (insets).
To analyze development-dependent protein expression in the male genital tract, we stained WT tissues (+/+) of prepubertal (3-week-old) and pubescent (5-week-old) mice with antibodies against CAII (Fig. 2A) and CAIV (Fig. 2B). CAII is not detectable in 3-week-old WT testis (Fig. 2A, a) or in the distinct parts of the epididymis (Fig. 2A, c, e, and g). Puberty leads to significant changes in CAII distribution. In tissues from 5-week-old animals, CAII is localized throughout the entire genital tract with similar but weaker signals (Fig. 2A, b, d, f, and h) compared with the tissues from adult animals in Fig. 1A. In contrast to CAII, a specific CAIV signal is already present in prepubertal (3-week-old) WT tissue from the corpus epididymidis (Fig. 2B, e). No CAIV signal is present in the 5-week-old testis (Fig. 2B, b) and caput (Fig. 2B, d). Intense staining occurs in the apically located stereocilia network of the corpus epididymidis (Fig. 2B, f) and the cauda epididymidis (Fig. 2B, h) of 5-week-old mice.
To localize CAII and CAIV more systematically in epididymal sperm, we performed double immunofluorescence staining (Fig. 3). CAII signals (green) are detectable in the cytoplasm of the principal piece of sperm tail. CAIV signals (red) are localized in parts of the acrosome and in the plasma membrane of the entire sperm tail, predominantly in the mid-piece. In comparison with WT sperm, sperm from double knockout mice are negative for CAII and CAIV (data not shown).
CAII and CAIV Are the Most Abundant Isoforms in Sperm-To determine CAII and CAIV enzyme activity, mass spectrometry was performed. A comparison of total CA activity between the sperm of WT and CAII or CAIV knockout animals provides information about the relative activity of these CA isoforms in sperm. The results illustrated in Fig. 4A indicate a total CA enzyme activity in WT sperm of 5.20 Ϯ 0.2 units/ml. In comparison with WT sperm, CAIV Ϫ/Ϫ leads to a significant reduction of 31.4% (3.57 Ϯ 0.25 units/ml) and CAII Ϫ/Ϫ of 63.2% (1.84 Ϯ 0.07 units/ml). We consequently measured the CA activity in sperm of CAII Ϫ/Ϫ CAIV Ϫ/Ϫ animals. The detected activity of 0.7 units/ml (Ϯ 0.02) equals a reduction of 86.9% compared with WT sperm. This activity is less than the activity measured in native Xenopus oocytes (1.5 units/ml), which do not express CA at all (57) Because CAs are involved in generating HCO 3 Ϫ , we examined the influence of CAII and CAIV on the HCO 3 Ϫ -mediated early activation of sperm. For this, we determined the beat frequency of single sperm as a function of time by application of 15 mM HCO 3 Ϫ (Fig. 4B). WT sperm respond to HCO 3 Ϫ with an acceleration of their beat frequency from 3.3 Ϯ 0.09 Hz to a maximum of 7.04 Ϯ 0.18 Hz within the first 20 s. Sperm from CAIV, as well as from CAII knockout mice, show a delayed and reduced HCO 3 Ϫ response compared with WT sperm. The sperm beat frequency of CAIV Ϫ/Ϫ animals rises from 2.78 Ϯ 0.09 Hz to 5.47 Ϯ 0.25 Hz within 20 s and displays a maximum of 5.88 Ϯ 0.26 Hz (t ϭ 60 s). Sperm from CAII Ϫ/Ϫ animals accelerate from 2.89 Ϯ 0.1 Hz to 3.9 Ϯ 0.18 Hz in the first 20 s, reaching a maximum of 5.58 Ϯ 0.19 Hz after 60 s of stimulation. Similar beat frequency results were obtained by stimulating sperm with 2% CO 2 (Fig. 8B). These findings confirm the direct involvement of both enzymes in the HCO 3 Ϫmediated pathway.
Compensatory Expression of CAII and CAIV in Tissues of Single Knockout Mice-qRT-PCR analyses were performed to examine the expression of CAII and CAIV mRNA in tissues of the male reproductive tract. First, the expression in WT tissue was determined (Fig. 4C). In relation to the kidney, which served as the internal standard (relative expression value, 1.0), CAIV mRNA is only expressed in the corpus epididymidis (1.66 ± 0.49). CAII mRNA is present in all parts of the reproductive tract, with the highest expression values of 0.99 ± 0.23 in the testis. Next, CAIV mRNA expression was determined in CAII−/− animals and vice versa to verify a potential compensatory overexpression. Fig. 4D shows the relative CAIV expression in CAII−/− knockout animals. In these experiments, WT tissue served as internal standards. An overexpression of CAIV in CAII knockout tissues can be detected in the caput (5.70 ± 5.48), corpus (7.06 ± 6.78), cauda epididymidis (9.65 ± 7.18), and vas deferens (1.89 ± 1.32). In contrast, CAIV−/− animals show (Fig. 4E)
reproductive tract do not show any difference compared with WT organs (data not shown). However, a closer examination after weaning reveals a significant reduction in animal and testis weights compared with WT mice. The mutant offspring was weighed once a week from day 21 on, and a reduced weight in all life stages was detected. Fig. 5A shows the body weight of 10-week-old male mice. CAII Ϫ/Ϫ CAIV Ϫ/Ϫ mice have an average weight of 22.13 Ϯ 0.61 g, whereas WT mice of the same age weigh 26.82 Ϯ 0.78 g. Adult testes of CAII Ϫ/Ϫ CAIV Ϫ/Ϫ mice reach an average weight of 72.33 Ϯ 3.51 mg, which is a reduc- In comparison with WT sperm, the deletion of CAII or CAIV leads to a reduction in activity of 32% and 65%, respectively. CA activity in CAII Ϫ/Ϫ CAIV Ϫ/Ϫ mice is 0.7 units/ml (Ϯ 0.02), which equals a reduction of 86.9%. Data are mean Ϯ S.E. of six or more measurements from three independent experiments. The asterisks refer to the values of the bar for WT. *, p Յ 0.05; **, p Յ 0.01; ***, p Յ 0.001. B, analysis of the single-sperm beat frequency response to 15 mM HCO 3 Ϫ . CAII Ϫ/Ϫ and CAIV Ϫ/Ϫ sperm display a delayed and reduced acceleration of beat frequency. Data are mean Ϯ S.E. of Ն40 single sperm. C, expression of CAII and CAIV mRNA in WT tissue detected by qRT-PCR. D and E, in relation to WT tissue, CAIV is overexpressed in the caput, corpus, and cauda epididymidis (epi.) of CAII Ϫ/Ϫ mice (D), whereas CAII expression is highly increased in the cauda epididymidis and vas deferens (vas def.) of CAIV Ϫ/Ϫ animals (E) (n ϭ 3). tion of 32.2% compared with WT testes (106.75 Ϯ 6.5 mg) (Fig. 5A). Because CAs are essential for the regulation of pH i , we determined the steady-state pH i of the sperm of double knockout mice (Fig. 5B). Mean values of pH i in CAII Ϫ/Ϫ CAIV Ϫ/Ϫ cells (pH i ϭ 6.75 Ϯ 0.07) do not show any significant change compared with the sperm of WT animals (pH i ϭ 6.78 Ϯ 0.05). Through morphological studies of the testes (Fig. 5C), we observed a thinner germ epithelium in the seminiferous tubules, which might be an explanation for the significant lower testis weight. By measuring 130 tubuli of three different mice, an average germ cell epithelium of 60.97 Ϯ 0.93 m in CAII Ϫ/Ϫ CAIV Ϫ/Ϫ animals and 76.00 Ϯ 0.95 m in WT testes was detected (Fig. 5D). We used capacitating conditions to compare the sperm of WT and CAII Ϫ/Ϫ CAIV Ϫ/Ϫ mice. Sperm of both animals were able to show tyrosin phosphorylation as an indicator of late effects of bicarbonate (Fig. 6A). Because no differences in the steady-state pH i of WT and double knockout sperm were observed, we next investigated whether there was a difference in the kinetics of intracellular alkalization upon stimulation with ammonium chloride. Fig. 6B shows the response of WT sperm (black line) and CAII Ϫ/Ϫ CAIV Ϫ/Ϫ sperm (gray line) to ammonium chloride. The decrease of fluorescence is 25.5% in WT and 22.6% in CAII Ϫ/Ϫ CAIV Ϫ/Ϫ sperm. To verify the compensatory expression of CAII in CAIV Ϫ/Ϫ mice and vice versa, we used Western blot analysis. Sperm of CAII Ϫ/Ϫ animals show a reduced CAIV protein level whereas the CAII protein level in sperm of CAIV Ϫ/Ϫ mice corresponds to that of WT mice (Fig. 6C).
CAII/CAIV Deletion Drastically Affects Sperm Motility and Fertility-To assess different motility patterns, we performed CASA analyses with sperm populations from CAII Ϫ/Ϫ and CAII Ϫ/Ϫ CAIV Ϫ/Ϫ animals (Fig. 7A) determined by offspring analysis from three independent mating experiments. As indicated in Fig. 7B, WT males were mated with WT females (type 1) or CAII Ϫ/Ϫ CAIV Ϫ/Ϫ females (type 2). Furthermore, WT females were mated with CAII Ϫ/Ϫ CAIV Ϫ/Ϫ males (type 3). Type 4 indicates the mating of CAII Ϫ/Ϫ CAIV Ϫ/Ϫ males with CAII Ϫ/Ϫ CAIV Ϫ/Ϫ females. Each pair type was mated over a period of 16 weeks, and the numbers of litters and pups were recorded. WT mating (type 1) produced an average offspring per week of 2.02 Ϯ 0.19. Smaller litter sizes resulted from matings of CAII Ϫ/Ϫ CAIV Ϫ/Ϫ females with WT males (type 2). In this case, four individual matings produced nine living litters with a total number of offspring of 29, which equals 0.45 Ϯ 0. 16 Ϫ -mediated pathway in early sperm activation, we performed waveform analyses with single sperm. Fig. 8, A and B, shows the beat frequency of sperm from WT (solid line), CAII Ϫ/Ϫ (dashed line), and CAII Ϫ/Ϫ CAIV Ϫ/Ϫ (dotted line) mice over time by perfusion with HCO 3 Ϫ (Fig. 8A) and CO 2 (Fig. 8B). The WT sperm beat frequency accelerates within 20 s from 3.34 Ϯ 0.09 Hz to 7.04 Ϯ 0.18 Hz with HCO 3 Ϫ and from 3.53 Ϯ 0.08 Hz to 6.51 Ϯ 0.19 Hz with CO 2 . CAII Ϫ/Ϫ sperm show an increase in beat frequency within the first 20 s from 2.90 Ϯ 0.10 Hz to 3.90 Ϯ 0.18 Hz by HCO 3 Ϫ and from 2.77 Ϯ 0.09 Hz to 3.72 Ϯ 0.14 Hz by CO 2 application. The respective Ϫ ) and 6.57 Ϯ 0.21 (CO 2 ). These results reveal that the additional CAIV gene loss of CAII Ϫ/Ϫ mice does not potentiate the adverse effects of the mutant. A residual CA activity in the double KO sperm preparations could be due to contamination by somatic cells.
To confirm the cleavage of CAIV, WT sperm were first treated with PLC to produce biochemical CAIV knockout sperm (CAIV BC−). These PLC-incubated sperm show a reduced immunoreaction with the CAIV antibody along the entire sperm tail compared with WT sperm (Fig. 8C). The same result is obtained by Western blot analysis (Fig. 8D). The PLC-treated sperm fraction displays a diminished band intensity in comparison with non-treated WT sperm (Fig. 8D, left panel). Additionally, the CAIV protein is present in the corresponding supernatant of PLC-incubated sperm, whereas no CAIV-specific band is detectable in the supernatant of non-treated WT sperm (Fig. 8D, right panel). To exclude that the treatment with PLC has an effect on the HCO3−-mediated beat frequency acceleration, we incubated sperm from genetic CAIV−/− mice with PLC (CAIV−/−/BC−). The response of such cells to HCO3− is not significantly different from that of CAIV BC− sperm (Fig. 8E). The beat frequency of CAIV−/−/BC− sperm increases over 180 s from 3.06 ± 0.19 Hz to 4.93 ± 0.24 Hz after stimulation with HCO3−. Similar values were obtained with CAIV BC− sperm (3.09 ± 0.21 Hz to 4.90 ± 0.25 Hz). The fact that sperm of CAIV−/−/BC− mice respond to HCO3− similarly to CAIV BC− cells confirms that the PLC treatment does not affect the response to HCO3− in early sperm activation through a loss of other glycosylphosphatidylinositol-anchored proteins.
Loss of CAII and CAIV Substantially Affects Sperm Motility
and Murine Fertility-Our immunohistochemical and PCR studies show the presence of CAII in epithelial cells of all parts of the epididymis. These results concur with studies in rats (19, 59) and humans (15). Because of its enzymatic activity, CAII is presumed to be involved in the acidification process of the epididymal fluid. The strongest CAII protein signal was detected in epithelial cells of the corpus epididymidis. This result indicates a dominant regulatory function of CAII in this section, where pH-dependent sperm maturation processes like protein transfer, lipid remodeling, and protein modifications take place (1, 60-63). Furthermore, sperm storage in a quiescent state in the cauda epididymidis is a pH- and HCO3−-dependent process in which CAII is likely to be involved (64).
In contrast to CAII, this study and a previous study (50) demonstrate the presence of the membrane-bound CAIV only in the stereocilia network of the corpus epididymidis. Here, the transfer of CAIV from the stereocilia of the epithelial cells to the sperm plasma membrane takes place (50, 65), which may explain the exclusive and high expression levels in this part of the epididymis. The high CAIV protein content in the cauda epididymidis detected by Western blot analysis is the result of luminal CAIV-positive sperm, which were not flushed out prior to protein isolation. An additional function of CAIV, together with the co-localized CAII in corpus epithelial cells, could be HCO3− resorption combined with parallel intracellular H+ generation (22, 66). This hypothesis is supported by the work of Au and Wong (20), who showed that intravenous application of the nonspecific CA inhibitor acetazolamide blocks luminal acidification in the rat epididymis. The analysis of prepubertal (3-week-old) and pubescent (5-week-old) mice shows that the CAII protein is detectable during puberty, pointing to specifically regulated expression. This could be a hormonally regulated mechanism, as has already been described for CAs in rats (67-69). We propose that CAII enzyme activity plays a major role during puberty in achieving fertility. In contrast to CAII, CAIV is already expressed prepubertally in the corpus epididymidis. In the cauda epididymidis, however, CAIV is first present in 5-week-old mice. The specific tissue expression might also reflect different functions of CAIV in the male reproductive tract, such as CAIV protein transfer in the corpus epididymidis and sperm storage in the cauda epididymidis.
Generating CAII/CAIV double knockout mice allowed us to demonstrate the importance of both enzymes in sperm for the achievement of fertility. Zhou et al. (70) have described macroscopic anomalies in the rete testis and ductuli efferentes of CAII-deficient mice. This study demonstrates a significantly reduced animal and testis weight in CAII−/−CAIV−/− mice. Detailed macroscopic analyses of the testes indicate that the reduced weight is accompanied by a thinner germinal epithelium. Interestingly, spermatogenesis and sperm storage are not disturbed in CAII−/−CAIV−/− mice. Sperm can be recovered in equal numbers from WT and CAII- and CAIV-null mutant mice despite the significantly reduced height of the germ cell epithelium. It is known that, in the stomach of CAII−/− animals, the gene loss is compensated by overexpression of CAIX (71). Analogous results were observed with qRT-PCR analyses in tissue from CAII- as well as CAIV-deficient mice. In comparison with the expression levels in WT tissue, CAIV is overexpressed in all parts of the epididymis from CAII−/− mice, and CAII expression is higher in the cauda epididymidis and vas deferens from CAIV−/− animals. These results were substantiated by relative quantification; RNA expression levels were calculated relative to the kidney as a control, with 18S RNA as an internal control. According to gene and protein expression studies (59), CAXIV, which is localized in the distal part of the epididymis, might be a putative isoform compensating for the loss of CAII or CAIV.
Another plausible explanation for the subfertility of CAII−/−CAIV−/− animals is the direct regulatory function of CAII and CAIV in sperm. CAII is constitutively expressed during spermatogenesis and is located in the principal piece of the sperm tail, which explains the high CAII RNA amounts in the testis tissue of WT mice. In contrast, CAIV is a protein that is transferred into the plasma membrane of the entire sperm tail in the corpus epididymidis (65). CAII−/− as well as CAIV−/− (50) mice each display reduced sperm motility and velocity. Using mass spectrometry, we show that CAII and CAIV are the two most important isoforms in murine sperm. The loss of almost all CA activity in sperm of CAII−/−CAIV−/− animals suggests that CAII and CAIV account for most of the total CA activity in sperm. As might be expected, CAII−/−CAIV−/− mice exhibit an additional reduction in sperm motility and velocity, and the percentage of immotile sperm was more than doubled compared with WT animals. Two important factors in the regulation of sperm motility are pH and HCO3− (31, 72), both of which are regulated by CAs (66). An imbalanced pH and/or HCO3− homeostasis seems to be the mechanistic basis for the reduced motility. However, the regulatory mechanisms are complex, species-specific, and not yet well understood (73). Nonetheless, the sperm pHi of CAII−/−CAIV−/− mice under steady-state conditions is not affected and remains unchanged in physiological buffer. Furthermore, sperm of CAII−/−CAIV−/− animals do not show any defects in intracellular alkalization upon stimulation with ammonium chloride. It remains unclear whether the motility dysfunction is caused by an altered fluid composition in the epididymis or by an unbalanced internal proton regulation in sperm resulting from the loss of CAII and CAIV.
The in vivo mating experiments provide the most informative findings: CAII−/−CAIV−/− mice are subfertile. The mating of male CAII−/−CAIV−/− mice with WT females results in a drastic decrease in offspring and a 90% reduction in fertility in comparison with pure WT matings. The fertility of CAII−/−CAIV−/− female mice is also affected. One possible cause of the 78% reduction in female fertility could be an altered composition of the uterine fluid (8, 29). In this view, unsuccessful sperm capacitation decreases the fertility potential. The effect on fertility was even more dramatic when CAII−/−CAIV−/− male and female mice were mated: only one living litter with one living pup was born. We think that this effect points to the possibility that CAII and CAIV are responsible not only for bicarbonate regulation in sperm but also for effects in the female reproductive tract.
CAII and CAIV Are Key Enzymes in the HCO3−-mediated Beat Frequency Increase during Early Sperm Activation-To prove the involvement of CAII and CAIV in early HCO3−-mediated events of capacitation, we analyzed sperm motility at the single-cell level. [Displaced figure legend: F and G, response of genetic CAII−/− and CAII−/−CAIV BC− double knockout sperm to HCO3− (F) and CO2 (G). The beat frequency of CAII−/−CAIV BC− sperm increases only to a maximum of 4.16 ± 0.15 Hz with HCO3− and of 3.87 ± 0.19 Hz with CO2 application (t = 80 s) and displays the weakest response. Data are mean ± S.E. of ≥17 single sperm.] HCO3− induces an increase in sperm beat frequency within the first 30 s after application (31). This occurs physiologically when sperm enter the uterus, which is rich in HCO3− (29). We are able to increase the sperm beat frequency in vitro by stimulating the cells with 15 mM HCO3− and 2% CO2 (31). In a previous study, we have shown the involvement of CAIV in the HCO3−-mediated pathway during early sperm activation (50). This study demonstrates that sperm of CAII−/− mice also display a reduced and delayed response to HCO3−. We conclude that the loss of CAII activity leads to an unbalanced intracellular HCO3− homeostasis, which is reflected in the delayed and decreased response to HCO3−. The finding that CAII and CAIV are the two most important CA isoforms in murine sperm led to the idea of generating CAII−/−CAIV−/− animals to study sperm behavior. To address the question of whether capacitation and late activation of sperm are altered by the loss of CAII and CAIV, we performed a Western blot analysis with sperm of CAII−/−CAIV−/− mice using a phosphotyrosine antibody. As expected, we did not see any changes in tyrosine phosphorylation under capacitating conditions in CAII−/−CAIV−/− sperm compared with sperm of WT animals.
CASA analysis, which tracks the swimming paths of the whole sperm population, reveals additional reductions in motility and velocity in CAII−/−CAIV−/− mice compared with sperm from single CAII knockout mice. Surprisingly, single-sperm beat frequency experiments with motile CAII−/−CAIV−/− sperm do not reveal the expected reduction in the response to HCO3−. In fact, sperm of CAII−/− animals show the greatest delay in response to HCO3−. One explanation for the counterbalancing of the loss of CAII and CAIV could be that transporters or exchangers, such as the Na+/HCO3− co-transporter and the Cl−/HCO3− exchanger (52), adopt the absent CA function, or that sperm of CAII−/−CAIV−/− mice compensate for the gene loss by expressing another CA isoform. Such compensatory mechanisms have already been described for CAIX in the stomach of CAII-deficient mice (71). Using Western blot analysis, we did not find a compensatory mechanism of CAII and CAIV for each other. Post-testicular sperm are translationally inactive, and a possible overexpression of another CA isoform in CAII−/−CAIV−/− sperm is not detectable by qRT-PCR, as it is in other reproductive tissues. To bypass such a possible genetically induced compensatory mechanism in sperm, the genetic CAII deficiency was combined with a biochemical loss of CAIV by treating sperm of CAII−/− animals with PLC. The response of such CAII−/−CAIV BC− sperm to HCO3− or CO2 is more delayed and reduced in comparison with the response of sperm of CAII−/− animals. Because HCO3− can also form spontaneously from CO2, a response to HCO3− without any CA present is also possible. We conclude that developing sperm possess compensatory mechanisms that help to sustain the essential HCO3−-mediated pathway during early sperm activation.
In summary, the epididymal localization of CAII and CAIV suggests their involvement in the acidification mechanism of the luminal fluid. In sperm, the catalytic reaction of these two enzymes contributes nearly 100% of the total CA activity. They are key enzymes in the regulation of sperm motility and are essential for the HCO3−-mediated beat frequency increase during early sperm activation. Therefore, double knockout of CAII and CAIV leads to subfertility in mice.
|
v3-fos-license
|
2019-07-19T20:04:00.241Z
|
2019-06-12T00:00:00.000
|
197564613
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4441/11/6/1229/pdf",
"pdf_hash": "3aa9df3b7ae76d358a407c49b17995a8f0f354f2",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43173",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "0cf60519a2157baafda3cafb4a01213ace7cb153",
"year": 2019
}
|
pes2o/s2orc
|
Hydraulic Parameters for Sediment Transport and Prediction of Suspended Sediment for Kali Gandaki River Basin, Himalaya, Nepal
Sediment yield is a complex phenomenon of weathering, landsliding, and glacial and fluvial erosion. It is highly dependent on the catchment area, topography, slope of the catchment terrain, rainfall, temperature, and soil characteristics. This study was designed to evaluate the key hydraulic parameters of sediment transport for the Kali Gandaki River at Setibeni, Syangja, located about 5 km upstream from a hydropower dam. Key parameters, including the bed shear stress (τb), specific stream power (ω), and flow velocity (v) associated with the maximum transported boulder size, were determined for the years 2003 to 2011 using a derived lower boundary equation. Clockwise hysteresis loops with an average hysteresis index of +1.59 were developed, and an average of 40.904 ± 12.453 megatons (Mt) of suspended sediment has been transported annually from the higher Himalayas to the hydropower reservoir. Artificial neural networks (ANNs) were used to predict the daily suspended sediment rate and an annual sediment load of 35.190 ± 7.018 Mt, which was satisfactory compared with the multiple linear regression, nonlinear multiple regression, general power, and log-transform models, including the sediment rating curve. Performance indicators were used to compare these models, and satisfactory fits were observed for the ANNs. A root mean square error (RMSE) of 1982 kg s−1, percent bias (PBIAS) of +14.26, RMSE-observations standard deviation ratio (RSR) of 0.55, coefficient of determination (R2) of 0.71, and Nash–Sutcliffe efficiency (NSE) of +0.70 revealed that the ANN model performed satisfactorily among all the proposed models.
Introduction
It is important to understand sediment transport and river hydraulics in river systems for a variety of disciplines, such as hydrology, geomorphology, and risk management, including reservoir management. The sediment yield from a catchment depends on several parameters, including the topography, terrain slope, rainfall, temperature, and soil type of the catchment area [1]. On the other hand, the yield of sediment fluxes is a combined effect of weathering, landsliding, and glacial and fluvial erosion [2]. Sediment yield from these effects is quite complex [3], and sediment transport in rivers varies seasonally. The hydrology of Nepal is primarily dominated by the monsoon, characterized by higher precipitation during the summer monsoon from June to September, which contributes about 80% of the total annual precipitation [4]. Dahal and Hasegawa [5] reported that about 10% of the total precipitation occurs in a single day and 50% of the total annual precipitation occurs within 10 days of the monsoon period, which is responsible for triggering landslides and debris flows. The main natural agents triggering landslides in the Himalayas are the monsoon climate, extremities in precipitation, seismic activities, excess developed internal stress, and undercutting of slopes by streams [6]. Sediments are transported by mountain streams as suspended load as well as bedload [7], depending on the intensity of the rainfall and the number of landslide events that occurred within the catchment area [8]. Dams constructed to regulate flood magnitudes limit the downstream transportation of suspended sediments [9]. However, the annual deposition of sediment in reservoirs decreases the capacity of reservoirs, which compromises the operability and sustainability of dams [10]. Basin morphology and lithological formation govern the amount of sediment crossing a stream station at a certain timepoint, which is generally acted upon by both active and passive forces [11].
Outbursts of glaciers and the failure of moraine dams trigger flash floods [6,[12][13][14], which is one of the main causes of large boulder transportation in high gradient rivers in mountain regions. Different hydraulic parameters, such as shear stress, specific stream power, and flow velocity, can be combined in different ways to form sediment transport predictors [15,16]. Shear stress is a well-known hydraulic parameter that can easily determine the ability of rivers to transport coarse bedload material [17,18]. Similarly, flow competence assessments of floods related to the largest particle size transported are described by the mean flow stress, specific stream power, and mean velocity [19,20]. A number of studies have demonstrated the relationships of shear stress [20][21][22][23][24], specific stream power [20,23,24], and flow velocity [20,21,[23][24][25][26] of rivers with the size of the boulder movement in the river. It is important to perform this study in Kali Gandaki River as this river originates from the Himalayas and there is limited research on sediment transport by this river, which is crucial in Nepal due to differences in the terrain within a short distance.
In this study, relationships between the fluvial discharge and hydraulic parameters, such as the shear stress, specific stream power, and flow velocity, were generated to derive a lowest boundary equation for the maximum size of particles transported by fluvial discharge in the Kali Gandaki River at a point 5 km upstream of the hydropower dam. The equation was used to calculate the maximum size of particles transported by fluvial discharge during 2006 to 2011. Additionally, it explored the nature of hysteresis loops, developed a hysteresis index, quantified the annual suspended sediment load (ASSL) transport, developed different suspended sediment transport models for Kali Gandaki River, and applied them to predict the suspended sediment rate as well as the average ASSL transport.
Study Site Description
The Kali Gandaki River is a glacier-fed river originating from the Himalaya region, Nepal [27]. The basin has a complex geomorphology and watershed topography with rapid changes in elevation, ranging from about 529 m MSL to 8143 m MSL. It flows from north to south in the higher Himalayan region before flowing eastward through the lower Himalayan region, entering the Terai plains of Nepal and connecting with the Narayani River, which ultimately merges with the Ganges River in India. The snow cover is divided by elevation: elevation ranges less than 2000 m MSL have no snow cover, 2000 to 4700 m MSL have seasonal snow, 4700 to 5200 m MSL have complete snow cover except for 1 or 2 months, and elevations greater than 5200 m MSL have permanent snow [4]. The Kali Gandaki catchment basin covers a 7060 km² area, comprising elevations of 529~2000 m MSL covering 1317 km² (19% coverage); 2000~4700 m MSL covering 3388 km² (48% coverage); 4700~5200 m MSL covering 731 km² (10% coverage); and elevations greater than 5200 m MSL covering 1624 km² (23% coverage). Figure 1a shows a map of the altitude-zone coverage and the river network, with the locations of meteorological stations, created in ArcGIS 10.3.1 (ESRI Inc., Berkeley, CA, USA) software. The elevations of the Kali Gandaki River decrease from 5039 m MSL in the higher Himalayas to 529 m MSL at Setibeni, 5 km upstream of the hydropower dam (Figure 1b). This encompasses a wide variation in mean rainfall, ranging from less than 500 mm year−1 in the Tibetan Plateau to about 2000 mm year−1 in the monsoon-dominated Himalayas [8]. The main physiographic characteristics of the Kali Gandaki River basin at the hydropower station are shown in Table 1.
The discharge of this river varies seasonally and is dependent on the rainfall received by its tributaries' catchments in addition to the amount of snow melting from the Himalayas. A dam (27 •
Data Collection and Acquisition
The Department of Hydrology and Meteorology (DHM), Nepal established a gauge station (28°00′30″ N, 83°36′10″ E) in 1964 (www.dhm.gov.np), and it operated until 1995. The gauge station was not operated during the hydropower dam construction period (1997-2002). The bed level at the dam increased yearly due to the trapping of bedload as well as suspended sediment load by the dam, which reduced the sediment load downstream. The cross-sectional areas for different years were calculated from area-discharge regression equations obtained from historical DHM discharge rating data. Sedimentation lowers the reservoir capacity of the dam annually.
Analysis of Shear Stress, Specific Power, and Flow Velocity
Historical discharge and cross-profile elevation data sourced from the Nepal Electricity Authority (NEA), Nepal were used to calculate the bed shear stress, specific stream power, and flow velocity using the following common equations [28-30]:

τ_b = ρ·g·R·i (1)

The mean available power supply over a unit of bed area is calculated by:

ω = Ω / w_t (2)

where w_t represents the width of the flow, and Ω is the available stream power supply, or the time rate of energy supply to a unit length of the stream, in W·m−1, given by:

Ω = ρ·g·Q·i (3)

The flow velocity is calculated by Manning's formula:

v = (1/n)·R^(2/3)·i^(1/2) (4)

where τ_b is the bed shear stress (N·m−2), ρ is the density of water (1000 kg·m−3), g is the acceleration due to gravity (9.81 m·s−2), R is the hydraulic radius (m), i is the slope of the river bed (m·m−1), ω is the mean available specific stream power per unit area (W·m−2), Q is the observed discharge (m3·s−1), v is the flow velocity (m·s−1), and n is Manning's constant. Manning's constant, n, for a steep natural channel is calculated by the empirical equation proposed by Jarrett [31].
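A minimal numerical sketch of Equations (1)-(4) is given below, assuming SI inputs. The function names, the illustrative flow values, and the use of Jarrett's empirical relation in the assumed SI form n = 0.39·S^0.38·R^(−0.16) are choices made for this example rather than values taken from the paper.

```python
import math

RHO = 1000.0   # water density, kg m^-3
G = 9.81       # gravitational acceleration, m s^-2

def manning_n_jarrett(slope, hydraulic_radius):
    """Empirical Manning's n for steep natural channels.
    Assumed SI form of Jarrett's (1984) relation: n = 0.39 * S^0.38 * R^-0.16."""
    return 0.39 * slope**0.38 * hydraulic_radius**-0.16

def hydraulic_parameters(discharge, width, hydraulic_radius, slope, n=None):
    """Return bed shear stress (N m^-2), specific stream power (W m^-2)
    and mean flow velocity (m s^-1) for one flow state."""
    if n is None:
        n = manning_n_jarrett(slope, hydraulic_radius)
    tau_b = RHO * G * hydraulic_radius * slope            # Eq. (1)
    omega_total = RHO * G * discharge * slope             # Eq. (3), W m^-1
    omega = omega_total / width                           # Eq. (2), W m^-2
    velocity = hydraulic_radius**(2.0 / 3.0) * math.sqrt(slope) / n   # Eq. (4)
    return tau_b, omega, velocity

# Illustrative (not measured) channel values:
print(hydraulic_parameters(discharge=1053.0, width=60.0,
                           hydraulic_radius=3.0, slope=0.004))
```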
Development of Different Models for Suspended Sediment Predictions
The daily suspended sediment load transported by the river in the catchment area is a key indicator to visualize the sediment losses from the higher Himalayas and to assess the reservoir management in hydropower projects. Different researchers have developed multiple linear regression (MLR) and nonlinear multiple regression (NMLR), sediment rating curve (SRC), and artificial neural networks (ANNs) models for the prediction of the daily suspended sediment load [32,33].
Multiple Linear Regression
Multiple linear regression assumes that the sediment load transported by the river can be described in a linear form. The dependent variable, the suspended sediment load Q^s_t, depends on two independent variables, the daily average discharge of the river (Q^w_t) and the average rainfall (R_t) of the catchment area, and the model is expressed as a regression equation of the general form [32,33]:

Q^s_t = c_0 + c_1·Q^w_t + c_2·R_t

Different linear models were created by considering Q^w_t, a one-day-lag discharge Q^w_{t−1}, R_t, and a one-day-lag rainfall R_{t−1} as input variables, and the performance of the different models was evaluated.
Nonlinear Multiple Regression
The suspended sediment transported by the river behaves dynamically and nonlinearly, so the load is expressed in the form of a polynomial regression equation in the discharge and rainfall terms [32,33]. Different nonlinear models were also created and their performance was evaluated separately.
Sediment Rating Curve
The SRC is expressed [34] in the form:

Q^s_t = a·(Q^w_t)^b

where Q^s_t is the suspended sediment load (kg·s−1), Q^w_t is the daily average discharge of the river, and a and b are coefficients that depend on the characteristics of the river.
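The rating-curve coefficients a and b are usually obtained by least-squares regression in log-log space; the short sketch below illustrates one way to do this, with the discharge and sediment arrays serving only as placeholders for the observed 2006-2011 record.

```python
import numpy as np

def fit_rating_curve(q_water, q_sediment):
    """Fit Qs = a * Qw**b by linear regression on log-transformed data."""
    log_qw = np.log(q_water)
    log_qs = np.log(q_sediment)
    b, log_a = np.polyfit(log_qw, log_qs, 1)   # slope, intercept
    return np.exp(log_a), b

# Illustrative values only (m^3 s^-1 and kg s^-1):
qw = np.array([50.0, 120.0, 400.0, 900.0, 1053.0])
qs = np.array([2.0, 15.0, 300.0, 2500.0, 10691.0])
a, b = fit_rating_curve(qw, qs)
print(f"Qs ~ {a:.4g} * Qw^{b:.3f}")
```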
Artificial Neural Networks
An artificial neural network is capable of capturing complex nonlinear relationships between input and output parameters and consists of three layers: an input, a hidden, and an output layer [33]. MATLAB (R2016a) software was used to develop different artificial neural networks, in which the inputs consisted of the average daily river discharge (Q^w_t), a one-day-lag discharge (Q^w_{t−1}), the average daily rainfall (R_t), and a one-day-lag rainfall (R_{t−1}), and the output was the average daily suspended sediment load (Q^s_t). Out of 2191 data sets, 70% of the data were used for training, 15% for validation, and 15% for testing of the ANNs.
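The paper's networks were built in MATLAB; as a rough, hedged stand-in, the sketch below sets up the same four inputs and the 70/15/15 split with scikit-learn's MLPRegressor. The hidden-layer size, scaling, and synthetic data are assumptions for illustration only and do not reproduce the reported (4-10-1-1) architecture exactly.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def build_ann_inputs(qw, rain):
    """Inputs: Qw(t), Qw(t-1), R(t), R(t-1); the target is supplied separately."""
    return np.column_stack([qw[1:], qw[:-1], rain[1:], rain[:-1]])

# Synthetic stand-ins for the 2191 daily records so the sketch runs on its own.
rng = np.random.default_rng(0)
qw = rng.gamma(2.0, 100.0, size=2191)
rain = rng.gamma(1.5, 5.0, size=2191)
y = 0.01 * qw[1:]**1.8 + rng.normal(0.0, 50.0, size=2190)

X = build_ann_inputs(qw, rain)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=1)

scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1)
model.fit(scaler.transform(X_train), y_train)
print("validation R^2:", model.score(scaler.transform(X_val), y_val))
```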
Model Performance
The performance of the different models was assessed in terms of the root mean square error (RMSE), percent bias (PBIAS), RMSE-observations standard deviation ratio (RSR), coefficient of determination (R²), and Nash-Sutcliffe efficiency (NSE) [32,35,36]:

RMSE = sqrt[(1/N)·Σ_i (Q^s_o,i − Q^s_p,i)²]

The lower the RMSE value, the better the model's performance.

PBIAS = 100·Σ_i (Q^s_o,i − Q^s_p,i) / Σ_i Q^s_o,i

The optimal PBIAS value is 0.0; positive values indicate a model underestimation bias and negative values indicate a model overestimation bias.

RSR = RMSE / STDEV_obs = sqrt[Σ_i (Q^s_o,i − Q^s_p,i)²] / sqrt[Σ_i (Q^s_o,i − Q̄^s_o)²]

The optimal value for RSR is 0.0; the lower the RSR, the lower the RMSE and the better the model's performance.

Coefficient of determination:

R² = [Σ_i (Q^s_o,i − Q̄^s_o)(Q^s_p,i − Q̄^s_p)]² / [Σ_i (Q^s_o,i − Q̄^s_o)²·Σ_i (Q^s_p,i − Q̄^s_p)²]

The optimal value for R² is 1.0; the higher the value of R², the better the model's performance.

NSE = 1 − Σ_i (Q^s_o,i − Q^s_p,i)² / Σ_i (Q^s_o,i − Q̄^s_o)²

The optimal value for NSE is 1.0 and values range from −∞ to 1. Values between 0.0 and 1.0 are taken as acceptable levels of performance, whereas negative values indicate that the mean observed value is a better predictor than the predicted values, which indicates unacceptable performance. Here, Q^s_o,i and Q^s_p,i are the observed and predicted suspended sediment, and Q̄^s_o and Q̄^s_p are the average observed and average predicted suspended sediment, respectively.
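A compact implementation of the five performance indicators, following the standard definitions summarized above, might look as follows; the sample arrays are placeholders.

```python
import numpy as np

def performance_metrics(obs, pred):
    """RMSE, PBIAS (%), RSR, R^2 and NSE for observed vs. predicted loads."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    resid = obs - pred
    rmse = np.sqrt(np.mean(resid**2))
    pbias = 100.0 * resid.sum() / obs.sum()      # positive = underestimation
    rsr = rmse / np.std(obs, ddof=0)
    r2 = np.corrcoef(obs, pred)[0, 1]**2
    nse = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)
    return {"RMSE": rmse, "PBIAS": pbias, "RSR": rsr, "R2": r2, "NSE": nse}

# Placeholder observed and predicted loads (kg s^-1):
print(performance_metrics([10, 50, 200, 900], [12, 40, 230, 800]))
```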
Results and Discussion
Sediment deposited into the reservoir raised the bed level (Figure 2b) and decreased the reservoir's capacity. The effects of climate change in the higher Himalayas appear in the form of uneven patterns of increasing rainfall, glacial erosion, and permafrost degradation, resulting in an increase in landslides and debris flows [2], which is also reflected in the temporal and spatial variation of the water balance components in the Kali Gandaki basin [37]. The amount and intensity of rainfall over the catchment affected the discharge rating curve [27].
Relationship of Shear Stress, Specific Stream Power, and Flow Velocity with Discharge
The calculated shear stress, specific stream power, and flow velocity of the Kali Gandaki River at the discharge gauge station, about 5 km upstream from the dam, were related to the fluvial discharge using the limited data from 2003 to 2011. The highest shear stress, specific stream power, and flow velocity were observed during 2008, whereas the lowest were observed during 2007. These parameters are directly related to the hydraulic radius in the case of the shear stress and flow velocity, and to the fluvial discharge in the case of the specific stream power (Equations (1), (2), and (4)). The sedimentation process increased the bed level elevation, changing the cross-sectional geomorphology of the bed (Figure 2b). These parameters followed nearly the same trends during the remaining years. The shear stress, specific stream power, and flow velocity of the river increased as functions of the fluvial discharge, as shown in Figure 3a.
Relationship of Particle Sizes and Fluvial Discharge
The hydraulic parameters (shear stress, specific stream power, and flow velocity) depict the transportation of different particle sizes. When subjected to the same fluvial discharge, the specific stream power indicated transport of particle sizes from 327 mm to 2062 mm, whereas the flow velocity indicated transport of 37 mm to 1794 mm particles. The shear stress indicated transport of 147 mm to 1492 mm particles, which covered the lowest maximum particle sizes compared with the specific stream power and flow velocity (Figure 4b). These three parameters were derived from the fluvial discharge and summarized in the form of a lowest boundary equation of the fluvial discharge, as shown in Figure 4b (Equation (17)). Equation (17) predicted that, from 2003 to 2011, the discharge during monsoons was capable of transporting an 840 mm particle size. Hydraulic parameters such as the bed shear stress, specific stream power, and flow velocity have gained wide acceptance among researchers [20-26] for deriving relationships between particle sizes and hydraulic parameters. The shear stress-particle size relationship of this study was compared with the average and lower-boundary relationships of Costa [20]. For a comparative study of the specific stream power, the particle size relationship of this river was compared with Costa's [20] average of ω = 0.030d^1.686 and lower boundary of ω = 0.009d^1.686 for 50 mm ≤ d ≤ 3290 mm; O'Connor's [23] average of ω = 0.002d^1.71 and lower boundary ω = 30 × 1.00865d 0.1d for a particle size of 270 mm ≤ d ≤ 6240 mm; and Williams' [24] lower boundary of ω = 0.079d^1.3 for a particle size of 10 mm ≤ d ≤ 1500 mm (Figure 6). The calculated values of the shear stress, specific stream power, and flow velocity were less than the values observed by Fort [6], who reconstructed the 1998 landslide dam located about 76 km upstream of the existing hydropower dam on the Kali Gandaki River and estimated the hydraulic parameters for an exceptional dam-breach discharge of 10,035 m3·s−1. This high discharge was responsible for the movement of a maximum boulder size of 4300 mm [6]. The higher shear stress, specific stream power, and flow velocity arising from the higher fluvial discharge after the breaching of the landslide dam were responsible for the transportation of larger boulders (Figures 5-7).
Estimation of the Return Period by Gumbel's Distribution
The flood return period can be forecasted from the historical DHM (Nepal) data by the Gumbel method [38] as Q_T = Q̄ + k·σ_n, where Q̄ is the mean discharge, k is the frequency factor, and σ_n is the standard deviation of the maximum instantaneous flows. The frequency factor is given by k = (y_t − y_n)/s_n, where y_n is the mean and s_n is the standard deviation of Gumbel's reduced variate, and y_t is given by y_t = −ln[ln(T/(T − 1))]. The observed highest flood, in 1975, was 3280 m3·s−1. According to the Gumbel frequency of flood distribution, the highest flood will recur after a 40-year return period, as shown in Figure 8a, with the observed extreme discharges shown in Figure 8b.
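A sketch of the Gumbel return-period estimate is shown below. Where the paper uses tabulated values of y_n and s_n, this example approximates them from plotting positions of the sample, and the annual maxima listed are illustrative rather than the DHM record.

```python
import math
import numpy as np

def gumbel_flood(annual_max_q, return_period):
    """Q_T = Qbar + k * sigma_n with k = (y_t - y_n) / s_n (Gumbel method).
    y_n and s_n are approximated from the reduced variate of the sample
    plotting positions rather than taken from published Gumbel tables."""
    q = np.sort(np.asarray(annual_max_q, float))
    n = q.size
    p = np.arange(1, n + 1) / (n + 1.0)          # Weibull plotting positions
    y = -np.log(-np.log(p))                      # reduced variate
    y_n, s_n = y.mean(), y.std(ddof=0)
    y_t = -math.log(math.log(return_period / (return_period - 1.0)))
    k = (y_t - y_n) / s_n
    return q.mean() + k * q.std(ddof=1)

# Illustrative annual instantaneous maxima (m^3 s^-1):
peaks = [1450, 1620, 1810, 2050, 2240, 2480, 2710, 2950, 3280]
print("40-year flood estimate:", round(gumbel_flood(peaks, 40)))
```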
Boulder Movement Mechanisms in the Himalayas
High gradient river hydraulics are strongly influenced by large boulders, with the diameters on the same scale as the channel depth or even the width [39]. Williams [24] mentioned that five possible mechanisms of boulder transport by high gradient river are by ice, mudflow, water stepwise creep by periodic erosion, undermining of stream banks, and avalanches. The bed forming material remains immobile during typical flows, and larger bed forming particles in steep gradient channels typically become mobile only every 50 to 100 years during a hydrologic event [40]. After that, the gravel stocked in low energy sites during lower floods is mobilized and travels as the bedload [40].
The failure of the mountain slope of Kali Gandaki catchment in 1988, 1989, and 1998 was due to an evolved rock avalanche and caused the damming of the Kali Gandaki River [2]. The shockwaves after the massive 7.8 M w Gorkha earthquake, Nepal on 25 April 2015 and its aftershocks on 23 May 2015 created cracks in the weathered rocks and weakened the mountain slopes of this catchment, which brought rocks, debris, and mud down into the river [41,42]. The river was blocked about 56 km upstream from the hydropower dam by a landslide on 24 May 2015 for 15 h [41] (Figure 9a,b). The downstream fluvial discharge after the blockage was almost zero and a flash flood occurred after an outburst of the natural landslide dam (Figure 9c,d). Extreme flooding during the monsoon period due to high rainfall and a flash flood (Figure 9b), generated by the overtopping of landslide dams [42], was responsible for the noticeable transport of large boulders in the river bed of Kali Gandaki River. The combination of fluid stress, localized scouring, and undermining of the stream banks may cause small near vertical displacements of large boulders [43]. Catastrophic events, such as natural dam breaks and debris flows, are responsible for larger translations of boulders in rivers [40,43].
Hysteresis Curve and Hysteresis Index (HI mid ) Analysis
The relationship between the suspended sediment concentration and fluvial discharge can be studied by the nonlinear relationship between them known as hysteresis [44]. Generally, a clockwise hysteresis loop is formed due to an increasing concentration of sediment that forms more rapidly during rising limb, which suggests a source of sediment close to the monitoring point and sediment depletion in the channel system. Conversely, an anticlockwise hysteresis loop shows a long gap between the discharge and concentration peak, which suggests that the source is located far from the monitoring point or bank collapse [45,46].
Clockwise hysteresis loops developed, with the suspended sediment load increasing on the rising limb of the hysteresis from December to July and reaching a maximum suspended sediment load of 10,691 kg·s−1 at a fluvial discharge of 1053 m3·s−1 in August 2009. The suspended sediment load decreased on the falling limb of the hysteresis from July/September to November. Overall, these six years were characterized by distinct clockwise hysteresis patterns (Figure 10a).
The HI mid is a numerical indicator of hysteresis, which effectively shows the dynamic response of suspended sediment concentrations to flow changes during storm events [47].
The midpoint discharge was calculated following Lloyd [46] and Lawler [47] as Q_mid = k·(Q_max − Q_min) + Q_min, where k is 0.5, Q_max is the peak discharge, and Q_min is the starting discharge of an event.
The hysteresis index value was calculated following Lloyd [46] and Lawler [47] as HI_mid = (Q_sRL / Q_sFL) − 1, where Q_sRL and Q_sFL are the suspended sediment loads at the midpoint discharge on the rising and falling limbs, respectively.
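The midpoint discharge and hysteresis index can be computed as in the sketch below; the two-branch form of HI_mid and the interpolated limb loads used in the example are assumptions consistent with the description above rather than values from the study.

```python
def midpoint_discharge(q_max, q_min, k=0.5):
    """Q_mid = k * (Q_max - Q_min) + Q_min, with k = 0.5."""
    return k * (q_max - q_min) + q_min

def hysteresis_index(qs_rising, qs_falling):
    """Assumed two-branch midpoint hysteresis index: positive values indicate
    clockwise loops, negative values anticlockwise loops."""
    ratio = qs_rising / qs_falling
    return ratio - 1.0 if ratio >= 1.0 else -1.0 / ratio + 1.0

q_mid = midpoint_discharge(q_max=1053.0, q_min=45.0)
# Suspended sediment loads (kg s^-1) interpolated at Q_mid on each limb are
# placeholders here:
print(q_mid, hysteresis_index(qs_rising=5200.0, qs_falling=2000.0))
```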
Yearly Suspended Sediment Yield and Prediction by Different Models
A regression equation for the suspended sediment versus the discharge of the river was derived from the observed data (2006 to 2011), as shown in Figure 10b. The total suspended sediment yield from the catchment is given by Y_s = Σ_i C_wi·Q_wi·dt, where Y_s is the total annual sediment yield from the catchment, C_wi is the suspended sediment concentration in mg·L−1, Q_wi is the fluvial discharge in m3·s−1, and dt = t_{i+1} − t_i is the time interval, with t_i and t_{i+1} the preceding and succeeding times in seconds, respectively. This study showed that the median ASSL transported by the Kali Gandaki River into the hydropower reservoir was 0.003 Mt during winter, increased to 0.026 Mt during spring, reached 41.405 Mt during the summer season, and decreased to 0.175 Mt during the autumn season (Figure 11a). In terms of seasonal transport, more than 96% of the suspended sediment was transported during the summer season. This depicts a wide seasonal variability of the suspended sediment load, which was nearly 14,000 times higher in summer than in the winter season (Figure 11a). The maximum observed ASSL transported by the river was 58.426 Mt in 2009, after which it decreased (Figure 11b).
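A hedged sketch of the yield calculation follows, assuming daily concentration and discharge series and a conversion from grams to megatons; the synthetic series exist only so the example runs.

```python
import numpy as np

def annual_suspended_load_Mt(concentration_mg_per_l, discharge_m3_per_s,
                             dt_seconds=86400.0):
    """Sum C_wi * Q_wi * dt over the year and convert to megatons.
    1 mg L^-1 * 1 m^3 s^-1 = 1 g s^-1, hence the 1e-12 factor (g -> Mt)."""
    c = np.asarray(concentration_mg_per_l, float)
    q = np.asarray(discharge_m3_per_s, float)
    grams = np.sum(c * q * dt_seconds)
    return grams * 1e-12

# Illustrative daily series for one year:
rng = np.random.default_rng(1)
q_daily = rng.gamma(2.0, 150.0, size=365)
c_daily = 2.5 * q_daily            # crude concentration proxy, mg L^-1
print("ASSL ~", round(annual_suspended_load_Mt(c_daily, q_daily), 2), "Mt")
```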
An HI_mid ≈ 0 indicates a weak hysteresis loop, whereas HI_mid > 0 indicates a clockwise hysteresis loop and HI_mid < 0 an anticlockwise hysteresis loop. The maximum HI_mid developed was +2.64 in 2006, depicting a higher sediment transport rate on the rising limb but a lower sediment transport rate on the falling limb (Figure 10a). The minimum HI_mid developed was +0.53 in 2008, depicting nearly the same paths for the rising and falling limbs and indicating a weak hysteresis loop (Figures 10a and 11b).
Different MLR, NLMR, general power, log-transform linear, and ANN models, with the fluvial discharge and average catchment rainfall as inputs, were developed to select the most suitable model, and the results are shown in Tables 2-6, respectively. The performance parameters of MLR and NLMR were satisfactory, but these models predicted negative sediment values for low fluvial discharges and low rainfall and are therefore unacceptable. The RMSE, PBIAS, RSR, R², and NSE values of the general power model, log-transform models, and ANNs are shown in Tables 4-6. In general, a model simulation can be judged as "satisfactory" if NSE > 0.50 and RSR ≤ 0.70, and if the PBIAS value is within ±25% for stream flow and within ±55% for sediment [35]. In this study, the values predicted by the ANNs (4-10-1-1) showed an RMSE of 1982 kg·s−1, a PBIAS of +14.26, an RSR of 0.55, an R² of 0.71, and an NSE of +0.70, which indicates that the ANN model's performance was satisfactory. Figure 12a-d shows the comparison between the suspended sediment transport rates (kg·s−1) predicted by the SRC, log-transform power model, log-transform linear models, and ANNs and the observed suspended sediment values, respectively.
Among the SRC, power, log-transform, and ANN models, the best median ASSL prediction, by the ANN model, was 37.611 Mt for the period 2006 to 2011, whereas the observed median ASSL was 41.678 Mt. The mean ASSL transported by the river into the hydropower reservoir was 40.904 ± 12.453 Mt for 2006 to 2011, and the ANNs' predicted mean value was 35.190 ± 7.018 Mt (Figure 13). Struck [8] reported that the average annual suspended sediment transported by this river was 36.9 ± 10.6 Mt. [Figure 13 caption: Comparison of the different models' predicted and observed yearly total suspended sediment transport. Central lines indicate the median, box edges the 25th and 75th percentiles, whiskers the most extreme data points not considered outliers, '+' symbols the outliers (1.5-fold interquartile range), and circles the mean values.]
Conclusions
Shear stress, specific stream power, and flow velocity are key hydraulic parameters for describing sediment transport in river systems. The monsoon fluvial discharge and landslide dam outburst floods (LDOFs) were responsible for boulder movements in the Kali Gandaki River, Nepal. The lower boundary equation derived from a broad range of observed and calculated data sets estimated that a maximum particle size of 840 mm was transported by the monsoon fluvial discharge from 2003 to 2011. The ASSL transported by the Kali Gandaki River into the hydropower reservoir increased from winter through the pre-monsoon to the monsoon period and decreased in the post-monsoon period. It was estimated that 40.904 ± 12.453 Mt of suspended sediment is lost annually from the higher Himalayas. Additionally, the ANN model provided satisfactory results for the prediction of the suspended sediment transport rate in the Kali Gandaki River, with a predicted mean annual ASSL of 35.190 ± 7.018 Mt. These parameters are important for visualizing sediment loss from the higher Himalayas to the sea and also for monitoring the dead storage volume of reservoirs for hydroelectric power generation.
|
v3-fos-license
|
2023-01-16T14:08:47.823Z
|
2015-04-02T00:00:00.000
|
255829392
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1186/s12885-015-1206-0",
"pdf_hash": "e8acc18a268386597f4ef12efb8110174ffe16af",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43174",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"sha1": "e8acc18a268386597f4ef12efb8110174ffe16af",
"year": 2015
}
|
pes2o/s2orc
|
Lack of a protective effect of cotton dust on risk of lung cancer: evidence from two population-based case-control studies
Lung cancer is the leading cause of cancer death in North America. Exposure to cotton dust has previously been reported to decrease the risk of lung cancer. We used data from two large case-control studies conducted in Montreal from 1979-1986 (Study 1) and 1996-2002 (Study 2) respectively, to examine the association between occupational exposure to cotton dust and risk of lung cancer. Cases were diagnosed with incident histologically-confirmed lung cancer (857 in Study 1, 1203 in Study 2). Population controls were randomly selected from electoral lists and frequency-matched to cases by age and sex (533 in Study 1, 1513 in Study 2). Interviews for the two studies used a virtually identical questionnaire to obtain lifetime occupational and smoking history, and several lifestyle covariates. Each participant’s lifetime occupational history was reviewed by experts to assess exposure to a number of occupational agents, including cotton dust. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated by unconditional logistic regression, adjusting for potential confounders. The lifetime prevalence of exposure to cotton dust was approximately 10%-15% in both studies combined, with some variation by study and by sex. Overall there was no decreased risk of lung cancer among subjects exposed to cotton dust. Rather, among all subjects there was a suggestion of slightly increased risk associated with any lifetime exposure to cotton dust (OR = 1.2, 95% CI: 1.0-1.5). This risk appeared to be concentrated among cases of adenocarcinoma (OR = 1.6, 95% CI: 1.2-2.2), and among moderate and heavy smokers (OR = 1.3, 95% CI: 1.0-1.7). There was no association when restricting to cases of either squamous cell or small cell cancer, or among never smokers and light smokers. An analogous examination of subjects exposed to wool dust revealed neither increased nor decreased risks of lung cancer. There was no evidence that cotton dust exposure decreased risks of lung cancer.
Background
Lung cancer is the leading cause of cancer death in North America, accounting for about a quarter of all cancer deaths [1,2]. Due to a lack of effective screening, most cases of lung cancer are diagnosed at a relatively advanced stage, and consequently survival is very low (15% five-year survival rate) [3]. Lung cancer likely results from a combination of genetic and environmental factors, including smoking and occupational exposures.
Many occupational exposures, including asbestos, silica, nickel, and hexavalent chromium, have been identified as lung carcinogens [4]. Cotton dust as an occupational exposure has been associated with adverse respiratory effects including byssinosis and diminished lung function [5]. Peculiarly, cotton dust exposure has also been linked with a decreased risk of lung cancer [6][7][8][9][10]. An early report of decreased lung cancer risk among cotton textile workers came from the United States, where a standardized lung cancer mortality ratio of 0.55 (95% CI: 0.39-0.76) was reported in Georgia [7]. Subsequently there have been some other reports of decreased lung cancer risk in cotton-exposed workers in North Carolina [9], China [11,12], the UK [8], and Poland [13]. In some of these studies the decreased risk was restricted to certain sex, smoking subgroups, or calendar years [8,9,13], and some of the decreased risks were not statistically significant [11]. Furthermore, there have been other reports from Australia [14], Lithuania [15], and Italy [16] which found no evidence of decreased risks. A 2009 meta-analysis of 11 studies reported a summary relative risk of lung cancer among textile workers of 0.71 (95% CI: 0.52-0.95), albeit with considerable variability between studies and equivocal dose-response information within studies [17].
This ostensible decreased risk is hypothesized to result from exposure to endotoxins contained in cotton dust. Endotoxins are components of Gram negative bacteria consisting of three components (O-specific polysaccharide, core polysaccharide, and lipid A), one of which (lipid A) appears to have anti-carcinogenic activity [18,19]. Further epidemiologic evidence for this hypothesis came from a study among female textile workers in Shanghai, in which cumulative exposure to endotoxin was associated with a significantly decreased risk of lung cancer, with a dose-response relationship observed (HR of 0.60 [95% CI: 0.43-0.83] for highest levels of exposure compared to no exposure) [6].
While there are some indications of biologic plausibility of a protective effect of cotton dust on lung cancer supported by some, albeit inconsistent, epidemiologic evidence, it is important to produce further complementary evidence to assess this hypothesis. Montreal, Canada, with a population of about 3 million, is a propitious locale for such analyses, with approximately 25,000 jobs in the textile and clothing industries, and 1000 companies in the metropolitan Montreal area.
We carried out two large case-control studies in Montreal to determine the association between a large number of occupational exposures, including various textile dusts, and cancer, with detailed data collected on smoking history and other potential confounders. We used this database to analyze the association between cotton dust and risk of lung cancer. While our primary interest was to assess a possible protective association with cotton dust exposure, we also analyzed wool dust and compared both sets of results because wool is an organic fiber of similar exposure prevalence to cotton, levels of contamination with endotoxins are much lower in wool than cotton dust, and endotoxin exposure among workers in wool processing is generally lower than in cotton processing [20]. If there were a general protective effect associated with working in the textile industry, it should manifest in reduced risks for both wool dust and cotton dust. The analysis of wool dust thus informs us about the specificity of any effect we might observe for cotton dust.
Design and study subjects
Both studies used a case-control design, with eligible subjects restricted to Canadian citizens resident in the Montreal area. Study 1, conducted from 1979 to 1986, included males aged 35 to 70 years diagnosed with cancer at any of 19 sites, including the lung. Study 2, conducted from 1996 to 2002, included men and women aged 35 to 75 diagnosed with a lung malignancy. In both studies, cases were ascertained in the 18 largest hospitals located in the metropolitan Montreal area; only incident, histologically confirmed cancers were included. In both studies, population controls were randomly sampled from population based electoral lists, stratified by sex and age to the distribution of cases. In Quebec, Canada, electoral lists were maintained by means of active enumeration of households until 1994; they are since then continually updated and are thought to represent nearly complete listings of Canadian citizens residing in the province. Ethical approval was obtained for each study from each participating hospital and academic institution (Institut Armand-Frappier, McGill University, Université de Montréal, Centre de recherche de l'Université de Montréal). All participating subjects provided informed consent. Additional details of subject ascertainment and data collection have been published previously [21][22][23][24].
In Study 1, 1082 lung cancer cases and 740 eligible population controls were identified and attempts were made to interview them. Of these, 857 (79%) cases and 533 (72%) controls completed the interview. Since Study 1 included cancers at several different sites, it was possible to constitute an additional control group for the lung cancer series, namely subjects with cancers at other sites. We refer to these as 'cancer controls'. Sampling of these cancer controls was carried out excluding sites of the respiratory system; further, we subsampled the rest to ensure that none of the sites comprising the cancer controls would constitute more than 20% of the total. With these restrictions, the cancer control series consisted of 1349 subjects. In Study 2, there were 1203 cases (response rate 84%) and 1513 population controls (response rate 69%) interviewed. For subjects who were deceased or too ill to respond, we accepted proxy response from close family members; proxy response accounted for 23% of respondents in Study 1 (29% among cases and 13% among controls) and 21% in Study 2 (38% among cases and 8% among controls).
Data collection
Data collection techniques and the variables ascertained were almost identical between Study 1 and Study 2. Interviews were divided into two parts: a structured section requested information on socio-demographic and lifestyle characteristics, and a semi-structured section elicited a detailed description of each job held by the subject in his working lifetime. Among the sociodemographic and lifestyle factors assessed were: ethnicity, socio-economic status as measured by education level, familial financial situation during childhood and current income, residential history, smoking history (smoking status, ages at initiation and cessation, periods of interruption, average number of cigarettes smoked per day over the lifetime), alcohol and coffee consumption, selected dietary factors, selected medical history conditions, household heating and cooking practices, and many others. Male subjects (Studies 1 and 2 combined) and female subjects (Study 2) had held a median of 4.0 jobs each. For each job held, a trained interviewer asked the subject about the company, its products, the nature of the worksite, the subject's main and subsidiary tasks, and any additional information (e.g., equipment maintenance, use of protective equipment, activities of coworkers) that could provide clues about work exposures and their intensity. Occupations were coded according to the Canadian Classification and Dictionary of Occupations [25] and the Canadian Standard Industrial Classification [26,27]. For some occupations, supplementary questionnaires were used to assist interviewers with detailed technical probing [28]. A team of chemists and industrial hygienists examined each completed questionnaire and translated each job into a list of potential exposures using a checklist of 294 agents that included cotton dust, wool dust and several recognized lung carcinogens [23]. Endotoxin exposure was not on the checklist and its possible presence is only inferred from the presence of cotton dust.
In the two studies combined, nearly 30,000 jobs were evaluated. The team of coders spent about 50 person-years on these projects, including helping to develop the methodology, monitoring the quality of the interviewing, conducting background research on exposures in different occupations, coding the individual participants' files, and recoding after the initial complete rounds of coding were finished. The final exposure codes attributed to a subject were based on consensus among the coders. Coders did not know the subject's case or control status. For each substance considered present in each job, the coders noted three dimensions of information, each on a three-point scale: their degree of confidence that the exposure had actually occurred (possible, probable, definite), the frequency of exposure in a normal workweek (low [<5% of hours worked], medium [5% to 30% of hours worked], high [>30% of hours worked]), and the relative level of concentration of the agent (low, medium, high). Concentration levels were established with reference to certain benchmark occupations in which the substance is found. Specifically, we identified some hypothetical workplace situations a priori which would correspond to low, medium and high exposure for each substance, and the experts rated each real job against these benchmarks. Unfortunately, it proved impossible to reliably estimate absolute concentration values corresponding to the relative levels coded. Non-exposure was interpreted as exposure up to the level that can be found in the general environment. The exposure assessment was based not only on the worker's occupation and industry, but also on individual characteristics of the workplace and tasks as reported by the subject; an illustrative example is in the Appendix of Parent et al [29].
Statistical analysis
The main purpose for this analysis was to estimate the relative risk of lung cancer in relation to cotton dust and wool dust exposure. The availability of two studies, with two control groups among males in Study 1 and two sexes in Study 2, provided various opportunities. We first carried out analyses of the Study 1 data by comparing the cases separately with population controls and with cancer controls, defined above. There are pros and cons with cancer controls and population controls and we cannot affirm that one is necessarily more valid than the other [24,30]. Our prior belief was that the two control groups in Study 1 were equally valid. Consequently, to avoid giving greater weight to the more numerous cancer controls, we carried out a weighted logistic regression analysis giving equal weight to the two control series. For Study 2, we analyzed males and females separately. In order to maximize precision of estimates, we also conducted analyses pooling the Study 1 and Study 2 samples, both cases and controls, but only using population controls from Study 1 and Study 2. We thus present six distinct risk estimates: Study 1 using population controls among males, Study 1 using cancer controls among males, Study 1 with weighted population and cancer controls, Study 2 using population controls among males, Study 2 using population controls among females, and Study 1 plus Study 2 pooled using population controls among males plus females.
For each job in which the subject was exposed to cotton dust, we had the duration of the exposure in years and a set of ordinal values for confidence, frequency, and concentration. If a subject was exposed in two or more jobs, then lifetime values of confidence, frequency, and concentration were calculated by taking averages, weighted by the durations of the various jobs in which exposure occurred. The combination of duration, confidence, frequency, and concentration was used to categorize the lifetime exposure into categories as follows: unexposed, exposed at non-substantial level, exposed at substantial level. Because of latency considerations, exposures occurring within 5 years of diagnosis or interview were excluded. In order to be classified as exposed at the substantial level, a subject had to have been exposed at confidence of probable or definite, concentration and frequency of medium or high, and for duration greater than 5 years. All other exposed subjects were then classified in the non-substantial category. We consider this non-substantial/substantial dichotomy to be a simple proxy for cumulative exposure. The reference group for analyses consisted of those subjects who were never exposed to cotton dust. Wool dust was treated the same way.
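The classification rule described above can be sketched as follows. The 1-3 ordinal coding, the job-level data structure, and the simplified treatment of the 5-year lag are assumptions introduced for illustration; they approximate, rather than reproduce, the study's coding procedure.

```python
def lifetime_exposure_category(jobs, lag_cutoff_year):
    """Classify lifetime cotton (or wool) dust exposure as 'unexposed',
    'non-substantial' or 'substantial'.

    `jobs` is a list of dicts with ordinal ratings assumed to be coded 1-3
    (confidence, frequency, concentration), plus duration in years and the
    year exposure ended. As a simplification, jobs whose exposure ended
    within 5 years of diagnosis/interview are dropped entirely.
    """
    lagged = [j for j in jobs if j["end_year"] <= lag_cutoff_year - 5]
    exposed = [j for j in lagged if j["duration"] > 0]
    if not exposed:
        return "unexposed"

    total = sum(j["duration"] for j in exposed)

    def wavg(key):
        # duration-weighted lifetime average of an ordinal rating
        return sum(j[key] * j["duration"] for j in exposed) / total

    substantial = (wavg("confidence") >= 2 and wavg("frequency") >= 2
                   and wavg("concentration") >= 2 and total > 5)
    return "substantial" if substantial else "non-substantial"

# Hypothetical job history for one subject:
jobs = [
    {"confidence": 3, "frequency": 2, "concentration": 2, "duration": 8, "end_year": 1978},
    {"confidence": 1, "frequency": 1, "concentration": 1, "duration": 2, "end_year": 1990},
]
print(lifetime_exposure_category(jobs, lag_cutoff_year=1996))
```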
Unconditional logistic regression was used to estimate odds ratios (ORs) and corresponding 95% confidence intervals (CIs). In order to control for the effect of potential confounders, multivariate models were constructed including the following covariates: age (continuous), ethnicity (French Canadian, other), years of education (0-7, 8-12, ≥13), familial financial situation during childhood (difficult, intermediate, comfortable), respondent status (proxy, self ), smoking history (CSI, continuous), and ever exposure to some known occupational lung carcinogens -asbestos, chromium compounds, nickel compounds and silica. These occupational covariates were selected for inclusion because they are on the IARC Group 1 list of lung carcinogens [4], and because the prevalence of exposure to these substances in the study population was over 3%. Smoking history was parameterized using a comprehensive smoking index (CSI) as described in Leffondre et al [31]. The CSI takes into account the lifetime average number of cigarettes smoked per day, the total duration of smoking, and time since quitting in a single parameter index. It was demonstrated to provide a good fit to the data while maintaining a parsimonious representation of lifetime smoking history, in contrast to multivariable modelling of separate effects of several dimensions of smoking behavior [31]. We have previously described smoking characteristics of cases and controls from Study 2 according to quartiles of the CSI variable distribution [32].
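A hedged sketch of the adjusted model, using statsmodels in place of whatever software the studies actually used, is given below; the column names and data frame are placeholders, and the comprehensive smoking index (`csi`) is assumed to be precomputed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_lung_cancer_model(df: pd.DataFrame) -> pd.DataFrame:
    """Unconditional logistic regression of case status (1 = lung cancer)
    on cotton dust exposure plus the adjustment covariates named in the text.
    All column names are placeholders; the occupational covariates are
    ever/never indicators."""
    formula = ("case ~ C(cotton_exposure, Treatment('unexposed')) + age"
               " + C(ethnicity) + C(education) + C(childhood_finances)"
               " + C(proxy_respondent) + csi"
               " + asbestos + chromium + nickel + silica")
    fit = sm.Logit.from_formula(formula, data=df).fit(disp=False)
    out = pd.DataFrame({"OR": np.exp(fit.params)})
    out[["CI_low", "CI_high"]] = np.exp(fit.conf_int()).to_numpy()
    return out

# Usage (df would hold one row per subject):
# print(fit_lung_cancer_model(df).filter(like="cotton_exposure", axis=0))
```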
For pooled analyses, we analyzed all lung cancer cases and population controls, and in addition to the covariates above, all models included Study (1 or 2) as an adjustment factor, since case/control ratios differed by study. Further, a series of analyses was conducted among self-respondents only. In addition, we also examined job and industry titles associated with exposure to cotton dust, and potential effect modification by smoking history and sex. For stratified analyses, never smokers were grouped with low smokers, defined as individuals having a CSI value at or below the 25 th percentile. Medium to heavy smokers were those with a CSI value above the 25 th percentile.
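For the smoking-stratified analyses, the cut-point is simply the 25th percentile of the CSI distribution. A minimal sketch is shown below; the DataFrame, the csi column, and the choice of computing the percentile over the analysis sample are all assumptions for illustration.

```python
# Minimal sketch of the CSI-based smoking strata used for effect-modification analyses.
import pandas as pd

def smoking_strata(df: pd.DataFrame) -> pd.Series:
    """Label each subject by the smoking stratum described above."""
    cutoff = df["csi"].quantile(0.25)            # 25th percentile of the CSI
    return df["csi"].map(lambda v: "never/low" if v <= cutoff else "medium/heavy")

# toy usage
demo = pd.DataFrame({"csi": [0.0, 0.8, 2.5, 4.1]})
print(smoking_strata(demo).tolist())
# -> ['never/low', 'medium/heavy', 'medium/heavy', 'medium/heavy']
```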
Results
Demographic characteristics of the study populations are outlined in Table 1. Of the 857 lung cancer cases in Study 1, 41.9% were squamous cell carcinoma, 18.6% small cell carcinoma, and 19.5% adenocarcinoma. In Study 2, there were 1203 lung cancer cases: 29.3% squamous cell carcinoma, 17.2% small cell carcinoma, and 38.1% adenocarcinoma. Study 1 was restricted to males, while Study 2 included both males (60.3%) and females (39.7%). The age distribution was similar across all groups. In both studies, most participants were French Canadian, and most had less than 13 years of schooling. Nearly all the cancer cases were smokers, as were the majority of male controls. About half of the females in Study 2 had ever smoked regularly. Among smokers, the majority had smoked for over 30 years prior to interview. Except for histological subtypes, all of the covariates in Table 1 were included in multivariate estimates of odds ratios.
The most common broad occupation groups for individuals exposed to cotton dust are listed in Table 2. They include: fabricating, assembling and repairing of textile, fur and leather products; fiber preparing, spinning, twisting, winding, reeling, weaving and knitting; apparel and furnishing service occupations; and material recording, scheduling and distributing occupations. Not surprisingly, the most commonly listed industry was clothing and textile, followed by retail and wholesale trades. The specific occupational groups most commonly associated with cotton dust exposure were: tailors and dressmakers; patternmaking, marking and cutting of textile, fur and leather products; foremen in fabricating, assembling and repairing of textile, fur and leather products; sewing machine operators, textiles and similar materials; shipping and receiving clerks; pressing occupations; fabricating, assembling and repairing of textile, fur and leather products not elsewhere classified.
As assessed by our team of expert industrial hygienists, lifetime prevalence of exposure to cotton dust among male controls was about 8% in Study 1 and 13% in Study 2 (Table 3). Lifetime exposure prevalence was about 25% among female controls in Study 2. It seems that there was some shift in the threshold for assigning exposure between Study 1 and Study 2, since the increase among males was concentrated among assignments with the designation "possible" exposure and low concentration. Consequently, whereas cumulative cotton dust exposure was about evenly divided between substantial and non-substantial levels in Study 1, in Study 2 the majority of exposure was in the non-substantial category. Among those with cotton dust exposure, the majority were considered definitely exposed, and for at least 30% of their working hours (Table 3). About one-third had been exposed to cotton dust for 1-5 years, and 28% for >20 years. Exposure concentration was generally lower in Study 2 compared to Study 1. Exposure prevalence was somewhat lower for wool dust than for cotton dust, though the overall patterns were similar. As expected, there was some overlap between these two textile exposures. In Study 1, out of 510 subjects exposed to cotton dust, 37.3% (n = 190) were also exposed to wool dust; in Study 2, 52.7% (n = 117) of 222 subjects exposed to cotton dust were also exposed to wool dust. Other exposures commonly assigned to jobs with cotton exposure were treated fibers, synthetic fibers, aliphatic aldehydes, formaldehyde, and magnetic and pulsed electromagnetic fields.

Table 4 shows adjusted ORs between each exposure and lung cancer in each study. An OR was estimated with each control group in Study 1, for each sex in Study 2, and for a pooled analysis. We show results corresponding to ever exposure and to substantial exposure, as defined above. The pooled analysis indicates a weak effect (OR = 1.2) of borderline significance for any exposure (concentrated among males when compared with population controls), and a non-statistically significant effect for substantial exposure. For wool dust, no significant excess risks were observed. Since the proportion of proxy respondents was higher among cases than among controls (29% and 38% of cases in Study 1 and 2, respectively, and 13% and 8% among controls), some differential misclassification of exposure might have occurred and resulted in biased OR estimates. We therefore repeated the analyses in Table 4, restricting to self-respondents only. The results were similar to those in the main analysis (OR for any exposure to cotton dust of 1.0, 95% CI: 0.8-1.2, and OR for substantial exposure to cotton dust of 1.2, 95% CI: 0.7-2.0). We also repeated the analyses, adjusting for smoking with the following three variables instead of the CSI: smoking status (ever/never), natural logarithm of cigarette-years, and years since cessation. Results did not differ from those presented in Table 4 (data not shown). We evaluated whether there was a difference in the effect of cotton dust exposure according to age at first exposure. Approximately two-thirds of exposed subjects had their first exposure before age 25, and we used this as the cut-point for a stratified analysis. Among those first exposed before age 25, the OR corresponding to ever exposure vs. never exposed was 1.2 (95% CI: 0.9-1.6) and that corresponding to substantial exposure was 1.1 (95% CI: 0.6-2.1). Analogous estimates for those first exposed at ages 25 and older were 1.6 (95% CI: 1.1-2.2) and 1.3 (95% CI: 0.5-3.0).
[Table 2. Most commonly listed broad occupation and industry groups for persons exposed to cotton dust and wool dust in two studies in Montreal, Canada, cases and controls combined (cotton dust: total N exposed = 222 and 510 in the two studies). Numbers and percentages are based on persons ever holding a job with the given occupation/industry code, over total subjects with the given exposure; percentages may total over 100, due to persons holding multiple jobs in different occupations and industries.]

Table 5 shows results for each of the three major histologic subtypes of lung cancer. There were no statistically significant deviations from the null value for squamous cell or small cell carcinoma, but there was a significantly increased risk when restricting to adenocarcinoma cases (OR = 1.6, 95% CI: 1.2-2.2). Since some previous studies reported effect modification by smoking, we also analyzed the exposure-cancer associations separately in different smoking strata, namely in a category combining never smokers with light smokers and in another of medium to heavy smokers. As shown in Table 6, the association between ever exposure to cotton dust and lung cancer was slightly stronger in the stratum of medium-heavy smokers (OR = 1.3, 95% CI: 1.0-1.7), but there was no effect modification evident with ever exposure to wool dust. Some previous studies were based on cohorts in certain high exposure industries or occupations, whereas our database included workers across the entire spectrum of occupations and industries. To determine whether exposure to cotton dust in different occupations or industries is associated with different risks, we carried out analyses of cotton dust exposure, stratified on the main industries in which cotton dust exposure occurred in our population. Due to small numbers, these subgroup analyses produced rather unstable risk estimates, but there was no evidence of a protective effect of cotton dust exposure within any industry (data not shown).
Discussion
We used data from two large case-control studies conducted in Montreal to assess the relationship between occupational exposure to cotton dust and wool dust and risk of lung cancer. Subjects in Study 1 were in their active work years roughly from the 1940s to the 1970s, whereas the active period for Study 2 subjects was the 1950s to 1980s. Thus, there was considerable overlap. It is likely that the average concentrations of exposure declined between the two studies because of improved industrial hygiene and use of personal protective equipment. Historically, the Province of Quebec was the hub of the clothing and textile industries in Canada, and despite decreasing quotas and increasing offshore production, it remains so, with approximately 50,000 workers employed in these fields [33]. Lifetime prevalence of exposure was higher in Study 2 than in Study 1 because females, who were disproportionately active in the textile and clothing industries, were not included in Study 1, and because there seemed to be a lower threshold among our exposure experts for assigning these exposures in Study 2 than in Study 1. These various trends between the two studies did not bias our risk estimates, which were stratified by study and adjusted for study in the pooled analyses. Overall, there was little evidence of a protective effect of cotton dust exposure on lung cancer, in Study 1 or Study 2, in males or in females. In fact, the point estimates were usually slightly above 1.0 and attained borderline statistical significance in some of the contrasts. Nor did the analyses by histologic type provide clear evidence of protective effects of cotton dust; indeed, the strongest association indicated an excess risk of adenocarcinoma of the lung. Our results for wool dust, which overlaps with exposure to cotton dust, tended to be close to the null value, except in small and statistically unstable subgroups.
While most studies of cotton textile workers have reported protective effects, and a meta-analysis estimated a summary decrease in risk of 28%, several studies have either found no association between work in the textile industry and lung cancer risk [14][15][16], or a suggestion of increased risk of lung cancer [34]. Our results on cotton dust and wool dust were closer to the null than to a protective effect. Most previous studies of cotton exposed workers had no or little information available on smoking habits. The most prominent exception was the study of Shanghai female textile workers, which collected smoking information from all subjects, and in which there were very few smokers [6]. The validity of the smoking data is questionable since the relative risk estimates for smoking and lung cancer were quite low compared with other studies which have estimated relative risks among female smokers. However, very low cumulative smoking might explain this weak association. In any case, after adjusting for smoking, the investigators reported a strong protective effect of cotton dust. More recent studies suggested an increased risk of lung cancer among workers exposed to organic dust [35]. In addition, further analyses of the Shanghai female textile workers suggested increased lung cancer risk among those whose first exposure to endotoxin occurred in the more distant past, and thus at a younger age [36,37]. In contrast, we did not find evidence of a stronger effect among those first exposed at a young age.
The failure of our study to demonstrate a protective effect of cotton dust exposure is unlikely to be due to simple measurement error in the assessment of cotton dust exposure, as this is not an exposure that is particularly difficult for experts to identify in a work history, given the information that was available to our experts (industry, occupation, worker's tasks, and other details of the workplace). However, if there really is a protective effect of cotton dust exposure, we may have failed to find such an association for one of the following reasons.
First, it may be that the intensity of exposure, on average, in our subjects was much less than that in the cohort studies that have previously reported protective effects. Since ours was a population-based case-control study with workers exposed to cotton dust across a wide range of occupations and industries, the proportion of very highly exposed workers may have been low. Without absolute exposure measures, it is hard to evaluate this possibility. Nevertheless, we can affirm that in our population-based study covering the range of exposure intensities, there was no meaningful departure from the null. Second, there may be effect modification by smoking. The strongest evidence of a protective effect of cotton dust comes from studies conducted in China where there were few smokers [6]. In our study, there were too few nonsmokers to be able to affirm whether or not there is a protective effect in this stratum. The third possible reason for our failure to detect a protective effect has to do with the "endotoxin hypothesis" [18,19]. If there is indeed a protective effect due to the endotoxin content of cotton dust, then cotton dust with less endotoxin content may not be protective. Marchand et al. have reported on endotoxin measurements taken in four Quebec textile mills [38]. They found measurable and even quite high levels throughout the plants, with considerable variability in concentration by plant, process, work station, and season. While the lack of a standardized analytical method prevents the direct comparison of Marchand et al.'s results to a slightly older study also performed in textile mills in Taiwan [39], the concentrations in both studies were of the same order of magnitude, reaching > 500 ng of endotoxins per cubic meter in the most exposed areas.
While some of our exposed subjects were from textile mills, most were from occupations and industries further down the production and retailing chain of textile products. Unfortunately there is little hard data available on endotoxin content of cotton dust or on ambient endotoxin exposure levels in such environments. The evidence from the textile mills remains ambiguous, suggesting lower levels as one goes further in the processing chain within the mill [39], but also elevated levels in later processing steps such as spinning and winding [38]. We presume that the processing of cotton fibers leads to reduction of endotoxin content and that exposure to endotoxins would be much lower further down in the retailing chain of textile products. Thus, while our results are informative about cotton and wool dust in relation to lung cancer, without additional data on endotoxin levels in a wider range of cotton-exposed occupations, it is difficult to assess whether our results are informative about endotoxins and lung cancer. The only hint from our own data was that in analyses of subgroups exposed to cotton dust in different occupations, we saw no difference in the OR estimates according to the occupation in which the exposure to cotton dust occurred (e.g., occupation codes indicating fiber preparation vs. occupation codes indicating textile product fabrication). But these were based on small numbers with wide confidence intervals.
In assessing the associations between cotton and wool dusts and lung cancer, our study had several strengths, including: large sample sizes with fairly high numbers of exposed cases and controls; fairly high participation rates, which reduce the risk of selection bias; complete lifetime work histories with detailed descriptions of each job; job-by-job evaluation of exposures by a team of experts; detailed lifetime history of smoking; and information on a host of other covariates. While there were large numbers of proxy respondents, the results of analyses restricted to self-respondents were virtually identical to the main ones. Notwithstanding these strengths, the study was limited by the lack of measurements of cotton and wool dust, and inferences regarding endotoxins are limited by the lack of endotoxin measurements.
Conclusion
In conclusion, neither cotton dust nor wool dust showed a clear association with lung cancer. We found no evidence of a decreased risk of lung cancer among persons exposed to cotton dust.
|
v3-fos-license
|
2022-08-06T15:03:17.280Z
|
2022-07-21T00:00:00.000
|
251360767
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://jbasic.org/index.php/basicedu/article/download/3831/pdf",
"pdf_hash": "88d5956efd58cb7da195a581854798f354e44e73",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43175",
"s2fieldsofstudy": [
"Education"
],
"sha1": "cf012cec9e4bb301fa08cf56fa09f9b8a797582c",
"year": 2022
}
|
pes2o/s2orc
|
JURNAL BASICEDU English Language Teaching Online Class during Covid-19 Era
Online learning is a transitional form of the education system in the era of the Covid-19 pandemic. It is attempted as a way of continuing education by utilizing the Zoom, WhatsApp (WA), Google Form, and Google Meet applications. This study describes English learning activities at SMP IT Ad-Durrah Medan, which implements an online learning system via WhatsApp. The data collection process was carried out systematically using interview techniques (also online, via Google Form). The results of this study indicate that online learning systems are considered appropriate as an alternative for maintaining the sustainability of the educational process in the Covid-19 era, including those practiced by education providers at SMP IT Ad-Durrah Medan. The choice of the WhatsApp application is based on the familiarity of the application among teachers, students, and student guardians. Learning activities, assignments, and learning assessments are carried out virtually by educators during the Covid-19 pandemic. The weaknesses of the online learning system were also identified, namely that the internet signal is often weak, making access to Google search pages slow, and that student learning facilities are still limited.
INTRODUCTION
Quality education is the hope of all Indonesian people. This is based on the importance of education as an individual provision for navigating life. In addition, through a quality education process, a superior generation (human resources) will be born (Patilima, 2022; Al-Issa & Al-Bulushi, 2012: 141-176). In line with this, the government has issued a policy in the form of a national education system as the basis (principle) for improving the quality of the nation's human resources, assisted by education providers who facilitate the best education for the community, with the aim of creating a society capable of solving existing problems (Usman, 2014: 13-31). In this context, the child's learning process requires a set of appropriate methods, techniques, approaches, and strategies, known as the educational curriculum.
The Covid-19 pandemic has become a contemporary problem in the world of education, as in all sectors of human life. This is because access to human interaction must be limited to break the "chain" of the spread of the virus, which is considered very dangerous and can lead to death. This is also the basic reason for the transfer of all learning activities to virtual (online) modes. The term work from home (WFH) has also become familiar to employees and education personnel (Zendrato, 2020: 242; Duan & Zhu, 2020: 300-302).
Supporting these efforts, the government through the ministry of education and culture has officially established an online learning policy in the Covid-19 pandemic situation (Permendikbud number 4 of 2020). In this regulation, the principles and limitations of the online learning system are regulated in such a way that they are carried out from their respective homes. These efforts must be balanced with improving the quality of educators in using virtual methods and other learning tools (Sajow, 2022;Hikmat, et.al., 2020).
The application of online learning is considered appropriate as an alternative to the learning process in the era of the Covid-19 pandemic. Likewise, there are still various weaknesses, especially in terms of using the application for teachers who are accustomed to offline learning systems (face to face). For this reason, Rapih & Sutaryadi (2018: 78-87) explain that teachers are given the freedom to create the learning process, but the indicators and learning orientations are increasing students' thinking skills, from Lower Order Thinking Skill (LOTS) to Higher Order Thinking Skills (HOTS). In line with this, Saragih & Nasution (2021: 40-47) revealed that higher-order thinking skills (HOTS) will encourage students to reason broadly on any material taught by the teacher.
In practice, teachers are required to be able to use technology actively. As is the case with the teachers at SMP-IT Ad Durrah, the teacher deliberately uses WhatsApp as a learning medium, because it is familiar and easy to use by all people. The learning system via WhatsApp also facilitates learning interactions from each other's homes. Not only that, various forms of documents, photos, and videos for assignments to students can also be sent via WhatsApp (Marbun & Sinaga, 2021: 3299-3305).
Even though the online learning system has various weaknesses in the implementation process, it provides a lot of convenience by learning from each other's homes (without being bound by learning space and time). This is of course supported by the very rapid sophistication of technology and information, as well as fast internet service facilities so that learning continues to be carried out properly without worrying about the spread of the Covid-19 virus (Baety & Munandar, 2021: 880-989;Pradipta, et.al., 2021: 144-148).
Learning the online system is the main answer to continuing education in the emergency period of the Covid-19 outbreak. In this context, educators are required to be able to facilitate student learning optimally (Oktaviani, et.al., 2021: 77-88). Then, students are also expected to increase the time and quality of learning, so that later they can overcome various life problems and student learning tasks.
Online learning has the main goal as a form of optimizing various decisions that students make in equipping themselves independently online. The form of debriefing in question includes the search for additional information and strengthening self-competence automatically related to skills or soft skills. Of course, this effort helps students in treading technological sophistication through the learning process, so that the quality of reasoning and appreciation of students can also increase. Furthermore, the most distinctive characteristic of online learning is the ease of setting a study schedule, where teachers and students have flexible time to carry out learning activities (Sartika, 2021: 49-54;Winarsieh & Rizqiyah, 2020: 159-164). The current digital era, with its various sophistications, is faced with an emergency in the form of the Covid-19 pandemic. Even so, education as the front line in the human resource empowerment sector must continue to run. For this reason, the application of online learning is considered appropriate as an alternative to learning, through the use of the internet network (Uyun & Warsah, 2022: 395-412;Sabin, et.al., 2020: 1-12). In this way, learning will continue, and limited access and interaction during the pandemic will also be maintained.
Online learning or also known as distance learning (PJJ) during the Covid-19 pandemic situation presents significant technological benefits for the learning process (education world). The entire learning process can be carried out well through various application features that are very helpful. Furthermore, Indonesia as a nation that prioritizes education in the state constitution seeks to educate the nation's life, of course fully supporting the sustainability of education. This is done to avoid the occurrence of a lost generation. The application features that are very helpful in the learning process are e-learning, WhatsApp, google classroom, zoom, and Youtube (Wilson, 2020;Novita & Hutasuhut, 2020: 1-11).
Based on an initial (preliminary) study for this research, the researcher distributed a Google Form link to all educators at SMP IT Ad-Durrah Medan to obtain information related to the use of online learning applications. The results showed that educators are most familiar with WhatsApp, making it the most popular application in the learning process, alongside Zoom, Google Form, and others. In addition, voice notes provided a systematic way of listening to students' voices.
From this information, the author also learned which learning methods are suitable for carrying out online learning during the current pandemic. Information was also obtained about the obstacles experienced by teachers and students, the material delivered, student learning outcomes, and how many times teachers met students in online learning when online learning methods were applied using online applications. This research is summarized under the title "English Language Teaching Online Class during Covid-19 Era".
METHOD
This research is qualitative research (approach) with a descriptive study method. The setting of this research is the Integrated Islamic Middle School (SMP IT) Ad-Durrah Medan. In the data collection process, the researcher distributed a number of questions through a Google Form link, and the teachers filled in their answers in the form (Assingkily, 2021). The researchers then obtained information from the answers filled in by the teachers regarding online learning (English teaching) at SMP IT Ad-Durrah Medan. The research respondents totaled 31 teachers. The questions that the researchers asked covered the procedures for implementing online learning, the methods applied, the applications used, and the obstacles experienced by teachers during the online learning process.
RESULTS AND DISCUSSION
Based on the Minister of Education and Culture's policy Number 3 of 2020 regarding steps to prevent the spread of the Coronavirus, education in Indonesia was shifted to online learning so that students could stay safe at home. Online learning from home as implemented by SMPIT Ad-Durrah Medan utilizes the various online applications that are available so that the learning process runs optimally. This section aims to evaluate the online teaching and learning process at SMPIT Ad-Durrah Medan and to find out which information technology, in the form of online applications, is used.
Picture 1. Online applications used for online learning
From the data obtained by the researcher, all 31 teachers implemented online learning using online applications during the Covid-19 pandemic. Picture 1 shows the online applications used for online learning: 21 teachers use the WhatsApp application for online learning (these 21 teachers also use other applications to further support learning activities), 5 teachers use the Google Form application, 3 teachers use the Zoom application, and 2 teachers use Voice Note. This is because the WhatsApp application is installed on the mobile phones of both teachers and students, making it easier for teachers to convey materials and assignments to students by sending them to the WhatsApp group.
Picture 1 also shows that, in addition to the WhatsApp application, as many as 5 teachers used the Google Form application, which is owned by Google and easy to access. It is used for evaluating students after they receive material online from their respective subject teachers: the teacher sends a questionnaire containing assignments, and students are instructed to complete the task by providing answers in the column provided in the questionnaire. In one subject, several platforms can be used to deliver learning materials. The use of such platforms is an appropriate alternative for facilitating the online learning process.
Google Form has the advantage of being used as a medium for online learning, including having various types of tests that can be used, such as multiple-choice, checklists, or long-answer tests. This application also has an attractive appearance with many templates so that it can be more colorful, and also has facilities for users to add images or photos. In the teaching and learning process using this application students can send responses or answers quickly and wherever they are (Parinata & Puspaningtyas, 2021: 56-65).
Picture 1 also shows that as many as 2 teachers at SMPIT Ad-Durrah Medan use Voice Note in online learning. It is used in Tahfidz subjects so that teachers can apply the Talaqi method to listen to students' Qur'an memorization deposits, and also in English subjects so that students can send voice recordings as assignments when reading English texts, allowing the teacher to check whether the reading is correct.
From the data above, it can be seen that the problem most often faced is an unstable signal during online learning. Most of the teachers, as many as 62%, stated that students could not join online learning or submit assignments on time due to unstable signals; sometimes there was no signal at all. In addition, doing work from home by implementing online learning also requires an adequate data package, and as many as 24% of teachers complained about this. Students in particular often run out of internet data packages due to economic factors during the current pandemic. All applications used for online learning consume a lot of internet quota, especially the Zoom application in the form of video conferences. Furthermore, as many as 5% of teachers complained about student absenteeism in online learning due to some of the problems mentioned above. This makes it difficult for both teachers and students to deliver and receive learning materials.
The application of online learning methods was carried out suddenly along with the spread of the coronavirus, so both students and teachers were neither used to it nor fully prepared to carry out online learning activities. As many as 9% of teachers stated that students and the teachers themselves were not used to doing online learning, especially using the Zoom video application. This matter of habit is one of the success factors in learning: if teachers and students are accustomed to using the application, then the basic requirements for implementing online learning have been fulfilled and support the success of online learning activities (Mian). Efforts are made to overcome the various obstacles that arise when online learning is carried out so that learning activities can run optimally. Based on the data obtained by the researcher, when students face obstacles in the form of unstable signals, limited internet data packages, or unfamiliarity with operating online applications, causing difficulties in participating in online learning that affect student attendance, the teachers at SMPIT Ad-Durrah Medan repeat the lesson, instruct students who have been able to follow online to redistribute the material to other friends, and allow students to work in groups of 2 to 3 students on one device while still adhering to health protocols.
CONCLUSION
Based on the description of the results and discussion of the research above, it is concluded that online learning systems are considered appropriate as an alternative for maintaining the sustainability of the educational process in the Covid-19 era, including those practiced by education providers at SMP IT Ad-Durrah Medan. The choice of the WhatsApp application is based on the familiarity of the application among teachers, students, and student guardians. Learning activities, assignments, and learning assessments are carried out virtually by educators during the Covid-19 pandemic. The weaknesses of the online learning system were also identified, namely that the internet signal is often weak, making access to Google search pages slow, and that student learning facilities are still limited.
|
v3-fos-license
|
2023-03-08T16:08:52.738Z
|
2023-03-01T00:00:00.000
|
257394083
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/24/5/5002/pdf?version=1678173449",
"pdf_hash": "dbf8c96f261eafb9bc22b66db6d7355312653c41",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43176",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "760193d1dc48310e27e3b181c83ce65bc1b4c85d",
"year": 2023
}
|
pes2o/s2orc
|
An Immunological Perspective on the Mechanism of Drug Induced Liver Injury: Focused on Drugs for Treatment of Hepatocellular Carcinoma and Liver Transplantation
The liver is frequently exposed to potentially toxic materials, and it is the primary site of clearance of foreign agents, along with many innate and adaptive immune cells. Subsequently, drug induced liver injury (DILI), which is caused by medications, herbs, and dietary supplements, often occurs and has become an important issue in liver diseases. Reactive metabolites or drug–protein complexes induce DILI via the activation of various innate and adaptive immune cells. There has been a revolutionary development of treatment drugs for hepatocellular carcinoma (HCC) and liver transplantation (LT), including immune checkpoint inhibitors (ICIs), that show high efficacy in patients with advanced HCC. Along with the high efficacy of novel drugs, DILI has become a pivotal issue in the use of new drugs, including ICIs. This review demonstrates the immunological mechanism of DILI, including the innate and adaptive immune systems. Moreover, it aims to provide drug treatment targets, describe the mechanisms of DILI, and detail the management of DILI caused by drugs for HCC and LT.
Introduction
Drug-induced liver injury (DILI), an injury to the liver or biliary system caused by medications, herbs, or dietary supplements, accounts for 50% of acute liver failure cases in the United States [1,2]. DILI is classified as intrinsic (or direct) or idiosyncratic according to its pathogenesis [3]. Intrinsic DILI, which is predictable and acute-onset, occurs in a dose-dependent manner and can be reproduced in animal models [2,4]. However, idiosyncratic DILI, the most frequent type, is unpredictable and not dose-related, although a minimum dose of 50 mg/day is usually required for its development [5].
The incidence of DILI varies by study design and cohort. Retrospective cohorts show lower incidence rates of DILI than prospective studies. According to several prospective studies, the annual incidence of DILI is approximately 13.9-19.1 per 100,000 inhabitants [6,7]. DILI can be influenced by multiple factors, such as age, sex, environmental exposure, and genetics, including human leukocyte antigen (HLA) [8,9]. Its diagnosis is based on an appropriate temporal relationship between drug intake and liver injury, along with the exclusion of other possible causes of liver damage, including viral infection and alcohol consumption [2]. The Roussel Uclaf Causality Assessment Method (RUCAM) is the most widely used assessment scale for DILI [10]. Moreover, according to elevated liver enzyme levels, represented as the alanine aminotransferase (ALT)/alkaline phosphatase (ALP) ratio (R), DILI patterns can be determined as follows: hepatocellular pattern (R ≥ 5), cholestatic pattern (R ≤ 2), and mixed pattern (2 < R < 5) [2,11]. Recently, the updated RUCAM of 2016 was introduced to improve the diagnostic accuracy of DILI [12]. According to the updated RUCAM, the assessment of DILI proceeds differently according to the pattern of DILI, determined using the ALT/ALP ratio (R) at first presentation. The updated RUCAM also presents a checklist for the differential diagnosis of DILI and criteria for a positive result of DILI following unintentional re-exposure [12]. The diagnosis of DILI can be confounded by several factors, including comedication and concomitant diseases; therefore, causality assessment using the updated RUCAM is important.
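As a small illustration of the R-ratio rule above, the sketch below classifies the injury pattern from ALT and ALP values. The upper-limit-of-normal (ULN) values are laboratory-dependent placeholders, not fixed constants, and the function name is hypothetical.

```python
# Sketch of the R-ratio pattern classification described above.
def dili_pattern(alt, alp, alt_uln=40.0, alp_uln=120.0):
    """Classify the biochemical pattern of liver injury from serum ALT and ALP,
    each expressed as a multiple of its upper limit of normal (ULN)."""
    r = (alt / alt_uln) / (alp / alp_uln)
    if r >= 5:
        return "hepatocellular"
    if r <= 2:
        return "cholestatic"
    return "mixed"  # 2 < R < 5

print(dili_pattern(alt=480, alp=150))  # -> 'hepatocellular' (R = 9.6)
```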
Recent studies have suggested that specific human leukocyte antigen (HLA) genotypes, such as HLA-B*5701, are risk factors for the development of DILI in patients receiving some drugs [13,14]. However, HLA genotypes cannot sufficiently explain the risk of DILI. Moreover, microsomal cytochrome P450 (CYP) enzymes also play a role in the development of DILI [14]. As CYP is involved in the metabolism of many drugs, various isoforms of CYP, including CYP3A4, may be associated with the development of DILI [15]. Population-based studies have also demonstrated that pre-existing liver disease, concomitant severe skin reactions, and comedications, such as nonsteroidal anti-inflammatory drugs, are associated with the development of DILI [6,16]. Furthermore, ferroptosis can also be a potential factor in the pathogenesis of DILI [17]. Ferroptosis, an iron-dependent form of cell death driven by reduced cystine uptake and the resulting production of lethal reactive oxygen species, can contribute to the development of DILI [18].
From an immunologic perspective, the liver is the primary site of the clearance of foreign chemical agents; thus, it is exposed to many potentially toxic chemicals that can cause hepatocyte damage via mitochondrial dysfunction and oxidative stress [19]. In addition, the liver is an immune organ with abundant innate (e.g., neutrophils, natural killer [NK] cells, and Kupffer cells) and adaptive (T cells and B cells) immune cells [20]. Although the liver is an immunologically tolerant organ, immune responses, including innate and adaptive immune cells, play pivotal roles in the development of DILI. Tyrosine kinase inhibitors (TKIs), such as sorafenib, lenvatinib, and regorafenib, were developed to treat advanced hepatocellular carcinoma (HCC). Moreover, recent studies have demonstrated the high efficacy of immune checkpoint inhibitors (ICIs), including atezolizumab plus bevacizumab, in HCC [21,22]. Along with the high efficacy of these novel drugs, DILI has become a critical issue in ICI use.
In this review, we discuss the immunological perspective of the mechanism of DILI, including the innate and adaptive immune systems ( Figure 1). Moreover, we describe the frequency, hepatobiliary manifestations, and mechanism of DILI in patients with HCC treated with TKIs and ICIs. We also demonstrate the development of DILI in liver transplant (LT) patients administered immunosuppressants (ISs).
Figure 1. Mechanisms of the development of drug-induced liver injury (DILI). Reactive metabolites or drug-protein complexes cause ER and oxidative stress in hepatocytes. BSEP inhibition and mitochondrial damage also damage hepatocytes, leading to the secretion of DAMPs, including HMGB-1, heat shock proteins, S100 proteins, and ATPs. DAMPs activate innate immune systems and stimulate immune response. Activated innate immune systems (e.g., Kupffer cells, neutrophils, NK cells, NK T cells, and mast cells) damage hepatocytes, recruit immune cells, and stimulate adaptive immune response. Reactive metabolites or drug-protein complexes are presented by APCs, which lead to activation of adaptive immune response (e.g., T cells and B cells) along with the stimulation of APCs by DAMPs. Meanwhile, Treg cells decrease and fail to maintain immune tolerance. APC, antigen presenting cells; ATPs, adenosine triphosphate; BSEP, bile salt export pump; DAMP, damage-associated molecular patterns; ER, endoplasmic reticulum; HMGB, high-mobility group box; IFN, interferon; IL, interleukin; NK, natural killer; TNF, tumor necrosis factor; Treg, regulatory T cells.
Danger Hypothesis
T cell-mediated liver injury is the cornerstone of DILI development [23]. The hapten hypothesis, which suggests that haptens make the proteins "foreign" and lead to their recognition and destruction by the immune system, was introduced to explain this immune response [24]. However, this hypothesis is insufficient to explain the strong immune response in DILI. Subsequently, the danger hypothesis was proposed to redeem the hapten hypothesis. The generation of reactive metabolites or drug-protein complexes damages hepatocytes via several pathways, including oxidative stress, endoplasmic reticulum (ER) stress, bile salt export pump (BSEP) inhibition, and mitochondrial damage [3,25]. Damaged hepatocytes release several damage-associated molecular patterns (DAMPs), such as high-mobility group box (HMGB)-1, heat shock proteins, S100 proteins, and ATPs, which play a pivotal role in the activation of antigen-presenting cells (APCs) by producing a second signal (interaction of CD28 with B7 molecules) [26]. This co-stimulation is often referred to as a "danger signal" according to the danger hypothesis. Activated APCs lead to the activation of adaptive immune responses, including CD4+ T cells, CD8+ T cells, and B cells, which cause idiosyncratic DILI [26] (Figure 1).
Innate Immune Systems in DILI
As discussed above, reactive metabolites or drug-protein complexes can damage hepatocytes via ER and oxidative stress, inhibition of BSEP, and mitochondrial damage [3,25]. Damaged hepatocytes secrete DAMPs, including HMGB-1, heat shock proteins, S100 proteins, and ATPs, which activate the innate immune system and stimulate the immune response [27]. Activated innate immune systems (e.g., Kupffer cells, neutrophils, NK cells, and NK T cells) damage hepatocytes, recruit immune cells, and stimulate adaptive immune response during DILI ( Figure 1) [28].
Kupffer Cells
Kupffer cells, resident macrophages in the liver, are important in DILI development. They play key roles in phagocytosis, antigen presentation, and the production of pro-inflammatory cytokines [29]. Traditionally, Kupffer cells can be classified into two types as follows: M1 Kupffer cells, which secrete pro-inflammatory cytokines, such as interleukin (IL)-6 and tumor necrosis factor alpha (TNF-α); and M2 Kupffer cells, which secrete potent immunosuppressive cytokines [30,31]. During DILI, Kupffer cells are activated by DAMPs and release pro-inflammatory cytokines and reactive oxygen radicals, along with infiltrated macrophages [32]. Kupffer cells also produce chemokine ligands to recruit monocyte-derived macrophages to the liver during the early phase of inflammation [33]. Activated Kupffer cells can exacerbate liver injury through these pathways.
Neutrophils
Neutrophils, the first-line responders to bacterial and fungal infections, are the most abundant fraction of the innate immune cell group [34]. They defend against infection via phagocytosis, degranulation, and extracellular trapping [35]. Granulocyte colony-stimulating factor is a key regulator of neutrophil generation and maturation. The gut microbiome and its metabolites may also play a role in neutrophil function [36]. During infection and inflammation, neutrophils are recruited to the site of inflammation via cytokine and chemokine production [34]. Neutrophils extravasate into the liver parenchyma via chemotactic signals from hepatocytes and other extravasated neutrophils. Extravasated neutrophils directly contact hepatocytes and trigger neutrophil activation. Eventually, abnormally activated neutrophils promote oxidative stress, mitochondrial dysfunction, and necrotic cell death, which can lead to acute liver injury during DILI [36]. Liver injury can be exacerbated by oxidative stress, involving myeloperoxidase and proteolytic enzymes [35,37].
NK Cells
NK cells, the key players in liver immunity, are abundant in the liver, constituting 30-50% of intrahepatic lymphocytes [38]. NK cells have cytotoxic functions and express immunomodulatory cytokines, such as IL-1β, IL-2, IFN-γ, and TNF-α, and they can be categorized into subsets according to their characteristics, including cytokine secretion and cytotoxic capabilities [39,40]. These functions can also mediate DILI pathogenesis. The release of cytotoxic granzymes and perforin along with the production of TNF-α and IFN-γ can result in liver injury during DILI [41,42]. IFN-γ production can mediate the infiltration of immune cells and release of cytokines, which results in hepatocyte apoptosis during DILI [42,43].
NK T Cells
NK T (NKT) cells are unique lymphocytes that have both T and NK cell properties in their phenotype and function [44,45]. NKT cells, characterized by semi-invariant T cell receptors (TCRs) and the major histocompatibility complex class I-like molecule CD1d, are pivotal in immunity against pathogens, bridging innate and acquired immunity [46-48]. These cells can be activated in both TCR-dependent and -independent manners to release cytokines, including IFN-γ and IL-17, which can recruit neutrophils and macrophages and activate adaptive immune responses, resulting in acute liver injury during DILI [49,50]. However, studies have shown that NKT cells also have protective roles in liver injury and cancer immunology [28,51]. Recent studies have also demonstrated the potential role of the gut microbiome as a regulator of NKT cells, with further validation studies needed [51,52].
Mast Cells
Mast cells (MCs) originate from hematopoietic stem cells and play a role in initiating the innate immune response [53,54]. MCs are activated by DAMPs, cytokines, and chemokines [55-57]. Activated MCs undergo degranulation and release histamine and TNF, which activate the innate immune system and exacerbate inflammation [58-60]. This process stimulates hepatic stellate cells, Kupffer cells, and pro-fibrogenic signaling pathways, which aggravate liver damage and fibrosis [61,62]. Recent studies have also demonstrated that activated MCs affect T cell activation and contribute to adaptive immunity [63,64].
Adaptive Immune Systems in DILI
The adaptive immune response is stimulated by activated innate immune systems, released DAMPs, and APCs presenting reactive metabolites or drug-protein complexes. The adaptive immune response, a critical process in acute injuries, includes CD4+ and CD8+ T-cell responses and B cell-mediated humoral reactions [65]. During DILI, activated CD4+ and CD8+ T cells and B cells damage hepatocytes. Meanwhile, regulatory T (Treg) cells and their functions are decreased, exacerbating liver injury in DILI [65] (Figure 1).
CD4 + and CD8 + T Cells
Among T cells, CD4+ and CD8+ T cells are the main T lymphocytes in adaptive immune responses and are pivotal during liver injury [66]. The presentation of reactive metabolites or drug-protein complexes by APCs along with signal 2 activates CD4+ Th0 cells, which triggers a subsequent adaptive immune response [25,67]. Among subsets of CD4+ T cells, activated helper T (Th) 1 cells secrete IFN-γ, IL-2, and TNF-α and activate CD8+ T cells during DILI [68,69]. Th2 cells, an important subset of CD4+ T cells, release IL-4 and drive the proliferation and differentiation of B cells, which cause B cell-mediated humoral reactions [70,71]. Infiltrated CD8+ T cells, the major killer cells in adaptive immunity, have direct cytotoxic function and secrete granzymes, perforin, and cytokines, including TNF-α and IL-17, which cause cell death during DILI [65,72]. Indeed, infiltration of cytotoxic T cells (CTLs) may play an important role in fulminant drug-induced hepatic failure [73].
B Cells
B cells originate from hematopoietic stem cells in the bone marrow. After maturation, B cells migrate from the peripheral blood into the spleen and germinal center [74]. As in other liver diseases, B cells participate in immune response and hepatocyte damage during DILI. B cells account for 8% of intrahepatic lymphocytes, which are activated and mature into plasma cells [75]. Plasma cells produce antibodies against proteins and damage hepatocytes during DILI [76]. During DILI, plasma cells can also produce autoantibodies against native proteins, such as cytochrome P450, which exacerbates liver injury [77].
Treg Cells
Treg cells, accounting for 5-10% of CD4+ T cells, are crucial for maintaining immune homeostasis and tolerance in liver disease and transplantation [78-80]. Treg cells secrete IL-10 and TGF-β, suppressing the proliferation of CD4+ T and CD8+ T cells and secretion of IFN-γ [81]. Moreover, Treg cells inhibit the proliferation of Th17 cells and release of IL-17 [82]. A recent study demonstrated that Treg cells can be modulated by the gut microbiome in patients with autoimmune diseases, inflammatory bowel disease (IBD), and transplantation, which might be associated with the pathogenesis of these diseases [83-85]. Indeed, a decrease in Treg cells induces an inflammatory response that leads to liver damage [86]. During DILI, intrahepatic Treg numbers and Foxp3 expression decrease, exacerbating liver injury with a decreased IL-10 level [87]. Increasing Treg cell numbers may alleviate liver injury via the secretion of IL-10 and TGF-β, which might be a treatment target for DILI [88,89].
DILI Caused by Drugs Treating HCC
HCC remains a global burden, accounting for 800,000 deaths worldwide [90]. Despite the development of screening protocols and surgical or locoregional treatments for early HCC, diagnosis commonly occurs at the advanced stage [29]. Moreover, approximately half of all patients with HCC experience systemic therapies in their treatment history [91]. In the past decades, sorafenib, a TKI, has been used as the 1st line therapy for advanced HCC. Several TKIs, including lenvatinib, regorafenib, and cabozantinib, have been developed for the 1st and 2nd line treatment of advanced HCC [90]. Recently, immune checkpoint inhibitors, including atezolizumab plus bevacizumab, have shown high efficacy in the treatment of advanced HCC [21,22].
As described above, the liver contains various immune cell types, whose response to ICIs is mostly affected by the tumor microenvironment (TME), which is composed of Treg cells, tumor-associated macrophages (TAMs), cytotoxic T cells, myeloid-derived suppressor cells (MDSCs), and neutrophils [92,93]. The crosstalk between tumor cells and several immune cells, which causes an immunosuppressive status, has been a treatment target for ICIs to restore the immune response to HCC [94]. During ICI treatment, liver injury can be induced via direct or indirect immune pathways. In this section, we discuss the target, frequency, mechanism, and treatment of DILI caused by drugs for HCC (Table 1).
During treatment with TKIs, elevated serum aminotransferase levels are common (~50%); however, severe hepatitis with values greater than five times the upper limit of normal is rare [101]. Nevertheless, several studies have reported that TKI-induced DILI is associated with progressive liver injury and failure [102,103]. Along with DILI, hand-foot syndrome and skin rash can be present in some patients who are administered TKIs, such as sorafenib and regorafenib [101,104,105]. In liver histopathology, hepatocellular necrosis is the most frequent manifestation of TKI-induced DILI, and immune-mediated hepatitis has also been reported, including with sorafenib-induced DILI [104]. Although the specific mechanism remains unclear, several TKIs, including sorafenib and regorafenib, are metabolized via the CYP 3A4 pathway, which may be associated with the production of a toxic intermediate (Figure 2) [101,105]. The direct effect of inhibition of cellular kinases, such as by lenvatinib and cabozantinib, can be another suggested mechanism for TKI-induced DILI [106,107]. TKIs can also induce oxidative stress and apoptotic pathway activations, which can lead to immune response activation and TKI-induced DILI [104,108]. Moreover, several signal transduction pathways, including epidermal growth factor receptor and platelet-derived growth factor receptor, which interact with TKIs, play pivotal roles in regulating DILI and are associated with TKI-induced DILI [109].

[Table 1 footnote (beginning truncated): ... cause of clinically apparent liver injury; category C, probable rare cause of clinically apparent liver injury; category D, possible cause of clinically apparent liver injury; category E, unproven but suspected rare cause of clinically apparent liver injury. CTLA-4, cytotoxic T-lymphocyte-associated protein 4; FGF, fibroblast growth factor; MET, hepatocyte growth factor receptor; mTOR, mammalian target of rapamycin; PD-1, programmed cell death 1; PDGF, platelet derived growth factor; PD-L1, programmed cell death ligand 1; R, ratio; RET, rearranged during transfection; VEGFR, vascular endothelial growth factor receptor.]
Owing to the possibility of DILI, the Food and Drug Administration recommends monitoring liver function with the use of some TKIs, including regorafenib. As TKI-induced DILI usually resolves after discontinuation of the drug, appropriate monitoring and dose reduction or temporary cessation can successfully control TKI-induced DILI [101,104].
The combination of anti-VEGF drugs with ICIs changes the tumor endothelium, increasing the infiltration of effector immune cells [113]. Moreover, combination therapy has a synergistic effect of increasing antitumor immune cell responses and inhibiting immunosuppressive pathways [114]. Indeed, ICIs that inhibit PD-1 or PD-L1 restore the function of effector CD8 + T cells [115]. CTLA-4 inhibitors activate naïve CD4 + and CD8 + T cells by promoting the interaction between costimulatory signals (B7 with CD28) [116]. Moreover, the addition of anti-VEGF drugs can show synergistic effects via several mechanisms, such as normalization of the vessel, which can lead to improvement in drug delivery and reduction in the immunomodulatory effect of VEGF on TAMs, MDSCs, Treg cells, and effector T cells [117].
Immune Checkpoint Inhibitors
Recently, several ICIs have been approved for HCC treatment. Atezolizumab (an anti-PD-L1 antibody) plus bevacizumab (an anti-VEGF antibody) have changed the treatment landscape and paved the way for combination therapy, with ICIs showing better overall survival than sorafenib [21]. Moreover, durvalumab (anti-PD-L1 antibody) and ...

[Figure 2. Suggested mechanisms of drug-induced liver injury (DILI) caused by drugs administered to patients with hepatocellular carcinoma or liver transplantation. Several tyrosine kinase inhibitors (TKIs), calcineurin inhibitors, and mTOR inhibitors are metabolized via the cytochrome P450 pathway, which may be associated with the production of a toxic intermediate. These drugs can also induce oxidative stress and apoptotic pathway activations, which can lead to the activation of immune response. Mycophenolate mofetil can induce mitochondrial damage, which then leads to DILI. Immune checkpoint inhibitors (ICIs) deplete Treg cells, inducing the reduction of anti-inflammatory cytokines and proliferation of CD8+ T cells. Moreover, early B cell changes may induce autoreactive B cells, leading to ICI-induced DILI. APC, antigen presenting cells; BSEP, bile salt export pump; DAMP, damage-associated molecular patterns; ER, endoplasmic reticulum; ICIs, immune checkpoint inhibitors; MMF, mycophenolate mofetil; mTORi, mammalian target of rapamycin inhibitors; NK, natural killer; TKI, tyrosine kinase inhibitors; Treg, regulatory T cells; ↑, an increase in the indicated cells; ↓, a decrease in the indicated cells; …, the reduction and depletion of indicated cells.]

ICI-induced DILI is an immune-related adverse event characterized by elevated aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels [117]. Although the pattern of ICI-induced DILI is heterogeneous, the hepatocellular type is usually the most frequent [118]. Using the RUCAM model, ICI-induced DILI usually begins 8-12 weeks after ICI initiation, although it can occur at any time [119,120]. The incidence of ICI-induced DILI is known to be higher in patients treated with combination therapy (up to 18%) than in those treated with monotherapy (up to 9%) [120,121]. Moreover, as patients with HCC usually have chronic hepatitis or cirrhosis, the incidence of ICI-induced DILI is more frequent than that in patients without liver cancer [122]. According to the type and dose of ICIs, the incidence of ICI-induced DILI of any grade ranges from 8% to 20% and is highest in patients treated with the combination of anti-PD-1 and anti-CTLA4 antibodies [111,123-125]. In the diagnosis of ICI-induced DILI, it is essential to exclude other confounding factors, including co-medication, concomitant diseases, and hepatic metastasis, as well as to evaluate the possibility of ICI-induced DILI based on RUCAM [15,126]. Moreover, ICI-induced DILI should be differentiated from autoimmune hepatitis (AIH) [127]. ICI-induced DILI usually has a negative or low titer of antinuclear and anti-smooth muscle antibodies and does not have a female preponderance [120].
Several mechanisms have been proposed to explain ICI-induced DILI development (Table 2 and Figure 2). The first is the ICI-induced reduction and depletion of Treg cells, immune cells essential for maintaining tolerance, which occurs especially with CTLA-4 blockade [128,129]. The depletion of Treg cells subsequently induces the reduction of anti-inflammatory cytokines and the proliferation of CD8+ T cells [130,131]. Moreover, early B cell changes, including elevation of the CD21lo subtype, may induce autoreactive B cells, leading to ICI-induced DILI [132]. Representative histopathologic features of ICI-induced DILI are shown in Figure 3. Liver histopathology showed moderate portal inflammation with CD3+, CD4+, and CD8+ T cell infiltration along with periportal hepatocytic necrosis (Figure 3A-D). Predominant infiltration of histiocytes (CD68+ cells) was identified, along with mild infiltration of CD38+ cells, suggesting the presence of plasma cells (Figure 3E,F). ICI-induced DILI usually presents with lympho-histiocytic infiltration with lobular hepatitis, whereas AIH presents with interface hepatitis with plasma cell infiltration [120]. The gut microbiome may contribute to the development of immune-related adverse events (irAEs), especially immune-related colitis [133]. Gut microbial composition and its changes are associated with various liver diseases and may influence the response to cancer immunotherapy [134][135][136]. In this context, the gut microbiota may be a biomarker for predicting irAEs, including DILI. Further studies are needed to elucidate the specific pathogenic mechanisms underlying ICI-induced DILI.
ICI-induced DILI is asymptomatic in most cases; however, skin reactions (rashes) can occur in some patients [137]. Skin reactions are frequent irAEs after ICI treatment [138]. Moreover, irAEs frequently involve the gastrointestinal tract, endocrine organs such as the thyroid, and the lung [138]. The severity of ICI-induced DILI is classified according to the Common Terminology Criteria for Adverse Events (CTCAE) of the National Cancer Institute (Table 2) [139]. From grade 2, ICI-induced DILI is treated by stopping the ICI along with corticosteroids [140,141]. In grade 2 DILI, 0.5-1 mg/kg/day of prednisolone is recommended, and in grades 3 and 4, the dose rises to 1-2 mg/kg/day of IV methylprednisolone [142]. High-dose ursodeoxycholic acid (UDCA) can also be added for patients with cholestasis [118]. In patients refractory to corticosteroids, mycophenolate mofetil (MMF), azathioprine, or tacrolimus have been used to improve liver function tests [142][143][144]. Although the time to resolution of ICI-induced DILI varies, patients usually recover within two weeks [145]. Reintroduction of the ICI after DILI can be considered in patients with grade 2 and 3 DILI, whereas patients with grade 4 DILI must permanently discontinue ICI [146]. Corticosteroids can increase the risk of bacterial infection; therefore, strict evaluation and diagnosis of ICI-induced DILI using the updated RUCAM are needed before the commencement of corticosteroid therapy [12,147]. Moreover, further studies are required to identify and validate predictors of ICI-induced DILI development.
Table 2 (proposed immune mechanisms of ICI-induced DILI): Treg cells, reduction in Treg cells and anti-inflammatory cytokines [107,108]; Th1 cells, increase in Th1 cells and pro-inflammatory cytokines causing activation of CTLs and macrophages [107-110]; CTLs, stimulated proliferation of CD8+ T cells [107-110]; B cells, early B cell changes including elevation of the CD21lo subtype that may induce autoreactive B cells [111].
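The grade-dependent management steps above amount to a simple decision table. A minimal Python sketch of that mapping is given below for illustration only; the grade-1 entry is our own assumption (it is not stated above), the CTCAE laboratory thresholds defining each grade are not reproduced, and none of this replaces clinical judgement.

```python
# Illustrative summary of the grade-based management described in the text.
# The grade 1 handling ("continue and monitor") is an assumption, not stated above.
MANAGEMENT = {
    1: "Continue ICI; monitor liver tests (assumed)",
    2: "Hold ICI; oral prednisolone 0.5-1 mg/kg/day; rechallenge possible after recovery",
    3: "Stop ICI; IV methylprednisolone 1-2 mg/kg/day; add MMF, azathioprine or "
       "tacrolimus if steroid-refractory; rechallenge possible after recovery",
    4: "Permanently discontinue ICI; IV methylprednisolone 1-2 mg/kg/day; "
       "escalate as for grade 3 if steroid-refractory",
}

def manage_ici_dili(grade: int) -> str:
    """Return the management summarized in the text for a CTCAE grade (1-4)."""
    if grade not in MANAGEMENT:
        raise ValueError("CTCAE grade must be an integer from 1 to 4")
    return MANAGEMENT[grade]

print(manage_ici_dili(2))
```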
DILI Caused by Drugs for Treating LT
Immunosuppressants
LT patients generally require life-long ISs due to the risk of graft rejection after LT [148,149]. The most used ISs are calcineurin inhibitors, mycophenolate mofetil (MMF), and the mammalian target of rapamycin inhibitors (mTORi) [150]. Of the calcineurin inhibitors, cyclosporine inhibits the activation of T cells by binding cyclophilin, whereas tacrolimus binds to intracellular proteins and inhibits calcineurin phosphatase activity [150]. Subsequently, the nuclear factors of activated T cells cannot move to the nucleus, which shuts down the production of IL-2, leading to a decrease in T-cell response [151]. MMF, another type of ISs, inhibits the formation of guanosine monophosphate by blocking inosine monophosphate dehydrogenase and suppressing T-cell proliferation [152,153]. The mechanism of action of mTORi, including sirolimus and everolimus, includes the inhibition of serine/threonine kinase activity, a family of phosphatidylinositol-3 kinases (PI3K), which inhibits the PI3K/Akt/mTOR signaling pathway, the transduction signal of IL-2 receptors, and T-cell proliferation [154,155] (Table 1).
Significant elevations of liver enzymes, including AST and ALT, are not frequent with calcineurin inhibitors and mTORi [156,157]. Generally, the abnormalities in liver function tests caused by calcineurin inhibitors and mTORi are asymptomatic [158]. Mechanistically, calcineurin inhibitors and mTORi are mainly metabolized by the cytochrome P450 system (CYP3A4), which may be associated with DILI. Liver injury can be caused by direct hepatotoxicity or by activation of immune cells induced by their metabolites [156,157]. Only a small proportion of patients receiving MMF treatment experience elevation of serum liver enzymes [159]. MMF is not usually metabolized by cytochrome P450 enzymes, and MMF-induced DILI may be associated with mitochondrial damage and its immunogenic metabolites [160]. As IS-induced DILI is generally mild and self-limiting, dose reduction or pausing the IS can resolve DILI.
Conclusions
The liver contains many innate and adaptive immune cells and, during the development of DILI, reactive metabolites or drug-protein complexes initiate innate and adaptive immune responses, including neutrophils, Kupffer cells, NK cells, CD4 + T cells, CD8 + T cells, and B cells. Multiple activated immune cells damage hepatocytes, leading to DILI. Meanwhile, Treg cells and their functions are suppressed, exacerbating DILI. Understanding the underlying mechanism of DILI may provide clues for future treatment targets for DILI.
The TME, composed of Treg cells, TAMs, cytotoxic T cells, MDSCs, and neutrophils, affects HCC development and responses to TKIs and ICIs. Recently approved ICIs target PD-1/PD-L1 and CTLA-4 to restore the immune response in HCC. An activated immune response can cause irAEs, including DILI, via direct and indirect pathways. DILI caused by TKIs and ICIs is usually asymptomatic and recovers after drug discontinuation. ISs used in LT patients infrequently cause DILI but require regular monitoring of liver function. According to the degree of DILI, appropriate treatment with corticosteroids may be needed in severe cases. Along with advances in the treatment of HCC and LT, it is mandatory that future studies elucidate the specific mechanisms and appropriate management of DILI.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2016-05-14T02:27:41.459Z
|
2016-03-22T00:00:00.000
|
1937327
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcmedgenomics.biomedcentral.com/track/pdf/10.1186/s12920-016-0177-6",
"pdf_hash": "ee6af6830586d28c5c6b33ab25fb6510205d58a0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43179",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "71dbd4eef7f94f6cab51848ba47afd93846f9c7c",
"year": 2016
}
|
pes2o/s2orc
|
Genetic association and stress mediated down-regulation in trabecular meshwork implicates MPP7 as a novel candidate gene in primary open angle glaucoma
Background Glaucoma is the largest cause of irreversible blindness affecting more than 60 million people globally. The disease is defined as a gradual loss of peripheral vision due to death of Retinal Ganglion Cells (RGC). The RGC death is largely influenced by the rate of aqueous humor production by ciliary processes and its passage through the trabecular meshwork (TM) in the anterior part of the eye. Primary open angle glaucoma (POAG), the most common subtype, is a genetically complex disease. Multiple genes and many loci have been reported to be involved in POAG but taken together they explain less than 10 % of the patients from a genetic perspective warranting more studies in different world populations. The purpose of this study was to perform genome-wide search for common variants associated with POAG in an east-Indian population. Methods The study recruited 746 POAG cases and 697 controls distributed into discovery and validation cohorts. In the discovery phase, genome-wide genotype data was generated on Illumina Infinium 660 W-Quad platform and the significant SNPs were genotyped using Illumina GGGT assay in the second phase. Logistic regression was used to test association in the discovery phase to adjust for population sub-structure and chi-square test was used for association analysis in validation phase. Publicly available expression dataset for trabecular meshwork was used to check for expression of the candidate gene under cyclic mechanical stress. Western blot and immunofluorescence experiments were performed in human TM cells and murine eye, respectively to check for expression of the candidate gene. Results Meta-analysis of discovery and validation phase data revealed the association of rs7916852 in MPP7 gene (p = 5.7x10−7) with POAG. We have shown abundant expression of MPP7 in the HTM cells. Expression analysis shows that upon cyclic mechanical stress MPP7 was significantly down-regulated in HTM (Fold change: 2.6; p = 0.018). MPP7 protein expression was also found to be enriched in the ciliary processes of the murine eye. Conclusion Using a genome-wide approach we have identified MPP7 as a novel candidate gene for POAG with evidence of its expression in relevant ocular tissues and dysregulation under mechanical stress possibly mimicking the disease scenario. Electronic supplementary material The online version of this article (doi:10.1186/s12920-016-0177-6) contains supplementary material, which is available to authorized users.
Background
Glaucoma is the second largest cause of blindness after cataract [1] and it is the leading cause of irreversible blindness worldwide. Primary Open Angle Glaucoma (POAG), a multifactorial complex disease, is the most common subtype. The disease is characterized by progressive loss of peripheral vision due to death of retinal ganglion cells and a characteristic abnormal appearance of optic nerve head [2]. Ocular risk factors for this disease are high Intra-Ocular Pressure (IOP), thinner Central Corneal Thickness (CCT) and myopia [3]. High IOP (>21 mm of Hg) is the most important risk factor of POAG, although it is neither necessary nor sufficient for the disease onset [4]. However, the most effective treatment strategy till date is IOP management and it has proven to be beneficial even for normal tension glaucoma patients (IOP < 21 mm Hg) [5,6].
The balance between production of aqueous humor by ciliary body and outflow through the trabecular meshwork determines IOP [4]. It has been shown that highly penetrant genetic mutations in MYOC gene can result in reduced filtration rates of aqueous humor due to protein aggregation and sequestration due to misfolding causing elevation of IOP [7,8].
The genetic etiology of POAG is poorly understood. Family-based linkage analyses have revealed 17 linked loci for POAG, of which six genes have been identified (OMIM 137760). Candidate gene studies have suggested multiple susceptibility loci to be associated with this disease [3]. A total of 11 Genome Wide Association Studies (GWAS) have been reported for POAG to date from different populations of the world. About 14 GWA studies have been reported on glaucoma-related quantitative traits, namely Intra-Ocular Pressure (IOP), Vertical Cup-Disc Ratio (VCDR), Central Corneal Thickness (CCT) and optic disc area [3]. From these studies, a few loci were replicated in populations of different ancestries [3,9,10]. Among these, studies in the Indian population do not show evidence of association for the CDKN2B-AS1 [11] and PLXDC2 loci [12], probably indicating a different genetic structure of this population. There is no data for other loci and no unbiased genetic screen has been performed for POAG from this part of the world. Here, we report a genome-wide search for common variants associated with POAG in a large population residing in the West Bengal state of India.
Selection of study subjects and sample preparation
A total of 364 POAG cases and 365 controls were selected for the discovery phase of the study and 382 cases and 332 controls were selected for the replication cohort. The patients were diagnosed through clinical ocular and systemic examinations. The inclusion and exclusion criteria for samples were the same as reported earlier [13]. Briefly, the patients were recruited if they were positive for 2 out of the 3 criteria, namely, Intraocular pressure (IOP) >21 mm of Hg, glaucomatous field damage and significant cupping of the optic disc. Individuals with ocular hypertension and with any history of inflammation or ocular trauma (past & present) were excluded from this study.
Controls were selected without any history of ocular disease and wherever possible were tested negative for POAG by means of routine eye examination for glaucoma as described above. The study protocol adhered to the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board.
Peripheral blood was collected with EDTA from the POAG patients and controls. A written informed consent was obtained from each individual. Genomic DNA was prepared from fresh whole blood using the PAXgene blood DNA isolation kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol. The DNA was dissolved in TE (10 mM Tris-HCl, 1 mM EDTA, pH 8.0).
Genome-wide genotyping and quality control for discovery phase
In the discovery phase, genome-wide genotyping was done using the Illumina Human660W-Quad chip (Illumina Inc., San Diego, CA, USA) following the manufacturer's protocol. Genotype data for SNPs were obtained from GenomeStudio version 2011.1. A GenTrain score >0.3 was taken as the threshold for SNP cluster quality. Duplicate samples and close relatives (first-degree relatives) were removed by identity-by-state analysis in PLINK (version 1.06). Samples with call rate >98 % and SNPs with call rate >95 % were retained. Subsequently, SNPs with minor allele frequency <0.01 in controls and SNPs which did not follow Hardy-Weinberg equilibrium (HWE p < 0.01) were removed. The genomic inflation factor (λ) in the discovery cohort was 1.06, suggesting population sub-structure. Three outlier samples were removed from the final analysis after multidimensional scaling (see Additional file 1) and p-values were adjusted for remaining stratification using the values of four components as covariates in logistic regression. The inflation factor for the adjusted p-values was observed to be 1.01. We have also performed chi-square based statistics, for which the data are provided in Additional file 2.
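The genomic inflation factor quoted here (λ = 1.06 before and 1.01 after covariate adjustment) is conventionally computed as the median of the observed association χ² statistics divided by the median of the χ² distribution with 1 degree of freedom (about 0.455). The following Python sketch illustrates that calculation under the assumption that a vector of per-SNP χ² statistics is already available; it is not the code used in the study.

```python
import numpy as np
from scipy.stats import chi2

def genomic_inflation(chisq_stats):
    """Lambda_GC: median observed chi-square over the median of chi-square(1 df)."""
    chisq_stats = np.asarray(chisq_stats, dtype=float)
    return np.median(chisq_stats) / chi2.ppf(0.5, df=1)  # chi2.ppf(0.5, 1) ~ 0.455

# Simulated null statistics: lambda should be close to 1.0
rng = np.random.default_rng(0)
print(round(genomic_inflation(rng.chisquare(df=1, size=500_000)), 3))
```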
Linkage disequilibrium (LD)-based SNP clumping
To assess the confidence of association of independent loci, we performed genome-wide LD-based clumping.
Genotype imputation
Further to increase the genomic coverage for the regions we imputed SNP data using MACH (version 1.0.16). The reference populations for imputation were the combined HAPMAP phase 3 data of CEU (Utah residents with European ancestry) and GIH (Gujarati Indians in Texas, Houston) [15]. The representative genotype and allele error rates are given in Additional file 3.
Targeted genotyping and quality control for validation phase
The SNPs with p < 10 −3 and the associated clumped SNPs from 31 clumps after imputation were taken forward for validation in an independent cohort. Thus, 514 SNPs were genotyped using the Illumina GoldenGate genotyping assay (Illumina GGGT assay) in 382 cases and 332 controls from the same population background. As mentioned above for the discovery phase, here also we have performed QC checks for call rate, MAF and HWE. Additionally, we removed six SNPs which showed significant allele frequency difference (Bonferroni-adjusted p-value < 0.05) between controls of discovery and validation cohorts (see Additional file 4). A total of 494 SNPs passed all quality checks (see Methods section) and were tested for association using chi-squared test in 319 cases and 297 controls. A total of 37 samples were genotyped in duplicate to check the accuracy which showed a concordance of >98.8 % (see Additional file 5).
Statistical analysis
Statistical analysis for quality control, the chi-square test of association, logistic regression for adjustment of p-values and multi-dimensional scaling for population stratification were performed using PLINK version 1.07 [14]. LD-based clumping was done using PLINK (version 1.06). Meta-analysis of the discovery and replication phases was done using METAL [16]. Manhattan plots were created in the qqman package of 'R' [17] and the regional association plot was created using LOCUSZOOM [18].
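For readers unfamiliar with the meta-analysis step, the sketch below shows a generic fixed-effects inverse-variance combination of per-phase log odds ratios in Python. This is only one of the schemes METAL supports (it also offers sample-size-weighted z-scores), and the effect sizes and standard errors used here are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def fixed_effects_meta(log_ors, ses):
    """Inverse-variance fixed-effects meta-analysis of per-cohort log odds ratios."""
    log_ors, ses = np.asarray(log_ors, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    beta = np.sum(w * log_ors) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    z = beta / se
    return np.exp(beta), 2 * norm.sf(abs(z))   # combined OR and two-sided p-value

# Hypothetical discovery- and validation-phase estimates for a single SNP
or_meta, p_meta = fixed_effects_meta([np.log(1.8), np.log(1.6)], [0.15, 0.16])
print(f"OR_meta = {or_meta:.2f}, p_meta = {p_meta:.2e}")
```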
Analysis of GEO expression dataset
Microarray expression data of human trabecular meshwork (HTM) cell cultures was taken from publicly available gene expression omnibus (GEO) dataset (GSE14768). HTM cell cultures were obtained from cadaver eyes of three donors, 48 h post-mortem, with no history of eye diseases. The cells were subjected to cyclic mechanical stress and non-stressed parallel control cultures were incubated under the same conditions in the absence of stress. Data were analyzed using GEO2R online tool to check for the differential gene expression and tested for significance using T-test.
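As a rough illustration of this differential-expression test, the snippet below runs a plain two-sample t-test and computes a fold change from hypothetical log2 intensities for one probe; GEO2R itself uses limma's moderated statistics with multiple-testing adjustment, so this is a simplification.

```python
import numpy as np
from scipy import stats

# Hypothetical log2 expression values for an MPP7 probe in three stressed and
# three control HTM cultures (real values would come from the GSE14768 matrix).
control = np.array([8.9, 9.1, 9.0])
stressed = np.array([7.6, 7.7, 7.5])

t_stat, p_value = stats.ttest_ind(stressed, control)
log2_fc = stressed.mean() - control.mean()
print(f"t = {t_stat:.2f}, p = {p_value:.3g}, fold change = {2 ** abs(log2_fc):.1f} (down)")
```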
Results
In this study we recruited a total of 1443 samples from a large population residing in the state of West Bengal in eastern part of India. The average age of patients was 54.32 ± 14.62 years and 51.61 ± 11.40 years for controls. The average IOP for the patients was 21.99 ± 7.88 mm of Hg.
Genome-wide association study reveals association of novel loci for POAG
The allelic association was tested on 347 cases and 354 controls for 521,873 autosomal SNPs (see Additional files 2 and 6) in the discovery phase, and in the validation phase 494 SNPs were tested for association in another 319 cases and 297 controls from the same population (Fig. 1a, see Methods). Meta-analysis of the entire data revealed the most significant association for rs7916852 (p = 5.7x10−7, OR = 1.70) in MPP7. We observed association of 13 additional SNPs of the MPP7 gene in the validation phase (meta-analysis p-values ranging between 10−7 and 10−3; Table 1, see Additional file 2). Two more SNPs, rs10763644 and rs10763643, also showed the same magnitude of significance (Table 1). Interestingly, all 14 SNPs were associated in the discovery phase as part of a single clump (Fig. 2). It is worth highlighting that rs10763643, found to be one of the most associated SNPs in our genetic screen, was originally obtained through imputation of genotypes from HAPMAP and was experimentally validated in our cohort (Fig. 1, Table 1).
We further analysed the haplotypes of 6 tag SNPs out of the 14 MPP7 SNPs (see Additional file 7), and the haplotype with the 'A' allele at the fifth position (GAAGAC) for rs10763643 was found to be associated as a risk haplotype for POAG (p = 6.97x10−5). Two different haplotypes (AAGAGA and AGGAGA) with the other allele ('G' for rs10763643) are associated as protective haplotypes (p = 6x10−4 and 3.1x10−3, respectively). The details are furnished in Additional file 8.
MPP7 is downregulated in human trabecular meshwork cells upon cyclic mechanical stress
We observed abundant expression of MPP7 protein in the HTM cells by western blot (Fig. 3a). The glaucoma phenotype, especially that associated with elevated intra-ocular pressure, is usually linked with restricted outflow of the aqueous humour through the TM, and is thus mimicked in vitro by cyclic mechanical stress on TM cells. A publicly available gene expression dataset at the gene expression omnibus (GSE14768) of primary TM cells from donor eyes without any history of glaucoma revealed MPP7 expression to be significantly down-regulated (FDR-adjusted p-value = 0.018, fold change = 2.6) under cyclic mechanical stress as compared to TM cells without stress (Fig. 3b).
Note to Table 1: P logistic refers to the p-value calculated by logistic regression analysis and P of the validation phase refers to the chi-square p-value. P meta refers to the meta-analysis of P logistic of the discovery phase (see Additional file 2) and P of the validation phase.
MPP7 is highly expressed in the ciliary processes of murine eye
MPP7 protein expression was also analyzed in the murine eye by immunofluorescence. Experiments were performed on C57/BL6 mice at P15 (15 days after birth). High expression was observed in the sclera and ciliary body (Fig. 4a, b). Within the ciliary processes the protein was mainly detected in the internal limiting membrane (Fig. 4d, e).
Association of reported susceptibility loci of POAG in our study cohort
We checked for the association of loci that are already reported to be associated with POAG through GWA studies. Independently, the CDKN2B-AS1 locus was tested and found to be not associated in this population [11]. For all other 10 loci, we found suggestive association of multiple SNPs (see Additional file 9 and 10). Among these, one SNP of AFAP1 is associated in both discovery and validation phase (see Additional file 11). Among the reported SNPs from these loci, we found nominal association of rs4236601 in CAV1/CAV2 and rs7081455 of PLXDC2 (see Additional file 9).
Discussion
Using an unbiased two-stage genome-wide screen, we suggest MPP7 as a potential novel candidate locus for POAG. MPP7 (membrane protein palmitoylated 7) is a member of the Membrane-Associated Guanylate Kinase (MAGUK) subfamily of proteins and facilitates epithelial tight junction formation together with Discs Large Homolog 1 (DLG1), another MAGUK subfamily member [19,20]. In the eye, tight junctions in the non-pigmented ciliary body epithelium are crucial in the barrier function responsible for ultra-filtration of plasma leading to the production of aqueous humour [21]. The rate of aqueous humour production, its content and its outflow through the Trabecular Meshwork (TM) are reported to be disturbed in glaucoma [22][23][24][25]. This has also been one of the main lines of disease management strategy [26] both for high and low tension glaucoma groups. The association and allele frequencies for the MPP7 SNPs were consistent when we divided the patients into high tension (IOP > 23 mm Hg) and low tension (IOP < 19 mm Hg) sub-groups and compared them against the controls (see Additional file 12). This is in agreement with a possible role of MPP7 in influencing aqueous humour dynamics in POAG that is not specific to any sub-type categorized based on IOP. It has recently been shown by in vitro experiments that MAPK pathways in the TM can be activated by ciliary epithelial cells (ODM-2), implicating crosstalk between the TM and the ciliary epithelium [27]. This indicates that a dysfunctional crosstalk can result in dysregulated aqueous humor outflow influencing glaucoma pathogenesis. Interestingly, we found abundant expression of MPP7 protein in human trabecular meshwork cells (Fig. 3a) and the internal limiting membrane of the ciliary processes of the murine eye (Fig. 4). Further, analysis of a publicly available expression dataset revealed that upon cyclic mechanical stress (CMS), MPP7 is significantly down-regulated in the trabecular meshwork (Fig. 3b).
Figure legend fragment: a The MPP7 SNP (rs10763643) having the lowest association in the validation phase is indicated. b Regional association plot for rs10763643 of the MPP7 gene with 100 kb upstream and downstream regions. Genotyped SNPs are denoted as circles while imputed data are denoted as squares. The arrows represent the association of rs10763643 in discovery, validation and the meta-analysis. The left Y-axis represents -log10 p-values, the right Y-axis represents the recombination rate, and the X-axis represents the position of SNPs on chromosome 10 (human genome build 36).
Figure legend fragment: a Genotyped SNPs are represented as diamonds and imputed SNPs as triangles; the blue colour denotes the 14 SNPs of the MPP7 clump which were selected for the validation phase. b Data for the 14 MPP7 clump SNPs in the validation phase; all 14 SNPs have p < 0.05. The X-axis denotes the genomic position of SNPs in Mb (human genome build 36) and the Y-axis represents -log10 p-values. The most significant SNP in the validation phase is marked with an arrow.
In humans, the majority of aqueous humor exits the eye via the conventional outflow pathway, composed of the trabecular meshwork (TM) and Schlemm's canal (SC) [28]. IOP is a dynamic stressor that continuously alters the biomechanical environment to which the ocular parts involved in outflow are exposed. It has been reported that cyclic IOP in perfused anterior segments of human and porcine eyes resulted in a significant decrease in outflow facility, suggesting that this may result from active cellular responses to the cyclic mechanical stimulus [28]. It has also been reported that upon CMS, families of proteins involved in the regulation of cell adhesion and cytoskeletal organization are significantly down-regulated [29]. MPP7 knock-down causes problems in tight junction formation of epithelial cells [19], and our observation of it being down-regulated upon CMS might be an indication of a similar dysfunction of cell-cell interactions. It has been reported that other MAGUK sub-family members (e.g., ZO-1) that regulate tight junction formation are also down-regulated under elevated hydrostatic pressure on HTM cells [30]. MPP7 is known to bind MPP5, a component of the Crumbs complex [19]. The Crumbs complex functions in epithelial cell polarity and has been shown to be involved in retinal degeneration [31]. Whether MPP7 is a crucial member for maintaining tight junctions and cell polarity of the ciliary epithelium will be revealed by further functional studies.
Genetic variants in MPP7 have also been implicated in other diseases. Reports suggest that MPP7 is a susceptibility gene for site-specific bone mineral density and osteoporosis [32,33]. Okamoto N et al. identified microdeletions at 10p11.23-p12.1 overlapping with this gene in children with unknown congenital craniofacial anomalies [34]. MPP7 has also been implicated in intellectual disability and/or multiple congenital anomalies (ID/MCA) through the identification of single-gene de novo copy number variations [35]. MAGUKs have been implicated in synaptic development and plasticity, including processes in the retina [36,37]. This study suggests a novel association of MPP7 with POAG. To confirm these results, further studies need to be undertaken in larger cohorts from different populations.
Figure legend fragment: β-Actin was used as loading control; three independent experiments were done to confirm the expression. b This plot represents the expression of MPP7 gene transcripts in primary HTM cells with (HTM_stressed) and without cyclic mechanical stress (HTM_control) in a publicly available dataset (GSE14768); MPP7 shows down-regulation under cyclic mechanical stress with a significant FDR-corrected p-value of 0.018 and a fold change in stressed HTM of 2.6 (log FC = −1.38).
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2013-01-08T00:00:00.000
|
16952788
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00723-012-0427-5.pdf",
"pdf_hash": "e0fce5a44f3530432efb2a130f7249f8a9a4dfc0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43184",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"sha1": "1791b6d2480a223799ea21eb441db644003e38fd",
"year": 2013
}
|
pes2o/s2orc
|
Analysis of Uniformity of Magnetic Field Generated by the Two-Pair Coil System
In this paper we use a simple analysis based on properties of the axial field generated by symmetrical multipoles to reveal all possible distributions of two coaxial pairs of circular windings, which result in systems featuring zero octupole and 32 pole magnetic moments (six-order systems). Homogeneity of magnetic field of selected systems is analyzed. It has been found that one of the derived systems generates homogenous magnetic field whose volume is comparable to that yielded by the eight-order system. The influence of the current distribution and the windings placement on the field homogeneity is considered. The table, graphs and equations given in the paper facilitate the choice of the most appropriate design for a given problem. The systems presented may find applications in low field electron paramagnetic resonance imaging, some functional f-MRI (nuclear magnetic resonance imaging) and bioelectromagnetic experiments requiring the access to the working space from all directions.
In many applications it is required that the field be highly homogenous over some specified volume. This is of particular importance in magnetic resonance imaging experiments. The systems used for in vivo medical diagnostic studies most often employ solenoidal superconducting electromagnets that are expensive and in certain applications pose disadvantages associated with the limited access to the region of uniform field. In electron paramagnetic resonance imaging (EPRI) [5] and in some functional nuclear magnetic resonance imaging (MRI) experiments [6], electromagnets generating low fields and/or allowing access to the working space from all directions and not just axial are desirable. A classic example of systems satisfying these conditions are air-core assemblies comprising a number of circular or square windings placed co-axially and distributed so that the leading perturbation terms in the field series expansion are eliminated.
In this paper, we consider the system consisting of two coaxial pairs of circular loops with the same radius. The use of properly distributed windings of the same radius makes the radial access to the uniform field possible and does not impose restriction on the axial access, which may sometime occur in systems based on spherical harmonic expansion [4,10] with outer pair of windings of smaller radius or having a single loop in the mid plane.
We use a simple analysis based on properties of the axial magnetic field with the aim of revealing the possible distributions of windings that result in systems featuring zero octupole and 32-pole magnetic moments, i.e., generating a central magnetic field in which the sixth-order term is the first non-vanishing one in the field expansion. The table, formulae, and graphs given in the paper facilitate the choice of the design most suitable for the problem at hand.
The system presented generates an extended volume of uniform magnetic field, which can be accessed from all directions. It may be suitable for very-low-field MRI and EPRI as well as bioelectromagnetic experiments [7]. The high-field system can be easily shielded by confinement within another system of larger radius, which cancels the total dipole moment and reduces the stray field at the expense of a slight decrease in the strength of the very homogeneous central field.
System Configuration Analysis
Consider a system of two circular current loops of the same radius R, with current I flowing in the same sense in each loop. Let the loops encircle the z-axis, be located symmetrically with respect to the origin, and be separated by the distance 2d. Using the Biot-Savart law we calculate that the axial field B_z is
B_z(z) = (μ0 I R²/2) [ (R² + (z − d)²)^(−3/2) + (R² + (z + d)²)^(−3/2) ].   (1)
Let us now focus attention on how the axial field behaves outside the coil system and in its central region. To show the behavior of the distant axial B_z field, we may expand in a Taylor series the function in square brackets of Eq. (1) in inverse powers of z; this yields an expansion of the form B_z = c_0 z^(−3) + c_2 z^(−5) + c_4 z^(−7) + …   (2). It is seen that the distant axial field may be viewed as the superposition of magnetic multipoles at the system origin. Due to the symmetry of the current distribution, all coefficients c_{k−1} for even k vanish. This means that in the field expansion there will be no 2^k poles with k even, just the dipole (k = 1), octupole (k = 3), 32-pole (k = 5) and so on.
The first coefficients (Eqs. (3a)-(3c)) follow directly from this expansion; in particular, c_2 is proportional to (4d² − R²). The first term in the field expansion, c_0 z^(−3), is evidently the axial field due to the dipole at the origin (k = 1). The second term, c_2 z^(−5), is the octupole term (k = 3), the third (k = 5) the 32-pole term, and so on. But we see from Eqs. (2) and (3b) that the octupole term becomes zero if 2d = R (the Helmholtz condition). Now, as Purcell [9] pointed out, the field of any symmetrical multipole cannot be zero along the symmetry axis unless it is zero off the axis everywhere as well. Consequently, the entire octupole field, not just B_z on the axis, vanishes if 2d = R. This means that every Helmholtz pair has zero octupole moment.
By expanding in a Taylor series the function in square brackets of Eq. (1) around the center, we can readily show that the central field generated by the system is an even power series in z, B_z(z) = c_0 + c_2 z² + c_4 z⁴ + …   (4), with coefficients given by Eqs. (5a)-(5c). Owing to the symmetry of the current distribution, the axial magnetic field in the central region of the system is an even function of z. But for the Helmholtz condition (2d = R) the c_2 coefficient vanishes, which maximizes the volume and uniformity of the field in the central region of the system. It is possible to make this field still more uniform and to extend its useful volume by using additional pairs of coils, which help to nullify the next coefficient in the field expansion [1].
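The vanishing of the c_2 coefficient at the Helmholtz condition can be checked symbolically. The following Python/SymPy sketch (our own, with the constant prefactor μ0 I/2 dropped because it does not affect the zero) differentiates the on-axis pair field twice at the centre and solves for the spacing:

```python
import sympy as sp

R, d, z = sp.symbols('R d z', positive=True)

# On-axis field of a symmetric pair of loops at z = +/- d (prefactor mu0*I/2 omitted)
Bz = R**2 / (R**2 + (z - d)**2)**sp.Rational(3, 2) \
   + R**2 / (R**2 + (z + d)**2)**sp.Rational(3, 2)

# Coefficient of z^2 in the central expansion (odd orders vanish by symmetry)
c2 = sp.diff(Bz, z, 2).subs(z, 0) / 2
print(sp.solve(sp.Eq(c2, 0), d))   # expected: [R/2], i.e. 2d = R
```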
To eliminate both the octupole and 32-pole terms, we consider a system of two coaxial pairs of circular current loops of the same radius R, i.e., a four-coil system with numbers of Amp-turns (NI)_1 and (NI)_2, respectively, and current flowing in the same sense in each loop. Let the loops be located symmetrically with respect to the origin of the z-axis at distances 2d_1 and 2d_2, respectively. From the Biot-Savart law the axial field B_z generated by the system is the sum of the two pair fields,
B_z(z) = (μ0 R²/2) Σ_{i=1,2} (NI)_i [ (R² + (z − d_i)²)^(−3/2) + (R² + (z + d_i)²)^(−3/2) ].   (6)
Expanding the function in square brackets in a Taylor series around the center gives an expansion whose coefficients are the sums c^(1)_{k−1} + c^(2)_{k−1} of the coefficients in the multipole field expansions of the respective coil pairs (Eqs. (7) and (8a)-(8c), written by analogy to Eqs. (5a)-(5c)). It is seen that both the octupole (k = 3) and 32-pole (k = 5) terms in the field expansion will be zero if
c^(1)_2 + c^(2)_2 = 0   (9a)   and   c^(1)_4 + c^(2)_4 = 0.   (9b)
The set of Eqs. (9a, 9b) contains three variables, {d_1, d_2, (NI)_1/(NI)_2}, one of which can be treated as a parameter. In our calculation the chosen parameter has been the Amp-turn ratio.
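A numerical sketch of this procedure is given below: with the Amp-turn ratio fixed (here at 9/4, one of the values discussed later), the two conditions are imposed on the second and fourth z-derivatives of the on-axis field at the centre and solved for d1/R and d2/R. Units are normalized (R = 1) and the starting guess is ours; this is an illustration of the method, not the authors' code.

```python
import sympy as sp

z, d1, d2 = sp.symbols('z d1 d2', positive=True)
R = 1  # loop radius (normalized)

def pair(d, ampturns):
    """On-axis field of a coaxial loop pair at z = +/- d (constant prefactor dropped)."""
    return ampturns * (R**2 / (R**2 + (z - d)**2)**sp.Rational(3, 2)
                       + R**2 / (R**2 + (z + d)**2)**sp.Rational(3, 2))

ratio = sp.Rational(9, 4)                 # (NI)2 / (NI)1, treated as the parameter
Bz = pair(d1, 1) + pair(d2, ratio)

eq2 = sp.diff(Bz, z, 2).subs(z, 0)        # kill the z^2 (octupole-related) term
eq4 = sp.diff(Bz, z, 4).subs(z, 0)        # kill the z^4 (32-pole-related) term

print(sp.nsolve((eq2, eq4), (d1, d2), (0.25, 0.95)))   # expect about 0.245 and 0.945
```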
The numerical solutions of Eqs. (9a, 9b) may be divided into two families: the first contracts the length of the coil system, and the second extends it.
Numerical Results
Table 1 (coil spacings and design specifications for contracting systems) lists the Amp-turn ratios and the corresponding coil spacings obtained for the family of solutions contracting the coil length. Using the LS approximation we have found that these solutions may be approximated by:
d1/R = 0.25153 + 0.06065 e^(−t) − 0.00173 t − 0.00001 t²,   (10a)
d2/R = 0.96173 − 0.06781 e^(−t) − 0.00466 t + 0.22810 t e^(−t) + 0.00003 t²,   (10b)
(NI)2/(NI)1 = 2.12 + 0.02 t   (10c)
for 1 ≤ t ≤ 82. The obtained solutions are shown graphically in Fig. 1. For the family extending the length, analogous approximations (Eqs. (11a)-(11c)) hold for 1 ≤ t ≤ 394; the latter solutions are shown graphically in Fig. 2a, b. The derived solutions allow easy construction of a four-coil system featuring zero octupole and 32-pole terms. For the family of solutions contracting the coil length, one set of the derived variables corresponds well to that obtained previously by means of the Bessel-function formalism by Lee-Whiting [8]. Hereafter, we consider in more detail the (NI)2/(NI)1 = 9/4 system, which has the potential of being power-supplied in series. For this Amp-turn ratio the windings are positioned at d1/R = 0.24483 and d2/R = 0.94485. To compare the performance of the system with that of Lee-Whiting [8], we have analyzed the spatial distribution of the magnetic field generated by both systems. The evaluation of the field homogeneity involved the use of the Biot-Savart relation applied to small segments of the windings. Results of the analysis are presented in Fig. 3a, b, which show contours of constant magnetic field relative to the field at the center (field error contours), defined as (B_given point − B_center)/B_center, plotted at ±1, ±5, and ±10 ppm intervals for the ideal eight-order Lee-Whiting design and our 9/4 system, respectively. It is seen that the field error contours of both systems compare excellently and the volume of the homogeneous magnetic field is similar. In Ref. [11] the Amp-turn ratio of 2.2604 given by Lee-Whiting is approximated by 9/4 (2.25) and the winding placements are left unchanged. Such an approximation, expressing the numerator and denominator of the Amp-turn ratio by integers, allows the design of systems having coils connected in series and fed by one common source, which is most preferable in practical applications. Unfortunately, as we show in Fig. 4, this small change of only one parameter has a significant impact on the system performance, leading to a very substantial reduction of the generated field homogeneity.
In Table 1 we specify the coil positions with an accuracy of 5 significant digits. With the same accuracy the location of the windings is given in the Lee-Whiting paper. Such a precision, however, would be difficult to achieve when constructing the real system. To evaluate the influence of the location of the coils on the field homogeneity, we have calculated the field error contours of both systems assuming the coil placement specified with an accuracy of 3 significant digits, i.e., we have assumed d1/R = 0.243 and d2/R = 0.941 for the Lee-Whiting system and d1/R = 0.245 and d2/R = 0.945 for the system presented in this paper. The results of the calculation are plotted in Fig. 5a, b for the Lee-Whiting arrangement and our 9/4 system, respectively. When comparing the pair of Figs. 3a and 5a with the pair 3b and 5b, we see that the 9/4 restricted system generates a greater volume of homogeneous magnetic field and is less sensitive to the precision of the location of the coils. The real assemblies with coils connected in series consist of many turns of wire fed with equal current, and their Amp-turn ratio (NI)2/(NI)1 depends solely on the number of windings of the outer and inner coils. As an example of such assemblies, we have analyzed arrangements comprising nine circular current loops in the outer coils and four loops in the inner coils. The loops are deposited side by side around z coordinates corresponding to the Lee-Whiting and our 9/4 contracted designs. The expected distributions of the magnetic field generated by the two designs are shown in Fig. 6a, b, respectively. It is seen that the setup suitable for the serial power supply based on our 9/4 system produces a much greater volume of homogeneous magnetic field than that using the Lee-Whiting coordinates.
The setup based on the 9/4 contracted system, having unit radius (R = 1 m) and fed with a current of 1 A, generates a magnetic field whose value at the center is 89.5 × 10⁻⁷ T (89.5 mG). The field homogeneous to 50 ppm, which meets, e.g., EPRI requirements, extends axially to 0.22R and radially to 0.3R from the center. In the case of the setup that uses the Lee-Whiting coordinates, the field value is 89.9 × 10⁻⁷ T and the field homogeneous to 50 ppm extends axially to 0.11R and radially to 0.17R from the center.
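The central field value and the axial extent quoted above can be checked approximately with the elementary on-axis loop formula, treating the turns as co-located at the nominal positions (an idealization of the 'side by side' placement described above, so the numbers are only indicative):

```python
import numpy as np

MU0 = 4e-7 * np.pi
R, I = 1.0, 1.0          # coil radius [m] and current per turn [A]

def bz_pair(z, turns, d):
    """On-axis field of a symmetric pair of circular loops at z = +/- d."""
    return MU0 * turns * I * R**2 / 2 * (
        (R**2 + (z - d)**2) ** -1.5 + (R**2 + (z + d)**2) ** -1.5)

def bz_total(z):         # 9/4 contracted design: 4 inner turns, 9 outer turns
    return bz_pair(z, 4, 0.24483 * R) + bz_pair(z, 9, 0.94485 * R)

z = np.linspace(0.0, 0.4 * R, 4001)
err_ppm = (bz_total(z) / bz_total(0.0) - 1.0) * 1e6
print(f"B(0) = {bz_total(0.0):.3e} T")                      # about 8.95e-6 T
print(f"|error| <= 50 ppm on axis out to z ~ {z[np.abs(err_ppm) <= 50].max():.2f} R")
```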
The magnetic field homogeneity has been analyzed for all systems listed in Table 1. The maximum volume of the homogeneous field is generated by the (NI)2/(NI)1 = 2.26 system. When the Amp-turn ratio changes, the volume of the uniform magnetic field slowly decreases. For the last system listed in Table 1 the field homogeneous to 1 ppm extends axially to 0.11R and radially to 0.13R. The value of the field is 142 × 10⁻⁷ T.
Shielding of the System
The distant (stray) field of the two-pair coil system may be easily shielded. To show the behavior of the distant axial B_z, we may expand Eq. (6) in inverse powers of z. The first three terms in the expansion are readily found to be those for k = 1, k = 3 and k = 5, with coefficients given by Eqs. (13a)-(13c). So, we see that the external (stray) field of the system is dominated by the dipolar term and decreases as z⁻³. However, in some applications it is required that the central field be very homogeneous and at the same time the stray field fall off rapidly. Could we somehow suppress the dipolar term of the four-coil system? Yes, and it is rather easy to do. We can surround the system with another one of larger radius, with opposed currents arranged so that the dipolar moments of the systems, given by Eqs. (13a, 13b, 13c), cancel.
For example, if the outer system has twice the radius of the inner one, we need the current flowing in it to be four times smaller than that in the inner system to achieve cancellation. Adding the outer system cancels the total dipole moment without affecting the homogeneity of the central field, but slightly reduces the field value; for the case considered the reduction amounts to 12.5%. The behavior of the stray field is shown in Fig. 7: it falls off to zero faster when the whole nested system is energized compared with the inner four-coil system alone.
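The 12.5% figure follows from simple scaling, assuming the outer shield is a geometrically scaled copy of the inner system: the magnetic moment of a loop system scales as I·R², while the central field of a uniformly scaled system scales as I/R. A short check of that arithmetic:

```python
# Outer shield: twice the radius, one quarter of the current, opposite sense.
R_in, I_in = 1.0, 1.0
R_out, I_out = 2.0 * R_in, I_in / 4.0

print("dipole moment ratio (outer/inner):", (I_out * R_out**2) / (I_in * R_in**2))  # 1.0 -> cancels
print("fraction of central field lost:   ", (I_out / R_out) / (I_in / R_in))        # 0.125 -> 12.5 %
```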
It is clear that the confinement provides a simple system with the reduced stray field, which in some circumstances may prove to be useful.
Conclusions
The possible distributions of two coaxial pairs of circular windings resulting in systems featuring zero octupole and 32-pole moments are given. The analysis given in the paper shows that one of the derived systems, which has an Amp-turn ratio of 9/4, generates a homogeneous magnetic field whose volume compares excellently with that of the eight-order Lee-Whiting design, but is less sensitive to the impact of the current distribution and the precision of the winding locations. Moreover, the system may be more easily adapted for a serial power supply. The ideal 9/4 system of unit radius fed with a 1 A current generates a magnetic field of 89.5 mG, which is about 10 times stronger than that yielded by the Helmholtz pair. The field is proportional to the number of Amp-turns and scales with R⁻¹, hence systems of smaller radius and/or carrying higher currents will generate higher magnetic fields. Yet, to avoid heating problems they will require thermal and current stabilization. This refers particularly to long-time experiments. Based on the design presented, we plan to build a 90 G magnet for micro-EPRI experiments, which should shed some light on the problems associated with practical implementation of the system. Finally, we would like to add that the high-field version of the system can be easily shielded by confinement of the system inside another one with a bigger radius. This exerts no significant influence on the center field, but greatly reduces the stray field outside.
|
v3-fos-license
|
2022-02-28T16:10:25.561Z
|
2022-02-01T00:00:00.000
|
247154108
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.inoche.2022.109284",
"pdf_hash": "efa3e051da4a180998505d8739b03efe549f9f92",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43187",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"sha1": "cb7052b8be59560ddda11524c369098284756f44",
"year": 2022
}
|
pes2o/s2orc
|
Selective binding of ReO4− and PtCl4 2− by a Pd2L4 cage in water
Contents: Titration with KReO4; Titration with K2PtCl4; Titration with cisplatin 5; Titration with oxaliplatin 6; Titration with nedaplatin 7; Titration with biotin 8; Titration with sodium chloride; Titration with sodium acetate; Titration with caffeine 9; Titration with uridine 10; Titration with cytidine 11; Titration with phenylalanine 12; Titration with glutathione 13; Stability study with histidine 14
Section S1. Materials and methods
All solvents and chemicals were purchased from commercial suppliers and used without further purification. Cage 3 was synthesized as previously reported. [1]
NMR
Nuclear Magnetic Resonance (NMR) spectra were recorded on a Bruker DRX 500 operating at 298 K for NOE and 1H NMR titration experiments.
Acidity measurements
The acidity was measured with a SI Analytics Handylab 100 with the pH electrode Blueline 14 pH (pH 0-14; -5-100 °C; 3 mol/L KCl referenced; catalogue number 285129140) or the pH electrode ScienceLine micro N 6003 (pH 0-14; -5-100 °C; Ag/AgCl referenced; catalogue number 285105176). The pH meter was calibrated beforehand with Sigma-Aldrich/Merck buffered reference standards of pH 4.00 (red colour-coded; catalogue number B5020), pH 7.00 (yellow colour-coded; catalogue number B4770) and pH 10.00 (blue colour-coded; catalogue number B4895). The acidity in D2O was measured with the electrode calibrated in H2O and reported as pH* as detailed by Krężel and Bal. [2]
Titration studies
A 0.83 mM solution of 3 in D2O was prepared, the pH was adjusted to pH 7.0-7.4, sonicated for 30 minutes and then used as such. [1] During the titration, known aliquots of the stock solution of titrant in D2O were added to an NMR tube containing 600 µL of this 0.83 mM solution of 3 in D2O and a 1H NMR spectrum was recorded.
Association constants (Ka) were determined by monitoring the change in chemical shift (Δδ) for a selected hydrogen resonance of 3 and fitting these shifts to a 1:1 binding model using non-linear least-squares fitting implemented in Excel or HypNMR. [3]
Computations
All models were generated manually and geometry optimized with Spartan 2016 without any (geometrical) constraints. Following an initial MMFF optimization, the resulting coordinates were subjected to a computation using density functional theory (DFT) at the ωB97X-D / 6-31G* level of theory with an explicit water solvation model as implemented in Spartan 2016. Energies in kcal·mol−1 are derived from the energies in hartrees, ignoring entropy, by the simple subtraction (E_adduct − E_anion − E_cage) × 627.509608.
Section S2. NMR binding studies
Titration with KReO4
Figure S1: Top: 1H NMR spectra and assignment of a binding study of cage 3 with KReO4 in D2O at pH* 7.0. The guest stock solution concentration was 0.021 M. Initial concentration of host = 0.83 mM. The vertical red dashed lines were added as a guide to the eye. Bottom: HypNMR binding analysis following hydrogen signals d, f, a and c of the cage. The chemical shifts could be fitted to a 1:1 binding model (left) with Ka = 434 M−1 (rsd = 0.6%) with a reasonable fit of r2 = 0.99963 on all 120 data points. This could be improved to r2 = 0.9993 by assuming a 1:3 binding model (right) with Ka 1:1 = 434 M−1 and the next constants set to 10 M−1. A 1:3 stoichiometry implies one strongly bound interior [ReO4]− and two more loosely associated [ReO4]− complexes to further compensate the charges of the cage (likely on the cage's exterior). On this basis the 1:1 binding is assessed as 434 M−1. The modelled species distributions are also shown as coloured lines with 'Host' = green, 'Host-Guest' = blue and 'Host-Guest2' = brown.
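For completeness, a small Python sketch of such a 1:1 fast-exchange fit is shown below; it mirrors what HypNMR or an Excel solver does, but it is not the software used here, and the titration points are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

H0 = 0.83e-3   # total host (cage 3) concentration in M, as in the titrations

def one_to_one(G0, Ka, dd_max):
    """Observed shift change for 1:1 host-guest binding under fast exchange."""
    b = H0 + G0 + 1.0 / Ka
    HG = (b - np.sqrt(b**2 - 4.0 * H0 * G0)) / 2.0   # bound-complex concentration
    return dd_max * HG / H0

# Hypothetical titration data: total guest concentration [M] vs. shift change [ppm]
G0 = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 4.0, 8.0]) * 1e-3
dd = np.array([0.000, 0.007, 0.016, 0.027, 0.044, 0.062, 0.078])

(Ka, dd_max), _ = curve_fit(one_to_one, G0, dd, p0=[500.0, 0.1])
print(f"Ka ~ {Ka:.0f} M^-1, limiting shift ~ {dd_max:.3f} ppm")
```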
Titration with K2PtCl4
Figure S2: Top: 1H NMR spectra and assignment of a binding study of cage 3 with K2PtCl4 in D2O at pH* 7.0. The guest stock solution concentration was 0.020 M. Initial concentration of host = 0.83 mM. The vertical red dashed lines were added as a guide to the eye. Bottom: HypNMR binding analysis following hydrogen signals a, c, d, f and g of the cage. The chemical shifts could be fitted to a 1:1 binding model (left) with Ka = 6,901 M−1 (rsd = 2.1%) with a reasonable fit of r2 = 0.9738 on all 90 data points. This could be improved to r2 = 0.9959 by assuming a 1:3 binding model (right) with Ka 1:1 = 31,600 M−1 (10^4.5) and the next constants set to 10 M−1. A 1:3 stoichiometry implies one strongly bound interior [PtCl4]2− and two more loosely associated [PtCl4]2− complexes to further compensate the charges of the cage (likely on the cage's exterior). On this basis the 1:1 binding is assessed as being in the order of 10^4 M−1. The modelled species distributions are also shown as coloured lines with 'Host' = green and 'Host-Guest' = blue.
Titration with cisplatin 5
Figure S3: 1H NMR spectra and assignment of a binding study of cage 3 with cisplatin (5) in D2O at pH* 7.3. The guest stock solution concentration was 8.3 mM (due to solubility limitations). Initial concentration of host = 0.91 mM. The vertical red dashed lines were added as a guide to the eye. Bottom left: the very small shifts observed at the end of the titration could be modelled (not fitted) with HypNMR to a 1:2 model with stepwise constants of 1.3 and 2.5 M−1. The modelled species distributions are also shown as coloured lines with 'Host' = green, 'Host-Guest' = blue, and 'Host-Guest2' = brown. Due to the lack of saturation, we interpret these shifts as the onset of genuine 1:1 binding with an affinity close to the detection limit of about 3 M−1 and report such shifts as 'not binding' in the paper. That these shifts are very small is illustrated by the rescaled plot in the bottom right, which can be contrasted with the shifts observed with the strongly binding K2[PtCl4] with Δδmax = ±0.1 p.p.m. (see Figure S2).
Titration with oxaliplatin 6
Figure S4: 1H NMR spectra and assignment of a binding study of cage 3 with oxaliplatin (6) in D2O at pH* 7.1. The guest stock solution concentration was 10.1 mM (due to solubility limitations). Initial concentration of host = 0.91 mM. The vertical red dashed lines were added as a guide to the eye. Bottom left: the very small shifts observed at the end of the titration could be modelled (not fitted) with HypNMR to a 1:2 model with stepwise constants of 1.3 and 2.5 M−1. The modelled species distributions are also shown as coloured lines with 'Host' = green, 'Host-Guest' = blue, and 'Host-Guest2' = brown. Due to the lack of saturation, we interpret these shifts as the onset of genuine 1:1 binding with an affinity close to the detection limit of about 3 M−1 and report such shifts as 'not binding' in the paper. That these shifts are very small is illustrated by the rescaled plot in the bottom right, which can be contrasted with the shifts observed with the strongly binding K2[PtCl4] with Δδmax = ±0.1 p.p.m. (see Figure S2).
Titration with nedaplatin 7
Figure S5: 1H NMR spectra and assignment of a binding study of cage 3 with nedaplatin (7) in D2O at pH* 7.4. The guest stock solution concentration was 0.010 M. Initial concentration of host = 0.24 mM. The vertical red dashed lines were added as a guide to the eye. Bottom left: the very small shifts observed at the end of the titration could be modelled (not fitted) with HypNMR to a 1:2 model with stepwise constants of 1.3 and 2.5 M−1. The modelled species distributions are also shown as coloured lines with 'Host' = green, 'Host-Guest' = blue, and 'Host-Guest2' = brown. Due to the lack of saturation, we interpret these shifts as the onset of genuine 1:1 binding with an affinity close to the detection limit of about 3 M−1 and report such shifts as 'not binding' in the paper. That these shifts are very small is illustrated by the rescaled plot in the bottom right, which can be contrasted with the shifts observed with the strongly binding K2[PtCl4] with Δδmax = ±0.1 p.p.m. (see Figure S2). Binding of up to four equivalents of chloride seems reasonable to compensate the charges of both palladium cations. Nevertheless, the exact stoichiometry remains uncertain, but from these models one can infer that the 1:1 binding is in the order of 10-20 M−1.
|
v3-fos-license
|
2024-05-29T15:03:28.267Z
|
2024-05-25T00:00:00.000
|
270076620
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1648-9144/60/6/865/pdf?version=1716646810",
"pdf_hash": "f4679057e071c792220469ec83dc28706d6e45d8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43188",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "1779bfdb267078440c347492dcd8b20a9e2880c7",
"year": 2024
}
|
pes2o/s2orc
|
Custom-Made Artificial Iris and Toric-Intraocular Lens Intrascleral Flange Fixation: A Case Report
Different techniques for artificial iris implantation with or without an intraocular lens, depending on lens status, are described in the literature. We describe a surgical technique for a custom-made artificial iris and toric-intraocular lens intrascleral flange fixation. We modified the “Backpack” artificial iris implantation surgical technique to facilitate an accurate alignment of the toric-intraocular lens in a patient with aphakia, aniridia, and high asymmetric astigmatism secondary to blunt trauma. Two months after the surgery, uncorrected visual acuity was 20/30, corrected to 20/25 with a refraction of −2.00 in the diopter sphere with no residual astigmatism. The artificial iris implant and toric-intraocular lens were well-centered. The patient was satisfied with the visual and cosmetic outcomes. This procedure, however, is not complication-free as our patient developed uveitis and increased intraocular pressure during the postoperative period, which was treated successfully.
Introduction
Iris defects can be congenital or acquired. In traumatic cases, the extent of iris defects can range from traumatic mydriasis and partial iris loss to complete aniridia. The eye often has additional alterations, such as wound scarring, aphakia or a traumatic cataract, corneal astigmatism, and possibly retinal damage [1]. Colored contact lenses, corneal tattooing, or merely sunglasses are among the conservative treatments [2,3]. In cases of iris defects that are limited to approximately two clock hours, reconstruction of the pupil using iris sutures can be considered. The artificial iris (AI) is a relatively new solution [4]. The implant can be fixated using several techniques, with or without intraocular lens (IOL) implantation. While there is a variety of existing techniques for AI implantation, the reported technique addresses the challenge of precise toric-IOL axis alignment when the IOL is fixated on the posterior aspect of the AI.
Material and Methods
A 42-year-old male patient had sustained traumatic penetration of the right eye globe due to blunt trauma. The patient had no previous medical or ocular history. An initial examination revealed a ruptured sclera with loss of the iris diaphragm and crystalline lens. The patient underwent immediate primary closure of the globe, followed by a pars plana vitrectomy with laser barrage 2 weeks later due to a vitreous hemorrhage. At one month, uncorrected visual acuity (UCVA) was one-meter finger count, corrected to 20/20-partial with a refraction of +10.00 D/−2.25 D × 65°. On biometry, a +4.05 D @ 165° astigmatism was measured, compared to only +0.59 D @ 133° in the other eye. Tomography showed a regular bow-tie astigmatism of +3.78 D @ 167°, compatible with the biometry values. In addition to being aphakic, the patient also suffered from photophobia and glare secondary to the aniridia. After a discussion with the patient, a toric intraocular lens (IOL) correction was chosen, along with an AI implant. The subsequent surgery was conducted ten months after the primary repair to allow complete wound healing. Biometry and tomography measurements remained stable.
The process for assembling an AI implant-toric-IOL complex is as follows: A trephine was used to cut the customized AI implant (Customflex®, HumanOptics, Erlangen, Germany) according to the patient's white-to-white measurements, followed by an iridectomy (Supplemental Video S1). A small amount of cohesive ophthalmic viscosurgical device (OVD, Biolon®, Bio-Technology General Ltd., Be'er Tuvia, Israel) was placed on the back surface of the AI implant to facilitate stable placement of a one-piece toric IOL prior to suturing. First, the IOL (Acrysof® IQ Toric SN6AT9 +18.00 D; Alcon, Vernier, Switzerland) was fixated to the AI implant using 10-0 polypropylene sutures (PROLENE®, Ethicon LLC, San Lorenzo, PR, USA) that ran within the iris implant material at the optic-haptic junction, keeping it discrete from the front surface of the AI, and around the IOL haptics on both sides. The IOL was set on the AI implant so that the iridectomy could be positioned superiorly. Second, each suture was threaded beneath its haptic to form another knot, tying the suture to the haptic itself, thereby preventing slippage. In the same manner, another suture was added to each distal end of the haptics so that the haptics' edges were restricted from chafing the ciliary body (Figure 1A).
Marking the toric-IOL axis: The axis was marked at the AI rim in accordance with the manufacturer's imprinted three-point axis markings of the IOL. The distance between the mark made on the AI rim and the edge of the iridectomy was then measured for later reference on both sides (Figure 1B).
The intrascleral flange fixation of the AI implant-toric-IOL complex: A 27-G needle was used to pierce through the peripheral posterior surface of the AI implant, and a 6-0 polypropylene suture (PROLENE®, Ethicon LLC, San Lorenzo, PR, USA) was threaded into the needle and through the implant. A flange was created at the anterior surface of the implant using a low-temperature cautery (Kirwan Surgical Products LLC, Marshfield, MA, USA). The same procedure was repeated for the remaining quadrants of the AI.
The locations of the main incisions and the IOL axis were marked on the eye according to the 0-, 180-, and 270-degree pre-marks that were made while the patient was in a sitting position. The correct location for placing the iridectomy edges (implying the correct axis position) was marked using the reference distance measured earlier, the distance between the IOL axis and the edges of the iridectomy on both sides (Figure 1C). The implant was then placed on the cornea according to the iridectomy marks, and the locations for the four flanges were marked on the sclera, about 1.5 mm posterior to the limbus (Figure 1D). After peritomy, a superior 3 mm scleral tunnel was created using a crescent knife and a keratome (MANI, Inc., Tochigi, Japan). Four bent 27-G needles were passed through scleral tunnels into the eye at the 4 locations of the scleral marks for proper AI placement to ensure toric-IOL alignment. The 6-0 polypropylene sutures were then introduced into the eye through the main incision and carefully directed into their respective 27-G needles to be led outside the eye. Primary flanges were created at the tip of each suture. The entire AI implant-toric-IOL complex was folded using forceps and inserted into the eye. After the complex was well centered inside the eye, the scleral tunnel was sutured using a 10-0 nylon suture (ETHILON®, Ethicon LLC, San Lorenzo, PR, USA), and the fixating sutures were gently tensioned and trimmed, creating new flanges.
Results
Two weeks after the surgery, the patient developed anterior uveitis with increased intraocular pressure (IOP), which was treated with topical and oral steroids with the resolution of the inflammatory response. The increased IOP did not respond to maximal medical therapy, and a filtration device (PRESERFLO™ MicroShunt, InnFocus, Inc., Miami, FL, USA) was placed successfully.
Two months after the surgery, UCVA was 20/30, corrected to 20/25 with a refraction of −2.00 D sphere and no residual astigmatism. IOP was 9 mmHg with an elevated active bleb and a deep and quiet anterior chamber, and the AI implant and toric IOL were well centered. The patient was satisfied with the visual and cosmetic outcomes (Figure 2).
Ten months after the surgery, the patient developed localized corneal edema due to the proximity of the shunt to the cornea, and the shunt was removed. Target IOP was maintained using medical therapy alone until the last follow-up visit, four months after shunt removal.
Discussion
The rehabilitation of an aphakic, aniridic eye after penetrating trauma is challenging. It is an even greater challenge when there is high asymmetric corneal astigmatism in such an eye, requiring proper alignment of a toric IOL. The described technique provided good functional outcomes and a cosmetically appealing result while addressing these challenges. The procedure is not complication-free. Complications include elevated IOP, secondary glaucoma, persistent inflammation, retinal detachment, and corneal decompensation [5][6][7]. Elevated IOP is the most frequent adverse effect after AI implantation, often due to severe alterations in the globe caused by trauma, anterior synechiae, or narrow angles [5]. During the postoperative period, our patient developed uveitis, which was controlled with topical steroid treatment, and elevated IOP, requiring filtration surgery. In this case, the mechanism of the high IOP was probably attributable to the uveitis. Angle closure is less likely, as an iridectomy was performed on the AI implant. Also, a small crescent-shaped space remained between the temporal edge of the implant and the temporal angle structures (Figure 2B). It is possible that sustained control of the postoperative inflammatory response, followed by the resolution of trabeculitis, allowed partial recovery of the trabecular meshwork architecture and function, enabling the patient to maintain target IOP using medical therapy alone after shunt removal.
The patient had no previous medical history, but one should bear in mind that acute anterior uveitis can be a manifestation of systemic disease. For example, ankylosing spondylitis (AS), typically occurring among young men, is an important etiology of acute anterior uveitis. Uveitis is the most common extra-articular manifestation of AS, and prompt treatment with steroid-sparing agents should be instituted in such cases to prevent glaucoma [8].
The mainstay of the presented technique is the "Sandwich" or "Backpack" AI implantation, in which the IOL is sutured to the back of the AI using the IOL haptics and inserted into the eye as a folded sandwich [9]. The advantages of this technique are the small surgical incision, owing to a foldable AI-IOL package, and minimal additional trauma, as most of the surgery takes place outside the eye. In addition, to attain a sutureless scleral fixation of the AI-IOL complex, we adopted the flanged fixation technique developed by Canabrava et al. [10]. In this regard, it is important to keep in mind the risk of postoperative endophthalmitis secondary to flange erosion or extrusion and to implement the appropriate steps for flange creation and coverage [11].
Conclusions
We present a technique for the anatomic and visual rehabilitation of an eye with aphakia, aniridia, and significant asymmetric corneal astigmatism, which facilitates the exact alignment of a toric intraocular lens along with a custom-made artificial iris. Nevertheless, this procedure is not complication-free, and this may impair its long-term effectiveness.
Author Contributions: R.M. was a major contributor in writing the manuscript. E.M.-B. was the assistant surgeon and provided data, images, and video of the surgical technique. G.K. was the chief surgeon who modified and depicted the surgical technique. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement:
Written informed consent for patient information and images to be published was provided by the patient.
Figure 1. Toric IOL fixed to the back of the AI implant with sutures at the optic-haptic junction and at each distal end of the haptics (A). Measuring the distance between the IOL axis and the edge of the iridectomy (B). Marking the distance between the IOL axis and the edges of the iridectomy on the sclera (C). Marking the locations of the four flanges on the sclera in accordance with the iridectomy marks (D).
Figure 2. Preoperative (A) and two-month postoperative (B,C) images after implantation of the AI with toric IOL in the right eye.
|
v3-fos-license
|
2022-02-16T06:24:10.560Z
|
2022-02-15T00:00:00.000
|
246826223
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/0886022X.2022.2039194?needAccess=true",
"pdf_hash": "e132545c9683fc3ab096cee7e8cf6137dd07b351",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43189",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "71e354fb884344852f4f7206e0d7af63c17033d5",
"year": 2022
}
|
pes2o/s2orc
|
MicroRNA-122-5p ameliorates tubular injury in diabetic nephropathy via FIH-1/HIF-1α pathway
Abstract Diabetic kidney disease (DKD) affects approximately one-third of diabetes patients; however, the specific molecular mechanism of DKD remains unclear, and there is still a lack of effective therapies. Here, we demonstrated a significant increase of microRNA-122-5p (miR-122-5p) in renal tubular cells in STZ-induced diabetic nephropathy (DN) mice. Moreover, inhibition of miR-122-5p led to increased cell death and severe tubular injury and promoted DN progression following STZ treatment in mice, whereas supplementation with a miR-122-5p mimic had kidney-protective effects in this model. In addition, miR-122-5p suppressed the expression of factor inhibiting hypoxia-inducible factor-1 (FIH-1) in in vitro models of DN. A microRNA target reporter assay further verified FIH-1 as a direct target of miR-122-5p. Generally, FIH-1 inhibits the activity of HIF-1α. Our in vitro study further indicated that overexpression of HIF-1α by transfection of a HIF-1α plasmid reduced tubular cell death, suggesting a protective role of HIF-1α in DN. Collectively, these findings may unveil a novel miR-122-5p/FIH-1/HIF-1α pathway that can attenuate DN progression.
Introduction
Diabetic kidney disease (DKD) is the leading cause of end-stage renal disease (ESRD) worldwide [1,2]. In addition, DKD is also the leading cause of morbidity and mortality in individuals with diabetes [3]. Generally, DKD is characterized by glomerular hypertrophy, proteinuria, decreased glomerular filtration, and renal fibrosis resulting in the loss of renal function [1]. Accordingly, more than half of patients with type 2 diabetes and one-third of those with type 1 diabetes develop DKD, and DKD is a prime reason for dialysis in many developed countries [4]. Thus, DKD poses a significant economic and health burden to the world. The pathogenesis of DKD is complex and apparently multifactorial, involving inflammation, hypoxia, oxidative stress, and apoptosis [5]. In addition, many new signaling molecules that regulate kidney fibrosis have been found, such as the protective roles of endothelial glucocorticoid receptors, endothelial SIRT3, and endothelial FGFR1 against renal fibrosis, and podocyte glucocorticoid receptor signaling in protecting against diabetic nephropathy [6][7][8][9]. Besides, some potential drugs have been identified as protective in DN, such as the DPP-4 inhibitor linagliptin, empagliflozin, JAK-STAT3 inhibitors, glycolysis inhibitors, ROCK inhibitors, mineralocorticoid antagonists, and ACE inhibitors [10][11][12]. Recently, a study indicated that probucol could ameliorate EMT and lung fibrosis through restoration of SIRT3 expression; thus, endothelial SIRT3 could also be a potential drug target for DKD [13]. However, despite this progress, the mechanisms of DN remain largely unclear, and effective treatments are still not available.
Usually, glomerular damage is considered the main pathological feature of DKD [14]. However, studies have shown that tubular injury is critical in DKD, and the degree of renal tubular injury is closely related to renal function [15]. A large number of studies have indicated that the proximal tubule is uniquely susceptible to a variety of metabolic and hemodynamic factors associated with diabetes [15]. Thus, the proximal tubule may be a new therapeutic target for patients with DKD. In addition, it has been indicated that tubular injury is a critical component of the early course of DKD and has been suggested to contribute in a primary, rather than secondary, manner to the development of early DKD [16].
MicroRNAs (miRNAs) are a group of small non-coding RNA molecules of approximately 22 nucleotides. In mammalian cells, miRNAs repress gene expression mainly by binding to the 3′-untranslated regions (UTRs) of their target mRNAs, thereby blocking their translation [17]. Accumulating studies suggest that the majority of genes are subject to miRNA regulation. Moreover, a single miRNA may regulate different genes, and different miRNAs can regulate the same gene [18]. To date, 39,000 miRNAs have been identified in humans, 365 of which are present in the renal cortex [19]. In DKD, alterations in miRNA expression influence the epithelial-to-mesenchymal transition (EMT) and endothelial-to-mesenchymal transition (EndMT) programs [20]. In epithelial and endothelial cells, antifibrotic miRNAs (such as miR-29 and let-7s) [21][22][23] and profibrotic miRNAs (such as miR-21 and miR-433) [24,25] are expressed physiologically. The differential expression of these miRNAs in epithelial and endothelial cells regulates biological pathways and signaling events and maintains homeostasis. However, when healthy cells undergo the mesenchymal transition process, the expression of antifibrotic miRNAs decreases, while profibrotic miRNA expression increases and disrupts cellular homeostasis. In addition, miR-29 and miR-let-7s show crosstalk regulation by inducing FGFR1 phosphorylation and targeting TGFβR1. FGFR1 phosphorylation is critical for miR-let-7 production. In the presence of a higher DPP-4 activity level or in the absence of AcSDKP, miR-let-7 family members are down-regulated, which in turn causes activation of TGFβ signaling. Higher levels of TGFβ signaling result in suppression of miR-29 family expression and finally influence EndMT and fibrogenesis [23,[26][27][28]. Taken together, these studies showed that miRNAs are involved in the pathogenesis of tubular injury in DKD.
In the present study, we demonstrated that miR-122-5p was up-regulated in renal tubular cells in diabetic nephropathy (DN) mice. Functionally, miR-122-5p alleviated tubular cell death and kidney injury, finally affording a protective effect in DN. In contrast, suppression of miR-122-5p aggravated kidney damage. Interestingly, we further identified FIH-1 (factor inhibiting hypoxia-inducible factor-1) as a direct target of miR-122-5p. Collectively, our study indicates that miR-122-5p can ameliorate tubular injury in diabetic nephropathy via the FIH-1/HIF-1α pathway.
Animals and DN induction
Eight-week-old male C57BL/6 mice were purchased from the Slaccas Animal Laboratory (Changsha, China) and housed under controlled environmental conditions (temperature of 22 °C, 12-h dark period). The protocol was approved by the Institutional Animal Care and Use Committee. DN was induced by intraperitoneal injection of STZ (Sigma-Aldrich, St. Louis, MO). For STZ induction of diabetes, mice at 4 weeks of age were injected with 50 mg/kg body weight STZ for 5 consecutive days according to a standard protocol [29]. Animals with fasting blood glucose >250 mg/dl on two consecutive readings were considered diabetic. Control mice were injected with normal saline. The mice were euthanized after 12 weeks. In some experiments, miR-122-5p mimic (3 mg/kg), anti-miR-122-5p LNA (6 mg/kg), or NC oligonucleotide LNA were delivered to mice through tail vein injection every 2 weeks after STZ injection [30].
Analysis of metabolic and physiological parameters
The body weight and blood glucose level were measured every week, and urine was collected before euthanasia. Urine N-acetyl-β-D-glucosaminidase (NAG) was assessed using an automated colorimetric method (Pacific). Urinary creatinine and albumin were measured with a creatinine assay kit and an Albuwell M kit (Exocell) [32].
Fluorescence In Situ Hybridization
Fluorescence in situ hybridization (FISH) was performed according to the manufacturer's instructions. Briefly, kidneys were harvested from control and STZ-treated mice to prepare 4-micron paraffin sections. The sections were treated with 20 μg/ml proteinase K for permeabilization and then incubated with pre-hybridization solution at 78 °C for 1 h. The pre-hybridization solution was removed, and a digoxigenin-labeled mmu-miR-122-5p LNA probe was added overnight at 37 °C. On the second day, after washing, bovine serum albumin (BSA) was added for blocking. Then, anti-digoxigenin-HRP was applied at 37 °C for 1 h. CY3-TSA and DAPI assays were used to indicate the positive areas and cell nuclei, respectively. The images were acquired with a fluorescence microscope, and representative figures are shown.
Extraction of total RNA and quantitative real-time PCR
Total RNA was isolated from the kidney tissues using TRIzol (Invitrogen; Thermo Fisher Scientific, Inc., Waltham, MA). Reverse transcription was conducted using a TaqMan Advanced miRNA cDNA Synthesis Kit (A28007; Applied Biosystems, Foster City, CA). qPCR was carried out with the TaqMan miRNA assay kit (4440887; Applied Biosystems). U6 served as the internal control. In this experiment, we performed TaqMan-based real-time PCR, and the mmu-miR-122-5p/U6 probes were purchased from Applied Biosystems (catalog numbers: 002245/001973). All PCR data were analyzed with the LightCycler 96 SW 1.1 software, and each sample is reported as a 2^−ΔΔCt value.
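As an illustration of the 2^−ΔΔCt arithmetic mentioned above, the following sketch computes a relative expression value from hypothetical Ct numbers; the Ct values are invented for the example and are not data from this study.

```python
# Illustrative sketch of the 2^-ΔΔCt calculation; Ct values are hypothetical
# and only demonstrate the arithmetic, not the study's measurements.
def ddct_fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression of a target miRNA (e.g. miR-122-5p) normalized to U6."""
    dct_sample  = ct_target_sample  - ct_ref_sample    # ΔCt in the treated sample
    dct_control = ct_target_control - ct_ref_control   # ΔCt in the control sample
    ddct = dct_sample - dct_control                     # ΔΔCt
    return 2.0 ** (-ddct)

# Example: the target Ct drops by ~2 cycles relative to U6 after treatment,
# corresponding to roughly a 4-fold induction.
print(ddct_fold_change(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```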
Cell culture
The Boston University mouse proximal tubular cell line (BUMPT) was used in this study, and the cells were cultured in medium as described in other studies [33]. To establish the diabetic cell model, the cells were treated with 35 mM glucose for 24 h. Control cells were maintained in normal medium. In some experiments, 200 nM microRNA mimic, HIF-1α plasmid, or NC oligonucleotides were transfected into BUMPT cells with Lipofectamine 2000 following the manufacturer's instructions.
Cell immunofluorescence
Cells were grown on coverslips, washed three times with PBS, fixed in 4% paraformaldehyde for 20 min, permeabilized with 0.1% Triton X-100, and then incubated in blocking buffer. The cells were subsequently incubated with FIH-1 antibody (dilution: 1:200) overnight. The cells were then incubated with FITC-conjugated secondary antibodies (dilution: 1:500) and examined with a fluorescence microscope, and representative figures are shown.
Western blot analysis
Cells or kidney tissues were lysed with 2% SDS buffer containing protease inhibitor cocktail (Sigma-Aldrich, P8340). Equal amounts of protein from the different samples were separated on SDS-polyacrylamide gels. After being transferred onto a polyvinylidene difluoride membrane, the membrane was incubated with 5% fat-free milk to reduce nonspecific signals and probed subsequently with primary antibodies (dilution: 1:1000) and horseradish peroxidase-conjugated secondary antibodies (dilution: 1:5000). Antigen-antibody complexes were visualized with an enhanced chemiluminescence kit (Thermo Fisher Scientific, 32106).
Luciferase microRNA target reporter assay
The 3′-UTR of the mouse FIH-1 gene was inserted into the 3′-UTR of the luciferase gene in the pMIR-REPORT luciferase plasmid. The plasmids with or without the insert were co-transfected with the pMIR-REPORT β-gal control plasmid and 200 nM miR-122-5p mimics into BUMPT cells. One day after transfection, the lysate was collected in reporter lysis buffer from the Luciferase Assay System (Promega, Madison, WI). The luciferase activity was normalized to β-galactosidase activity. The ratio of the normalized values between the miR-122-5p and NC groups was used for comparison.
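A minimal sketch of the normalization step described above, with invented luminescence counts standing in for the measured values (the replicate numbers and variable names are assumptions for illustration only):

```python
# Hedged sketch of the reporter-assay normalization: luciferase counts are
# divided by β-galactosidase counts per well to correct for transfection
# efficiency, and the miR-122-5p group is expressed relative to the NC group.
# All numbers below are illustrative, not measured values.
import numpy as np

luc  = {"NC": np.array([1200., 1150., 1230.]), "miR-122-5p": np.array([640., 610., 670.])}
bgal = {"NC": np.array([300., 290., 310.]),    "miR-122-5p": np.array([305., 295., 300.])}

norm  = {k: luc[k] / bgal[k] for k in luc}                 # per-well normalization
ratio = norm["miR-122-5p"].mean() / norm["NC"].mean()      # relative 3'-UTR reporter activity
print(f"FIH-1 3'-UTR reporter activity (miR-122-5p vs NC): {ratio:.2f}")
```

A ratio well below 1 in such a setup would mirror the repression reported for the FIH-1 3′-UTR construct.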
Statistical analysis
Student's t-test was used to assess differences between two groups, and ANOVA was used for multi-group comparisons. Data are expressed as means ± SD. p < 0.05 was considered significant. GraphPad Prism 7.0 (GraphPad Software, La Jolla, CA) was used for all calculations.
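For readers who prefer to see the comparisons spelled out, the following sketch runs a two-group t-test and a one-way ANOVA on placeholder arrays; it mirrors the tests named above but uses Python/SciPy and fabricated numbers rather than the study's GraphPad workflow or data.

```python
# Minimal sketch of the statistical comparisons described above (Student's
# t-test for two groups, one-way ANOVA for several); arrays are illustrative.
import numpy as np
from scipy import stats

control   = np.array([1.00, 1.10, 0.90, 1.05])
stz       = np.array([2.40, 2.10, 2.60, 2.30])
stz_mimic = np.array([1.50, 1.40, 1.60, 1.30])

t_stat, p_two_groups = stats.ttest_ind(control, stz)            # two-group test
f_stat, p_anova      = stats.f_oneway(control, stz, stz_mimic)  # multi-group test
print(f"t-test p = {p_two_groups:.4f}, ANOVA p = {p_anova:.4f}")  # p < 0.05 -> significant
```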
miR-122-5p is induced in renal tubules in DN mice
To identify specific miRNAs involved in the pathogenesis of DN, we initially examined the mouse model of STZ treatment. As shown in Figure 1(A,B), both the body weights and blood glucose levels of STZ mice were higher than those of control mice. In addition, a significant increase in urinary NAG and ACR was also observed in STZ mice compared to control mice (Figure 1(C,D)). Consistently, histological analysis by H&E, PAS, and Masson staining also showed the most obvious tubular injury in STZ mice (Figure 1(E)). Briefly, in the STZ group, PAS and H&E staining showed notable morphological changes, including glomerular hypertrophy, increased mesangial matrix, and increased tubular epithelial disruption; Masson staining showed remarkable renal fibrosis, including glomerulosclerosis and interstitial fibrosis. These results suggest increased tubular injury in STZ-induced DN mice. We then collected kidney tissues for microarray analysis of miRNA expression (three mice per group) and found a series of miRNAs with altered expression (Table 1), among which miR-122-5p was significantly induced. By real-time PCR, we further verified the induction of miR-122-5p in the kidneys of the STZ-treated mice as compared with the control mice (Figure 1(G)). In situ hybridization combined with double immunofluorescence staining using the proximal tubule marker LTL (Lotus tetragonolobus lectin) demonstrated miR-122-5p expression in renal proximal tubules (Figure 1(F)).
miR-122-5p attenuates STZ-induced DN in mice
What is the role of miR-122-5p in DN? In this regard, we tested the effects of a miR-122-5p mimic in the STZ-induced DN mouse model. In control mice, the miR-122-5p mimic did not cause structural damage or renal fibrosis in the kidneys, but it significantly attenuated the tubular damage and interstitial fibrosis in STZ mice (Figure 2(A-C)). Functionally, the miR-122-5p mimic also markedly reduced the levels of urinary NAG and ACR upon STZ treatment (Figure 2(D,E)). In addition, our immunoblot analysis indicated that the miR-122-5p mimic reduced the levels of collagen I and vimentin (Figure 2(F-H)). We then assessed tubular cell apoptosis by immunoblotting for cleaved caspase-3. As shown in Figure 2(F,I), immunoblot analysis detected less cleaved caspase-3 in the kidney tissues of STZ + miR-122-5p-treated mice than in mice treated with STZ only. These findings support a protective role of miR-122-5p in DN.
Inhibition of miR-122-5p exaggerates STZ-induced DN in mice
To further define the role of miR-122-5p in DN, we examined the effect of anti-miR-122-5p in the STZ-induced DN mouse model. Locked nucleic acid-modified (LNA-modified) anti-miR-122-5p or scrambled-sequence oligonucleotides (NC) were administered to mice. Similar to the results with the miR-122-5p mimic, anti-miR-122-5p caused neither tubular structural injury nor renal fibrosis in control mice, but it significantly increased the levels of urinary NAG and ACR and aggravated tubular injury and renal fibrosis in STZ-induced DN mice (Figure 3(A-E)). Immunoblot analysis showed that anti-miR-122-5p promoted the expression of collagen I and vimentin (Figure 3(F-H)). In addition, anti-miR-122-5p promoted cell apoptosis in this model, as shown by cleaved caspase-3 immunoblotting (Figure 3(F,I)). These findings further support the conclusion that miR-122-5p plays a protective role in DN.
FIH-1 is a downstream target of miR-122-5p in DN
To understand the mechanism by which miR-122-5p contributes to DN, we investigated its downstream target genes. Using online databases (PITA and TargetScan), we identified a conserved putative miR-122-5p targeting site in the 3′-UTR of FIH-1 mRNA (Figure 4(A)). To determine whether FIH-1 is indeed a target of miR-122-5p, we first examined whether overexpression of miR-122-5p in BUMPT cells would affect FIH-1 expression. As shown in Figure 4(B-D), both immunofluorescence staining and immunoblotting indicated that the expression of FIH-1 was significantly repressed by transfection of miR-122-5p mimics in high glucose-treated BUMPT cells. Further, to determine whether FIH-1 is a direct target of miR-122-5p, we prepared a microRNA luciferase reporter. The FIH-1 3′-UTR constructs or empty vectors were transfected into BUMPT cells along with miR-122-5p mimic or negative control oligo (NC). The miR-122-5p mimic inhibited the luciferase activity in FIH-1 3′-UTR-transfected cells, whereas the negative control oligo did not (Figure 4(E)). Taken together, these results indicate that FIH-1 is a direct target of miR-122-5p.
miR-122-5p attenuates high glucose-induced tubular injury through targeting FIH-1
It has been shown that FIH-1 can inhibit the activity of hypoxia-inducible factor 1 (HIF-1) by preventing HIF-1α from binding to p300/CBP [34]. Thus, to determine the role of FIH-1 in tubular injury in DN, we examined the effects of HIF-1α overexpression on apoptosis in high glucose-treated BUMPT cells. First, we confirmed successful delivery of the HIF-1α plasmid by western blot analysis (Figure 5(A,B)). Then, we found that HIF-1α-overexpressing cells had lower levels of cleaved caspase-3 upon HG treatment, indicative of less apoptosis (Figure 5(C,D)). Our TUNEL results also indicated that high glucose induced less cell death in HIF-1α-overexpressing cells than in cells transfected with control sequences (Figure 5(E)). Further, we found that FIH-1-overexpressing cells had higher levels of cleaved caspase-3 upon HG treatment, indicative of more apoptosis. However, miR-122-5p transfection could partly reverse these changes and reduce apoptosis (Figure 5(F,G)). Collectively, these results indicate that miR-122-5p can attenuate tubular cell apoptosis in DN by targeting FIH-1.
Discussion
Recently, increasing evidence has shown that miRNAs are implicated in the pathogenesis of DN. Therefore, further studies are needed to determine the role of miRNAs in the regulation of DN, with the hope of discovering new therapies for DN [35]. In this study, we report the following major findings: (1) miR-122-5p is significantly induced in renal tubular cells in STZ-induced DN mouse models; (2) functionally, miR-122-5p attenuates tubular injury and cell apoptosis, indicating that miR-122-5p induction in DN is an adaptive or protective mechanism; (3) mechanistically, miR-122-5p directly targets and inhibits FIH-1 expression to enhance HIF-1α activity, finally ameliorating tubular injury in DN. Together, these findings unveil a novel miR-122-5p/FIH-1/HIF-1α pathway that can delay DN progression. Essentially, miR-122-5p is induced in DN, leading to the suppression of its target gene FIH-1. The decrease in FIH-1 enhances the activity of HIF-1α, which finally delays the progression of DN, providing an intrinsic protective mechanism. Diabetes is a common chronic metabolic disease that has affected about half a billion people in the world, and DN affects approximately one-third of diabetes patients. Thus, DN is not only a health crisis but also a global social disaster [36]. However, the specific molecular mechanism of DN remains unclear, and there is still a lack of effective therapies. Tubular injury is widely recognized to be associated with the pathogenesis of DN. Thus, studies on new targets for tubular injury are necessary for exploring prospective therapies for DN treatment [37]. It has been reported that miRNAs are implicated in the pathogenesis of DN, including renal tubular epithelial cell injury [36]. For example, Li et al. found that miR-25 inhibited high glucose-induced apoptosis in renal tubular epithelial cells via the PTEN/AKT pathway [38]. Based on these findings, specific miRNAs may become new therapeutic targets for DN. In the present study, we have shown that miR-122-5p is significantly induced in kidney tubular cells in DN. Moreover, inhibition of miR-122-5p led to increased cell death and severe tubular injury and promoted DN progression following STZ treatment in mice, whereas supplementation with a miR-122-5p mimic had kidney-protective effects in this model. These results indicate a protective role of miR-122-5p in DN. Accordingly, miR-122-5p up-regulation in DN is an adaptive or protective response in this disease condition. Of course, there are a few important clinical studies that are highly relevant to miR-122-5p [39][40][41][42]. For example, a recent study from Regmi and colleagues showed that serum levels of miR-122-5p were positively associated with FBG, HbA1c and, importantly, with urine albumin, and negatively associated with eGFR. These results seem to contrast with the concept that miR-122-5p protects renal function in patients with diabetes [41]. However, that study did not examine the role of miR-122-5p in DN. In fact, whether a miRNA is protective or injurious to the kidney, it can rise in the serum or urine. For example, miR-494 was increased in the serum and urine during acute kidney injury in patients, yet it can promote ischemic AKI through targeting ATF3 [43]. In another study, miR-668 was significantly increased in the urine of patients with acute kidney injury, and it can protect the kidney through targeting MTP18 [44].
How does miR-122-5p contribute to DN? To address this, our in vitro studies showed that miR-122-5p binds the FIH-1 3′-UTR and negatively regulates FIH-1 protein levels, indicating FIH-1 as a direct target of miR-122-5p (Figure 4). FIH-1 is an asparagine hydroxylase that interacts with hypoxia-inducible factor 1α (HIF-1α) to regulate the transcriptional activity of HIF-1. Generally, FIH-1 inhibits the activity of HIF-1 by preventing HIF-1α from binding to p300/CBP [34,45]. Recently, it has been shown that FIH-1 plays a crucial role in the pathogenesis of CKD through targeting HIF-1α [46]. It is known that HIF-1 is a critical molecule for mitigating hypoxia-induced damage and exists as a heterodimer comprising two subunits: a variable α-subunit and a constitutively expressed β-subunit. In the kidney, HIF-1α is expressed by most renal tubular epithelial cells [47,48]. Increasing studies have shown that an oxygen deficit is present in DN and that enhancing HIF-1 signaling ameliorates the progression of DN [49,50]. For example, a study conducted by Jiang et al. indicated that HIF-1α could ameliorate tubular injury in DN via HO-1-mediated control of mitochondrial dynamics [51]. In our present study, overexpression of HIF-1α by transfection of a HIF-1α plasmid reduced tubular cell death during high-glucose treatment, suggesting a protective role of HIF-1α in DN (Figure 5). Taken together, these findings unveil a novel miR-122-5p/FIH-1/HIF-1α pathway that can attenuate DN progression. Essentially, miR-122-5p is induced in DN, leading to the suppression of its target gene FIH-1. The decrease in FIH-1 enhances the activity of HIF-1α, which finally delays the progression of DN, providing an intrinsic protective mechanism.
In conclusion, the present study demonstrated an induction of miR-122-5p in diabetic nephropathy and further investigated its biological function and molecular mechanisms. All of the experimental results indicated that miR-122-5p ameliorated tubular injury and delayed the progression of DN by targeting FIH-1/HIF-1α signaling, which may provide a potential diagnostic or therapeutic target for DN.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The data that support the findings of this study are available from the corresponding author, [LL], upon reasonable request.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2016-04-19T00:00:00.000
|
16974931
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcophthalmol.biomedcentral.com/track/pdf/10.1186/s12886-016-0217-1",
"pdf_hash": "e173554a15fd802150f342904971be7da16a2d76",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43190",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "e173554a15fd802150f342904971be7da16a2d76",
"year": 2016
}
|
pes2o/s2orc
|
Shifts in retinal vessel diameter and oxygen saturation in Chinese type 2 diabetes mellitus patients
Background The aim of this study was to analyze the shifts in retinal vessel diameter and oxygen saturation in diabetic patients with and without diabetic retinopathy (DR), as well as to assess the association between diabetes duration and either vessel diameter or oxygen saturation. Methods In total, 99 Type 2 DM patients were recruited for the study and were divided into three groups: DM with non-obvious retinopathy (DM, n = 29), non-proliferative diabetic retinopathy (NPDR, n = 40), and proliferative diabetic retinopathy (PDR, n = 30). In addition, 78 age-matched healthy individuals were chosen as the control. The diameter and oxygen saturation of the retinal vessels were analyzed using a noninvasive retinal oximeter and then compared between the three groups and the normal control. Association analysis was applied to analyze the possible influencing factors, including the diameter and oxygen saturation of retinal vessels, on best corrected visual acuity (BCVA), as well as the relationship between diabetes duration and the oximetry values. Results All of the diabetic patients showed thinner arterioles, wider venules, and a smaller arteriolar-to-venular ratio (AVR) than the healthy individuals. The AVR results from the controls through to the PDR group were 0.81 ± 0.07, 0.78 ± 0.07, 0.76 ± 0.07 and 0.67 ± 0.07, respectively. Both the NPDR and PDR groups showed significantly smaller AVR than the control. All of the diabetic patients exhibited higher retinal vessel oxygen saturation than the healthy individuals. Among all of the oximetry values, AVR exhibited the most significant correlation with BCVA (β = 1.533, P < 0.0001). An increased diabetes duration was associated with decreased arteriolar diameter (slope = −0.082 pixels/year, r2 = 0.085, P = 0.004) and AVR (slope = −0.009/year, r2 = 0.349, P < 0.001), and with increased venular diameter (slope = 0.104 pixels/year, r2 = −0.109, P = 0.001). Conclusions In this Chinese population with type 2 DM, the thinner arterioles and wider venules point to microvascular dysfunction in DR. The increased oxygen saturation of the retinal vessels suggests that retinal oxygen metabolism is affected in diabetic retinopathy.
Background
Diabetes mellitus (DM) is a global disease that does not only concern aged persons. According to estimates by the World Health Organization, the number of people worldwide with DM is expected to rise to approximately 360 million by 2030 [1]. Further, Type 2 diabetes (T2 DM) has now spread to almost every country and region in the world. China has enjoyed rapid economic development over recent decades. However, this development has also resulted in the increasing prevalence of overweight and obesity, which inevitably drives the diabetes epidemic [2][3][4][5]. It has been estimated that 9.6 per 1000 person-years in men and 9.2 in women are subject to T2 DM in China [6].
Diabetic retinopathy (DR) is one of the most common and indeed most severe microvascular complications of DM. It has been shown that the disease is associated with early retinal vascular dysregulation. Also, in the latter stages of the disease, retinal tissue hypoxia is a major trigger of sight-threatening neovascularization. It is therefore important to assess the retinal vascular diameter and retinal oxygenation status of DM patients in order to gain insight into the progression of DR.
Previous studies have primarily focused on the association between retinal vascular calibers and the risk of diabetes or DR. Multiple studies showed that the incidence of both diabetes and DR was associated with narrower arterioles [7,8], wider venules [9,10], and a smaller arteriolar-to-venular ratio (AVR) [7,11]. Additionally, Kifley et al. found the increasing severity of DR in persons with diabetes to be associated with a widening of the retinal venular caliber [10]. However, data from the Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR) showed that neither retinal arteriolar nor venular calibers measured at baseline were associated with the incidence or progression of DR [12].
It is certain that hypoxia plays an important role in the pathophysiology of diabetes. Previous studies utilizing oxygen-sensitive microelectrodes have demonstrated that retinal hypoxia exists in the process of diabetes [13,14]. Additionally, several studies using a noninvasive retinal oximeter found increasing oxygen saturation of the retinal vessels in diabetes [15][16][17][18], which indirectly proved retinal hypoxia. Also, Khoobehi and colleagues identified a trend of increasing retinal oxygen saturation from the controls to the NDR group, pointing to increasing levels of DR [17].
Most previous studies have focused on retinal vascular parameters or retinal vessel oxygen saturation separately, and the research subjects were mostly Caucasians. In our study, we analyzed the retinal vessel diameter and vessel oxygen saturation of T2 DM patients with and without retinopathy in China in order to detect shifts associated with the severity of diabetes compared with healthy individuals. This study was also performed to detect the impacts of vessel diameter and oxygen saturation on visual acuity, as well as to assess the relationship between diabetes duration and either vessel diameter or oxygen saturation. The aim was to identify a more sensitive and noninvasive method for evaluating the severity and prognosis of diabetes.
Methods
The study protocol was reviewed and approved by the Medical Ethics Committee of the Zhongshan Ophthalmic Center, Sun Yat-sen University (No.2013MEKY028). It also strictly adhered to the principles of the World Medical Association Declaration of Helsinki. All subjects signed informed consent forms prior to participation.
Subjects
A total of 99 Type 2 DM patients were recruited from the outpatient clinic at Zhongshan Ophthalmic Center. Patients were excluded if they had obvious cataract or other media opacities, optic nerve disease, other retinal diseases except DR, intraocular pressure (IOP) >21 mmHg, or had previously undergone laser photocoagulation. In addition, 78 age-matched healthy persons were recruited as the control group. The exclusion criteria for the control group were any kind of systemic disease, any history of ocular disease, trauma, or eye surgery, current pregnancy, and breast-feeding. Only the right eye of each subject was used for the analysis. The 99 right eyes of the diabetes patients were divided into three groups: Group 1: DM with non-obvious retinopathy (DM, n = 29), Group 2: non-proliferative diabetic retinopathy (NPDR, n = 40), and Group 3: proliferative diabetic retinopathy (PDR, n = 30). The three groups were constituted according to the international criteria formulated in 2002 [19].
All subjects answered a standardized questionnaire about their duration of diabetes, history of ocular and systemic conditions, and medication use. The basic examinations involved: best corrected visual acuity (BCVA, logMAR visual acuity chart), intraocular pressure (Canon TX-20, Canon Corporation, Tokyo, Japan), slit-lamp examination (Suzhou YZ5S, Suzhou Liuliu, China), systolic blood pressure (BPsyst) and diastolic blood pressure (BPdiast), heart rate (BangPu, BF-1100, Shenzhen BangPu Corporation, Shenzhen, China), and finger pulse oximetry (Biolight M70, Biolight Corporation, Zhuhai, China). Further, the mean ocular perfusion pressure (OPPm) driving blood through the retina was calculated from BPsyst, BPdiast, and IOP as described previously [20]; a sketch of the conventional relation is given below. The basic information of the subjects studied is detailed in Table 1.
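Assuming the standard definition used in retinal perfusion studies (an assumption on our part rather than a quotation of reference [20]), the relation is conventionally written in terms of the mean arterial pressure (MAP) as:

$$\mathrm{OPP_m} \;=\; \tfrac{2}{3}\,\mathrm{MAP} \;-\; \mathrm{IOP}, \qquad \mathrm{MAP} \;=\; \mathrm{BP_{diast}} \;+\; \tfrac{1}{3}\left(\mathrm{BP_{syst}} - \mathrm{BP_{diast}}\right)$$

The factor of two-thirds accounts for the drop in arterial pressure between the brachial artery and the eye when measured in the sitting position.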
Retinal oximetry
The noninvasive retinal oximeter Oxymap T1 (Oxymap, Reykjavik, Iceland) has been described previously [21,22]. Briefly, it is an add-on to the fundus camera (Topcon TRC-50DX; Topcon Corporation, Tokyo, Japan), which combines spectroscopy and multispectral imaging techniques. The Oxymap Analyzer software analyzes the images from the oximeter and automatically returns the relative oxygen saturation and vessel diameter. The method used for the vessel diameter measurements and oxygen saturation calculation has been described previously, and the results proved reliable and reproducible [21,23].
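As background on how such dual-wavelength oximeters estimate saturation, a general sketch of the principle is given below; the specific wavelengths and the linear calibration constants a and b are assumptions here, and the device's actual calibration is described in the cited references [21-23]:

$$\mathrm{OD}_{\lambda} \;=\; \log_{10}\!\left(\frac{I_{\mathrm{background},\lambda}}{I_{\mathrm{vessel},\lambda}}\right), \qquad \mathrm{ODR} \;=\; \frac{\mathrm{OD}_{600\,\mathrm{nm}}}{\mathrm{OD}_{570\,\mathrm{nm}}}, \qquad \mathrm{SatO_2} \;\approx\; a \;+\; b\cdot\mathrm{ODR}$$

Here the optical density of a vessel at an oxygen-sensitive wavelength is referenced to that at an approximately isosbestic wavelength, and their ratio (ODR) is taken to vary roughly linearly with oxygen saturation.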
Imaging and analysis
The pupils were dilated with 0.5 % tropicamide (Shenyang Xingqi Corporation, Shenyang, China). All of the fundus images were taken in a dark room by the same skilled photographers using consistent parameters. All of the subjects were examined twice, and all of the images were centered on the optic disc, with about one minute between images. The best-quality image was selected for analysis (Fig. 1). The pseudocolor fundus images were analyzed according to a standard protocol. The optic disc was first excluded by a circle to avoid the highly reflective background. Then, a second circle of three times the optic disc radius was created, and the vessel segments between the two circles were selected for analysis. The mean oxygen saturation and mean width of the selected retinal arteries and veins were automatically analyzed by Oxymap Analyzer version 2.4, a specialized software.
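The annular measurement zone described above can be pictured with the following sketch, which builds a boolean pixel mask between one and three optic-disc radii; it is an illustrative stand-in, not the Oxymap Analyzer implementation, and the image size, disc centre, and radius are invented.

```python
# Illustrative sketch of restricting analysis to an annulus between the
# optic-disc rim and three disc radii, as described in the protocol above.
import numpy as np

def annulus_mask(shape, disc_center, disc_radius):
    """Boolean mask of pixels lying between 1x and 3x the optic-disc radius."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r = np.hypot(yy - disc_center[0], xx - disc_center[1])
    return (r > disc_radius) & (r < 3 * disc_radius)

mask = annulus_mask((1536, 1536), disc_center=(768, 700), disc_radius=120)
print(mask.sum(), "pixels fall inside the measurement zone")
```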
Statistical analysis
The statistical analysis was performed using the R software package, version 3.1.3 (The R Foundation for Statistical Computing, available at http://www.R-project.org/). The Kruskal-Wallis test was applied to test for between-group differences in the diameter of the retinal arterioles (A_diameter) and venules (V_diameter) and the AVR, as well as in the oxygen saturation of the retinal arterioles (A_SatO2) and venules (V_SatO2) and the arteriole-venule difference (AV_difference). All data were expressed as mean ± standard deviation (SD). For all of the analyses, P < 0.05 was considered statistically significant.
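An illustrative version of the Kruskal-Wallis comparison is sketched below in Python/SciPy for brevity (the study itself used R); the four arrays are fabricated stand-ins for the AVR values of the control, DM, NPDR, and PDR groups.

```python
# Sketch of a Kruskal-Wallis test across the four study groups; values invented.
import numpy as np
from scipy import stats

avr_control = np.array([0.82, 0.80, 0.79, 0.84])
avr_dm      = np.array([0.79, 0.77, 0.78, 0.80])
avr_npdr    = np.array([0.77, 0.75, 0.76, 0.74])
avr_pdr     = np.array([0.68, 0.66, 0.69, 0.65])

h_stat, p_value = stats.kruskal(avr_control, avr_dm, avr_npdr, avr_pdr)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```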
Meanwhile, a univariate analysis was applied to analyze the possible influencing factors, including the diameter and oxygen saturation of the retinal vessels, on BCVA. The multivariate analysis of the oximetry values included: diabetes duration, age, sex, finger pulse SatO2, and OPP. The ocular perfusion pressure was derived from BPsyst, BPdiast, and IOP, so we chose OPP instead of utilizing all three.
Fig. 2 The diameter of retinal arterioles and venules, and AVR, in the four groups studied. All diabetic patients showed thinner arterioles, wider venules, and smaller AVR compared with the normal control, varying with severity from DM with no DR to PDR. (**p < 0.01, *p < 0.05)
Fig. 3 The oxygen saturation of arterioles, venules, and the AV difference in the four groups studied. Both arteriolar and venular oxygen saturation showed an increasing trend with increasing severity of disease (**p < 0.01)
Vessel diameter
In healthy individuals, the diameters of the arterioles and venules were 13.47 ± 1.19 pixels and 16.80 ± 1.68 pixels, respectively, and the AVR was 0.81 ± 0.07. All diabetic patients showed thinner arterioles, wider venules and, therefore, smaller AVR than the healthy individuals, which changed according to the severity of the disease. Only the PDR group exhibited significantly thinner arterioles and wider venules when compared with the normal group. However, both the NPDR and PDR patients had significantly smaller AVR than the controls (p < 0.01; Fig. 2).
The vessel diameter values are listed in Table 1.
Oxygen saturation
The retinal oxygen saturation in healthy individuals was 95.0 ± 4.78 % in the arterioles and 58.50 ± 3.76 % in the venules. Compared with the normal control group, all of the diabetic patients showed higher oxygen saturation in both the arterioles and venules. Further, the differences between the NPDR or PDR patients and the normal controls were both statistically significant (p < 0.01; Fig. 3). Additionally, there was an obvious increasing trend in either arteriolar or venular oxygen saturation with increasing severity of disease, although significance was only reached for the comparison of controls to the NPDR and PDR groups. The AV difference was 36.49 ± 4.35 % in healthy individuals, and the corresponding values for groups 1 to 3 were 35.97 ± 6.41 %, 34.03 ± 6.26 %, and 44.5 ± 10.07 %, respectively. There was no significant difference in the AV difference between the diabetic groups and the normal controls, except for the PDR group.
The oxygen saturation values are listed in Table 1.
Association analysis
Associations between oximetry values and BCVA
Except for V_diameter, all of the oximetry values were found to be associated with BCVA (Table 2). Higher A_SatO2, V_SatO2, and AV_difference were correlated with lower BCVA. Conversely, both wider arterioles and larger AVR were correlated with better BCVA. AVR exhibited the most significant correlation with BCVA (p < 0.001; Fig. 4).
Associations between retinal vessel diameter and candidate variables (diabetes duration, age, sex, finger SatO2 and OPP)
In a separate simple linear regression of retinal vessel diameter against diabetes duration, there was a significant decrease in A_diameter, an increase in V_diameter, and a smaller AVR with increasing diabetes duration (p < 0.01; Fig. 5). A multivariate analysis using multiple linear regression was also performed with the following variables included: age, sex, finger SatO2, and OPP. The shift trends in A_diameter, V_diameter, and AVR with diabetes duration remained significant (Table 3).
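A sketch of such a simple linear regression (slope, r², p-value) is given below; the duration and diameter pairs are fabricated purely to show the calculation and do not reproduce the study's data.

```python
# Sketch of regressing arteriolar diameter (pixels) on diabetes duration (years)
# with scipy.stats.linregress; all data points below are invented for illustration.
import numpy as np
from scipy import stats

duration_years   = np.array([2, 5, 8, 10, 12, 15, 18, 20], dtype=float)
arteriole_pixels = np.array([13.6, 13.3, 13.1, 12.9, 12.6, 12.4, 12.1, 11.9])

fit = stats.linregress(duration_years, arteriole_pixels)
print(f"slope = {fit.slope:.3f} pixels/year, r^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}")
```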
Associations between retinal vessel SatO2 and candidate variables (diabetes duration, age, sex, finger SatO2 and OPP)
In a separate simple linear regression of retinal oxygen saturation against diabetes duration, there was a significant increase in arteriolar oxygen saturation with increasing diabetes duration and a similar trend in AV_difference (p < 0.01; Fig. 6). A multivariate analysis using multiple linear regression was performed with the following variables included: age, sex, finger SatO2, and OPP. The increases in A_SatO2 and AV_difference with diabetes duration remained significant (Table 4).
Fig. 5 Retinal vessel diameter (pixels; left y-axis) of arterioles (red dots) and venules (blue squares), and AVR (right y-axis) with increasing diabetes duration (dark triangles). There was a significant decrease in A_diameter, an increase in V_diameter, and smaller AVR with increasing diabetes duration
Discussion
Our study showed that increasing severity from DM with no DR to PDR was accompanied by thinner arterioles and wider venules, although significance was only reached for the comparison of the normal controls to the PDR group. However, both the NPDR and PDR groups showed significantly smaller AVR than the controls. All of the diabetic patients exhibited higher retinal vessel oxygen saturation than the healthy individuals. Also, there was an obvious increasing trend in either arteriolar or venular oxygen saturation with increasing severity of disease. In addition, we found that AVR exhibited a significant correlation with BCVA. In this study, we also found that the duration of diabetes was significantly associated with retinal vessel diameter and oxygen saturation. The retinal vessels offer a unique and easily accessible window through which to study human microcirculation in health and disease. As an important physiological parameter, the retinal vessel diameter has been found to alter in various diseases [24][25][26][27], as well as to change with age and exercise intensity [28,29]. Researchers have proposed retinal vessel diameter as a biomarker with which to determine the risk and progress of cardiovascular disease [30][31][32][33]. Previous studies also demonstrated an association between retinal vessel width and the risk of diabetes or DR, although the results were inconsistent. Wong et al. found that participants with narrower retinal arteriolar diameters had a higher incidence of diabetes [7]. However, Cheung and colleagues found that persons with diabetes were more likely to have wider arteriolar and venular calibers than those without diabetes, and that subjects with DR had a wider venular caliber than those without retinopathy [34]. In this study, both thinner arterioles and wider venules were observed in all of the diabetic patients, although significance was only reached for the comparison of controls to the PDR group. Also, there was a significant increase in venular diameter, a decrease in arteriolar diameter, and smaller AVR with increasing diabetes duration. The main causes of abnormalities in vascular caliber may be dysfunction of the endothelium and abnormal tone of smooth muscle cells and pericytes. Retinal blood flow is autoregulated by the interaction of myogenic and metabolic mechanisms through the release of vasoactive substances by the vascular endothelium and the retinal tissue surrounding the arteriolar wall [21]. Both pericytes and smooth muscle cells represent the myogenic mechanism for the vascular autoregulation of blood flow. They are able to regulate the capillary diameter through both contraction and relaxation. Local vasoactive substances therefore play an important role in regulating the functional dynamics of the vascular wall.
Endothelin-1 (ET-1) is the most potent endogenous vasoconstrictor known. Multiple studies have shown that the endogenous expression of ET-1 is increased in experimental diabetes [35,36]. Moreover, the ET-1 mRNA level in patients with DR was found to be significantly higher than in those without DR, suggesting that the ET-1 level is associated with the severity of DR in patients with type 2 DM [37]. ET-1 is known to decrease retinal arterial diameter but to have no effect on retinal venous diameter [38]. Thus, we speculate that increased ET-1 may be one of the main causes of the decreased arteriolar diameter in diabetic patients.
Venular dilation, another feature of the microvascular dysfunction of diabetes, was found to be associated with both the severity and the duration of diabetes. This finding is in line with previous studies. Kifley et al. also found increasing severity of DR to be associated with widening of the retinal venular caliber [10]. Steel et al. found that both increased DR severity and increased diabetes duration were associated with increased vascular width [39]. One possible explanation is impairment of the vasomotor reaction resulting from the dysfunction of pericytes and smooth muscle cells.
Researchers have found that type 2 DM reduces retinal vasoconstrictor responses to hyperoxia and retinal vasodilator responses to flicker stimuli [40,41]. All of these findings point to microvascular dysfunction in diabetes. AVR combines the widths of the arterioles and the venules, which may make it more valuable for assessing microcirculation status. In the association analysis between oximetry values and BCVA, AVR exhibited the strongest correlation with BCVA: a larger AVR was correlated with better BCVA. However, larger prospective studies investigating the relevant causal relationships are warranted.
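As a point of reference, the two derived indices used throughout this discussion are simple combinations of the measured values; the sketch below uses illustrative variable names, not the study's dataset, and shows how they are commonly computed.

def avr(a_diameter: float, v_diameter: float) -> float:
    # Arteriole-to-venule ratio: narrower arterioles and/or wider venules lower the AVR.
    return a_diameter / v_diameter

def av_difference(a_sat_o2: float, v_sat_o2: float) -> float:
    # Arteriovenous oxygen saturation difference, a proxy for retinal oxygen extraction.
    return a_sat_o2 - v_sat_o2

# Example: avr(12.0, 15.0) -> 0.80; av_difference(95.0, 60.0) -> 35.0 (%)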
Previous studies in Caucasian populations also found increased retinal arteriolar oxygen saturation in DR patients, which is consistent with our results [17][18][19]. There are several possible explanations for the elevated retinal arteriolar oxygen saturation in diabetic retinopathy. First, there is thickening of the vascular basement membrane in diabetes, which directly increases the oxygen transport distance and inevitably hinders oxygen diffusion [42]. Second, blood flow is decreased: as our results show, all DR groups exhibited decreased arteriolar diameters (Table 1), and the decreased arteriolar diameter slows blood flow, resulting in the accumulation of metabolites. Third, hemoglobin has a greater affinity for oxygen in diabetic patients [43,44], which may also help to explain the increased saturation in the retinal venules.
Regarding venular oxygen saturation, all previous studies have found higher values in the DR group than in the normal control group [15][16][17][18]. Khoobehi also showed a trend of increasing retinal oxygen saturation from the controls to the DR groups, although significance was only reached for the comparisons of the controls with the severe NPDR and PDR groups, and with all DR groups combined [17]. We propose several reasons for this result. First, due to tissue degeneration and cell death, the demand for and consumption of oxygen in the retina is reduced, so oxygen extraction from the arterioles is decreased, which consequently increases venular oxygen saturation. Second, the formation of AV shunts enables blood to travel directly into the venules without oxygen extraction by the retinal tissue. The retinal tissue is therefore relatively hypoxic in DR, and previous studies have verified the hypoxic status of the DR retina [13,45]. This suggests that the elevation of venous oxygen saturation is closely related to retinal metabolism; because it reflects the oxygen supply to and consumption by the retina, it might be more indicative of diabetic changes to the capillary system than the blood flow velocity [15].
Fig. 6 There was a significant increase in A_SatO2 and AV_difference with increasing diabetes duration
At present, the results for the AV difference in DR remain controversial. Hammer [15], Jørgensen [18], and Man [46] all found a decreased AV difference, and all attributed it mainly to decreased oxygen consumption and neuronal metabolism, occlusions and obliterations in the capillary bed, and the formation of AV shunt vessels. Hardarson [16] and Khoobehi [17] found similar values in DR patients and healthy persons. Our study showed a decreasing trend in the AV difference from the normal controls to the DM with no DR group and the NPDR group. A decreasing AV difference implies decreasing oxygen extraction by the retinal tissue, which is the result of tissue degeneration. However, a significantly increased AV difference was observed in the PDR group compared with the normal controls, and this was the only comparison with the controls that reached significance. The reason is that the PDR patients showed a markedly increased arteriolar oxygen saturation, with some values reaching as high as 130 %, whereas venular oxygen saturation did not increase as much. Furthermore, during the severe stage of DR, the formation of neovascularization might promote leakage of substances from the vessels into the adjacent tissue. All of these factors might distort the results. The trend of the AV difference over the pathophysiological course of DR still requires further longitudinal study.
In the diabetic patients with no retinopathy, there were no significant differences in the diameter and oxygen saturation of the retinal vessels compared with the controls. In the earlier stages of diabetes, the microvasculature, oxygen delivery, and metabolism are not yet significantly impaired [47], so the values of oxygen saturation and vessel diameter were nearly normal.
The main limitations of the present study were the lack of serum markers, such as blood glucose and glycated hemoglobin (HbA1c), and of blood rheology data. At present, the oscillatory potentials of the electroretinogram (ERG) are objective indicators for predicting early subtle abnormalities and the progression of DR; future studies should take these into account. Additionally, a longitudinal study of diabetes focusing on consecutive changes over time would provide more valuable information.
Conclusions
In this Chinese population with type 2 DM, the thinner arterioles and wider venules point to microvascular dysfunction in DR. The increased oxygen saturation of the retinal vessels suggests that retinal oxygen metabolism is affected in diabetic retinopathy. Retinal vessel diameter and oxygen saturation may play a predictive role in determining the risk and progression of DR.
Availability of data and materials
All data supporting the findings are contained within the manuscript.
Ethics and consent to participate
The study protocol was reviewed and approved by the Medical Ethics Committee of the Zhongshan Ophthalmic Center, Sun Yat-sen University (No.2013MEKY028). It also strictly adhered to the principles of the Declaration of Helsinki. All subjects signed informed consent forms prior to participation.
|
v3-fos-license
|
2020-01-14T07:33:41.998Z
|
2020-01-09T00:00:00.000
|
210171926
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.ccsenet.org/journal/index.php/jfr/article/download/0/0/41775/43415",
"pdf_hash": "5c0b98566cd4b821bfba120a7a0bbfaa321c9105",
"pdf_src": "Unpaywall",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43191",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "5c0b98566cd4b821bfba120a7a0bbfaa321c9105",
"year": 2020
}
|
pes2o/s2orc
|
A Hospital Based Cross Sectional Study on Dietary Status and Associated Factors among People Living with HIV/AIDS in Kigali, Rwanda
Background
Good nutrition empowers PLWH to fight infection, ultimately slowing disease progression. Consequently, nutrition management is a crucial component of HIV treatment, care, and support. This study aimed to assess dietary status and associated factors among PLWH in Kigali, Rwanda.
Methods
We conducted a cross sectional study in three selected hospitals in Kigali over a six-week period in July-August 2019, collecting data from 204 HIV-positive adults enrolled using systematic random sampling. Data were collected using an adapted, validated and pre-tested food frequency questionnaire (FFQ). Descriptive and multiple logistic regression analyses were performed using SPSS version 25 for Windows.
Results
The proportion of participants with poor dietary status was 15% based on FFQ responses. Only three factors were independently associated with dietary status: HIV status disclosure (AOR 2.5; CI 1.25-4.83; p=0.014), travel time to the place of collection of ARVs (AOR 3.2; CI 1.7-5.8; p=0.006), and BMI (AOR 10.2; CI 8.30-16.0; p<0.001).
Conclusions
Poor dietary status among PLWH remains a concern. The strong association between dietary status and BMI underlines the need for interventions that target PLWH to improve dietary status and, ultimately, nutritional status.
Introduction
As of 2016, HIV accounted for more than 1.8 million new infections yearly, with most occurring in resource-poor countries (UNAIDS, 2017). Earlier data showed that in 2015, 36.7 million people globally were living with HIV/AIDS, 1.8 million of them children under 15 years (WHO, 2016). In 2015, 1.1 million deaths from HIV were reported, along with 2.1 million new infections, including 150,000 among children (WHO, 2018). Close to 70 percent of the burden is in Africa (WHO, 2018), the region with the highest rates of food insecurity. Available evidence shows that PLWH who are undernourished when they start ART are 2-6 times more likely to die within 6 months of ART initiation than their counterparts with a normal body mass index (Munthali, Jacobs, Sitali, Dambe, & Michelo, 2015). Even on ART, there is a continuous need for PLWH to consume a nutritious diet to maintain weight and prevent micronutrient deficiencies (Audain, Zotor, Amuna, & Ellahi, 2015). There is also growing recognition of the role that nutritional support within clinical and community services plays in engagement, adherence and retention in care and treatment (Berhe, Tegabu, & Alemayehu, 2013; Kendall et al., 2014; Tang, Jacobson, Spiegelman, Knox, & Wanke, 2005). Proper nutrition complements well-adhered ART. Closer to Rwanda, randomized controlled trials conducted in Kenya and Uganda showed that nutritional support significantly decreased mortality among PLWH initiating ART (PrayGod, Friis, & Filteau, 2018).
As of a decade ago, more than 800 million people were chronically undernourished (Ivers et al., 2009). The highest burden of both undernutrition and HIV/AIDS has been reported in sub-Saharan Africa (SSA) (UNAIDS, 2016). A study conducted in Brazil by Andrade et al. revealed high levels of undernutrition among PLWH at hospitalization, with a reported prevalence of 43% (Andrade et al., 2012). Similar findings were reported in Asia by Hu et al. (2011), with a reported malnutrition prevalence among PLWH of 37.2% (Hu et al., 2011). In Senegal, slightly lower prevalences were reported: the prevalence of malnutrition among PLWH, defined by BMI, was 19.2% in Dakar and 26.3% in Ziguinchor (Benzekri et al., 2015).
In East Africa, an institution-based cross sectional study conducted by Gedle et al. (2015) in Southern Ethiopia reported an overall malnutrition prevalence of 25.2%, with 49, 19, and 9 patients mildly, moderately, and severely malnourished, respectively (Gedle, Gelaw, Muluye, & Mesele, 2015). Similarly, a multi-center study in Central Ethiopia reported a prevalence of 23.6% (Gebremichael, Hadush, Kebede, & Zegeye, 2018). These figures raise concern, as lower dietary diversity has been associated with greater mortality and poor clinical outcomes among PLWH (Palermo, Rawat, Weiser, & Kadiyala, 2013; Rawat, McCoy, & Kadiyala, 2013). There is no published literature on dietary status and associated factors among PLWH in Rwanda. The current study therefore aimed to assess dietary status and associated factors among PLWH in Kigali, Rwanda.
Study Design
The study was a cross sectional survey. This design was adopted given the nature of the research question: it provided a snapshot of the current dietary status and its associated factors among PLWH in Kigali, Rwanda, and was the most appropriate way to survey the study population and answer the research questions.
Study Setting
Kigali City Province is the capital of Rwanda. The province has a total of 42 health facilities spread over its three districts: 21 in Gasabo, 10 in Kicukiro and 11 in Nyarugenge (NISR & Rwanda, 2014). It has a surface area of 730 km². The population of Kigali was 1,132,686 as of the 2012 national census (NISR & Rwanda, 2014). Among adults 15-64 years old, HIV prevalence in the City of Kigali is 4.3% (ICAP, 2019). One health facility in each district of Kigali Province was purposively selected based on researcher convenience and the volume of ARV clinic attendees.
Sampling and Data Collection Instrument
At each study site, participants were enrolled into the study using simple random sampling. The questionnaire addressed the socio-demographic characteristics of the interviewee (sex, age, education, religion, marital status, occupation, duration on ART) and the outcome variable (dietary status). The food frequency questionnaire (FFQ) was adapted from a validation study conducted in the Rwandan context in 2016 (Yanagisawa et al., 2016).
Data Quality
We pretested the survey questionnaire on 15 respondents in a non-sampled hospital. The consistency, understandability, and flow of the questions were tested and the questionnaire revised accordingly. To ensure high data quality, the interviewers (GP and EJU) received a one-week survey-specific training. TD closely supervised the interviewers during data collection, and the questionnaires were thoroughly edited to make sure that all relevant questions had been answered and coded according to the coding scheme designed for the study.
Data Entry and Statistical Analysis
Data coding and verification of responses were done on the same day, and any missing information was corrected. The cleaned data were entered into SPSS version 25. Quantitative data were analyzed using descriptive statistics. The FFQ assessed consumption of energy, protein, iron and vitamin A. Based on the participant responses, a mean food frequency assessment score of 49.5% was calculated; participants who scored below the mean were classified as having poor dietary status, while those at or above the mean were classified as having good dietary status. This yielded a dichotomous dependent variable for subsequent analysis. The Student's t-test or chi-square test and logistic regression were used to determine the study variables associated with dietary status. All statistical tests were conducted at the 5% level of significance.
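To make the scoring and modelling steps concrete, the sketch below reproduces them in Python; the authors used SPSS, so this is only an illustration under assumed column names (ffq_score, hiv_disclosure, travel_time, bmi), not their actual analysis syntax.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def classify_dietary_status(ffq_scores: pd.Series) -> pd.Series:
    # Dichotomise at the sample mean: below the mean -> poor (1), at/above the mean -> good (0).
    return (ffq_scores < ffq_scores.mean()).astype(int)

def fit_logistic(df: pd.DataFrame, predictors: list):
    # Multiple logistic regression; exponentiated coefficients give adjusted odds ratios (AOR).
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df["poor_dietary_status"], X).fit()
    aor = np.exp(model.params)       # adjusted odds ratios
    ci = np.exp(model.conf_int())    # 95% confidence intervals on the OR scale
    return model, aor, ci

# Example usage (all column names are assumptions):
# df["poor_dietary_status"] = classify_dietary_status(df["ffq_score"])
# model, aor, ci = fit_logistic(df, ["hiv_disclosure", "travel_time", "bmi"])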
Ethical Considerations
This study has been ethically reviewed and approved by the University Teaching Hospital of Kigali Ethics Committee (Approval number: EC/CHUK/0129/2019).
Socio-demographic Characteristics
A total of 204 participants were enrolled, with an average age of 30.3 years; the majority (55%) were female. Of these, 88% reported having ever attended school, while only 5% were currently enrolled in school. 61% were educated up to primary level only, and 38% reported having worked in the past 12 months. 18% were married, and of these only 3% reported having more than one wife/husband. Participant demographic characteristics are presented in Table 1.
Dietary Status
Of the 204 participants, 15% had poor dietary status based on their FFQ responses.
Factors Associated with Dietary Status
The study found only three factors to be independently associated with dietary status: HIV status disclosure (AOR 2.5; CI 1.25-4.83; p=0.014), travel time to the place of collection of ARVs (AOR 3.2; CI 1.7-5.8; p=0.006), and BMI (AOR 10.2; CI 8.30-16.0; p<0.001). More information is presented in Figure 1.
Discussion
The current study revealed that 15% of PLWH in Kigali, Rwanda had poor dietary status, and identified HIV status disclosure, travel time to the clinic and BMI as the main factors associated with poor dietary status. Globally, multi-organizational efforts have been launched and recommendations made by the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) and endorsed by WHO, UNAIDS, and the World Food Program; the nutrition assessment, counseling and support model is known as NACS (Tang, Quick, Chung, & Wanke, 2015). As of 2017, South Africa, Mozambique and Nigeria were in the early planning phases; Cote d'Ivoire, Ghana, Ethiopia, Tanzania, Namibia and Zambia were at the program expansion stage; and only Kenya and Malawi had reached full implementation at national scale. The findings of the current study underscore the need for nutritional management of PLWH to be prioritized in Rwanda.
In the current study, 15% of participants had poor dietary status based on their FFQ responses. A slightly higher prevalence of 25.2% was reported by Gedle et al. (2015) in Ethiopia (Gedle et al., 2015), whereas a study by Hailemariam et al. in Ethiopia found that 12.3% of PLWH had poor dietary status (Hailemariam, Bune, & Ayele, 2013). Gebremichael et al. noted a prevalence of poor dietary status of 23.6% (Gebremichael et al., 2018). In Senegal, Benzekri et al. found that 19.2% of PLWH in Dakar and 26.3% in Ziguinchor had poor dietary status (Benzekri et al., 2015). The prevalence reported in the current study is comparable to that of Argemi et al., who found that 11.2% of adults initiating ART were malnourished (Argemi et al., 2012); the difference between these and our findings could be attributed to the lower BMI cut-off of 16 used by Argemi et al. (Argemi et al., 2012). However, much higher prevalences have been reported elsewhere: Mulu et al. noted a prevalence of 46.8% (Mulu, Hamza, & Alemseged, 2016), and Hadgu et al. reported 42.3% (Hadgu, Worku, Tetemke, & Berhe, 2013), both far higher than the findings of the current study.
Most previous studies have explored the main factors associated with poor dietary status among PLWH. Gedle et al. found that living in rural areas, anemia, and intestinal parasitic co-infection were significantly linked to poor dietary status, and concluded that the prevalence of malnutrition among PLWH receiving ART in Butajira was very high (Gedle et al., 2015). Similarly, Hailemariam et al. reported that unemployment, WHO clinical stage four, gastrointestinal symptoms, and past opportunistic infections were closely linked to poor dietary status among PLWH (Hailemariam et al., 2013). Other factors linked to poor dietary status among HIV/AIDS patients include unemployment, the clinical stage of AIDS progression, a CD4 count of less than 350 cells/μl, tuberculosis, duration on antiretroviral therapy, and household food insecurity (Gebremichael et al., 2018). Benzekri et al. noted that severe food insecurity was linked to missed clinic appointments and to failure to take antiretroviral therapy because of hunger (Benzekri et al., 2015). Hadgu et al. found that household food insecurity, inadequate dietary diversity, anemia, and a general lack of nutritional support were the major independent predictors of poor dietary status (Hadgu et al., 2013). Overall, previous studies have identified a wide range of factors associated with dietary status.
Conclusion
Based on the findings of the current study, poor dietary status among PLWH remains a concern. The strong association between dietary status and BMI underlines the need for interventions that target PLWH to improve dietary status and, ultimately, nutritional status. Larger studies with more rigorous designs evaluating dietary status and associated factors may help shed more light on the research problem. This study presents baseline findings that may be used to guide future research.
Limitations
Cross sectional studies by nature cannot demonstrate causality, and the findings may not be generalizable to the entire population. Additionally, the data collection tool used for dietary assessment collected descriptive qualitative information only; however, it is anticipated that the tool yielded useful findings, as it was adapted from a Rwanda-validated FFQ with guidelines from WHO. Finally, the research did not include biomarkers, the gold standard for any nutritional assessment, to validate the self-reported dietary status.
|
v3-fos-license
|